• sime
    1k
    Anselm's ontological argument is mine, in spite of its theological pretenses, for it is an example of a logically valid constructive argument that is 'necessarily true' but nevertheless draws a false conclusion about the world outside of logic, in spite of the argument insisting that it refers to the outside world!

    As I see it, the argument is but one of infinitely many examples of a logically valid but false argument; it presents negative evidence with regard to the epistemological utility of constructive logic, and thus in turn regarding the epistemological utility of a priori philosophical arguments, such as transcendental arguments. In other words, even ideal reasoners can be expected to draw rationally "correct" yet empirically false conclusions about the world. In which case, what is the point of AI and cognitive science?
  • wonderer1
    1.8k
    Anselm's ontological argument is mine, in spite of its theological pretenses, for it is an example of a logically valid constructive argument that is 'necessarily true' but nevertheless draws a false conclusion about the world outside of logic, in spite of the argument insisting that it refers to the outside world! — sime

    Interesting example!

    In other words, even ideal reasoners can be expected to draw rationally "correct" yet empirically false conclusions about the world. In which case, what is the point of AI and cognitive science? — sime

    There is more to an ideal of reasoning than the ability to apply logic in a valid way. There is also the pattern recognition applied to diverse empirical observations that allows for recognition of false premises. For example, the "training set", which is hugely important to the results yielded by modern AI.
  • sime
    1k
    There is more to an ideal of reasoning than the ability to apply logic in a valid way. There is also the pattern recognition applied to diverse empirical observations that allows for recognition of false premises. For example, the "training set", which is hugely important to the results yielded by modern AI. — wonderer1

    Yes, very much so. The successes of machine-learning generalisation are entirely the consequence of ML models evolving over time so as to fit the facts being modeled, as opposed to being the consequence of a priori and constructive mathematical reasoning, as if purely mathematical reasoning could predict in advance the unknown facts being modeled.

    And yet many popular textbooks on ML written around the turn of the millennium presented the subject as if successful generalisation performance could be mathematically justified in advance on the basis of a priori philosophical principles such as Occam's Razor, non-informative prior selection, Maximum Entropy, and so on. Notably, those books only very briefly mentioned, if at all, Wolpert's No Free Lunch theorems, which put paid to the idea of ML being a theory of induction.
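    The No Free Lunch point can be made concrete with a toy computation (a sketch of my own, not taken from Wolpert's papers; the tiny domain and the two "learners" are purely illustrative): averaged uniformly over every possible boolean target function, any learner's accuracy on points outside the training set comes out the same, no matter how the learner guesses.

    ```python
    from itertools import product

    # Domain of four inputs; points 0 and 1 form the training set,
    # points 2 and 3 are off-training-set (OTS).
    TRAIN, OTS = [0, 1], [2, 3]

    def memorise_and_guess_zero(train_data, x):
        """Learner A: memorise the training labels, predict 0 elsewhere."""
        return dict(train_data).get(x, 0)

    def memorise_and_guess_one(train_data, x):
        """Learner B: memorise the training labels, predict 1 elsewhere."""
        return dict(train_data).get(x, 1)

    def mean_ots_accuracy(learner):
        """Average OTS accuracy of `learner` over all 2^4 boolean targets."""
        targets = list(product([0, 1], repeat=4))  # every f: {0,1,2,3} -> {0,1}
        total = 0.0
        for f in targets:
            train_data = [(x, f[x]) for x in TRAIN]
            hits = sum(learner(train_data, x) == f[x] for x in OTS)
            total += hits / len(OTS)
        return total / len(targets)

    print(mean_ots_accuracy(memorise_and_guess_zero))  # 0.5
    print(mean_ots_accuracy(memorise_and_guess_one))   # 0.5
    ```

    Both learners score exactly 0.5 on average, because for any fixed training data the unseen labels are 0 in half of the consistent targets and 1 in the other half. Only an assumption about which targets are likely, i.e. an empirical prior rather than a purely a priori principle, can break the tie.
    
    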

Welcome to The Philosophy Forum!

Get involved in philosophical discussions about knowledge, truth, language, consciousness, science, politics, religion, logic and mathematics, art, history, and lots more. No ads, no clutter, and very little agreement — just fascinating conversations.