• mosesquine
    95
    Fodor said that English sentences have no meaning. Fodor further claims that no one thinks in English. It follows that thoughts and meanings are symbolic. What makes English sentences meaningful are symbolic (or semantic) structures. Does everyone think in symbolic processes? Are there any objections to the theory?
  • Wayfarer
    22.8k
    It would be good to provide a citation for those claims. I am a bit familiar with Fodor, but I can't really see the point of the first sentence, because if it's true then it falls victim to the Cretan paradox and is therefore meaningless in its own terms. (In other words, if it is true, it is meaningless; if it is not meaningless, it is not true.)

    Furthermore, I know that I frequently think in English. When I'm composing this text, the words I intend to use appear before the mind's eye, so to speak. Sometimes I have to pause and consider what to write next, but when I do, English is invariably a part of that. Of course, there are also many aspects of thinking that take place symbolically and probably even unconsciously, in the sense of not being available to conscious introspection. But English is certainly central to it. I suppose, were I a mathematician, I would also be thinking in terms of numbers and mathematical symbols and relations. I can't see how one could not do that and still be doing mathematics.

    And overall, surely sentence construction, or any kind of discourse or narrative, relies on meaning, which in turn provides the referents for symbolic terms.
  • mosesquine
    95

    You can find Fodor's claim in 'LOT 2: The Language of Thought Revisited', page 73.
  • mosesquine
    95

    Fodor's claim is that Mentalese is prior to natural languages. English sentences have no meaning, but Mentalese gives meaning to them.
  • Terrapin Station
    13.8k
    I'd have to read Fodor in more detail, but what he appears to be saying there is that meaning isn't literally contained in a sentence like "Paderewski is tall," whether we're talking about the sentence as text or as something spoken or whatever. And he also appears to be saying that "Paderewski is tall" as a set of letters or sounds or whatever also isn't sufficient for thought.

    And I definitely agree with both of those things.

    It's probably a bit misleading to simply present it as Fodor saying "English sentences have no meaning," however. And even in that passage, he's specifying "(surface) English" at one point, and at another point he says, "strictly speaking," where he seems to be saying "literally . . ."
  • apokrisis
    7.3k
    Fodor's claim is that Mentalese is prior to natural languages. English sentences have no meaning, but Mentalese gives meaning to them. - mosesquine

    The problem with this kind of old-fashioned cognitivism is that it is based on a simplistic computational model of mind and reasoning, where input states of information are mapped to output states of information.

    The idea is that - just as with a mechanical computer or Turing machine - input arrives, it gets crunched, and an output state is displayed. So thought and consciousness are seen as data processing that results in states of representation ... setting up the homuncular question of who gets to see, understand and consciously experience these output states.

    The best counter position to this computationalism is an enactive or ecological view of the mind/brain, especially when fleshed out by a biosemiotic understanding of language use.

    Basically it starts with an inversion of computationalism. Instead of minds converting inputs to outputs, minds begin by predicting their output state and using that to ignore as much input as possible.

    If you already know that the door knob isn't squishy jelly, the door swings away from you rather than slides up into the roof space, and beyond is the bedroom you've seen a million times rather than the far side of the moon or something else unpredictable, then you can ignore pretty much everything to get from A to B. You filter the world in a way that only leaves a tiny residue for a secondary, more intense, attentional processing.

    So it starts with a back to front logic. The brain begins in a state of representation - a representation of the world already being ignored and handled automatically in a flow of action.

    In terms of computation, it is a forward-modelling or constraints-based process. A case of Bayesian inference. The brain is looking at the world in terms of its predictable regularities or signs. By filtering out the noise of the world in predictive fashion, signal is produced automatically as "that which wasn't so predictable".

    If you are wandering in the woods, you aren't really taking in the trees and leaves. They become an averaged, expectable flow of sensation. But a sudden animal-like movement or noise will catch your attention. And any bark, squeak or squawk will really pop out as a sign of something demanding closer attentional focus - an effort to constrain your resulting uncertainty by seeking further information.
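
    As a toy sketch of that filtering logic - an illustration of my own, with made-up numbers, not anything from Fodor or the predictive-processing literature - only the large prediction errors survive to become "signal":

    ```python
    # Toy predictive filtering: the "brain" predicts the usual sensory
    # flow and attends only to events whose prediction error is too
    # large to ignore. All names and numbers here are invented.

    def filter_sensations(sensations, predicted_level=1.0, tolerance=0.5):
        """Return only the surprising events; the expected flow is ignored."""
        surprises = []
        for event, intensity in sensations:
            prediction_error = abs(intensity - predicted_level)
            if prediction_error > tolerance:
                surprises.append(event)  # pops out, demands attentional focus
            # otherwise the event is absorbed into the expected background
        return surprises

    walk_in_the_woods = [
        ("leaves rustling", 1.1),  # close to prediction -> ignored
        ("trees swaying", 0.9),    # close to prediction -> ignored
        ("sudden bark", 3.0),      # unpredicted -> a sign to attend to
    ]
    print(filter_sensations(walk_in_the_woods))  # ['sudden bark']
    ```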

    So you don't need Mentalese to construct chains of thought. The brain is going to have a natural flow of predictions that then focus attention on the unexpected. And that focus in turn only has to reduce future states of uncertainty.

    A bird squawk in the forest may be dismissed as soon as it is understood as a sign of what it further predicts. But a bark may create ongoing uncertainty.

    Anyway, the first thing is to turn the basic notion of information processing on its head: brains operate as uncertainty-minimising devices. They don't start with nothing and build up to something. They start with a well-founded guess and wait for the world to force an adjustment. And even the adjustments come fast and easy if the world is being read in terms of signs for which there are already established habits of action.

    If a dog sees its owner make a move to put on his shoes, it may recognise the sign that spells walkies.
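
    In Bayesian terms, that recognition is just a prior guess being adjusted by the evidence of a sign. A minimal sketch, with numbers invented purely for illustration:

    ```python
    # Toy Bayesian update for the dog example (illustrative numbers only).
    # The dog starts with a well-founded prior about "walkies" and lets
    # the shoes-sign force an adjustment, per Bayes' rule:
    #   P(walkies | sign) = P(sign | walkies) * P(walkies) / P(sign)

    prior = 0.2                  # P(walkies) before any sign is seen
    p_sign_if_walkies = 0.9      # shoes almost always precede a walk
    p_sign_if_no_walkies = 0.1   # shoes sometimes go on for other reasons

    p_sign = p_sign_if_walkies * prior + p_sign_if_no_walkies * (1 - prior)
    posterior = p_sign_if_walkies * prior / p_sign

    print(round(posterior, 2))   # 0.69 - the guess, adjusted by the sign
    ```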

    Which brings us to language and how it functions as a further level of syntactically structured constraints on human thought processes.

    The animal brain is designed to work in the flow of the moment, seamlessly predicting its world by reading the available signs and so always arriving at a state of mind that has minimal uncertainty.

    But words can act as symbols - a higher, more abstract and spatiotemporally displaced level of sign than indexical or iconic signs. So words can be used to produce constrained states of mind that are available "off-line", freely at any place or time.

    If I mention "that koala bear the size of a buffalo swinging on a palm tree", then you (and I) can form an anticipatory image of such a thing. It would then be only a small surprise to actually be in the presence of such an experience. Only details would differ - like the facial expression of the giant koala as it swings, or the amount the palm tree bends.

    So the point is that an anticipatory model of brain computation does most of the heavy lifting. You don't need a Mentalese that pre-computes the thoughts that then get translated into overt speech. There just is no such input/output computation going on. Instead, like any animal, we can do a great job of predicting the world and reacting appropriately on an uncertainty-minimising basis.

    And then along comes the secondary skill of structured language, which does have a computation-like grammatical structure. On top of a feedforward mind, we impose a cultural habit of explaining our thoughts in logical, sequential fashion. We impose a causal tale of subjects, objects and verbs - short tales of who did what to whom - on the events of the world.

    So the brain computes in one direction. Language then encodes computation in the opposite direction. It allows us to do something new: construct, piece by piece, word by word, a reasoned chain of thought or a state of mental imagery in a suitably trained mind.

    So talk of Mentalese does intuitively get at the fact that there is a genuine brain level of "computation" going on. There is thought separate from the words.

    But the cognitivism or representationalism of Fodor's era got it utterly wrong in thinking the brain was an input-crunching machine. The cognitivists rejected the connectionists and neural networkers who argued that the brain instead had this feed-forward, Bayesian-reasoning design.

    But then the connectionists were weak on the difference that language makes. So it takes a full modern biosemiotic approach to the issues to get both sides of this story tied together.
  • unenlightened
    9.2k
    Thanks for that superbly clear and concise exposition.
  • apokrisis
    7.3k
    Thanks for your appreciation.