• T Clark
    13k
    @fdrake @Baden

    See the last few posts. I have asked @Enrique several times to stop posting his unsupported theories that are inconsistent with the subject of this thread as expressed in the OP and he refuses to stop.

    I would appreciate your help.
  • Enrique
    842


    Alright, if you want to blow off experts who are trying to clarify and bs instead that's not gonna be my problem. Proceed with your stuff.
  • T Clark
    13k
    Alright, if you want to blow off experts who are trying to clarify and bs instead that's not gonna be my problem. Proceed with your stuff.Enrique

    Thank you.
  • Gnomon
    3.5k
    If you have specific, credible, referenced, scientific information that describes or explains mental processes, please post it. That's what this thread is about.T Clark
    Since I'm not a practicing scientist, I don't presume to provide "specific, credible, referenced, scientific information". So, as an amateur philosopher, on a philosophy forum, I have to limit my posts to philosophical theorizing & speculation.

    However, in my blog posts, I do include copious references to the informed opinions of professional scientists. And Information Theory is on the cutting edge of Mind research. I thought that might be relevant to a thread on the underlying causes of mental processes. But I now see that the OP assumes a narrow definition of what constitutes Science. So, I'll tune out. :smile:
  • Joshs
    5.2k


    According to Pinker,

    Human grammar is an example of a “discrete combinatorial system.” A finite number of discrete elements (in this case, words) are sampled, combined, and permuted to create larger structures (in this case, sentences) with properties that are quite distinct from those of their elements.T Clark
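    A toy sketch of what Pinker's "discrete combinatorial system" means in practice – the vocabulary and the single combination rule below are entirely made up for illustration:

```python
from itertools import product

# A tiny finite lexicon plus one combination rule already yields many
# distinct sentences whose properties (who does what to whom) differ
# from those of any single word.
nouns = ["the dog", "the cat", "the child"]
verbs = ["sees", "chases", "follows"]

sentences = [f"{s} {v} {o}"
             for s, v, o in product(nouns, verbs, nouns)
             if s != o]  # rule out reflexives, for simplicity

print(len(sentences))  # 18 sentences from 6 lexical items
```

    Adding a word to either list multiplies, rather than adds to, the space of sentences – the combinatorial point Pinker is making.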

    I don't think Pinker’s approach is strictly compatible with Damasio. The latter understands mental phenomena from an embodied standpoint in which affectivity is inseparably intertwined with cognition. Pinker holds to a model of cognition rooted in Enlightenment rationalism. I recommend psycholinguist George Lakoff’s work on language development, and his book with Mark Johnson, ‘Philosophy in the Flesh’.

    Here’s Lakoff on Pinker:

    “For a quarter of a century, Steven Pinker and I have been on opposite sides of a major intellectual and scientific divide concerning the nature of language and the mind. Until this review, the divide was confined to the academic world. But, recently, the issue of the nature of mind and language has come into politics in a big way. We can no longer conduct twenty-first-century politics with a seventeenth-century understanding of the mind. The political issues in this country and the world are just too important.

    Pinker, a respected professor at Harvard, has been the most articulate spokesman for the old theory. In language, it is Noam Chomsky’s claim that language consists in (as Pinker puts it) “an autonomous module of syntactic rules.” What this means is that language is just a matter of abstract symbols, having nothing to do with what the symbols mean, how they are used to communicate, how the brain processes thought and language, or any aspect of human experience — cultural or personal. I have been on the other side, providing evidence over many years that all of those considerations enter into language, and recent evidence from the cognitive and neural sciences indicates that language involves bringing all these capacities together. The old view is losing ground as we learn more.

    In thinking, the old view comes originally from Rene Descartes’s seventeenth-century rationalism. A view of thought as symbolic logic was formalized by Bertrand Russell and Gottlob Frege around the turn of the twentieth century, and a rationalist interpretation was revived by Chomsky in the 1950s. In that view, thought is a matter of (as Pinker puts it) “old-fashioned … universal disembodied reason.” Here, reason is seen as the manipulation of meaningless symbols, as in symbolic logic. The new view holds that reason is embodied in a nontrivial way. The brain gives rise to thought in the form of conceptual frames, image-schemas, prototypes, conceptual metaphors, and conceptual blends. The process of thinking is not algorithmic symbol manipulation, but rather neural computation, using brain mechanisms. Jerome Feldman’s recent MIT Press book, From Molecule to Metaphor, discusses such mechanisms. Contrary to Descartes, reason uses these mechanisms, not formal logic. Reason is mostly unconscious, and as Antonio Damasio has written in Descartes’ Error, rationality requires emotion.”
  • apokrisis
    6.8k
    I think I have some idea what he's talking about, but I didn't dig in to it in my response to him.T Clark

    Let me try again in even simpler terms using the concepts of computational processes.

    The brain models a self~world relation. That is why consciousness feels like something - the something that is finding yourself as a self in its world.

    This is all based on some embodied neural process. The brain has to be structured in ways that achieve this job. There is some kind of computational architecture. A data processing approach seems fully justified as the nervous system is built of neurons that simply "fire". And somehow this encodes all that we think, feel, see, smell, do.

    Neuroscience started out with a model of the nervous system as a collection of reflex circuits - a Hebbian network of acquired habit. In neurobiology class, we all had to repeat Helmholtz's pioneering proof of how electrical stimulation of a dead frog's spinal nerve would make its leg twitch (and how electric shocks to a live rat's feet could train it in Skinnerian conditioning fashion).

    So psychology began as an exploration of this kind of meat machine. The mind was a set of neural habits that connected an organism to its world as a process of laying down a complexity of reaction pathways.

    But then along comes Turing's theory of universal computation that reduces all human cognitive structure to a simple parable of symbol processing – the mechanics of a tape and gate information processing device. In contrast to a story of learnt neural habits that are biologically embodied, the idea of universal computation is deeply mathematical and as physically disembodied and Platonic as you can get.

    But the computational metaphor took off. It took over cognitive psychology and philosophy of mind, especially once computer scientists got involved and it all became the great artificial intelligence scam of the 1970s/1980s. The mind understood as a symbol processing machine.

    The computational paradigm boils down to a simple argument. Data input gets crunched into data output. Somehow information enters the nervous system, gets processed via a collection of specialised cognitive modules, and then all that results – hands starting to wave furiously at this point - in a consciously experienced display.

    So good old fashioned cogsci builds in Cartesian dualism. Computationalism of the Turing machine kind can certainly transform mechanical inputs into mechanical outputs. But only in a disembodied syntactic sense. Semantics – being excluded from the start – are never recovered. If the organism functions as a computer, it can only be as a mindless and purposeless zombie.

    But even while the symbol processing metaphor dominated the popular conception of how to think about neurobiology, the earlier embodied understanding of cognition puttered along in the background. Neural networkers, for example, continued to focus on machine architectures which might capture the essence of what brains actually do in relating a self to a world - the basic organismic or semiotic loop.

    In data processing terms, you can recognise the flip. Instead of data in/data crunched/data outputted, the organismic version of a computational cycle is based on making a prediction that anticipates a state of input so that the input can in fact be cancelled away. The computational task is homeostatic – to avoid having to notice or learn anything new. The ideal is to be able to deal with the world at the level of already learnt and honed unthinking reflex. To simply assimilate the ever changing world into an untroubled flow of self.
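    The flip described here can be sketched as a toy error-cancelling loop – a minimal illustration in the general spirit of predictive-coding models, not any specific one, with all values hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

world_state = 5.0    # a stable regularity in the organism's world
prediction = 0.0     # the learned expectation, initially naive
learning_rate = 0.2

for _ in range(50):
    sensory_input = world_state + rng.normal(0.0, 0.1)  # noisy sample
    error = sensory_input - prediction   # what survives the cancellation
    prediction += learning_rate * error  # absorb the surprise into habit

# Once adapted, the forward model cancels nearly all of the input;
# only noise-level residuals remain to be noticed.
assert abs(world_state - prediction) < 0.5
```

    The point is the direction of flow: the output (the prediction) is generated first, and only the residual error – the surprise – does any further work.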

    Of course, life always surprises us in big and small ways. And we are able to pick that up quickly because we were making predictions about its most likely state. So we have a machinery of attentional mop-up that kicks in when the machinery of unthinking habit finds itself caught short.

    But embodied cognition is the inverse of disembodied cognition. Instead of data input being turned into data output, it is data output being generated with enough precision to cancel away all the expected arriving data input.

    For one paradigm, it is all about the construction of a state of mental display – with all the Cartesian dualism that results from that. For the other paradigm, it is all about avoiding needing to be "consciously" aware of the world by being so well adapted to your world that you already knew what was going to happen in advance.

    Erasing information, forgetting events, not reacting in new ways. These are all the hallmarks of a well-adapted nervous system.

    Of course, this biological realism runs completely counter to the standard cultural conception of mind and consciousness. And this is because humans are socially constructed creatures trying to run a completely different script.

    It is basic to our sociology that we represent ourselves as brightly self-aware actors within a larger social drama. We have to be feeling all these feelings, thinking all these thoughts, to play the role of a "self-actualising, self-regulating, self-introspecting" human being.

    We can't just go with the flow, as our biology is set up to do. We have to nurture the further habit of noticing everything about "ourselves" so that we can play the part of being an actor in a human drama. We have to be self-conscious of everything that might naturally just slip past and so actually create the disembodied Cartesian display that allows us to be selves watching selves doing the stuff that "comes naturally", then jumping in with guilt or guile to edit the script in some more socially approved manner.

    So there is the neurobiology of mind which just paints a picture of a meat machine acquiring pragmatic habits and doing its level homeostatic best not to have to change, just go with its established flow.

    And this unexciting conception of the human condition is matched with a social constructionist tradition in psychology that offers an equally prosaic diagnosis where everything that is so special about homo sapiens is just a new level of social semiosis – the extension of the habitual mind so that it becomes a new super-organismic level of unthinking, pragmatic, flow.

    But no one writes best-sellers to popularise this kind of science. It is not the image of humanity that people want to hear about. Indeed, it would undermine the very machinery of popular culture itself – the Romantic and Enlightened conception of humans as Cartesian creatures. Half angel, half beast. A social drama of the self that you can't take your eyes off for a second.

    So if you have set your task to be the one of understanding the science of the mind, then you can see how much cultural deprogramming you probably have to go through to even recognise what might constitute a good book to discuss.

    But my rough summary is that circa-1900s, a lot of people were getting it right about cognition as embodied semiosis. Then from the 1950s, the science got tangled up in computer metaphors and ideology about cognition as disembodied mechanism. And from about 2000, there was a swing back to embodied cognition again. The enactive turn.

    So you could chop out anything written or debated between the 1950s to 2000s and miss nothing crucial. :razz:
  • T Clark
    13k
    I don't think Pinker’s approach is strictly compatible with Damasio.Joshs

    I don't think there was any conflict, or even much overlap, between the ideas of Pinker and Damasio that I wrote about. Pinker didn't really talk about reason at all in "The Language Instinct," just language. I didn't see anything I would characterize as "Enlightenment rationalism." I haven't read his other books. I focused close in on one subject in Damasio, the proto-self, because I specifically wanted to avoid talking about consciousness. So I didn't address his thoughts about reason. Even if there were conflict, I was never trying to provide a comprehensive, consistent view of mind. I tried to make that clear in the OP.

    I appreciate you providing Lakoff's comments, although not much of what he has written seems to have much to do with language. As I noted, there is little discussion about reason in "The Language Instinct," and what there was wasn't included in the part I wrote about.
  • T Clark
    13k
    Let me try again in even simpler terms using the concepts of computational processes.apokrisis

    Well, I've read your post three times and still don't know what to make of it. It's not all that unusual when I'm dealing with you. The main problem is that I don't know how to incorporate what you've written into what I've said previously or vice versa. I probably need to do some more reading before I'll be able to do that.

    the Romantic and Enlightened conception of humans as Cartesian creatures. Half angel, half beast. A social drama of the self that you can't take your eyes off for a second.apokrisis

    That doesn't sound like anything I read in Pinker's book.
  • apokrisis
    6.8k
    That doesn't sound like anything I read in Pinker's book.T Clark

    When was it written? :smile:

    I got into the socially constructed aspects of the human mind just a few years before evolutionary psychology came rolling in over the top of everyone with its genocentric presumptions about the “higher faculties”.

    So there are parts of the Chomskyian school I am sympathetic to - such as its structuralist bent. And then there are other parts where it misses the boat in classic “evolved mental faculties” fashion. Go back to the 1920s and Vygotsky and Luria laid out the socially constructed nature of these.

    That is why I keep insisting on semiotics as the unifying view. Nothing can make sense until you realise that genes, neurons, words and numbers are all just increasingly abstracted versions of the one general self-world modelling relation.

    Life and mind have a single explanation. And it even explains social, political and moral structure.

    There is a big prize at the end of this trail. And it ain’t something so trite as explaining the explanatory gap in everyone’s “consciousness” theories.

    My objection to your approach is that it presumes that a lot of patient detail will assemble some secure understanding about “how the brain works”.

    But the problem is so much bigger. It is about understanding the deep structure of the very thing of an organism. You can’t even see what counts as the right detail without having the right big picture.
  • Tom Storm
    8.3k
    It is about understanding the deep structure of the very thing of an organism. You can’t even see what counts as the right detail without having the right big picture.apokrisis

    You are probably right. Which is kind of why I have never bothered to worry about details. The chances of determining or identifying the correct big picture seem remote to me and likely will make no real difference to my day-to-day life. Interesting to hear snippets from true believers, however.
  • apokrisis
    6.8k
    So you don’t invest an effort in either the telling detail or the big picture, yet you are happy to stand to one side and make condescending noises.

    Right. Gotcha. :up:
  • Tom Storm
    8.3k
    yet you are happy to stand to one side and make condescending noises.apokrisis

    Interesting take on my response. Do you take me as condescending? Not my intention.
  • apokrisis
    6.8k
    Thanks.

    Not my intentionTom Storm

    Great. But talking of “true believer snippets” sure sounds that way.
  • Tom Storm
    8.3k


    So I'm agreeing with your broader point that we can’t even see what counts as the right detail without having the right big picture and I'm expressing skepticism in my ability to identify what matters in this vast and highly technical subject. For this reason, I have never tried to study or incorporate any advanced theory of mentation or cognitive processes and am satisfied to behold glimmers from others more assertive than I am.
  • T Clark
    13k
    When was it written?apokrisis

    To start, I want to make it clear you are doing exactly what I asked for in the OP. As I wrote:

    I’d like to discuss what the proper approach to thinking about the mind is.T Clark

    That's what you're doing and I appreciate it.

    "The Language Instinct" was written in 1994, but was republished in 2008. I guess I assume it wouldn't be reprinted if Pinker didn't still stand behind it.

    I've learned a lot from you over the years. For example, more and more often I find myself thinking about constraints from above as being as important as synthesis from below in all sorts of situations where there is a hierarchy of effects. I never would have been able to grasp that, even as much as I do, if I hadn't worked first to try to understand the bottom up way of seeing things.

    Ditto with what we're talking about here. As I noted, I have a hard time buying the semiosis argument. It sounds and feels too much like the whole mathematical universe schtick - mistaking a metaphysical metaphor for science. Maybe I'll come around eventually.

    Obviously you know more about this than I do. I don't think you're wrong, but I don't understand about 80% of what you're talking about. I won't be able to figure it out by just listening to you and ignoring what other people say. You pissing on Pinker and others like him doesn't make your arguments more convincing.
  • apokrisis
    6.8k
    You pissing on Pinker and others like him doesn't make your arguments more convincing.T Clark

    I gave Pinker a fairly favourable review of his Words and Rules when I reviewed it for the Guardian. But I wasn’t impressed much by the Language Instinct. And I found How the Mind Works too trite to read.

    So my view was that he was fine so long as he stuck close to his research. But he was just a bandwagon jumper when it came to the culture wars of the time.

    I think I might have reviewed Damasio too for the Guardian. I did for someone.

    There are a ton of books I could recommend. Some are even quite fun like Tor Nørretranders' 1991 book The User Illusion.
  • bert1
    1.8k
    Why can't this happen in the dark:

    The brain models a self~world relation. That is why consciousness feels like something - the something that is finding yourself as a self in its world.apokrisis

    But this can:

    The computational paradigm boils down to a simple argument. Data input gets crunched into data output. Somehow information enters the nervous system, gets processed via a collection of specialised cognitive modules, and then all that results – hands starting to wave furiously at this point - in a consciously experienced display.apokrisis
  • T Clark
    13k
    There are a ton of books I could recommend. Some are even quite fun like Tor Nørretranders' 1991 book The User Illusion.apokrisis

    Thanks. I'll take a look.
  • Joshs
    5.2k


    I appreciate you providing Lakoff's comments, although not much of what he has written seems to have much to do with language. As I noted, there is little discussion about reason in "The Language Instinct," and what there was wasn't included in the part I wrote about.T Clark

    Descartes argues that humans are rational animals, and the faculty that guarantees our rationality is an innate, God-given capacity to organize thought in rational terms.
    In the past few centuries, notions of innatism have evolved away from a religious grounding in favor of empirical explanations. While the concept of instinct is so general as to mean almost anything, the specific way in which Pinker uses it in relation to language is something he inherited from Chomsky. Chomsky posited an innate, and therefore universal, computational module that he called transformational grammar. In other words, there is a ‘rational’ logic of grammar, and this rationality is the product of an innate structure syntactically organizing words into sentences. In this way, Pinker and Chomsky are heirs of Enlightenment Rationalism. Chomsky has said as much himself.

    My quote from Lakoff was intended to show that embodied approaches to language tend to reject Pinker’s claim that innate grammar structures exist. They say there is no language instinct, but rather innate capacities for complex cognition, out of which language emerged in different ways in different cultures.
    As far as the relation between Damasio and Lakoff is concerned, you are right that Damasio does not deal with the origin of language. But given his credentials as an embodied neurocognitivist, it is highly unlikely that innate computational grammar structures are consistent with his general approach, which is anti-rationalistic.
  • apokrisis
    6.8k
    Why can't this happen in the darkbert1

    But as I pointed out, the modelling relation approach to neural information processing says the brain’s aim is to turn the lights out. It targets a level of reality prediction where its forward model can cancel the arriving sensory input.

    Efficiency is about being able to operate on unthinking and unremembered automatic pilot. You can drive familiar busy routes with complete blankness about the world around you, as all the actions are just done and forgotten in a skilled habitual way.

    So like the fridge door, the light only comes on when a gap grows between the self and the world in this holistic self-world modelling relation.

    The input-output computer model has the opposite problem of treating the creation of light as the central mystery. All that data processing to produce a representation, and yet still a problem of who is witnessing the display.

    The modelling relation approach says both light and dark are what are being created. It is about the production of a sharp contrast out of a state of vagueness - the Jamesian "blooming, buzzing confusion" of the unstructured infant brain.

    So the question of why there is light is answered by the reciprocal question of how there can be dark. And the answer in terms of how the brain handles habitual and attentional level processing is just everyday neuroscience.

    The input-output model of data processing can’t produce light because it can’t produce dark either. It is not producing any contrast at all to speak of.
  • Tom Storm
    8.3k
    They say there is no language instinct, but rather innate capacities for complex cognition, out of which language emerged in different ways in different cultures.Joshs

    Quick question: I find it hard to understand what the nuances of difference are between 'innate capacities for complex cognition' and an 'innate, and therefore universal, computational module'. Sounds like different language for a similar phenomenon. Can you clarify this in a few sentences? I understand that Chomsky's view provides a rationalist logic but how is Lakoff's view antithetical to this?
  • apokrisis
    6.8k
    They say there is no language instinct, but rather innate capacities for complex cognitionJoshs

    Sounds like an argument over whether a donut is a cake or a biscuit. :lol:

    The linguistic wars talk past the issue in being hand-wavingly simplistic. From the semiotic point of view, what really mattered in language evolution was the development of a vocal tract which imposed a new kind of serial motor constraint on the standard hierarchical or recursive architecture of frontal lobe motor planning.

    Tool use had already started the process because knapping a flint demands a “grammar” of chipping away at a rock in serial fashion to achieve a pictured goal. Dexterity is about breaking down sophisticated intent into a long sequence of precision actions.

    Is making a hand axe a “tool instinct” or “complex cognition”? Or is it really an intersection of nature and nurture where other things - like an opposable thumb, a lateralisation of hand dominance, a bulking up of prefrontal motor planning cortex - all combine so as to impose a strong serial demand on the brain’s general purpose recursive neural circuitry?

    Language likewise would have most likely evolved due to the “lucky accident” of changes to the vocal tract imposing a new kind of constraint on social vocalisation. In my view, Darwin’s singing ape hypothesis was right after all.

    Homo was evolving as an intensely social tool-using creature. Vocalisation would have been still under “emotional” limbic control - the hoots and grunts chimpanzees use to great communicative effect. And even today, human swear words and emotional noises are more the product of anterior cingulate habit than prefrontal intent. Emitted rather than articulated.

    But something must have driven H. erectus towards a sophisticated capacity for articulate vocalisation - sing-song noises - requiring the connected adaptations of a radical restructuring of the vocal tract and a matching tweaking of the brain’s vocal control network.

    The big accident was then that a serial constraint on hierarchical motor planning could be turned into a new level of semiotic encoding.

    Genes are likewise a serial constraint on hierarchical order. A 1D DNA sequence can represent a 3D protein. A physical molecule can be encoded as a string of bits. This was the lucky semiotic accident that allowed life to evolve.
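    That serial-to-hierarchical encoding can be shown with a toy translation loop. The few codon-to-amino-acid entries below are real ones from the standard genetic code, but the table is a deliberately tiny subset and the example sequence is made up:

```python
# A 1D string read three letters at a time encodes the parts list
# for a 3D object (a protein). Tiny subset of the standard code only.
CODON_TABLE = {
    "ATG": "Met",   # also the start codon
    "TGG": "Trp",
    "AAA": "Lys",
    "GGC": "Gly",
    "TAA": "STOP",
}

def translate(dna):
    """Walk the serial constraint: non-overlapping triplets, in order."""
    peptide = []
    for i in range(0, len(dna) - 2, 3):
        residue = CODON_TABLE.get(dna[i:i + 3], "?")
        if residue == "STOP":
            break
        peptide.append(residue)
    return peptide

print(translate("ATGTGGAAAGGCTAA"))  # ['Met', 'Trp', 'Lys', 'Gly']
```

    The physics of the molecule does the folding; the string only has to get the serial order of parts right. That is the "lucky semiotic accident" being referred to.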

    Language became the “genes” for the socially constructed human mind because once vocalisation was strait-jacketed into sing-song sequences - proto-words organised by proto-rules - it became a short step to a facility for articulation becoming properly semiotic. An abstract symbol system that could construct shareable states of understanding.

    So while Chomsky’s disciples work themselves into a lather over specific instinct vs general “cognitive complexity”, as usual the interesting story is in the production of dialectical contrast.

    It was how vocalisation came to be dichotomised into a serial constraint on hierarchical action that is the evolutionary question. And then how this new level of encoding blossomed into the further dialectic that is the human self in its human world.

    Language transformed the mentality of Homo sapiens in social constructionist fashion. Again, this was widely understood in about 1900, yet almost entirely forgotten by the 1970s or so. The computer as a metaphor for brain function had completely taken over the public debate.

    You became either a Pinker having to claim a language faculty was part of the universal hardware, or a Lakoff claiming it was just another app you might not choose to download.
  • bongo fury
    1.6k
    Why can't this happen in the dark
    — bert1

    But as I pointed out, the modelling relation approach to neural information processing says the brain’s aim is to turn the lights out. It targets a level of reality prediction where its forward model can cancel the arriving sensory input.
    apokrisis

    I wonder, am trying to make out, if this answer uses the same lighting metaphor as the question. Because I'm fascinated by the metaphor.

    I think the question is, why can't a (super impressive, say mammal-imitating) neural network type machine be a zombie, just like a similarly impressive but old-style symbolic computer/android?

    Putting this as a question of whether the world is lit up for the machine in question seems a powerful intuition pump for consciousness. I've found myself spontaneously invoking it when considering discussions of zombies, and of the alleged difference between primary and secondary properties.

    Possibly @apokrisis is following that reading, and saying that, paradoxically, consciousness happens as the organism strives to avoid it.

    I'm not sure if that would convince the questioner, who might object that it merely describes a kind of attention-management that is easily enough ascribed to a zombie.

    But as I say, I'm not sure if that is the intended answer.

    As someone who insinuates that the intuition is wrong, even if easily pumped, I of course ought to offer an alternative. Ok, maybe something like, a machine that's one of us (one of our self-regarding ilk, properly called conscious) constantly reaches for pictures and sounds that would efficaciously compare and classify the illumination events and sound events that it encounters. It understands the language of pictures, in which black pictures refer to unlit events and colourful ones to lit events. Whereas a zombie, however it deals with what it sees, is like the Chinese room in failing to appreciate the reference of symbols (here pictorial) to actual things.
  • apokrisis
    6.8k
    I think the question is, why can't a (super impressive, say mammal-imitating) neural network type machine be a zombie, just like a similarly impressive but old-style symbolic computer/android?bongo fury

    Howard Pattee – my favourite hierarchy theorist and biosemiotician (along with Stan Salthe) – wrote this on how even the question of living vs nonliving can be applied to machines. It all starts from a proper causal definition of an organism – one that clearly distinguishes it from a computer or other mechanical process.

    Artificial Life Needs a Real Epistemology (1995)

    Foundational controversies in artificial life and artificial intelligence arise from lack of decidable criteria for defining the epistemic cuts that separate knowledge of reality from reality itself, e.g., description from construction, simulation from realization, mind from brain.

    Selective evolution began with a description-construction cut, i.e., the genetically coded synthesis of proteins. The highly evolved cognitive epistemology of physics requires an epistemic cut between reversible dynamic laws and the irreversible process of measuring initial conditions. This is also known as the measurement problem.

    Good physics can be done without addressing this epistemic problem, but not good biology and artificial life, because open-ended evolution requires the physical implementation of genetic descriptions. The course of evolution depends on the speed and reliability of this implementation, or how efficiently the real or artificial physical dynamics can be harnessed by non-dynamic genetic symbols.

    https://www.researchgate.net/publication/221531066_Artificial_Life_Needs_a_Real_Epistemology

    Possibly apokrisis is following that reading, and saying that, paradoxically, consciousness happens as the organism strives to avoid it.bongo fury

    Where does the idea of a zombie even come from except as "other" to what popular culture conceives the conscious human to be?

    Everything starts to go wrong philosophically once you start turning the complementarity to be found in dialectics – the logical unity that underwrites the logical division of a dichotomy – into the false dilemmas of reductionism.

    Reductionism demands one or other be true. Dialectics/semiotics are holistic in that they say existence is about the production of dichotomous contrast. Symmetry-breaking.

    So brain function is best understood in terms of future prediction that seeks to minimise an organism's need for change – for how does an organism exist as what it is unless it can homeostatically regulate the tendency of its environment to get busy randomising it?

    You are "you" to the extent you can maintain an identity in contrast to the entropifying actions of your world - which for humans, is both a physical environment, and a social or informational environment.

    We can be the eye of the storm because we are the still centre of a raging world that revolves around us. That contrast is what we feel as being a self in a world.

    The neural trick to achieving this is a modelling relation which cancels away the changes the world might impose on us - all its unplanned accidents - and thus imposes on the world our "self" that is the world exactly as we intend it to be.

    The baseline has to first be set to zero by cancelling everything that is currently happening to the level of "already processed" habit. And from there, attentional processes can mop up the unexpected - turning those into tomorrow's habits and expectations.

    This is the basis of Friston's Bayesian brain. Neuroscience has got to the point that semiosis can be written out in differential equations. So Pattee's call for a proper epistemology of life and mind is being answered.

    For other reasons, I doubt this will lead to conscious computers. But it at least grounds the next step for neural networks.
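    For the flavour of what "written out in differential equations" gestures at, the simplest related piece of machinery is a one-step Gaussian Bayesian update, where belief change is a precision-weighted prediction error. This is a textbook sketch, not Friston's actual free-energy formulation, and the numbers are invented:

```python
def bayes_update(prior_mean, prior_var, obs, obs_var):
    """Combine a Gaussian prior belief with a Gaussian observation."""
    prior_precision = 1.0 / prior_var
    obs_precision = 1.0 / obs_var
    post_var = 1.0 / (prior_precision + obs_precision)
    gain = post_var * obs_precision      # precision weighting
    post_mean = prior_mean + gain * (obs - prior_mean)
    return post_mean, post_var

# A strong prior (a well-learnt habit) barely moves on a surprising datum...
print(bayes_update(0.0, 0.1, 5.0, 1.0))   # mean ~0.45
# ...while a weak prior is dominated by the same observation.
print(bayes_update(0.0, 10.0, 5.0, 1.0))  # mean ~4.55
```

    The habit/attention contrast described earlier maps loosely onto that gain term: confident predictions absorb the world quietly, and only a poorly predicted input gets to move the model much.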

  • Joshs
    5.2k
    I find it hard to understand what the nuances of difference are between 'innate capacities for complex cognition' and an 'innate, and therefore universal, computational module'. Sounds like different language for a similar phenomenon.Tom Storm

    An innate language module of the Chomskian sort specifies a particular way of organizing grammar prior to and completely independent of social interaction. Lakoff’s innate capacities for cognition do not dictate any particular syntactic or semantic patterns of language. Those are completely determined by interaction.
  • Tom Storm
    8.3k
    Great. Appreciated.
  • schopenhauer1
    9.9k
    An innate language module of the Chomskian sort specifies a particular way of organizing grammar prior to and completely independent of social interaction. Lakoff’s innate capacities for cognition do not dictate any particular syntactic or semantic patterns of language. Those are completely determined by interaction.Joshs

    I believe Chomsky went from a much more complex grammar rule-based brain to simply "Merge".