• Wayfarer
    20.8k
    Mod Note: Comment moved from The Mind-Created World to constitute a separate thread.
  • schopenhauer1
    10k
    Evolutionary Overreach: Midgley suggests that some scientists and science popularizers overreach by making broad philosophical or moral claims based on evolutionary theory. They treat evolution not just as a biological theory but as a complete worldview or ideology.

    "Just-so" Stories: Midgley critiques certain evolutionary explanations, especially in the realm of sociobiology, as being akin to Rudyard Kipling's "just-so" stories – speculative narratives that seem more about confirming existing biases than rigorous scientific explanations.
    Wayfarer

    Just to be the devil's advocate here, but doesn't it seem plausible that only animals have the capacity for "qualities" (experience, a point of view)? And even amongst animals, doesn't it seem plausible that only animals with some form of nervous system have this capacity for qualities/experience/point of view? That being said, can there be some sort of Transcendental Theory of Neurofauna?

    Also, not all evolutionary theories are "just so", per se, but descriptive. A "just so" story might be something like, "Our ancestors' propensity for favoring the strongest alpha male is why we have a strong tendency towards fascism". But a theory that describes how language evolved in humans by examining various models that fit the evidence from artifacts, brain development and anatomy, developmental psychology, etc. might be a legitimately descriptive theory?

    Here is a ChatGPT version of the evolution of language for example:
    1. Primate Ancestry: Limited Communicative Abilities (Approx. 5-7 million years ago)

    Our common ancestors with chimpanzees relied on basic communication skills, primarily using gestures, vocalizations, and facial expressions to convey simple intentions and immediate needs. Their communication was limited in complexity compared to the emerging human capacity.

    2. Emergence of Shared Intentionality and Collaborative Foraging (Approx. 2-3 million years ago)

    In the Homo lineage, around 2-3 million years ago, Homo habilis and Homo erectus emerged. These early humans started relying on collaborative foraging and tool use, requiring increased coordination and the sharing of intentions to hunt, gather, and cooperate effectively. Shared intentionality began to develop in response to the need for better communication during cooperative activities.

    3. Enhanced Cognitive and Motor Skills: Adaptation to Varied Environments (Approx. 2 million years ago)

    Around 2 million years ago, the Homo lineage underwent significant developments in brain size, cognitive abilities, and motor skills. Enhanced cognitive and motor capabilities allowed for more intricate coordination and complex motor planning necessary for cooperative activities, setting the stage for the further development of language-related brain regions.

    4. Emergence of Basic Language Elements and Chomsky's Universal Grammar (Approx. 1.5 million years ago)

    As Homo species faced complex cooperative tasks, basic language elements and rudimentary grammar started to emerge. Chomsky's universal grammar, a theoretical construct proposing inherent grammatical structures in the human brain, played a role in shaping the fundamental structure of early language.

    5. Broca's Area Specialization: Language Production and Planning (Approx. 1 million years ago)

    Around a million years ago, Homo species faced increasingly complex cooperative tasks that demanded precise planning and articulation of intentions. Broca's area began to specialize, enabling the production of structured language and grammatical rules, surpassing the communication capabilities of other primates.

    6. Wernicke's Area Development: Language Comprehension and Understanding Intentions (Approx. 500,000 years ago)

    Approximately 500,000 years ago, as cooperative tasks and cultural activities became more intricate, Wernicke's area in Homo sapiens specialized further to interpret nuanced meanings, understand shared intentions, and process an expanding vocabulary associated with complex cooperative tasks and cultural nuances.

    7. Development of Self-Talk and Internalized Language (Approx. 100,000 - 50,000 years ago)

    As Homo sapiens evolved, the ability to engage in self-talk and internalized language emerged. This capacity allowed for complex thought processes, reflection, and the development of abstract concepts, further enhancing communication and planning for complex cooperative endeavors.

    8. Language Explosion and Cultural Transmission: A Distinctive Human Trait (Approx. 70,000 - 50,000 years ago)

    Around 70,000 to 50,000 years ago, a significant leap in linguistic complexity occurred. Language exploded in its richness and complexity, enabling abstract thought, storytelling, and the transmission of culture across generations. Michael Tomasello's theory of shared intentionality played a crucial role during this phase, emphasizing the evolution of cooperation and communication, further enhancing the unique linguistic and cultural abilities of Homo sapiens.
  • Wayfarer
    20.8k
    First, I'll note that Schopenhauer1's comment above was in response to an extract of a précis I posted of Mary Midgley's book, Evolution as Religion, in the thread Mind-Created World. Details of Midgley's book can be found here.

    doesn't it seem plausible that only animals have the capacity for "qualities" (experience, a point of view)? And even amongst animals, doesn't it seem plausible that only animals with some form of nervous system have this capacity for qualities/experience/point of view?schopenhauer1

    I'm attracted to the philosophical idea that the emergence of even the simplest organisms is in some sense the appearance of intentionality as a mode of being. As Thomas Nagel argues,
    The physical sciences can describe organisms like ourselves as parts of the objective spatio-temporal order – our structure and behavior in space and time – but they cannot describe the subjective experiences of such organisms or how the world appears to their different particular points of view. There can be a purely physical description of the neurophysiological processes that give rise to an experience, and also of the physical behavior that is typically associated with it, but such a description, however complete, will leave out the subjective essence of the experience – how it is from the point of view of its subject — without which it would not be a conscious experience at all.

    I suggest that the 'subjective essence of experience' is one of the connotations of the term 'being' when used as a noun - that 'a being' is precisely the kind of entity that possesses the element of subjectivity, even if in rudimentary form. This is the point at which qualities of being, a.k.a. qualia, start to become manifest.

    (There's a paper by Evan Thompson on something similar to this idea, I'll see if I can find it and report back.)
  • kudos
    374
    Also, not all evolutionary theories are "just so", per se, but descriptive. A "just so" story might be something like, "Our ancestors' propensity for favoring the strongest alpha male is why we have a strong tendency towards fascism". But a theory that describes how language evolved in humans by examining various models that fit the evidence from artifacts, brain development and anatomy, developmental psychology, etc. might be a legitimately descriptive theory?

    For me, the 'just so' is a product of both the orator and listener. It is our quickness to accept scientific statements along with their baggage that is the particular quality that makes them attractive as carriers for non-scientific ideology. More often than not, it seems to be a kind of positivity bias, a juxtaposition of scientific imagery that presents itself in a language of form. Primates used logic to express immediate need. This led to further development of x part of the brain that mediates language, etc. Statements like this carry baggage, like for instance the idea of the individual narrative, the modern self, and the accidental. These are not particular to the quality of the science, but inherited by the form of the storytelling.
  • schopenhauer1
    10k
    I suggest that the 'subjective essence of experience' is one of the connotations of the term 'being' when used as a noun - that 'a being' is precisely the kind of entity that possesses the element of subjectivity, even if in rudimentary form. This is the point at which qualities of being, a.k.a. qualia, start to become manifest.Wayfarer

    Sure, but precisely when that being begins is the hard part. Is intentionality really qualitative? I would say it's a good contender for a point of view, but not necessarily qualities. An amoeba has reactions, but not intentions it would seem. Sponges have the most basic neural nets, but are insufficient for intention. Perhaps the most basic experience is found in the jellyfish and the worm as they perhaps move towards light and chemicals, though that gets murky between experience and stimuli. Perhaps we would have to start at mollusks or arthropods or insects for the first real experiences. But then what is the differentiation here?

    As you state with your quote, physical descriptions can only capture behaviors and morphology, not internal subjectivity. So I can imagine an answer being something like "There has to be differentiation enough in the neural networks, such as to specialize and feedback to itself". But that is all descriptive and doesn't seem to explain why that is subjective, and the causal loop is closed off still.
  • Wayfarer
    20.8k
    An amoeba has reactions, but not intentions it would seem.schopenhauer1

    Not what we understand as 'conscious' intention, but they can learn. I also agree that borderline creatures, like sponges and jellies, don't meaningfully manifest much in the way of intentional action, but even so, many quasi-intentional behaviours can be observed at the level of the organic molecules that comprise them. (Take a look at From Physical Causes to Organisms of Meaning, by Steve Talbott.)
  • schopenhauer1
    10k
    Primates used logic to express immediate need. This led to further development of x part of the brain that mediates language, etc. Statements like this carry baggage, like for instance the idea of the individual narrative, the modern self, and the accidental. These are not particular to the quality of the science, but inherited by the form of the storytelling.kudos

    Indeed, I'd agree here. Evolutionary biology and anthropology can be a form of storytelling. You are making inferences that don't necessarily connect in the way a physics experiment might, for example. But even physics has its storytelling aspect. For example, what explains various paradoxes in quantum mechanics? There are various theories telling that story. Granted, biology, as far as we know, has many more pitfalls of multiple causation due to the complexity of organisms, environment, and history, but there are some models that seem to do a better job at organizing the data into a coherent understanding than others. There is getting data and there are theories that interpret that data.

    If any value is to come out of the sciences other than technology, it would be getting a better synthesis of what could have happened, or what is the case, with regard to nature, based on the evidence we have, and honing that or creating a better interpretation. This endeavor is not likely to end in some absolute consensus of interpretation any time soon, however.
  • wonderer1
    1.7k
    If any value is to come out of the sciences other than technology, it would be getting a better synthesis of what could have happened, or what is the case, with regard to nature, based on the evidence we have, and honing that or creating a better interpretation. This endeavor is not likely to end in some absolute consensus of interpretation any time soon, however.schopenhauer1

    I'm inclined to think gaining better understanding of our own natures would be more beneficial than more accurate understanding of our history, although the latter would surely contribute to the former.
  • schopenhauer1
    10k
    I'm inclined to think gaining better understanding of our own natures would be more beneficial than more accurate understanding of our history, although the latter would surely contribute to the former.wonderer1

    :up:
  • Agree-to-Disagree
    407
    Just to be the devil's advocate here, but doesn't it seem plausible that only animals have the capacity for "qualities" (experience, a point of view)?schopenhauer1

    A plant (e.g. a tree) can produce toxic chemicals as a response to being eaten. Isn't this an example of experience?

    Some of these trees can communicate with nearby trees and pass on the message about the risk of being eaten. The nearby trees produce the toxic chemical even though they haven't been eaten yet.

    Doesn't a Venus flytrap show some of the qualities of an animal carnivore? It just can't walk around. :grin:
    There are some fish and animals which work in a similar way to a Venus flytrap. They lure the "food" close and then eat it.
  • Count Timothy von Icarus
    2k
    I think it's a bit of an historical accident that evolutionary biology has become so tied to battles over religion. Basically, you had support for evolution and its popularization firming up around the same time that people began to notice some deep problems with the conception of an eternal universe paired with snowballing evidence for our universe having started to exist at some point in the (relatively) recent past. 14 billion years ain't much compared to infinity after all.

    Evolution was seen as a silver bullet to put down creationist dogmas, and because creationists reacted poorly to the building support for evolution, trying to ban it from schools, etc., the two issues became tied together. But then evidence for a "starting point" for the universe was also seen as a big win by proponents of a "first cause," or "prime mover." Popular atheistic opinion had been in favor of an eternal universe to that point precisely because a starting point reeked of God. But aside from evidence for the Big Bang, problems like Boltzmann Brains began to crop up for the eternal universe.

    And so the ideas seem to have become tied at the hip. The idea that evolution was a silver bullet for religion is born partly out of the religious reaction to evolutionary theory, partly because it had to become the silver bullet now that first cause was back in the popular mind. You see this today. Theists want to talk about the Fine Tuning Problem, the Cosmological Argument, Cosmic Inflation, etc. and militant atheists, your Dawkinses, etc. want to talk about natural selection.

    IMO, these are completely contingent relations, and neither field has any special relation to evidence for or against a God. Plenty of theists have made their peace with evolution, and evolution even seems to pair quite nicely with some views of God as unfolding dialectically, or views of natural teleology. But since evolution was historically a battleground over religion, it has remained one by inertia. And this is why we see "evolution as religion." It is supposed, dogmatically IMO, that any theory of evolution necessitates that evolution occurs through "blind random chance" and thus it seems to preclude the possibility of purpose, cutting the legs out from most religious claims.

    I agree this is a powerful force in modern science/scientism. Neurodarwinism was largely attacked, not because the processes it described weren't isomorphic to the process of natural selection, and not because it lacked predictive power, but precisely because "fire together, wire together and neuronal pruning are inherently interlinked with intentionality," and "evolution simply cannot admit intentionality." And you see this in similar arguments over whether lymphocyte production is "natural selection," whether genetic algorithms fail to mimic real selection because they "have a purpose," whether lab-grown "DNA computers" actually "compute," and in Extended Evolutionary Synthesis.

    I don't see an explanation for the strength of the dogma except for the "religion-like" elements of how evolution has been used re: scientism. Maybe there is something I'm missing, but selection processes seem like they could involve intentionality or not and still be largely the same sort of thing.
  • kudos
    374
    If any value is to come out of the sciences other than technology, it would be getting a better synthesis of what could have happened, or what is the case, with regard to nature, based on the evidence we have, and honing that or creating a better interpretation.

    Yes, but you have pulled a switcheroo on the word 'value,' which is here supposed to mean 'applications to.' We're not talking about science as having any value beyond analytic and synthetic proposals that convey the essence of a thing. They are not going to be the key that unlocks reason, consciousness, the meaning of life, or any other glossy-eyed delusions.
  • wonderer1
    1.7k
    I think it's a bit of an historical accident that evolutionary biology has become so tied to battles over religion.Count Timothy von Icarus

    It looks to me like a historical inevitability. Religions tell stories that our relatively uninformed ancestors came up with, to explain the nature of ourselves. Scientific investigation into the nature of ourselves yields something quite different. A lot of people like those old stories a lot better than they think they would like the view from a scientifically informed perspective.
  • baker
    5.6k
    Religions tell stories that our relatively uninformed ancestors came up with, to explain the nature of ourselves.wonderer1

    People keep saying things like this. Where's the evidence that they really made up those stories, and for those stated purposes?
  • wonderer1
    1.7k
    People keep saying things like this. Where's the evidence that they really made up those stories, and for those stated purposes?baker

    The evidence is in the multitude of different mutually contradictory stories. They can all be wrong, but they can't all be right.

    How implausible the stories are is evidence for them being a product of relatively uninformed thinkers.

    I can see how you might have interpreted me as suggesting that the original story tellers told their stories for religion's purposes. That isn't what I intended to convey, so let me try to clarify. I probably should have put "that our relatively uninformed ancestors came up with" in parentheses. Religions (communities of religious followers) propagate claims about the nature of ourselves which are based on stories that the religion originating story tellers told.

    What religion doesn't make claims about what we are?
  • baker
    5.6k
    The evidence is in the multitude of different mutually contradictory stories. They can all be wrong, but they can't all be right.wonderer1
    That's assuming that those stories were invented (?) for the purposes that you claim. How do you know they were invented for those purposes?

    How implausible the stories are is evidence for them being a product of relatively uninformed thinkers.
    Again, that's assuming the purpose you ascribe to them is the true and relevant one.

    Religions (communities of religious followers) propagate claims about the nature of ourselves which are based on stories that the religion originating story tellers told.

    What religion doesn't make claims about what we are?
    Of course. Has it ever occurred to you that those stories, even when they are in the form of descriptions or explanations, are actually instructions, statements of the norms of the particular communities that told those stories?
  • wonderer1
    1.7k
    That's assuming that those stories were invented (?) for the purposes that you claim.baker

    No, I tried to make clear that I'm not assuming the original story tellers had such a purpose, and that I recognize a difference between the purpose of the original storytellers and the way religions make use of the stories.

    Has it ever occurred to you that those stories, even when they are in the form of descriptions or explanations, are actually instructions, statements of the norms of the particular communities that told those stories?baker

    Sure. I was a member of such a community when I was young. These days I recommend avoiding such a parochial view. There is a much better-evidenced basis for understanding our natures available to us these days.
  • baker
    5.6k
    We seem to be talking past each other.

    I'm saying that I don't think religious narratives are meant for us to "understand" ourselves, but for us to become a particular type of people. Religions are all about how one *should* be. (Whatever narratives religions have about who we are and where we came from are in the service of how we should be.)
  • EricH
    581
    Here's a very pertinent article which hit my news feed a few days ago. Perhaps this is old news, but it's the first I'm hearing about this.

    https://www.theguardian.com/science/2023/oct/16/survival-of-the-fittest-may-also-apply-to-the-nonliving-report-finds
  • Janus
    15.5k
    Is not "knowing thyself" the first step to becoming something other than what you already are? I mean, you could merely pay lip service to an imposed injunction, but that would not count as a real change, merely an act of self-repression designed to make you appear to others (and perhaps to yourself) to be living up to some introjected ideal. It would only be by understanding or knowing yourself that you would be able to tell the difference.
  • schopenhauer1
    10k
    Yes, but you have pulled a switcheroo on the word 'value,' which is here supposed to mean 'applications to.' We're not talking about science as having any value beyond analytic and synthetic proposals that convey the essence of a thing. They are not going to be the key that unlocks reason, consciousness, the meaning of life, or any other glossy-eyed delusions.kudos

    I'm not sure how I pulled a switcheroo; 'application to' is what I meant.

    That being said, I proposed focusing on neurofauna in biology as to where the dividing line is between the behavioral and the mental. What's the fundamental difference between the absent POV of a sponge and the (perhaps) present POV of a jellyfish or worm?
  • kudos
    374
    In the end, doesn't the sponge have just as much to do with consciousness and mentality? Of course the sponge can't have a point of view, if what you mean by that is a mental 'map' of its own conscious life. But I might suggest that the sponge still could be said to have concrete being 'for itself.' Even in terms of its atomic structure, if you want to dabble in the scientific, it is built in such a way as to cohere itself and have a unified being that is continually representing its essential qualities. I might go as far as saying that it might not be possible to talk about mind or spirituality without considering matter not purely in content but also as a whole.
  • Alkis Piskas
    2.1k

    I launched a discussion under the title "'Survival of the Fittest': Its meaning and its implications for our life" about 8 months ago.
    (See https://thephilosophyforum.com/discussion/14045/survival-of-the-fittest-its-meaning-and-its-implications-for-our-life/p1.) You can find my position on the subject there, together with those of other members.
  • schopenhauer1
    10k
    In the end, doesn't the sponge have just as much to do with consciousness and mentality? Of course the sponge can't have a point of view, if what you mean by that is a mental 'map' of its own conscious life. But I might suggest that the sponge still could be said to have concrete being 'for itself.' Even in terms of its atomic structure, if you want to dabble in the scientific, it is built in such a way as to cohere itself and have a unified being that is continually representing its essential qualities. I might go as far as saying that it might not be possible to talk about mind or spirituality without considering matter not purely in content but also as a whole.kudos

    So the big deal I see is that sponges have very basic neural networks that most scientists agree are behavioral but without a mental representation of the world. However, with animals like jellyfish, worms, and insects, the neural nets equate to a mental representation (however basic) of the world. My challenge is to understand what this fundamental difference between the two is. That right there is the essence of the origins of the hard problem of consciousness. However, this seems like an impossible question. It would seem, on the surface, that there shouldn't be any qualitative difference whereby on one side of the divide a certain number of neurons means no mental representation and on the other side, it does. What does that even mean?
  • kudos
    374
    My challenge is to understand what this fundamental difference between the two is.

    If this is the aim of your work, an excellent topic. You are onto something here...

    That right there is the essence of the origins of the hard problem of consciousness.

    This speculation is where a problem of logical extension is occurring. The essence of consciousness may or may not include other components than simple rationality and functional neural networks; if a computer program could read its own code, would it totally understand from that its own place in the world as a computer program? You are leveraging this Darwinian outlook to claim that a hypothesis (that it rests on simple content) has already been fulfilled. It has now become ideology and is no longer the scientific inquiry front that it was formerly impersonating. It works because you have made no scientific assumptions, but have included ontological ones instead, that do not affect the structure of the synthetic propositions outlined.
  • Gnomon
    3.5k
    I suggest that the 'subjective essence of experience' is one of the connotations of the term 'being' when used as a noun - that 'a being' is precisely the kind of entity that possesses the element of subjectivity, even if in rudimentary form. This is the point at which qualities of being, a.k.a. qualia, start to become manifest.Wayfarer
    Wow! That is a deep philosophical insight. But, like all philosophical intuitions, it may not convince those who require physical evidence. Could subjectivity be evolutionarily associated with some physical development, like Broca's bit of brain? Seriously, I'm just kidding. :joke:
  • Fooloso4
    5.5k
    One implication is the rejection of "kinds" in favor of degrees of difference.
  • schopenhauer1
    10k
    If this is the aim of your work, an excellent topic. You are onto something here...kudos

    :up:

    This speculation is where a problem of logical extension is occurring. The essence of consciousness may or may not include other components than simple rationality and functional neural networkskudos

    The kind of consciousness I am talking about wouldn't necessitate "rationality" but some sort of "awareness" of the environment, something akin to a "point of view" or "something it is like to be something".

    if a computer program could read its own code, would it totally understand from that its own place in the world as a computer program?kudos

    I guess it is always the debate between map and terrain here. A computer program can behave any number of ways, but it would only be conscious if it had some sort of "something it is likeness" that it "felt".

    It has now become ideology and is no longer the scientific inquiry front that it was formerly impersonating. It works because you have made no scientific assumptions, but have included ontological ones instead, that do not affect the structure of the synthetic propositions outlined.kudos

    Not sure what you are accusing me of here. But here is an article discussing the scientific propositions (at a high level for a broad audience, but based on harder scientific studies):

    The arthropod eye, on the other hand, has one of the best-studied examples of selective signal enhancement. It sharpens the signals related to visual edges and suppresses other visual signals, generating an outline sketch of the world. Selective enhancement therefore probably evolved sometime between hydras and arthropods—between about 700 and 600 million years ago, close to the beginning of complex, multicellular life. Selective signal enhancement is so primitive that it doesn’t even require a central brain. The eye, the network of touch sensors on the body, and the auditory system can each have their own local versions of attention focusing on a few select signals.


    The next evolutionary advance was a centralized controller for attention that could coordinate among all senses. In many animals, that central controller is a brain area called the tectum. (Tectum means roof in Latin, and it often covers the top of the brain.) It coordinates something called overt attention—aiming the satellite dishes of the eyes, ears, and nose toward anything important.

    All vertebrates—fish, reptiles, birds, and mammals—have a tectum. Even lampreys have one, and they appeared so early in evolution that they don’t even have a lower jaw. But as far as anyone knows, the tectum is absent from all invertebrates. The fact that vertebrates have it and invertebrates don’t allows us to bracket its evolution. According to fossil and genetic evidence, vertebrates evolved around 520 million years ago. The tectum and the central control of attention probably evolved around then, during the so-called Cambrian Explosion when vertebrates were tiny wriggling creatures competing with a vast range of invertebrates in the sea.

    The tectum is a beautiful piece of engineering. To control the head and the eyes efficiently, it constructs something called an internal model, a feature well known to engineers. An internal model is a simulation that keeps track of whatever is being controlled and allows for predictions and planning. The tectum’s internal model is a set of information encoded in the complex pattern of activity of the neurons. That information simulates the current state of the eyes, head, and other major body parts, making predictions about how these body parts will move next and about the consequences of their movement. For example, if you move your eyes to the right, the visual world should shift across your retinas to the left in a predictable way. The tectum compares the predicted visual signals to the actual visual input, to make sure that your movements are going as planned. These computations are extraordinarily complex and yet well worth the extra energy for the benefit to movement control. In fish and amphibians, the tectum is the pinnacle of sophistication and the largest part of the brain. A frog has a pretty good simulation of itself.
    From "A New Theory Explains How Consciousness Evolved: A neuroscientist on how we came to be aware of ourselves", by Michael Graziano

    But here in this article we see an example of something I pointed out in a previous thread regarding the mixing of "mental" and "physical" such that there is a "hidden dualism". Notice in that last paragraph the following examples of this switching back and forth (without explanation of how one goes to the other):

    An internal model is a simulation that keeps track of whatever is being controlled and allows for predictions and planning.
    But what is the "simulation" here? What is that? (the HARD PROBLEM).

    And here we see "internal model" is a "simulation" that "keeps track of whatever is being controlled, etc.". But wait, we skipped the good part. How is it that the neurons are connected to (in fact, are the same as) the internal model/simulation?

    The tectum’s internal model is a set of information encoded in the complex pattern of activity of the neurons. That information simulates the current state of the eyes, head, and other major body parts, making predictions about how these body parts will move next and about the consequences of their movement.

    What is this "information encoded" "in the complex pattern of the neurons"? That seems like a nice little homunculus.
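    For what it's worth, the "internal model" the article invokes is essentially a predict-and-compare loop of the kind control engineers build. Below is a minimal, purely illustrative sketch (hypothetical names, a toy one-dimensional eye-movement case, not Graziano's actual model). Everything in it is behavioral/functional description, which is exactly my point: nothing in the loop says why running it should feel like anything.

    # Purely illustrative sketch of a "predict and compare" internal model.
    # All names and numbers are hypothetical; this is a toy 1-D case, not a
    # claim about how the tectum actually implements its model.

    def predict_retinal_shift(eye_movement_deg: float) -> float:
        """Predict how the image should shift when the eye moves.

        Toy assumption: moving the eye right by x degrees shifts the image
        left across the retina by the same x degrees.
        """
        return -eye_movement_deg

    def prediction_error(eye_movement_deg: float, actual_shift_deg: float) -> float:
        """One step of the predict-compare loop: actual shift vs. predicted shift."""
        return actual_shift_deg - predict_retinal_shift(eye_movement_deg)

    # Example: the eyes move 5 degrees right; the image is measured to shift
    # 4.5 degrees left. The non-zero error can be used to correct the movement.
    print(f"prediction error: {prediction_error(5.0, -4.5):+.1f} degrees")

    A "simulation" in this sense is just that: a function producing expected signals to be compared against incoming ones. Why any of that should be accompanied by experience is left untouched.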
  • schopenhauer1
    10k
    One implication is the rejection of "kinds" in favor of degrees of difference.Fooloso4

    You can have degrees of computer programming that give you really good responses, but as good as those outputs are, the program may never truly be conscious.
  • Fooloso4
    5.5k


    I am agnostic as to whether AI will ever be conscious. It was not too long ago that it was generally believed that a computer program and associated hardware could not pilot a car. Such a thing was thought to require consciousness.
  • wonderer1
    1.7k
    So the big deal I see is that sponges have very basic neural networks that most scientists agree are behavioral but without a mental representation of the world. However, with animals like jellyfish, worms, and insects, the neural nets equate to a mental representation (however basic) of the world. My challenge is to understand what this fundamental difference between the two is. That right there is the essence of the origins of the hard problem of consciousness. However, this seems like an impossible question. It would seem, on the surface, that there shouldn't be any qualitative difference whereby on one side of the divide a certain number of neurons means no mental representation and on the other side, it does. What does that even mean?schopenhauer1

    This 2021 article says that sponges don't have neurons but do have cells that may have some neuron-like functionality. However, the investigation is very preliminary.

    Also, it is an open question as to what extent very simple creatures like worms might achieve a rudimentary mental representation. Neurons can automate behavior without mental representation and I'm skeptical towards the idea that worms (or jellyfish) have even the most rudimentary mental representations. (Although projects like Open Worm may eventually provide evidence one way or another.)

    Sheer quantity of neurons matters. Quantity of neurons plays a significant role in how complex the interconnections between neurons can be. It is (very crudely) analogous to the way that a higher transistor count in a microprocessor can allow for more complex calculations performed within a given unit of time. With 'surplus' neurons available, an organism can have neurons which aren't directly involved with getting from sensory input to behavioral output. A network of 'surplus' neurons can sit alongside the neurons which manage basic survival, and instead of monitoring sensory inputs or participating in causing motor responses, the surplus network can monitor both the outputs of sensory neurons and motor neurons and learn about patterns in the organism's own operation that the more primitive I/O networks are not able to learn.

    So this higher level monitoring might recognize something like, 'My automatic response the last time I saw something like that was to eat it, but the result was bad,' and manage to interfere with the behavioral output so as to avoid a recurrence of such a bad event.

    I'd suggest that neurons available to learn a more complex way of interacting with the world are a prerequisite to mental representation. The more such 'surplus' neurons there are in a brain the more complex the mental representation can be.
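    To make the 'surplus network' picture concrete, here is a toy sketch. It is illustrative only; the class names and the crude veto mechanism are inventions for the example, not claims about real neurons. A primitive stimulus-response network handles behavior directly, while a separate monitoring layer watches its inputs and outputs, remembers which pairings led to bad outcomes, and interferes with the reflex next time.

    # Toy sketch of a reflex network plus a 'surplus' monitoring layer.
    # Everything here is hypothetical and vastly simpler than a real nervous system.

    class ReflexNetwork:
        """Primitive I/O network: maps a stimulus straight to a response."""

        def respond(self, stimulus: str) -> str:
            return "eat" if stimulus == "small_moving_thing" else "ignore"

    class SurplusMonitor:
        """Monitoring layer: observes the reflex and learns to suppress it."""

        def __init__(self) -> None:
            self.bad_pairs = set()

        def record_outcome(self, stimulus: str, response: str, was_bad: bool) -> None:
            if was_bad:
                self.bad_pairs.add((stimulus, response))

        def filter(self, stimulus: str, proposed_response: str) -> str:
            # Veto a response that previously led to a bad outcome.
            if (stimulus, proposed_response) in self.bad_pairs:
                return "avoid"
            return proposed_response

    reflex = ReflexNetwork()
    monitor = SurplusMonitor()

    # First encounter: the reflex fires and the outcome turns out to be bad.
    first = reflex.respond("small_moving_thing")
    monitor.record_outcome("small_moving_thing", first, was_bad=True)

    # Second encounter: the monitoring layer interferes with the reflex.
    second = monitor.filter("small_moving_thing", reflex.respond("small_moving_thing"))
    print(first, second)  # eat avoid

    The more such 'surplus' capacity there is, the richer the patterns about the organism's own operation that can be learned, which is the sense in which sheer quantity of neurons matters here.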