• Gnomon
    3.8k
    The Meta-management Theory of Consciousness uses the computational metaphor of cognition to provide an explanation for access consciousness, and by doing so explains some aspects of the phenomenology of consciousness.Malcolm Lett
    As I said before, I'm not qualified to comment on your theory in a technical sense. So, I'll just mention some possible parallels with an article in the current Scientific American magazine (04/24) entitled : A Truly Intelligent Machine. George Musser, the author, doesn't use the term "meta-management", but the discussion seems to be saying that Intelligence is more than (meta-) information processing. For example, "to do simple or rehearsed tasks, the brain can run on autopilot, but novel or complicated ones --- those beyond the scope of a single module --- require us to be aware of what we are doing". In a large complex organization, such supervision --- etymology, to see from above (meta-) --- is the role of upper management. And the ultimate decision-maker, the big-boss, is a Meta-Manager : a manager of managers.

    Musser notes that, "consciousness is a scarce resource", as is the supervising time of the big boss, who can't be bothered with low-level details. Later, he says, "intelligence is, if anything, the selective neglect of detail", which may be related to your item (3) Limited Access. So, researchers are advised to "go back to the unfashionable technology of 'discriminative' neural networks", which may get back to your item (1) Intentionality. Intentional behavior requires discrimination between inputs & outputs : incoming low-level data and executive actions. After "all those irrelevant details are eliminated", the manager can focus on what's most important.

    The article refers to a "key feature" of GWT (Global Workspace Theory) as a "configurator to coordinate the modules and determine the workflow". Again, the "Configurator" or optimizer or designer seems to be a high-level management position. That role also seems to require "self-monitoring". The GWT expert speculates that "consciousness is the working of the configurator". Musser notes that, "those capacities . . . aren't relevant to the kinds of problems that AI is typically applied to". So, the GWT guy adds, "you have to have an autonomous agent with a real mind and a control structure for it" Such executive agency also requires the power to command, which your item (2) calls "causality", the influence determining subsequent effects.

    Neuroscientist Anil Seth makes an important philosophical observation : "Consciousness is not a matter of being smart, it's equally a matter of being alive". And that makes the "hard problem" of creating consciousness even harder. Perhaps requiring divine powers. Or a bolt of lightning : "it's alive!!!" :joke:
  • Malcolm Lett
    76
    Sorry for my tardiness in responding.

    I think Metzinger's views are very plausible. Indeed his views on the self as a transparent ego tunnel at once enabling and limiting our exposure to reality and creating a world model is no doubt basically true. But as the article mentions, it's unclear how this resolves the hard problem. There is offered a reason why (evolutionarily speaking) phenomenality emerged but not a how. The self can be functionally specified, but not consciousnessbert1

    I think that just getting some clarity about the functional aspects of consciousness would be a huge leap forwards, regardless of whether they explain the phenomenal aspects. I'm regularly frustrated at various discussions that necessarily go off into hypotheticals because they have nothing agreed upon to ground the discussion. For example, if you're trying to understand the phenomenality of consciousness, but you don't have an agreed reason for why the functional aspects of it exist, or what they do, then you are at a loss as to where to define the scope of what is and isn't consciousness -- a classic case is Block's seminal paper that tries to distinguish between access and phenomenal consciousness. His arguments about P-Cs existing without A-Cs, or vice versa, can only be made because we don't have clear boundaries of what consciousness is.

    My point is that work of the sort Metzinger and I are doing, if we could find some way to test the theories and pin down the details, would help a lot to define those boundaries at the physical and functional level. Then we'd be in a better position to figure out what P-Cs is.

    ..wonder what you make of Thomas Metzinger's self-model of subjectivity (SMS) which, if you're unfamiliar with it, is summarized in the article (linked below) with an extensive review of his The Ego Tunnel.180 Proof

    I'm not familiar with his theory. I've just watched the TED talk video so far. The basic idea of us developing a self-model, developing models of the world, and seeing the world through those models is precisely what I'm basing my theory on. It's also the same idea put forward by Donald Hoffman's User Interface Theory of Perception. I'll read his theory more fully though - the "tunnel" analogy is interesting. Also interesting is his suggestion that the processes that take our raw perceptions and turn them into our modelled interpretation of the world are "too fast" for us to analyse (introspectively).
  • bert1
    2k
    It's also the same idea put forward by Donald Hoffman's User Interface Theory of Perception.Malcolm Lett

    Yes I think that's right, the two seem very similar in terms of the functional story. But their claims about consciousness seem very different (but I haven't studied either properly - these are just first impressions). Contrasting panpsychism with conscious realism is interesting, and something I haven't thought about enough.
  • RogueAI
    2.9k
    Kastrup argues that a computer running a simulation of a working kidney will not pee on his desk, so why would we expect a simulation of a working brain to be conscious?
  • Malcolm Lett
    76
    Lol. It's a funny argument. Too simplistic, but might have some use.

    Just out of interest, I'll have a go.
    So, let's say that this kidney simulation is a 100% accurate model of a real kidney, down to the level of, say, molecules. And that this kidney simulation has a rudimentary simulation of its context operating in a body, so that if the simulated kidney were to pee, it could. In this example, the kidney would indeed pee, not on his desk, but inside the simulation.

    If we take as an assumption (for the sake of this thought experiment) that consciousness is entirely physical, then we can do the same thing with a conscious brain. This time simulate the brain to the molecular level, and again provide it some rudimentary body context so that the simulated brain thinks it's operating inside a body with eyes, ears, hands, etc. Logically, this simulation thus simulates consciousness in the brain. That's not to say that the simulated brain is conscious in a real-world sense, but that it is genuinely conscious in its simulated world.

    The question is what that means.

    Perhaps a simulation is nothing but a data structure in the computer's memory. So there is nothing that it "feels like" to be this simulated brain -- even though there is a simulation of the "feels like" nature.

    Alternatively, David Chalmers has a whole book arguing that simulations of reality are themselves reality. On that basis, the simulation of the brain and its "feels like" nature are indeed reality -- and the simulated brain is indeed conscious.

    A third argument appeals to a different analysis. Neurons can be said to simulate mind states, in much the same way that a computer simulation of a brain would. I'm appealing to the layered nature of reality. No single neuron is a mind, and yet the collection of billions of neurons somehow creates a mind (again, I'm assuming physicalism here). Likewise, neurons are not single discrete things, but collections of molecules held together by various electromagnetic forces. Trillions of molecules are floating through space in the brain, with arbitrary interaction-based grouping creating what we think of as object boundaries - constructing what we call neurons, glial cells, microtubules, etc. These molecules "simulate" neurons etc. In all of that, there is no such thing as a "mind" or "consciousness" as any kind of object in the "real world". Those things exist as simulations generated by all these free-floating molecule-based simulations of neural-networks. Thus, the computer simulation of a conscious mind is no more or less real than a molecular simulation of a mind.
  • Malcolm Lett
    76
    Just finished reading the review of the Ego Tunnel (https://naturalism.org/resources/book-reviews/consciousness-revolutions). I don't have much of significance to add, but a couple of minor thoughts.

    At the level of the review summary of Metzinger's work, there's not a lot that's unique compared to what various others have written about. It's becoming a well-worn narration of our current neuroscientific understanding. That's not to say that his particular telling of the story isn't valuable, but I do feel a sense of frustration when I read something suggesting that it's one particular person's ideas when actually these are already ideas in the common domain.

    This is more of a reflection for myself. I decided at an early stage that it was going to be very convoluted to tell my particular narrative based on others' works because they all had extra connotations I wanted to avoid. I would have to spend just as much time talking about which parts of those other theories should be ignored in terms of my story. But I can see the frustration that that can create. I shall need to do better to be clear about which parts are novel.

    A second thought is about his description of the key evolutionary function of the phenomenal experience that he attributes to the self model. I suspect his books make a better case, but the review suggests he may have fallen into a common trap. The phenomenal and subjective natures of our experience are so pervasive that it can be hard to conceptually separate them from the functions that we're trying to talk about. He says that we need the experience of self to be attached to our perceptions in order to function. But does he lay out clearly why that is the case? It's not a given. A simple robot could be granted access to information that delineates its physical form from that of the environment without any of the self modelling. Even if it models the "self", it doesn't follow that it then "experiences" the self in any way like we do. I'm all for suggesting that there is a computational basis to that experience, but a) the mechanisms need to be explained, and b) the functional/evolutionary benefit for that extra step needs to be explained.

    That's what I've tried to do with the more low-level handling of the problem in MMT. Though even then there are some hand-wavy steps I'm not happy with.

    Metzinger's books look like an excellent read. Thank you for sharing the links.
  • RogueAI
    2.9k
    Just out of interest, I'll have a go.
    So, let's say that this kidney simulation is a 100% accurate model of a real kidney, down to the level of, say, molecules. And that this kidney simulation has a rudimentary simulation of its context operating in a body, so that if the simulated kidney were to pee, it could. In this example, the kidney would indeed pee, not on his desk, but inside the simulation.

    If we take as an assumption (for the sake of this thought experiment) that consciousness is entirely physical, then we can do the same thing with a conscious brain. This time simulate the brain to the molecular level, and again provide it some rudimentary body context so that the simulated brain thinks it's operating inside a body with eyes, ears, hands, etc. Logically, this simulation thus simulates consciousness in the brain. That's not to say that the simulated brain is conscious in a real-world sense, but that it is genuinely conscious in its simulated world.
    Malcolm Lett

    The problem here is that simulated urination/urine is not urine (Kastrup's point that the simulated kidney will never pee on his desk), so if simulated urine is not actual urine, simulated consciousness would not be actual consciousness.

    Also, you speak of "inside the simulation". Imagine you're running a simulation of a tornado. Then all the minds in the universe disappear, but the computer the simulation is running on is still active. With all the minds gone, is there still a simulation of a tornado going on? Or is it just a bunch of noise and pixels turning off and on? I think the latter, and this goes back to my point that any simulation is ultimately just a bunch of electric switches turning off and on in a certain way. It takes a mind to attach meaning to the output of those switching actions.
  • wonderer1
    2.2k
    That's not to say that his particular telling of the story isn't valuable, but I do feel a sense of frustration when I read something suggesting that it's one particular person's ideas when actually these are already ideas in the common domain.Malcolm Lett

    :up:

    I appreciate your frustration. Still, I appreciate that Metzinger is able to communicate this way of looking at things effectively.
  • Malcolm Lett
    76
    Also, you speak of "inside the simulation". Imagine you're running a simulation of a tornado. Then all the minds in the universe disappear, but the computer the simulation is running on is still active. With all the minds gone, is there still a simulation of a tornado going on? Or is it just a bunch of noise and pixels turning off and on? I think the latter, and this goes back to my point that any simulation is ultimately just a bunch of electric switches turning off and on in a certain way. It takes a mind to attach meaning to the output of those switching actions.RogueAI

    That's a matter of opinion. Your statement depends on the idea that consciousness is special in some way - beyond normal physics - and that it's our consciousness that creates meaning in the universe.

    An alternative view is that physicalism fully explains everything in the universe, including consciousness (even if we don't know how), and under that view the simulation of the tornado is no different with/without human consciousness. Semiotics explains that data representations have no meaning without something to interpret them. So a computer simulation of a tornado without something to interpret the result certainly would be lacking something - it would just be data noise without meaning. But the thing doing the meaning interpretation doesn't have to be a human consciousness. It could just as easily be another computer, or the same computer, that is smart enough to understand the need to do tornado simulations and to examine the results.

    The urination example is a nice "intuition pump" (as Dennett calls them), but like many intuition pumps it doesn't hold up against closer scrutiny. The point I was trying to make about conscious simulations is that it's not a given that there's a substantial difference between a simulation in a silicon computer versus a simulation in a molecular computer (aka biology). If you hold to the idea that consciousness is purely physical, then this argument doesn't seem so strange.

    I might be wrong, but I think most of us are intuitively looking at the "simulation of urination" example in a particular way: that the computer is running in our world -- let's call it the "primary world" -- and that the simulated world that contains either the simulated urination or the simulated consciousness is a world nested within our own -- so let's call it the "nested world". On first glance that seems quite reasonable. Certainly for urination. But on closer inspection, there's a flaw. While the simulated world itself is indeed nested within the primary world, the simulated urine is not nested within primary world urine. Likewise, the simulated consciousness is not nested within the primary world consciousness. Now, if you take my argument about molecules acting as a mind simulator, then the primary world consciousness in my brain is a sibling to the nested world consciousness.

    There's a tree of reality:
    1. The primary world
       2a. Molecular simulation engine
          3a. Biological conscious mind
       2b. Silicon simulation engine
          3b. Silicon conscious mind
    
  • RogueAI
    2.9k
    What do you think of this?
    https://xkcd.com/505/

    Is it possible to simulate consciousness by moving rocks around (or, as one of the members here claims, knocking over dominoes)?
  • wonderer1
    2.2k
    Is it possible to simulate consciousness by moving rocks around (or, as one of the members here claims, knocking over dominoes)?RogueAI

    Citation needed. Where has someone here claimed that consciousness can be simulated by knocking over dominoes?
  • RogueAI
    2.9k

    https://thephilosophyforum.com/discussion/comment/893885

    "Does anyone think a system of dominoes could be conscious? What I meant by a system of dominoes includes a machine that continually sets them up after they fall according to some program."

    oh well then, in principle... MAYBEflannel jesus

    What do you think, Wonderer? Could consciousness emerge from falling dominoes?
  • hypericin
    1.6k


    Incredible op!!! :sparkle: You have clearly poured a lot of thought into the matter, to great effect. I will read your blog post next, and your paper is on my list. I've had similar ideas, though not nearly at your level of depth and detail.

    I think where we most sharply differ is in the nature of deliberation.

    In your account, deliberation is something that chugs along on its own, without awareness. It is only via a meta management process that a predictive summation enters awareness.

    But this is at odds with my introspective account of deliberation. Deliberation itself seems to be explicitly phenomenal: I speak to my self (auditory), and this self talk is often accompanied by imagery (visual). The conscious brain seems to speak to itself in the same "language" that the unconscious brain speaks to it: the language of sensation.

    Is your idea that a phenomenon such as self talk is a model of the unconscious deliberation that is actually taking place? This does not seem to do justice to the power that words and images have as tools for enabling thought, not just in providing some sort of executive summary. Think of the theory that language evolved primarily not for communication but as a means of enabling thought as we know it. Can meta management explain the gargantuan cognitive leap we enjoy over our nearest animal neighbors?

    If deliberation is phenomenal, then there is no need for this meta management process. Deliberation enters awareness in a manner that is co-equal with the phenomenal models of the external world. If deliberation goes off the rails, then the executive brain can regulate it, since deliberation is at least partially voluntary.

    The evolutionary novelty enabling deliberation would be the ability of the executive brain to insert phenomenal models into its own inputs. This explains the relative feebleness of especially visual imagery: the same predictive modeling systems used by sensation are *not* reused by the executive brain. Rather, it (sometimes quite crudely) mimics them. Audio, being less information dense, is more amenable to this mimicry.

    Since the cost/benefit ratio of this change seems very favorable, we should expect at least crude deliberation to be widespread in nature. Adding language as a deliberative tool is where the real cognitive explosion happened.

    Here is a rough sketch of my alternative take. (I see I used "rumination" for "deliberation".)
    20240420-103139.jpg
  • Malcolm Lett
    76
    Thank you. Yes, I think I have poured way too much time into this.

    But this is at odds with my introspective account of deliberation. Deliberation itself seems to be explicitly phenomenalhypericin

    Sure. That is indeed a different take. I'm taking what I like to think of as a traditional scientific approach, otherwise known as a reductionist materialist approach. Like anyone in this field, I'm driven by a particular set of beliefs that rests on little more than intuition - my intuition is that reductive scientific methods can explain consciousness - and so a big motivation -- in fact one of the key drivers for me - is that I want to attempt to push the boundaries of what can be explained through that medium. So I explicitly avoid trying to explain phenomenology based on phenomenology.

    Is your idea that a phenomenon such as self talk is a model of the unconscious deliberation that is actually taking place? This does not seem to do justice to the power that words and images have as tools for enabling thought, not just in providing some sort of executive summary. Think of the theory that language evolved primarily not for communication but as a means of enabling thought as we know it. Can meta management explain the gargantuan cognitive leap we enjoy over our nearest animal neighbors?hypericin

    Meta-management is just the start - or more accurately an intermediary step somewhere along the way. The multi-modal internal representations that we use for thought - for deliberation - are an equally important part of intelligence and of the contents and depth of our conscious experience. So are the mechanisms for attentional control, memory, and all sorts of other key features of intelligence. As we know from evolution, these things tend to evolve together in a positive feedback loop. So I wouldn't say that my theory diminishes any of that, rather that it offers a theory of just one part.

    There's also the possibility that there are multiple feedback loops involved that operate under different regimes. For example, there's an interesting dichotomy in discussions related to various higher-order theories and other meta-cognitive ideas between first-order and second-order processes and representations. HOT for example proposes that consciousness is a second-order representation that is presumably additionally second-order processed. But then there are rebuttals by others suggesting that the effects of HOTs could be achieved through first-order mechanisms. MMT is perhaps a little peculiar in that it constructs a second-order representation (the summary step in the feedback loop), but then feeds it into first-order processing.

    In contrast, attention is a mechanism that also needs to monitor and manipulate cognitive processing. The neuroscientific literature at the moment I believe favours a first-order representation, but arguably attention is a second-order process.

    Well, I'm being a bit waffly, but the point I'm trying to make is that there's good reason to suspect that the brain includes multiple different kinds of macro-scale feedback loops that operate in different ways. I suspect a more complete theory would acknowledge that all of those different ways contribute to the final result.
    Since the cost/benefit ratio of this change seems very favorable, we should expect at least crude deliberation to be widespread in nature. Adding language as a deliberative tool is where the real cognitive explosion happened.hypericin

    There's a great 2023 book by Max Bennett, A Brief History of Intelligence. It lays out a very approachable story of how various aspects of human intelligence evolved, going all the way back to the first worms. It also mentions a number of theories of how language evolved and how it might be involved with thought.
  • hypericin
    1.6k
    my intuition is that reductive scientific methods can explain consciousness - and so a big motivation -- in fact one of the key drivers for me - is that I want to attempt to push the boundaries of what can be explained through that medium. So I explicitly avoid trying to explain phenomenology based on phenomenology.Malcolm Lett

    I absolutely agree with your intuition.

    Of course, there is a difference between explaining self-awareness and explaining phenomenology. I am trying to explain self-awareness, not phenomenology, with phenomenology. Your theory is clearly an explanation of self-awareness, much less clearly an explanation of phenomenology. As you say, you have an intuition of how it might partly explain it, but struggle to articulate it.

    So I wouldn't say that my theory diminishes any of that, rather that it offers a theory of just one part.Malcolm Lett

    My concern was that you were treating what we in the everyday sense term "deliberation", such as self talk, as epiphenomenal, as the "cognitive sense" corresponding to the real work happening behind the scenes. Was that a misunderstanding? Is self talk not the sort of deliberation you had in mind?
  • sime
    1.1k
    Sure. That is indeed a different take. I'm taking what I like to think of as a traditional scientific approach, otherwise known as a reductionist materialist approach. Like anyone in this field, I'm driven by a particular set of beliefs that rests on little more than intuition - my intuition is that reductive scientific methods can explain consciousness - and so a big motivation -- in fact one of the key drivers for me - is that I want to attempt to push the boundaries of what can be explained through that medium. So I explicitly avoid trying to explain phenomenology based on phenomenology.Malcolm Lett

    Consider the fact that traditional science doesn't permit scientific explanations to be represented or communicated in terms of indexicals, because indexicals do not convey public semantic content.

    Wittgenstein made the following remark in the Philosophical Investigations

    410. "I" is not the name of a person, nor "here" of a place, and
    "this" is not a name. But they are connected with names. Names are
    explained by means of them. It is also true that it is characteristic of
    physics not to use these words.

    So if we forbid ourselves from reducing the meaning of a scientific explanation to our private use of indexicals that have no publicly shareable semantic content, and if it is also assumed that phenomenological explanations must essentially rely upon the use of indexicals, then there is no logical possibility for a scientific explanation to make contact with phenomenology.

    The interesting thing about science education is that as students we are initially introduced to the meaning of scientific concepts via ostensive demonstrations, e.g. when the chemistry teacher teaches oxidation by means of heating a test tube with a Bunsen burner, saying "this here is oxidation". And yet a public interpretation of theoretical chemistry cannot employ indexicals for the sake of the theory being objective, with the paradoxical consequence that the ostensive demonstrations by which each of us were taught the subject cannot be part of the public meaning of theoretical chemistry.

    So if scientific explanations are to make contact with phenomenology, it would seem that one must interpret the entire enterprise of science in a solipsistic fashion as being semantically reducible to one's personal experiences... In which case, what is the point of a scientific explanation of consciousness in the first place?
  • Malcolm Lett
    76
    My concern was that you were treating what we in the everyday sense term "deliberation", such as self talk, as epiphenomenal, as the "cognitive sense" corresponding to the real work happening behind the scenes. Was that a misunderstanding? Is self talk not the sort of deliberation you had in mind?hypericin

    I'm not sure if you mean "epiphenomenal" in the same way that I understand it. The Cambridge dictionary defines epiphenomenal as "something that exists and can be seen, felt, etc. at the same time as another thing but is not related to it". More colloquially, I understand epiphenomenal to mean something that seems to exist and has a phenomenal nature, but has no causal power over the process that it is attached to. For either definition, is deliberation epiphenomenal? Absolutely not. I would place deliberation as the primary first-order functioning of the brain at the time that it is operating for rational thought. Meta-management is the secondary process that the brain performs in addition to its deliberation. Neither is epiphenomenal, as they both perform very important functions.

    Perhaps you are meaning to make the distinction between so-called "conscious and unconscious processing" in the form that is sometimes used in these discussions - being that the colloquial sense of deliberation you refer to is somehow phenomenally conscious in addition to the supposedly "unconscious" mechanical processes that either underlie that deliberation or are somehow associated. If that is the intention of your question, then I would have to start by saying that I find that distinction arbitrary and unhelpful.

    MMT claims that when first-order cognitive processes are attended to through the meta-management feedback loop + cognitive sense, then that first-order cognitive process becomes aware of its own processing. At any given instant in time, deliberative processes may occur with or without immediately subsequent self-focused attention co-occurring. Regardless, due to the complex and chaotic* nature of deliberation, moments of self-focused attention will occur regularly in order to guide the deliberative process. Thus, over time the cognitive system maintains awareness over its overall trajectory. The granularity of that awareness differs by the degree of familiarity and difficulty of the problem being undertaken.

    I claim that the self-focused attention and subsequent processing of its own cognitive state is what correlates with the phenomenal subjective experience. So all of those processes are conscious, to the degree of granularity of self-focused attention.

    * - by "chaotic" I mean that without some sort of controlling feedback process it would become dissociated from the needs of the organism; that the state trajectory would begin to float unconstrained through cognitive state space, and ultimately lead to poor outcomes.

    Does that answer your question?
  • Malcolm Lett
    76
    Cool XKCD episode, and a nice metaphor.

    https://xkcd.com/505/

    I've been thinking about this over the last couple of days. On first glance I found that it inspired an intuition that no computer simulation could possibly be anything more than just data in a data structure. But it goes further and inspires the intuition that a reductive physicalist reality would never be anything more either. I'm not sure if that's the point you were trying to make, but I wanted to delve deeper because I found it interesting.

    In the comic, Randall Munroe finds himself mysteriously ageless and immortal on a vast infinite expanse of sand and loose rocks. He decides to simulate the entire physical universe, down to sub-atomic particles. The implication is that this rock-based universe simulation includes simulation of an earth-like planet with human-like beings. Importantly, the rocks are inert. Randall must move them, according to the sub-atomic laws of physics that he has deduced during his endless time to ponder.

    So, importantly, the rocks are still just rocks. They have no innate meaning. It's only through Randall's mentality that the particular arrangement of rocks means anything.

    In our physical reality, we believe that the sub-atomic particles interact according to their own energetic forces - there is no "hand of god" that does all the moving around. But they operate exactly according to the laws of physics. In the thought experiment, Randall plays the personification of those laws. So it makes no material difference whether the subatomic particles move according to their energies in exact accordance with the laws of physics, or whether the "hand of god"/Randall does the exact same moving.

    The rock-based simulation of reality operates according to laws of physics that may be more complete than what we currently know but which we assume are entirely compatible with the dogma of reductive physicalism. According to that dogma, we believe that our real reality operates according to the same laws. Thus, the rock-based simulation and our reality are effectively the same thing. This leaves us with a conundrum - our intuition is that the rock simulation is just a bunch of inert objects placed in arbitrary and meaningless ways; it means nothing and carries no existence for any of the earths or humans that it supposedly simulates - at least not without some observer to interpret it. This means that the same applies to our reality. We're just a bunch of (effectively inert) subatomic particles arranged in arbitrary ways. It means nothing, unless something interprets it.

    This kind of reasoning is often used as the basis for various non-physicalist or non-reductive claims. For example, that consciousness is fundamental and universal, and that it is this universal consciousness that does the observing. Or perhaps that the true laws of physics contain something that is not compatible with reductive physicalism.

    These are both possibilities, and others alike.
    But we're now stuck. We're left with the most fundamental question - even more fundamental than consciousness itself: what is reality? And we have no way of identifying which competing hypotheses are more accurate than the others.

    At the end of the day, the rock simulation analogy is a good one for helping to identify the conundrum we face. But it isn't helpful in dispelling a belief in any one dogma, because all theories are open to uncertainty.

    For example, on the reductive physicalist side of the coin, the analogy doesn't necessitate that reductive physicalism is wrong. It's possible that we've just misunderstood the implications of the analogy and how it applies at super-massive scale. It's possible that I'm being pig-headed in trying to claim the sameness of the personification of the laws of nature versus the laws of nature as operated by individual sub-atomic particles. It's possible that we've misunderstood the nature of consciousness (my personal preference). It's also possible (in fact likely) that we're talking sideways on the meaning of reductive physicalism. I recently watched an interview with Anil Seth and Don Hoffman (https://www.youtube.com/watch?v=3tUTdgVhMBk). Somewhere during that interview Anil had to clarify how he thinks of it because of the differences in the way people interpret that phrase.

    Is it possible to simulate consciousness with rocks? I think the only honest answer anyone can give is, "I don't know".
  • Malcolm Lett
    76
    So if we forbid ourselves from reducing the meaning of a scientific explanation to our private use of indexicals that have no publicly shareable semantic content, and if it is also assumed that phenomenological explanations must essentially rely upon the use of indexicals, then there is no logical possibility for a scientific explanation to make contact with phenomenology.sime

    The interesting thing about science education is that as students we are initially introduced to the meaning of scientific concepts via ostensive demonstrations, e.g. when the chemistry teacher teaches oxidation by means of heating a test tube with a Bunsen burner, saying "this here is oxidation". And yet a public interpretation of theoretical chemistry cannot employ indexicals for the sake of the theory being objective, with the paradoxical consequence that the ostensive demonstrations by which each of us were taught the subject cannot be part of the public meaning of theoretical chemistry.sime

    I struggle to grok the distinction between indexicals etc. I think the point you're making is that we are taught science through observing - i.e. first-person experience - but then expected to only describe it in third-person terms - somehow pretending as if the first-person experience doesn't exist or adds nothing.

    I wonder - is the answer the difference between a question and an answer? Or more accurately, the difference between a problem needing scientific explanation, and that explanation? The "question" at hand is the what and why of phenomenal experience. The question is phrased in terms of the first-person. But that's ok. Science only requires that the explanation is phrased in terms of the third-person.

    For example, the question "why is water wet" is phrased in terms of our first-person subjective experience of water. The entire concept of wetness only exists because of the way that water interacts with our skin, and then with our perceptions of those interactions etc. etc. The answer is a third-person explanation involving detailed descriptions of collections of H2O molecules, sensory cells in the skin, nerves, brain, etc. etc.

    So, assuming that a third-person explanation of consciousness is possible, it's ok that it's a third-person explanation of a first-person question.
  • RogueAI
    2.9k
    Is it possible to simulate consciousness with rocks? I think the only honest answer anyone can give is, "I don't know".Malcolm Lett

    That was a well thought out reply, but I disagree with your conclusion. If materialism/physicalism entails that consciousness might come about through moving rocks around, that such a thing is even possible, it has fallen prey to reductio ad absurdum. Consciousness cannot emerge from moving rocks around. We do not need to wonder if rockslides might be conscious. They aren't. Believing "rock consciousness" to be possible is on par with "I might be a p-zombie". It's a dead end.
  • hypericin
    1.6k
    Does that answer your question?Malcolm Lett

    Sorry for the late reply.

    "Epiphenomenal" was a poor choice of words. I think I was and am at least partially misunderstanding you.

    Take an example. I have the deliberation "I should go to the store today", and I am aware of that deliberation. I initially thought you would say that verbalization "I should go to the store today" would be just the summarized cognitive sense of the actual deliberation, some non-verbal, non-conscious (I should go to the store today). The language would come after the deliberation, and is not the form that the actual deliberation takes.

    Is this what you think? Or do you think the brain deliberates linguistically whether or not we are aware of it, and the meta management just grants awareness of the language?
  • hypericin
    1.6k
    Thus, the rock-based simulation and our reality are effectively the same thing.Malcolm Lett

    It's an interesting question, deserving of its own thread. But I think this isn't right.

    Strictly speaking a computer cannot simulate anything physical. It can only simulate the physical thing's informational state. The state of a kidney peeing on your desk, but not a kidney peeing on your desk. That is why an interpreter is needed: it is state, divorced from substrate. That state piggybacks on top of the actual, embodied system: the physical computer, or rockputer. And so, the rock based simulation and our reality are fundamentally different.

    But what if consciousness is itself fundamentally state? While in the physical/informational divide I very much want to place consciousness on the informational side, I don't think this is the same as saying consciousness is state. Consider that in a computer the relevant state is represented in certain memory regions. These memory regions, taken together, are just an enormous binary number. So, while counting to infinity if we reach the same number as the relevant state of a consciousness simulation, will that particular state of consciousness wink into existence? I think not.
  • Malcolm Lett
    76
    Take an example. I have the deliberation "I should go to the store today", and I am aware of that deliberation. I initially thought you would say that verbalization "I should go to the store today" would be just the summarized cognitive sense of the actual deliberation, some non-verbal, non-conscious (I should go to the store today). The language would come after the deliberation, and is not the form that the actual deliberation takes.

    Is this what you think? Or do you think the brain deliberates linguistically whether or not we are aware of it, and the meta management just grants awareness of the language?
    hypericin

    Yeah, I'm talking about something quite different. Frankly, I don't care much for the arbitrary distinctions people impose on language vs non-language, visual vs non-visual, so-called representational vs non-representational, etc. As far as I'm concerned, in terms of understanding how our brains function, those distinctions are all premature. Yes, we interpret those differences via introspection, but it's only with very careful examination that you can use those phenomenal characteristic differences to infer anything about differences of actual cognitive computational processing and representation. I treat the brain as a computational system, and under that paradigm all state is representational. And it's only those representations, and their processing, that I care about.

    So, when I use the term deliberation, I'm referring to the computational processing of those representations. More specifically, I'm referring to the "System II thinking" form of computational processing, in Kahneman's terminology.

    Tying this back to your original question a few posts earlier, I think perhaps the question you were asking was something like this: "does MMT suggest that deliberative processing can occur without conscious awareness or involvement, and that conscious experience of it is some sort of after-effect?". In short, yes. MMT as I've described it would suggest that System II deliberative "thought" is a combination of a) subconscious cognitive processing that performs the largest bulk of the operations, but which sometimes gets side-tracked, and b) consciously aware meta-management of (a). Furthermore, the (b) part only needs to monitor the (a) part in proportion to the rate at which the (a) part tends to make mistakes. So an adult would have less conscious experience of those thoughts than a child. The extended form of this argument is that conscious phenomenal access to such thought seems continuous/unbroken because of memory, and because we don't have information about the gaps in order to identify any that might exist.
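
    As a toy illustration of that last point - monitoring in proportion to the error rate - here's a hedged sketch; all names and numbers are invented for illustration:

        import random

        # Sketch: meta-management samples the first-order process only in
        # proportion to an estimate of how often that process errs, so a
        # well-practised system is consciously monitored less often than an
        # unpractised one.

        def subconscious_step(skill):
            # Returns True when this step went off the rails.
            return random.random() > skill

        def run(steps=10000, skill=0.9):
            error_rate = 0.5                      # initial estimate of own fallibility
            monitored = 0
            for _ in range(steps):
                erred = subconscious_step(skill)
                if random.random() < error_rate:  # monitor in proportion to errors
                    monitored += 1
                    # Update the estimate from what monitoring revealed; corrective
                    # self-focused attention would also happen here.
                    error_rate += 0.05 * (erred - error_rate)
            return monitored / steps              # fraction consciously monitored

    With skill=0.9 the monitored fraction settles near 0.1; raise the skill (the adult) and conscious involvement falls, lower it (the child) and it rises.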

    However, I've been reading Metzinger's "Being No One", and in Ch2 he argues that System II thought always occurs with constant conscious involvement, and offers some examples illustrating that point - for example that blindsight patients can only answer binary questions; they cannot "think" their way through the problem if it requires access to their blindsighted information. So I'm wondering now if my MMT claim of sub-conscious System II thought fails to fit the empirical data - or at the very least, the way I'm describing it doesn't work.

    It's an interesting question, deserving of its own thread.hypericin
    I think you're right. It's an idea I've been only loosely toying with and hadn't tried putting it down in words before.
  • Malcolm Lett
    76
    ↪Malcolm Lett I'm still 'processing' your MMT and wonder what you make of Thomas Metzinger's self-model of subjectivity (SMS) which, if you're unfamiliar with it, is summarized in the article (linked below) with an extensive review of his The Ego Tunnel.180 Proof

    @180 Proof I wanted to follow up on that discussion. I've since watched a few interviews and realised that Metzinger seems to have been one of the earlier people to suggest the representational/computational models that underly much of the neuroscientific interpretation of brain function and consciousness today. So I was quite misdirected when I complained that he was repeating what's already widely known today.

    I've now got a copy of his Being No One book. It's a dense read, but very thorough. I'm only up to chapter 2 so far. There are a few bits I'm dubious of, but largely the approach I've taken seems to be very much in line with how he's approached it.

    Anyway, I wanted to thank you for pointing me in the direction of Metzinger. I'm ashamed that my lit survey failed to uncover him.
  • GrahamJ
    38
    I'm going to respond to the Medium article, not the op.

    I can see you've put a lot of effort into this. Congratulations on writing out your stance in coherent language, which is something I'm still working on for my own stance.

    I'm a mathematician and programmer. I have worked in AI and in mathematical biology. I have been interested in computational neuroscience since the 1980s. David Marr is regarded as the godfather of computational neuroscience. I expect you know this quote about the three levels at which any machine carrying out an information-processing task must be understood, but I think it's worth repeating.
    • Computational theory: What is the goal of the computation, why is it appropriate, and what is the logic of the strategy by which it can be carried out?
    • Representation and algorithm: How can this computational theory be implemented? In particular, what is the representation for the input and output, and what is the algorithm for the transformation?
    • Hardware implementation: How can the representation and algorithm be realized physically? [Marr (1982), p. 25]

    When you talk about a design stance, it seems to me that you are interested (mainly) in the computational theory level. That's fine, so am I. When we have an experience, the questions I most want answered are "What is being computed, how is it being computed, and what purpose does the computation serve?". Some people are interested in finding the neural correlates of consciousness. I'm interested in finding the computational correlates of consciousness. This applies to machines as well as living organisms. So far, I think we're in agreement.

    BUT

    I am not impressed by auto-meta-management theory. Maybe I'm too jaded. I have seen dozens of diagrams with boxes and arrows purporting to be designs for intelligence and/or consciousness. Big words in little boxes.

    All the following quotes are from the Medium article.

    There’s also a good reason why deliberation isn’t something we use much in ML today. It’s hard to control. Deliberation may occur with minimal to no feedback from the physical body or environment.

    Today, AI is stupidly dominated by ML. And ML is stupidly dominated by NNs. This is just fashion, and it will pass. There's loads of work on searching and planning for example, and it's always an important aspect of the algorithm to allocate computational resources efficiently.

    The tree search algorithm in AlphaZero is 'nothing but' an algorithm for the allocation of resources to nodes in the search tree. This example is interesting from another point of view. At a node deep in the tree, AlphaZero uses a slimmed down version of itself, that is, one with less resources. You could say it uses a model of itself for planning. It may be modelling itself modelling itself modelling itself modelling itself modelling itself modelling itself. Meta-management and self-modelling are not in themselves an explanation for very much.
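
    To pin down what I mean, here's a toy sketch of the pattern - a planner that evaluates positions by running a copy of itself with a smaller budget. The game (take 1-3 stones, whoever takes the last stone wins) is throwaway scaffolding, and this is generic budgeted search, not AlphaZero itself:

        # Budgeted negamax: each recursive call is "a model of myself with
        # fewer resources". When the budget runs out we fall back to a cheap
        # static evaluation - the most slimmed-down self of all.

        def heuristic(n):
            # Cheap evaluation for the take-1-to-3 game, from the mover's view.
            return -1.0 if n % 4 == 0 else 1.0

        def plan(n, budget):
            if n == 0:
                return -1.0                  # previous player took the last stone
            if budget <= 0:
                return heuristic(n)          # resources exhausted: approximate
            # Explore each move with roughly half the remaining resources.
            return max(-plan(n - k, budget // 2) for k in (1, 2, 3) if k <= n)

        print(plan(10, 16))   # 1.0: n=10 is a win for the player to move

    Note that nothing about the recursion explains awareness; it just allocates resources. That's my point.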

    The model-free strategy efficiently produces habitual (or automatized) behavior for oft-repeated situations. Internally, the brain learns something akin to a direct mapping from state to action: when in a particular state, just do this particular action. The model-based strategy works in reverse, by starting with a desired end-state and working out what action to take to get there.

    That's not how reinforcement learning is usually done. You have a value function to guide behaviour while the agent is still learning.
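
    For instance, here's the skeleton of tabular Q-learning. The env interface (reset, step, actions) is an assumption in the usual gym style; the point is that the learned value function guides behaviour from the first episode onwards, not only after learning has finished:

        import random
        from collections import defaultdict

        def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
            Q = defaultdict(float)                  # value estimates, learned online
            for _ in range(episodes):
                s, done = env.reset(), False
                while not done:
                    if random.random() < epsilon:   # occasional exploration
                        a = random.choice(env.actions)
                    else:                           # value-guided behaviour
                        a = max(env.actions, key=lambda act: Q[(s, act)])
                    s2, r, done = env.step(a)
                    best_next = 0.0 if done else max(Q[(s2, act)] for act in env.actions)
                    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])  # TD update
                    s = s2
            return Q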

    Meta-management as a term isn’t used commonly. I take that as evidence that this approach to understanding consciousness has not received the attention it deserves.

    I think you're not looking in the right places. Read more GOFAI! (Terrible acronym by the way. Some of it's good and old. Some of it's bad and old. Some of it's good and new. Some of it's bad and new.)

    It’s now generally accepted that the brain employs something akin to the Actor/Critic reinforcement learning approach used in ML (Bennet, 2023).

    'Generally accepted'? Citation needed!

    The content of consciousness — whatever we happen to be consciously aware of — is a direct result of the state that is captured by the meta-management feedback loop and made available as sensory input.

    I don't think you've established the existence of a self or a subject that is capable of being aware of anything. You're assuming that it already exists, and is already capable of having experiences (of perceiving apples, etc.). Then you're arguing that it can then have more complicated thoughts (about itself, etc.). I do not find this satisfactory.

    What might be missing between this description and true human consciousness? I can think of nothing ...
    I'll bundle this with
    *Our emotional affect additionally adds information, painting our particular current emotional hue over the latent state inference that is made from the raw sensory data.

    The screamingly obvious thing is feelings. (You're not alone in downplaying the importance of feelings.)

    Feelings are not paint on top of the important stuff. They are the important stuff. In my opinion any theory of consciousness must incorporate feelings at a very fundamental level. In reinforcement learning there is a reward function, and a value function. Why it is I could not tell you, but it seems that our own reward functions and value functions (I think we have multiple ones) are intimately connected with what we subjectively experience as feelings. To go back to Marr, "What is the goal of the computation?" That is where you start, with goals, purposes, rewards. The rest is just engineering...

    The other thing that I think is missing is a convincing model of selfhood (as mentioned above). I think Anil Seth does a much better job of this in his book Being You. He's wrong about some things too...
  • Gnomon
    3.8k
    The tree search algorithm in AlphaZero is 'nothing but' an algorithm for the allocation of resources to nodes in the search tree. This example is interesting from another point of view. At a node deep in the tree, AlphaZero uses a slimmed down version of itself, that is, one with less resources. You could say it uses a model of itself for planning. It may be modelling itself modelling itself modelling itself modelling itself modelling itself modelling itself. Meta-management and self-modelling are not in themselves an explanation for very much.GrahamJ
    The self-referencing models sound reminiscent of Douglas Hofstadter's nested feedback loops espoused in his 1979 book, Gödel, Escher, Bach, and elaborated in his 2007 book, I Am a Strange Loop. He suggested that one of those "slimmed-down models" might be the sentient core of what we experience as The Self and know as "I" or "me", the central "Planner".

    Of course, big-C consciousness is not that simple. Current attempts at Artificial Intelligence are trying a variety of models : Language models, Neural Networks, Random Forest, Linear Regression, etc, etc. But self-modeling may be, not more "intelligent", in terms of processing power, but more human-like, in terms of self knowledge. :smile:

    PS___I have no computer credentials, just a philosophical interest in Consciousness.
  • hypericin
    1.6k
    At a node deep in the tree, AlphaZero uses a slimmed down version of itself, that is, one with less resources. You could say it uses a model of itself for planning. It may be modelling itself modelling itself modelling itself modelling itself modelling itself modelling itself. Meta-management and self-modelling are not in themselves an explanation for very much.GrahamJ

    "What is a model?" is maybe not easily answered, but this example of a "model" doesn't seem to capture the notion. The slimmed down evaluations are aproximations, but not I think models.

    The idea of "model" to me is something like an informationally lossy transformation from one domain into another. A map lossily transforms from physical territory to a piece of paper. A model airplane lossily transforms from big expensive functional machines to small cheap non-functional hobby objects. Representational consciousness lossily transforms from physical reality to phenomenal representations of the world.

    But cheap and expensive evaluation functions are the same kind of thing: one is just less accurate. The comparison unfairly downplays the power of models, and so of MMT.
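
    A toy rendering of the distinction (all names invented): a model crosses domains and discards information along the way, while a cheap evaluator stays in the same domain and is merely less accurate:

        def to_map(territory):
            # Model: heights in metres become symbols on paper. Many different
            # territories collapse onto the same map - that's the lossiness.
            return ['^' if h > 1000 else '.' for h in territory]

        def cheap_eval(territory):
            # Approximation: output is in the same domain (a height), just coarser.
            return sum(territory) / len(territory)

        print(to_map([1523.2, 980.7, 1204.9]))     # ['^', '.', '^']
        print(cheap_eval([1523.2, 980.7, 1204.9])) # 1236.27 (approximately)

    On that reading, the modeling in MMT is doing to_map-style work, not cheap_eval-style work.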

    The models in MMT, as I understand it, are true models: they transform from the deliberative state of the brain to a phenomenal representation of that state, which in turn informs the next deliberative "state".
  • hypericin
    1.6k
    Yes, we interpret those differences via introspection, but it's only with very careful examination that you can use those phenomenal characteristic differences to infer anything about differences of actual cognitive computational processing and representation. I treat the brain as a computational system, and under that paradigm all state is representational. And it's only those representations, and their processing, that I care about.Malcolm Lett

    Nonetheless any theory of consciousness and particularly deliberative consciousness needs to explain how our mental features seem to us to be qualitatively different. Ignoring these differences does not seem constructive, even if they are all ultimately representational. In any case I want to treat self talk here as an example, without claiming it is somehow unique in its neural implementation.

    Tying this back to your original question a few posts earlier, I think perhaps the question you were asking was something like this: "does MMT suggest that deliberative processing can occur without conscious awareness or involvement, and that conscious experience of it is some sort of after-effect?". In short, yes.Malcolm Lett

    What I was also grasping at in my question is: what exactly do you mean when you say meta management models deliberative thought? From my above post,

    The idea of "model" to me is something like an informationally lossy transformation from one domain into another. A map lossily transforms from physical territory to a piece of paper. A model airplane lossily transforms from big expensive functional machines to small cheap non-functional hobby objects. Representational consciousness lossily transforms from physical reality to phenomenal representations of the world.hypericin

    How do you characterize the modeling in the MMT case? Because to me, "modelling" implies that in the example of self talk, you have two different regimes: language, and the system that language models, which is not parsimonious. Again, without suggesting any special privilege of language, I want to ground your idea of a modeling feedback loop in a more concrete example.