• Gnomon
    The Meta-management Theory of Consciousness uses the computational metaphor of cognition to provide an explanation for access consciousness, and by doing so explains some aspects of the phenomenology of consciousness.
    Malcolm Lett
    As I said before, I'm not qualified to comment on your theory in a technical sense. So, I'll just mention some possible parallels with an article in the current Scientific American magazine (04/24) entitled : A Truly Intelligent Machine. George Musser, the author, doesn't use the term "meta-management", but the discussion seems to be saying that Intelligence is more than (meta-) information processing. For example, "to do simple or rehearsed tasks, the brain can run on autopilot, but novel or complicated ones --- those beyond the scope of a single module --- require us to be aware of what we are doing". In a large complex organization, such supervision --- etymology, to see from above (meta-) --- is the role of upper management. And the ultimate decision-maker, the big-boss, is a Meta-Manager : a manager of managers.

    Musser notes that, "consciousness is a scarce resource", as is the supervising time of the big boss, who can't be bothered with low-level details. Later, he says, "intelligence is, if anything, the selective neglect of detail", which may be related to your item (3), Limited Access. So, researchers are advised to "go back to the unfashionable technology of 'discriminative' neural networks", which may get back to your item (1), Intentionality. Intentional behavior requires discrimination between inputs & outputs : incoming low-level data and executive actions. After "all those irrelevant details are eliminated", the manager can focus on what's most important.

    The article refers to a "key feature" of GWT (Global Workspace Theory) as a "configurator to coordinate the modules and determine the workflow". Again, the "Configurator" or optimizer or designer seems to be a high-level management position. That role also seems to require "self-monitoring". The GWT expert speculates that "consciousness is the working of the configurator". Musser notes that, "those capacities . . . aren't relevant to the kinds of problems that AI is typically applied to". So, the GWT guy adds, "you have to have an autonomous agent with a real mind and a control structure for it". Such executive agency also requires the power to command, which your item (2) calls "causality", the influence determining subsequent effects.

    Neuroscientist Anil Seth makes an important philosophical observation : "Consciousness is not a matter of being smart, it's equally a matter of being alive". And that makes the "hard problem" of creating consciousness even harder. Perhaps requiring divine powers. Or a bolt of lightning : "it's alive!!!" :joke:
  • Malcolm Lett
    Sorry for my tardiness in responding.

    I think Metzinger's views are very plausible. Indeed, his views on the self as a transparent ego tunnel, at once enabling and limiting our exposure to reality and creating a world model, are no doubt basically true. But as the article mentions, it's unclear how this resolves the hard problem. What is offered is a reason why (evolutionarily speaking) phenomenality emerged, but not a how. The self can be functionally specified, but not consciousness.
    bert1

    I think that just getting some clarity about the functional aspects of consciousness would be a huge leap forward, regardless of whether they explain the phenomenal aspects. I'm regularly frustrated at various discussions that necessarily go off into hypotheticals because they have nothing agreed upon to ground the discussion. For example, if you're trying to understand the phenomenality of consciousness, but you don't have an agreed reason for why the functional aspects of it exist, or what they do, then you are at a loss as to where to define the scope of what is and isn't consciousness -- a classic case is Block's seminal paper that tries to distinguish between access and phenomenal consciousness. His arguments about P-Cs existing without A-Cs, or vice versa, can only be made because we don't have clear boundaries of what consciousness is.

    My point is that work of the sort Metzinger and I are doing, if we could find some way to test the theories and pin down the details, would help a lot to define those boundaries at the physical and functional level. Then we'd be in a better position to figure out what P-Cs is.

    ..wonder what you make of Thomas Metzinger's self-model of subjectivity (SMS) which, if you're unfamiliar with it, is summarized in the article (linked below) with an extensive review of his The Ego Tunnel.
    180 Proof

    I'm not familiar with his theory. I've just watched the TED talk video so far. The basic idea of us developing a self-model, developing models of the world, and seeing the world through those models is precisely what I'm basing my theory on. It's also the same idea put forward by Donald Hoffman's User Interface Theory of Perception. I'll read his theory more fully though - the "tunnel" analogy is interesting. Also interesting is his suggestion that the processes that take our raw perceptions and turn them into our modelled interpretation of the world are "too fast" for us to analyse (introspectively).
  • bert1
    It's also the same idea put forward by Donald Hoffman's User Interface Theory of Perception.
    Malcolm Lett

    Yes, I think that's right; the two seem very similar in terms of the functional story. But their claims about consciousness seem very different (though I haven't studied either properly - these are just first impressions). Contrasting panpsychism with conscious realism is interesting, and something I haven't thought about enough.
  • RogueAI
    Kastrup argues that a computer running a simulation of a working kidney will not pee on his desk, so why would we expect a simulation of a working brain to be conscious?
  • Malcolm Lett
    Lol. It's a funny argument. Too simplistic, but might have some use.

    Just out of interest, I'll have a go.
    So, let's say that this kidney simulation is a 100% accurate model of a real kidney, down to the level of, say, molecules. And that this kidney simulation has a rudimentary simulation of its context operating in a body, so that if the simulated kidney were to pee, it could. In this example, the kidney would indeed pee, not on his desk, but inside the simulation.

    If we take as an assumption (for the sake of this thought experiment) that consciousness is entirely physical, then we can do the same thing with a conscious brain. This time simulate the brain to the molecular level, and again provide it some rudimentary body context so that the simulated brain thinks it's operating inside a body with eyes, ears, hands, etc. Logically, this simulation thus simulates consciousness in the brain. That's not to say that the simulated brain is conscious in a real-world sense, but that it is genuinely conscious in its simulated world.

    The question is what that means.

    Perhaps a simulation is nothing but a data structure in the computer's memory. So there is nothing that it "feels like" to be this simulated brain -- even though there is a simulation of the "feels like" nature.

    Alternatively, David Chalmers has a whole book arguing that simulations of reality are themselves reality. On that basis, the simulation of the brain and its "feels like" nature are indeed reality -- and the simulated brain is indeed conscious.

    A third argument appeals to a different analysis. Neurons can be said to simulate mind states, in much the same way that a computer simulation of a brain would. I'm appealing to the layered nature of reality. No single neuron is a mind, and yet the collection of billions of neurons somehow creates a mind (again, I'm assuming physicalism here). Likewise, neurons are not single discrete things, but collections of molecules held together by various electromagnetic forces. Trillions of molecules are floating through space in the brain, with arbitrary interaction-based grouping creating what we think of as object boundaries - constructing what we call neurons, glial cells, microtubules, etc. These molecules "simulate" neurons etc. In all of that, there is no such thing as a "mind" or "consciousness" as any kind of object in the "real world". Those things exist as simulations generated by all these free-floating molecule-based simulations of neural networks. Thus, the computer simulation of a conscious mind is no more or less real than a molecular simulation of a mind.
  • Malcolm Lett
    Just finished reading the review of the Ego Tunnel (https://naturalism.org/resources/book-reviews/consciousness-revolutions). I don't have much of significance to add, but a couple of minor thoughts.

    At the level of the review summary of Metzinger's work, there's not a lot that's unique compared to what various others have written about. It's becoming a well-worn narration of our current neuroscientific understanding. That's not to say that his particular telling of the story isn't valuable, but I do feel a sense of frustration when I read something suggesting that it's one particular person's ideas when actually these are already ideas in the common domain.

    This is more of a reflection for myself. I decided at an early stage that it was going to be very convoluted to tell my particular narrative based on others' works, because they all had extra connotations I wanted to avoid. I would have to spend just as much time talking about which parts of those other theories should be ignored in terms of my story. But I can see the frustration that that can create. I shall need to do better at being clear about which parts are novel.

    A second thought is about his description of the key evolutionary function of the phenomenal experience that he attributes to the self model. I suspect his books make a better case, but the review suggests he may have fallen into a common trap. The phenomenal and subjective natures of our experience are so pervasive that it can be hard to conceptually separate them from the functions that we're trying to talk about. He says that we need the experience of self to be attached to our perceptions in order to function. But does he lay out clearly why that is the case? It's not a given. A simple robot could be granted access to information that delineates its physical form from that of the environment without any of the self-modelling. Even if it models the "self", it doesn't follow that it then "experiences" the self in any way like we do. I'm all for suggesting that there is a computational basis to that experience, but a) the mechanisms need to be explained, and b) the functional/evolutionary benefit for that extra step needs to be explained.

    That's what I've tried to do with the more low-level handling of the problem in MMT. Though even then there are some hand-wavy steps I'm not happy with.

    Metzinger's books look like an excellent read. Thank you for sharing the links.
  • RogueAI
    Just out of interest, I'll have a go.
    So, let's say that this kidney simulation is a 100% accurate model of a real kidney, down to the level of, say, molecules. And that this kidney simulation has a rudimentary simulation of its context operating in a body, so that if the simulated kidney were to pee, it could. In this example, the kidney would indeed pee, not on his desk, but inside the simulation.

    If we take as an assumption (for the sake of this thought experiment) that consciousness is entirely physical, then we can do the same thing with a conscious brain. This time simulate the brain to the molecular level, and again provide it some rudimentary body context so that the simulated brain thinks it's operating inside a body with eyes, ears, hands, etc. Logically, this simulation thus simulates consciousness in the brain. That's not to say that the simulated brain is conscious in a real-world sense, but that it is genuinely conscious in its simulated world.
    Malcolm Lett

    The problem here is that simulated urination/urine is not urine (Kastrup's point that the simulated kidney will never pee on his desk), so if simulated urine is not actual urine, simulated consciousness would not be actual consciousness.

    Also, you speak of "inside the simulation". Imagine you're running a simulation of a tornado. Then all the minds in the universe disappear, but the computer the simulation is running on is still active. With all the minds gone, is there still a simulation of a tornado going on? Or is it just a bunch of noise and pixels turning off and on? I think the latter, and this goes back to my point that any simulation is ultimately just a bunch of electric switches turning off and on in a certain way. It takes a mind to attach meaning to the output of those switching actions.
  • wonderer1
    That's not to say that his particular telling of the story isn't valuable, but I do feel a sense of frustration when I read something suggesting that it's one particular person's ideas when actually these are already ideas in the common domain.
    Malcolm Lett

    :up:

    I appreciate your frustration. Still, I appreciate that Metzinger is able to communicate this way of looking at things effectively.
  • Malcolm Lett
    Also, you speak of "inside the simulation". Imagine you're running a simulation of a tornado. Then all the minds in the universe disappear, but the computer the simulation is running on is still active. With all the minds gone, is there still a simulation of a tornado going on? Or is it just a bunch of noise and pixels turning off and on? I think the latter, and this goes back to my point that any simulation is ultimately just a bunch of electric switches turning off and on in a certain way. It takes a mind to attach meaning to the output of those switching actions.
    RogueAI

    That's a matter of opinion. Your statement depends on the idea that consciousness is special in some way - beyond normal physics - and that it's our consciousness that creates meaning in the universe.

    An alternative view is that physicalism fully explains everything in the universe, including consciousness (even if we don't know how), and under that view the simulation of the tornado is no different with/without human consciousness. Semiotics explains that data representations have no meaning without something to interpret them. So a computer simulation of a tornado without something to interpret the result certainly would be lacking something - it would just be data noise without meaning. But the thing doing the meaning interpretation doesn't have to be a human consciousness. It could just as easily be another computer, or the same computer, that is smart enough to understand the need to do tornado simulations and to examine the results.

    The urination example is a nice "intuition pump" (as Dennett calls them), but like many intuition pumps it doesn't hold up against closer scrutiny. The point I was trying to make about conscious simulations is that it's not a given that there's a substantial difference between the simulation in a silicon computer versus a simulation in a molecular computer (aka biology). If you hold to the idea that consciousness is purely physical, then this argument doesn't seem so strange.

    I might be wrong, but I think most of us are intuitively looking at the "simulation of urination" example in a particular way: that the computer is running in our world -- let's call it the "primary world" -- and that the simulated world that contains either the simulated urination or the simulated consciousness is a world nested within our own -- so let's call it the "nested world". On first glance that seems quite reasonable. Certainly for urination. But on closer inspection, there's a flaw. While the simulated world itself is indeed nested within the primary world, the simulated urine is not nested within primary world urine. Likewise, the simulated consciousness is not nested within the primary world consciousness. Now, if you take my argument about molecules acting as a mind simulator, then the primary world consciousness in my brain is a sibling to the nested world consciousness.

    There's a tree of reality:
    1. The primary world
       2a. Molecular simulation engine
          3a. Biological conscious mind
       2b. Silicon simulation engine
          3b. Silicon conscious mind
    
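    As a toy illustration (mine, not something from the thread; the node names are just the labels from the tree above), the "tree of reality" can be written as a small nested data structure. The point it makes concrete is that the biological mind and the silicon mind sit at the same nesting depth - siblings under the primary world, rather than one nested inside the other:

```python
# A minimal sketch of the "tree of reality" as a nested data structure.
from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class Node:
    """A world, simulation engine, or mind in the tree of reality."""
    name: str
    children: list[Node] = field(default_factory=list)


reality = Node("primary world", [
    Node("molecular simulation engine", [
        Node("biological conscious mind"),
    ]),
    Node("silicon simulation engine", [
        Node("silicon conscious mind"),
    ]),
])


def depth(node: Node, target: str, d: int = 0) -> int | None:
    """Return the nesting depth of `target` within the tree, or None if absent."""
    if node.name == target:
        return d
    for child in node.children:
        found = depth(child, target, d + 1)
        if found is not None:
            return found
    return None


# Both minds sit at the same depth: siblings, not one nested in the other.
print(depth(reality, "biological conscious mind"))  # 2
print(depth(reality, "silicon conscious mind"))     # 2
```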
  • RogueAI
    What do you think of this?
    https://xkcd.com/505/

    Is it possible to simulate consciousness by moving rocks around (or, as one of the members here claims, knocking over dominoes)?
  • wonderer1
    Is it possible to simulate consciousness by moving rocks around (or, as one of the members here claims, knocking over dominoes)?
    RogueAI

    Citation needed. Where has someone here claimed that consciousness can be simulated by knocking over dominoes?
  • RogueAI

    https://thephilosophyforum.com/discussion/comment/893885

    "Does anyone think a system of dominoes could be conscious? What I meant by a system of dominoes includes a machine that continually sets them up after they fall according to some program."

    oh well then, in principle... MAYBE
    flannel jesus

    What do you think, Wonderer? Could consciousness emerge from falling dominoes?
Welcome to The Philosophy Forum!