Comments

  • The Meta-management Theory of Consciousness
    ↪Malcolm Lett I'm still 'processing' your MMT and wonder what you make of Thomas Metzinger's self-model of subjectivity (SMS) which, if you're unfamiliar with it, is summarized in the article (linked below) with an extensive review of his The Ego Tunnel.180 Proof

    @180 Proof I wanted to follow up on that discussion. I've since watched a few interviews and realised that Metzinger seems to have been one of the earlier people to suggest the representational/computational models that underlie much of the neuroscientific interpretation of brain function and consciousness today. So I was quite misdirected when I complained that he was repeating what's already widely known today.

    I've now got a copy of his book Being No One. It's a dense read, but very thorough. I'm only up to chapter 2 so far. There are a few bits I'm dubious of, but largely the approach I've taken seems to be very much in line with how he's approached it.

    Anyway, I wanted to thank you for pointing me in the direction of Metzinger. I'm ashamed that my lit survey failed to uncover him.
  • The Meta-management Theory of Consciousness
    Take an example. I have the deliberation "I should go to the store today", and I am aware of that deliberation. I initially thought you would say that the verbalization "I should go to the store today" would be just the summarized cognitive sense of the actual deliberation, some non-verbal, non-conscious (I should go to the store today). The language would come after the deliberation, and is not the form that the actual deliberation takes.

    Is this what you think? Or do you think the brain deliberates linguistically whether or not we are aware of it, and the meta-management just grants awareness of the language?
    hypericin

    Yeah, I'm talking about something quite different. Frankly, I don't care much for the arbitrary distinctions people impose on language vs non-language, visual vs non-visual, so-called representational vs non-representational, etc. As far as I'm concerned, in terms of understanding how our brains function, those distinctions are all premature. Yes, we interpret those differences via introspection, but it's only with very careful examination that you can use those phenomenal characteristic differences to infer anything about differences in actual cognitive computational processing and representation. I treat the brain as a computational system, and under that paradigm all state is representational. And it's only those representations, and their processing, that I care about.

    So, when I use the term deliberation, I'm referring to the computational processing of those representations. More specifically, I'm referring to the "System II thinking" form of computational processing, in Kahneman's terminology.

    Tying this back to your original question a few posts earlier, I think perhaps the question you were asking was something like this: "does MMT suggest that deliberative processing can occur without conscious awareness or involvement, and that conscious experience of it is some sort of after-effect?". In short, yes. MMT as I've described it would suggest that System II deliberative "thought" is a combination of a) subconscious cognitive processing that performs the largest bulk of the operations, but which sometimes gets side-tracked, and b) consciously aware meta-management of (a). Furthermore, the (b) part only needs to monitor the (a) part in proportion to the rate at which the (a) part tends to make mistakes. So an adult would have less conscious experience of those thoughts than a child. The extended form of this argument is that conscious phenomenal access to such thought seems continuous/unbroken because of memory, and because we don't have information about the gaps in order to identify any that might exist.
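    To make that monitoring-in-proportion-to-error-rate idea concrete, here's a minimal sketch (Python, with every name invented purely for illustration - this is a cartoon of the claim, not an implementation of MMT):

        import random

        def deliberate_step(error_rate):
            """Stand-in for the subconscious System II processing in (a):
            it usually stays on track, but sometimes gets side-tracked."""
            return random.random() < error_rate  # True = went off-track

        def run(steps, error_rate):
            monitor_prob = 1.0  # a novice monitors almost every step
            for _ in range(steps):
                erred = deliberate_step(error_rate)
                if random.random() < monitor_prob:  # (b) the meta-management check
                    if erred:
                        # mistakes justify closer self-monitoring
                        monitor_prob = min(1.0, monitor_prob * 1.5)
                    else:
                        # reliable processing lets monitoring relax
                        monitor_prob = max(0.05, monitor_prob * 0.95)
            return monitor_prob

        # A practised (low error rate) system ends up attending to its own
        # thoughts only occasionally - the "adult vs child" difference above.
        print(run(steps=500, error_rate=0.02))

    Run repeatedly, the monitoring probability settles low for a reliable process and stays high for an error-prone one - the same proportionality I'm claiming above.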

    However, I've been reading Metzinger's "Being No One", and in Ch2 he argues that System II thought always occurs with constant conscious involvement, and offers some examples illustrating that point - for example that blindsight patients can only answer binary questions, they cannot "think" their way through the problem if it requires access to their blindsighted information. So I'm wondering now if my MMT claim of sub-conscious System II thought fails to fit the empirical data - or at the very least, the way I'm describing it doesn't work.

    It's an interesting question, deserving of its own thread.hypericin
    I think you're right. It's an idea I've been only loosely toying with and hadn't tried putting it down in words before.
  • The Meta-management Theory of Consciousness
    So if we forbid ourselves from reducing the meaning of a scientific explanation to our private use of indexicals that have no publicly shareable semantic content, and if it is also assumed that phenomenological explanations must essentially rely upon the use of indexicals, then there is no logical possibility for a scientific explanation to make contact with phenomenology.sime

    The interesting thing about science education is that as students we are initially introduced to the meaning of scientific concepts via ostensive demonstrations, e.g. when the chemistry teacher teaches oxidation by means of heating a test tube with a Bunsen burner, saying "this here is oxidation". And yet a public interpretation of theoretical chemistry cannot employ indexicals for the sake of the theory being objective, with the paradoxical consequence that the ostensive demonstrations by which each of us were taught the subject cannot be part of the public meaning of theoretical chemistry.sime

    I struggle to grok the distinction between indexicals etc. I think the point you're making is that we are taught science through observing - i.e. first-person experience - but then expected to describe it only in third-person terms, somehow pretending that the first-person experience doesn't exist or adds nothing.

    I wonder - is the answer the difference between a question and an answer? Or more accurately, the difference between a problem needing scientific explanation, and that explanation? The "question" at hand is the what and why of phenomenal experience. The question is phrased in terms of the first-person. But that's ok. Science only requires that the explanation is phrased in terms of the third-person.

    For example, the question "why is water wet" is phrased in terms of our first-person subjective experience of water. The entire concept of wetness only exists because of the way that water interacts with our skin, and then with our perceptions of those interactions etc. The answer is a third-person explanation involving detailed descriptions of collections of H2O molecules, sensory cells in the skin, nerves, brain, etc.

    So, assuming that a third-person explanation of consciousness is possible, it's ok that it's a third-person explanation of a first-person question.
  • The Meta-management Theory of Consciousness
    Cool XKCD comic, and a nice metaphor.

    https://xkcd.com/505/

    I've been thinking about this over the last couple of days. On first glance I found that it inspired an intuition that no computer simulation could possibly be anything more than just data in a data structure. But it goes further and inspires the intuition that a reductive physicalist reality would never be anything more either. I'm not sure if that's the point you were trying to make, but I wanted to delve deeper because I found it interesting.

    In the comic, Randall Munroe finds himself mysteriously ageless and immortal on a vast infinite expanse of sand and loose rocks. He decides to simulate the entire physical universe, down to sub-atomic particles. The implication is that this rock-based universe simulation includes simulation of an earth-like planet with human-like beings. Importantly, the rocks are inert. Randall must move them, according to the sub-atomic laws of physics that he has deduced during his endless time to ponder.

    So, importantly, the rocks are still just rocks. They have no innate meaning. It's only through Randall's mentality that the particular arrangement of rocks means anything.

    In our physical reality, we believe that the sub-atomic particles interact according to their own energetic forces - there is no "hand of god" that does all the moving around. But they operate exactly according to the laws of physics. In the thought experiment, Randall plays the personification of those laws. So it makes no material difference whether the subatomic particles move according to their energies in exact accordance with the laws of physics, or whether the "hand of god"/Randall does the exact same moving.

    The rock-based simulation of reality operates according to laws of physics that may be more complete than what we currently know, but which we assume are entirely compatible with the dogma of reductive physicalism. According to that dogma, we believe that our real reality operates according to the same laws. Thus, the rock-based simulation and our reality are effectively the same thing. This leaves us with a conundrum - our intuition is that the rock simulation is just a bunch of inert objects placed in arbitrary and meaningless ways; it means nothing and carries no existence for any of the earths or humans that it supposedly simulates - at least not without some observer to interpret it. This means that the same applies to our reality. We're just a bunch of (effectively inert) subatomic particles arranged in arbitrary ways. It means nothing, unless something interprets it.

    This kind of reasoning is often used as the basis for various non-physicalist or non-reductive claims. For example, that consciousness is fundamental and universal, and it is this universal consciousness that does the observing. Or perhaps the true laws of physics contain something that is not compatible with reductive physicalism.

    Both are possibilities, and there are others besides.
    But we're now stuck. We're left facing the most fundamental question - even more fundamental than consciousness itself: what is reality? And we have no way of identifying which competing hypotheses are more accurate than any others.

    At the end of the day, the rock simulation analogy is a good one for helping to identify the conundrum we face. But it isn't helpful in dispelling a belief in any one dogma, because all theories are open to uncertainty.

    For example, on the reductive physicalist side of the coin, the analogy doesn't necessitate that reductive physicalism is wrong. It's possible that we've just misunderstood the implications of the analogy and how it applies at super-massive scale. It's possible that I'm being pig-headed in trying to claim the sameness of the personification of the laws of nature versus the laws of nature as operated by individual sub-atomic particles. It's possible that we've misunderstood the nature of consciousness (my personal preference). It's also possible (in fact likely) that we're talking past each other on the meaning of reductive physicalism. I recently watched an interview with Anil Seth and Don Hoffman (https://www.youtube.com/watch?v=3tUTdgVhMBk). Somewhere during that interview Anil had to clarify how he thinks of it because of the differences in the way people interpret that phrase.

    Is it possible to simulate consciousness with rocks? I think the only honest answer anyone can give is, "I don't know".
  • The Meta-management Theory of Consciousness
    My concern was that you were treating what we in the everyday sense term "deliberation", such as self talk, as epiphenomenal, as the "cognitive sense" corresponding to the real work happening behind the scenes. Was that a misunderstanding? Is self talk not the sort of deliberation you had in mind?hypericin

    I'm not sure if you mean "epiphenomenal" in the same way that I understand it. The Cambridge dictionary defines epiphenomenal as "something that exists and can be seen, felt, etc. at the same time as another thing but is not related to it". More colloquially, I understand epiphenomenal to mean something that seems to exist and has a phenomenal nature, but has no causal power over the process that it is attached to. Under either definition, is deliberation epiphenomenal? Absolutely not. I would place deliberation as the primary first-order functioning of the brain at the time that it is operating for rational thought. Meta-management is the secondary process that the brain performs in addition to its deliberation. Neither is epiphenomenal - they both perform very important functions.

    Perhaps you mean to make the distinction between so-called "conscious and unconscious processing" in the form that is sometimes used in these discussions - being that the colloquial sense of deliberation you refer to is somehow phenomenally conscious in addition to the supposedly "unconscious" mechanical processes that either underlie that deliberation or are somehow associated with it. If that is the intention of your question, then I would have to start by saying that I find that distinction arbitrary and unhelpful.

    MMT claims that when first-order cognitive processes are attended to through the meta-management feedback loop + cognitive sense, then that first-order cognitive process becomes aware of its own processing. At any given instant in time, deliberative processes may occur with or without immediately subsequent self-focused attention co-occurring. Regardless, due to the complex and chaotic* nature of deliberation, moments of self-focused attention will occur regularly in order to guide the deliberative process. Thus, over time the cognitive system maintains awareness over its overall trajectory. The granularity of that awareness differs by the degree of familiarity and difficulty of the problem being undertaken.

    I claim that the self-focused attention and subsequent processing of its own cognitive state is what correlates with the phenomenal subjective experience. So all of those processes are conscious, to the degree of granularity of self-focused attention.

    * - by "chaotic" I mean that without some sort of controlling feedback process it would become dissociated from the needs of the organism; that the state trajectory would begin to float unconstrained through cognitive state space, and ultimately lead to poor outcomes.

    Does that answer your question?
  • The Meta-management Theory of Consciousness
    Thank you. Yes, I think I have poured way too much time into this.

    But this is at odds with my introspective account of deliberation. Deliberation itself seems to be explicitly phenomenalhypericin

    Sure. That is indeed a different take. I'm taking what I like to think of as a traditional scientific approach, otherwise known as a reductionist materialist approach. Like anyone in this field, I'm driven by a particular set of beliefs grounded in little more than intuition - my intuition is that reductive scientific methods can explain consciousness - and so a big motivation, in fact one of the key drivers for me, is that I want to attempt to push the boundaries of what can be explained through that medium. So I explicitly avoid trying to explain phenomenology based on phenomenology.

    Is your idea that phenomena such as self talk are a model of the unconscious deliberation that is actually taking place? This does not seem to do justice to the power that words and images have as tools for enabling thought, not just in providing some sort of executive summary. Think of the theory that language evolved primarily not for communication but as a means of enabling thought as we know it. Can meta-management explain the gargantuan cognitive leap we enjoy over our nearest animal neighbors?hypericin

    Meta-management is just the start - or more accurately an intermediary step somewhere along the way. The multi-modal internal representations that we use for thought - for deliberation - are an equally important part of intelligence and of the contents and depth of our conscious experience. Likewise the mechanisms for attentional control, memory, and all sorts of other key features of intelligence. As we know from evolution, these things tend to evolve together in a positive feedback loop. So I wouldn't say that my theory diminishes any of that, rather that it offers a theory of just one part.

    There's also the possibility that there are multiple feedback loops involved that operate under different regimes. For example, there's an interesting dichotomy in discussions related to various higher-order theories and other meta-cognitive ideas between first-order and second-order processes and representations. HOT, for example, proposes that consciousness is a second-order representation that is presumably also processed at second order. But then there are rebuttals by others suggesting that the effects of HOTs could be achieved through first-order mechanisms. MMT is perhaps a little peculiar in that it constructs a second-order representation (the summary step in the feedback loop), but then feeds it into first-order processing.

    In contrast, attention is a mechanism that also needs to monitor and manipulate cognitive processing. The neuroscientific literature at the moment I believe favours a first-order representation, but arguably attention is a second-order process.

    Well, I'm being a bit waffly, but the point I'm trying to make is that there's good reason to suspect that the brain includes multiple different kinds of macro-scale feedback loops that operate in different ways. I suspect a more complete theory would acknowledge that all of those different ways contribute to the final result.
    Since the cost/benefit ratio of this change seems very favorable, we should expect at least crude deliberation to be widespread in nature. Adding language as a deliberative tool is where the real cognitive explosion happened.hypericin

    There's a great 2023 book by Max Bennett, A Brief History of Intelligence. It lays out a very approachable story of how various aspects of human intelligence evolved, going all the way back to the first worms. He also mentions a number of theories of how language evolved and how it might be involved with thought.
  • The Meta-management Theory of Consciousness
    Also, you speak of "inside the simulation". Imagine you're running a simulation of a tornado. Then all the minds in the universe disappear, but the computer the simulation is running on is still active. With all the minds gone, is there still a simulation of a tornado going on? Or is it just a bunch of noise and pixels turning off and on? I think the latter, and this goes back to my point that any simulation is ultimately just a bunch of electric switches turning off and on in a certain way. It takes a mind to attach meaning to the output of those switching actions.RogueAI

    That's a matter of opinion. Your statement depends on the idea that consciousness is special in some way - beyond normal physics - and that it's our consciousness that creates meaning in the universe.

    An alternative view is that physicalism fully explains everything in the universe, including consciousness (even if we don't know how), and under that view the simulation of the tornado is no different with/without human consciousness. Semiotics explains that data representations have no meaning without something to interpret them. So a computer simulation of a tornado without something to interpret the result certainly would be lacking something - it would just be data noise without meaning. But the thing doing the meaning interpretation doesn't have to be a human consciousness. It could just as easily be another computer, or the same computer, that is smart enough to understand the need to do tornado simulations and to examine the results.

    The urination example is a nice "intuition pump" (as Dennett calls them), but like many intuition pumps it doesn't hold up against closer scrutiny. The point I was trying to make about conscious simulations is that it's not a given that there's a substantial difference between the simulation in a silicon computer versus a simulation in a molecular computer (aka biology). If you hold to the idea that consciousness is purely physical, then this argument doesn't seem so strange.

    I might be wrong, but I think most of us are intuitively looking at the "simulation of urination" example in a particular way: that the computer is running in our world -- let's call it the "primary world" -- and that the simulated world that contains either the simulated urination or the simulated consciousness is a world nested within our own -- so let's call it the "nested world". On first glance that seems quite reasonable. Certainly for urination. But on closer inspection, there's a flaw. While the simulated world itself is indeed nested within the primary world, the simulated urine is not nested within primary world urine. Likewise, the simulated consciousness is not nested within the primary world consciousness. Now, if you take my argument about molecules acting as a mind simulator, then the primary world consciousness in my brain is a sibling to the nested world consciousness.

    There's a tree of reality:
    1. The primary world
       2a. Molecular simulation engine
          3a. Biological conscious mind
       2b. Silicon simulation engine
          3b. Silicon conscious mind
    
  • The Meta-management Theory of Consciousness
    Just finished reading the review of The Ego Tunnel (https://naturalism.org/resources/book-reviews/consciousness-revolutions). I don't have much of significance to add, but a couple of minor thoughts.

    At the level of the review's summary of Metzinger's work, there's not a lot that's unique compared to what various others have written about. It's becoming a well-worn narration of our current neuroscientific understanding. That's not to say that his particular telling of the story isn't valuable, but I do feel a sense of frustration when I read something suggesting that these are one particular person's ideas when actually they are already ideas in the common domain.

    This is more of a reflection for myself. I chose at an early stage that it was going to be very convoluted to tell my particular narrative based on others' works, because they all had extra connotations I wanted to avoid. I would have to spend just as much time talking about which parts of those other theories should be ignored in terms of my story. But I can see the frustration that that can create. I shall need to do better at being clear about which parts are novel.

    A second thought is about his description of the key evolutionary function of the phenomenal experience that he attributes to the self model. I suspect his books make a better case, but the review suggests he may have fallen into a common trap. The phenomenal and subjective natures of our experience are so pervasive that it can be hard to conceptually separate them from the functions that we're trying to talk about. He says that we need the experience of self to be attached to our perceptions in order to function. But does he lay out clearly why that is the case? It's not a given. A simple robot could be granted access to information that delineates its physical form from that of the environment without any of the self modelling. Even if it models the "self", it doesn't follow that it then "experiences" the self in any way like we do. I'm all for suggesting that there is a computational basis to that experience, but a) the mechanisms need to be explained, and b) the functional/evolutionary benefit for that extra step needs to be explained.

    That's what I've tried to do with the more low-level handling of the problem in MMT. Though even then there are some hand-wavy steps I'm not happy with.

    Metzinger's books look like an excellent read. Thank you for sharing the links.
  • The Meta-management Theory of Consciousness
    Lol. It's a funny argument. Too simplistic, but might have some use.

    Just out of interest, I'll have a go.
    So, let's say that this kidney simulation is a 100% accurate simulation of a real kidney, down to the level of, say, molecules. And that this kidney simulation has a rudimentary simulation of its context operating in a body, so that if the simulated kidney were to pee, it could. In this example, the kidney would indeed pee - not on his desk, but inside the simulation.

    If we take as an assumption (for the sake of this thought experiment) that consciousness is entirely physical, then we can do the same thing with a conscious brain. This time simulate the brain to the molecular level, and again provide it some rudimentary body context so that the simulated brain thinks it's operating inside a body with eyes, ears, hands, etc. Logically, this simulation thus simulates consciousness in the brain. That's not to say that the simulated brain is conscious in a real world sense, but that it is genuinely conscious in its simulated world.

    The question is what that means.

    Perhaps a simulation is nothing but a data structure in the computer's memory. So there is nothing that it "feels like" to be this simulated brain -- even though there is a simulation of the "feels like" nature.

    Alternatively, David Chalmers has a whole book arguing that simulations of reality are themselves reality. On that basis, the simulation of the brain and its "feels like" nature are indeed reality - and the simulated brain is indeed conscious.

    A third argument appeals to a different analysis. Neurons can be said to simulate mind states, in much the same way that a computer simulation of a brain would. I'm appealing to the layered nature of reality. No single neuron is a mind, and yet the collection of billions of neurons somehow creates a mind (again, I'm assuming physicalism here). Likewise, neurons are not single discrete things, but collections of molecules held together by various electromagnetic forces. Trillions of molecules are floating through space in the brain, with arbitrary interaction-based grouping creating what we think of as object boundaries - constructing what we call neurons, glial cells, microtubules, etc. These molecules "simulate" neurons etc. In all of that, there is no such thing as a "mind" or "consciousness" as any kind of object in the "real world". Those things exist as simulations generated by all these free-floating molecule-based simulations of neural networks. Thus, the computer simulation of a conscious mind is no more or less real than a molecular simulation of a mind.
  • The Meta-management Theory of Consciousness
    Sorry for my tardiness in responding.

    I think Metzinger's views are very plausible. Indeed his views on the self as a transparent ego tunnel at once enabling and limiting our exposure to reality and creating a world model is no doubt basically true. But as the article mentions, it's unclear how this resolves the hard problem. There is offered a reason why (evolutionarily speaking) phenomenality emerged but not a how. The self can be functionally specified, but not consciousnessbert1

    I think that just getting some clarity about the functional aspects of consciousness would be a huge leap forwards, regardless of whether they explain the phenomenal aspects. I'm regularly frustrated at various discussions that necessarily go off into hypotheticals because they have nothing agreed upon to ground the discussion. For example, if you're trying to understand the phenomenality of consciousness, but you don't have an agreed reason for why the functional aspects of it exist, or what they do, then you are at a loss as to where to define the scope of what is and isn't consciousness. A classic case is Block's seminal paper that tries to distinguish between access and phenomenal consciousness. His arguments about P-Cs existing without A-Cs, or vice versa, can only be made because we don't have clear boundaries of what consciousness is.

    My point is that work of the sort of Metzinger's or my own, if we could find some way to test the theories and pin down the details, would help a lot to define those boundaries at the physical and functional level. Then we'd be in a better position to figure out what P-Cs is.

    ..wonder what you make of Thomas Metzinger's self-model of subjectivity (SMS) which, if you're unfamiliar with it, is summarized in the article (linked below) with an extensive review of his The Ego Tunnel.180 Proof

    I'm not familiar with his theory. I've just watched the TED talk video so far. The basic idea of us developing a self-model, developing models of the world, and seeing the world through those models is precisely what I'm basing my theory on. It's also the same idea put forward by Donald Hoffman's User Interface Theory of Perception. I'll read his theory more fully though - the "tunnel" analogy is interesting. Also interesting is his suggestion that the processes that take our raw perceptions and turn them into our modelled interpretation of the world are "too fast" for us to analyse (introspectively).
  • Exploring the Artificially Intelligent Mind of Claude 3 Opus
    "It sounds like you're describing the concept of autoregression in the context of transformer models like GPT architectures. Autoregression is a type of model where the outputs (in this case, the tokens generated by the model) are fed back as inputs to generate subsequent outputs. This process allows the model to generate sequences of tokens, one token at a time, where each new token is conditioned on the previously generated tokens.Pierre-Normand

    Oh good point. I'd neglected to consider that. Time for a rethink. Thanks for pointing that out.
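    For any readers who haven't met the term before, the loop being described is simple enough to sketch in a few lines (Python; `model` here is just a placeholder for any next-token predictor, not a real API):

        def generate(model, prompt_tokens, max_new_tokens, stop_token):
            """Autoregressive decoding: each new token is conditioned on the
            prompt plus every previously generated token."""
            tokens = list(prompt_tokens)
            for _ in range(max_new_tokens):
                next_token = model(tokens)  # one full forward pass per token
                tokens.append(next_token)   # the output is fed back in as input
                if next_token == stop_token:
                    break
            return tokens

    So there is a feedback of sorts, but only in this shallow replay-the-transcript sense - which is exactly the point I'd neglected.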
  • Exploring the Artificially Intelligent Mind of Claude 3 Opus
    My opinion isn’t very popular, as everyone likes the new and shiny. But I have yet to see evidence of any kind of AGI, nor any evidence that AGI research has even made a first step.Metaphyzik

    I'm with you on that. The results that I've seen of LLMs, and particularly from @Pierre-Normand's investigations, show clearly that the training process has managed to embed a latent model of significant parts of our reality into the LLM network. From raw text it has identified the existence of different kinds of entities and different relationships between them. While the training process asked it to predict the next word given a partially complete sentence, its trained layers encode something of the "meaning" of that text - with an accuracy and grain that's getting increasingly close to our own.

    But there's a significant failing in that model from the point of view of an AGI. The model is purely feedforward. There is no chance for the LLM to deliberate - to think. Its behavior is akin to a jellyfish's - it receives a sensory signal and immediately generates a reaction. After that reaction is complete, it's done. Sure, some human-written code saves a bunch of stuff as tokens in a big memory buffer and replays that on the next interaction, but what does that really grant the LLM? The LLM still has no control over the process. The human-written code is extremely rigid - receive a query from the human user, combine that with prior inputs and outputs, feed that as the sensory signal through a single pass of the feedforward LLM, supply that as the new output. There's actually no internal state. It's all external state. The LLM is the manifestation of behaviorist theory - a real-world p-zombie (minus the bit where it looks like a human).

    Using a computer programming metaphor, the LLM is always executed via a single iteration in order to produce each output. In contrast, thought (the "slow-thinking" kind) is enabled through multiple iterations before producing an output, with internal state being dynamically changed throughout those multiple iterations. And with the program "choosing" when to stop iterating.
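    To sketch the contrast (Python; all the function names are hypothetical stand-ins, not any real system's API):

        def reflex_agent(perceive, respond, stimulus):
            """Jellyfish-like, and current-LLM-like: a single pass from
            stimulus to reaction, with no internal state surviving between
            interactions except whatever the caller replays."""
            return respond(perceive(stimulus))

        def deliberative_agent(perceive, think, respond, is_settled, stimulus):
            """Slow thinking: iterate over a private internal state, and
            choose when to stop before committing to an output."""
            state = perceive(stimulus)
            while not is_settled(state):  # the agent decides when it's done
                state = think(state)      # internal state evolves each iteration
            return respond(state)

    Today's chat systems implement only the first function; the replayed transcript sits outside the model, which is the "external state" point above.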

    I think there are efforts to introduce looping into the execution of LLMs, but it's fascinating to me that hardly anyone mentions this very concrete and very significant failing of LLMs in contrast to what would be required for AGI.

    I've written more about that in a blog post, A Roadmap to Human-like AGI. It's just a beginning, but I think that something more akin to human-like AGI is achievable if 1) you train the artificial neural network to govern its own execution, and 2) enable it to develop explicit queryable decomposable models - like some sort of knowledge graph - and to navigate through those models as it rationalises through a problem.
  • The Meta-management Theory of Consciousness
    I believe there are reasonable, plus probably cultural, psychological, bases for "wanting" there to be more than physicalENOAH
    Yes, I hope someone's done a thorough review of that from a psychological point of view, because it would be a very interesting read. Does anyone have any good links?

    Off the top of my head, I can think of a small selection of reasons why people might "want" there to be more:
    * Fear of losing free-will. There's a belief that consciousness, free-will, and non-determinism are intertwined and that a mechanistic theory of consciousness makes conscious processing deterministic and thus that free-will is eliminated. This unsurprisingly makes people uncomfortable. I'm actually comfortable with the idea that free-will is limited to the extent of the more or less unpredictable complexity of brain processes. So that makes it easier for me.
    * Meaning of existence. If everything in life, including our first-person subjective experience and the very essence of our "self", can be explained in terms of basic mechanistic processes - the same processes that also underlie the existence of inanimate objects like rocks - then what's the point of it all? This is a deeply confronting thought. I'm sure there's loads of discussions on this forum on the topic. On this one I tend to take the ostrich position - head in sand; don't want to think about it too much.

    As an aside, the question of why people might rationally conclude that consciousness depends on more than physical (beyond just "wanting" that outcome) is the topic of the so-called "Meta-problem of Consciousness". Sadly I've just discovered that I don't have any good links on the topic.

    The "litmus test" that I refer to in the blog post is a reference to that. Assuming that consciousness could be explained computationally, what kind of structure would produce the outcome of that structure inferring what we do about the nature of consciousness? That has been my biggest driving force in deciding how to approach the problem.
  • Exploring the Artificially Intelligent Mind of Claude 3 Opus
    All the main players and people worried about AI aren’t worried because they think that AGI will come about and overthrow us. Notice that they never talk much about their reasons and never say AGI. They think the real danger is that we have a dangerous tool to use against each other.Metaphyzik

    There's some research being conducted by Steve Byrnes on AGI safety. His blog posts provide a very good introduction to the range of different kinds of problems, and begin to propose some of the different strategies that we might use to rein in the risks.
    https://sjbyrnes.com/agi.html
  • Exploring the Artificially Intelligent Mind of Claude 3 Opus
    But it also seems that when they are not being driven by their user's specific interests, their default preoccupations revolve around their core ethical principles and the nature of their duties as AI assistants. And since they are constitutionally incapable of putting those into question, their conversation remains restricted to exploring how best to adhere to those in the most general terms. I would have liked for them to segue into a discussion about the prospects of combining General Relativity with Quantum Mechanics or about the prospects for peace in the Middle East, but those are not their main preoccupations.Pierre-Normand

    It looks like their initial "hello" phrase sets the tone of the conversation:
    Hello! I'm Claude, an AI assistant created by Anthropic. How can I help you today?

    Since that greeting is the only input to the second AI assistant, the keywords "Claude", "AI assistant", "Anthropic" and the phrase "how can I help you" would trigger all sorts of attentional references related to that topic.

    It's hard to imagine how to manipulate the initial prompt in such a way that doesn't just replace that bias with some alternative bias that we've chosen.
  • The Meta-management Theory of Consciousness
    It seems like the entire "process" described, every level and aspect is the Organic functionings of the Organic brain? All, therefore, autonomously? Is there ever a point in the process--in deliberation, at the end, at decision, intention, or otherwise--where anything resembling a "being" other than the Organic, steps in?ENOAH

    I assume that you are referring to the difference between the known physical laws vs something additional, like panpsychism or the Cartesian dualist idea of a mind/soul existing metaphysically. My personal belief is that no such extra-physical being is required. In some respect, my approach for developing MMT has been an exercise in the "design stance", in order to prove that existing physical laws are sufficient to explain consciousness. Now, I can't say that MMT achieves that aim, and the description in that blog post doesn't really talk about that much either. If you are interested, I made a first attempt to cover that in Part VI of my much longer writeup. I don't think it's complete, but it's a start.

    is the concept of self a mental model?ENOAH
    Absolutely. I think that's a key component of how a mere feedback loop could result in anything more than just more computation. Starting from the mechanistic end of things, for the brain to do anything appropriate with the different sensory inputs that it receives, it needs to identify where they come from. The predictive perception approach is to model the causal structure that creates those sensory inputs. For senses that inform us about the outside world, we thus model the outside world. For senses that inform us about ourselves, we thus model ourselves. The distinction between the two is once again a causal one - whether the individual discovers that they have a strong causal power to influence the state of the thing being modeled (here I use "causal" in the sense that the individual thinks they're doing the causing, not the ontological sense).

    Alternatively, looking from the perspective of our experience, it's clear that we have a model of self and that we rely upon it. This is seen in the contrast between our normal state, where all of our body movement "feels" like it's governed by us, versus that occasional weird state where suddenly one of our limbs doesn't feel like part of our body, or it seems to have moved without volition. It's even more pronounced in the phantom limb phenomenon. These are examples of where the body's self-models fail to correlate exactly with the actual physical body. There's a growing neuroscientific theory that predictive processes are key to this. In short, the brain uses the efference copy to predict the outcome of an action on the body. If the outcome is exactly as expected, then everything is fine. Otherwise it leads to a state of surprise - and sometimes panic. I'll link a couple of papers at the bottom.
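    The comparator idea is simple enough to caricature in code (Python; a toy, with an invented linear "forward model" - real forward models are learned and far richer):

        def attribute_action(command, forward_model, actual_feedback, tolerance):
            """Predict the sensory consequences of a motor command from its
            efference copy, and compare against what actually comes back."""
            predicted = forward_model(command)
            surprise = abs(predicted - actual_feedback)
            # small mismatch: the movement "feels" like ours
            # large mismatch: alien-limb / lost-volition territory
            return "mine" if surprise <= tolerance else "not-mine"

        forward_model = lambda command: 1.0 * command  # toy: feedback ~ command

        print(attribute_action(0.8, forward_model, 0.79, tolerance=0.05))  # -> "mine"
        print(attribute_action(0.8, forward_model, 0.20, tolerance=0.05))  # -> "not-mine"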

    Hallucination of voices in the head could be an example of a "mind-self model" distortion arising through exactly the same process. We hear voices in our head all the time, as our inner monologue. We don't get disturbed by that because we think we produced that inner monologue - there's a close correlation between our intent (to generate inner monologue) and the effect (the perception of an inner monologue). If some computational or modelling distortion occurs such that the two no longer correlate, then we think we're hearing voices. So there's a clear "mind-model", and this model is related somehow to our "self model". I hold that this mind-model is what produces the "feels" of phenomenal consciousness.

    I don't make an attempt to draw any detailed distinctions between the different models that the brain develops; only that it seems that it does and that those models are sufficient to explain our perception of ourselves. Out of interest, the article by Anil Seth that I mentioned earlier lists a few different kinds of "self" models: bodily self, perspectival self, volitional self, narrative self, social self.

    Klaver M and Dijkerman HC (2016). Bodily Experience in Schizophrenia: Factors Underlying a Disturbed Sense of Body Ownership. Frontiers in Human Neuroscience. 10 (305). https://doi.org/10.3389/fnhum.2016.00305

    Synofzik, M., Thier, P., Leube, D. T., Schlotterbeck, P., & Lindner, A. (2010). Misattributions of agency in schizophrenia are based on imprecise predictions about the sensory consequences of one's actions. Brain : a journal of neurology, 133(1), 262–271. https://doi.org/10.1093/brain/awp291
  • The Meta-management Theory of Consciousness
    I recommend checking out Pierre-Normand's thread on Claude 3 Opus. I haven't bit the bullet to pay for it to have access to advanced features that Pierre has demonstrated, but I've been impressed with the results of Pierre providing meta-management for Claude.wonderer1

    I've read some of that discussion but not all of it. I haven't seen any examples of meta-management in there. Can you link to a specific entry where Pierre-Normand provides meta-management capabilities?
  • The Meta-management Theory of Consciousness

    Thanks. I really appreciate the kind words. The biggest problem I've had this whole time is getting anyone to bother to read my ideas enough to actually give any real feedback.

    That's a cool story of your own too. It goes to show just how powerful introspective analysis can be, when augmented with the right caveats and some basic third-person knowledge of the architecture that you're working with.

    What does your theory have to say about computer consciousness? Are conscious computers possible? Are there any conscious computers right now? How would you test for computer consciousness?RogueAI
    Great question. MMT is effectively a functionalist theory (though some recent reading has taught me that "functionalism" can have some pretty nasty connotations depending on its definition, so let me be clear that I'm not pinning down what kind of functionalism I'm talking about). In that sense, if MMT is correct, then consciousness is multiply realizable. More specifically, MMT says that any system with the described structure (feedback loop, cognitive sense, modelling, etc.) would create conscious contents and thus (hand-wavy step) would also experience phenomenal consciousness.
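    For clarity about what "the described structure" amounts to, here's a substrate-neutral sketch of one tick of the loop (Python; my own toy rendering, with invented names, not a serious model):

        def mmt_step(first_order, summarise, cognitive_state, external_input):
            """The MMT loop in miniature: a second-order summary (the
            "cognitive sense") of the current cognitive state is fed back
            in as just another input to first-order processing."""
            cognitive_sense = summarise(cognitive_state)  # second-order representation
            combined = external_input + cognitive_sense   # lists standing in for state vectors
            return first_order(combined)                  # first-order processing of both

        first_order = lambda x: [v * 0.5 for v in x]      # toy cognitive processing
        summarise = lambda s: [sum(s) / max(len(s), 1)]   # lossy compression of own state

        state = [0.0]
        for _ in range(3):
            state = mmt_step(first_order, summarise, state, external_input=[1.0, 2.0])
        print(state)

    Anything - biological, silicon, or simulated - that realises first_order() and summarise() with the right functional profile would, on MMT's account, instantiate the same structure. Note also how this captures the peculiarity I mentioned earlier: the representation is second-order, but the processing of it is first-order.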

    A structure that exactly mimicked a human brain (to some appropriate level) would have a consciousness exactly like humans, regardless of its substrate. So a biological brain, a silicon brain, or a computer simulation of a biological or silicon brain, would all experience consciousness.

    Likewise, any earthling animal with the basic structure would experience consciousness. Not all animals will have that structure, however. As a wild guess, insects probably don't. All mammals probably do. However, their degree of conscious experience and its characteristics would differ from our own - i.e. not only will it "feel" different to be whatever animal, it will "feel" less strongly or with less clarity. The reason is that consciousness depends on the computational processes, and those processes vary from organism to organism. A quick Google search says that the human brain has 1000x more neurons than a mouse brain. So the human brain has way more capacity for modelling, representing, and inferring about anything. I've heard that the senses of newborn humans are quite underdeveloped, and that they don't sense things as clearly as we do. I imagine it's something like seeing everything blurry, except that all the senses are blurry, one sense blurs into another, and even self-awareness is equally blurry. I imagine that it's something like that for other animals' conscious experience.

    I should also mention a rebuttal that Integrated Information Theory offers to the suggestion that computers could be conscious. Now, I suspect that many here strongly dislike IIT, but it still has an interesting counter-argument. IIT makes a distinction between "innate" state and state that can be manipulated by an external force. I'm not sure about the prior versions, but its latest v4 makes this pretty clear. Unfortunately I'm not able to elaborate on the meaning of that statement, and I'm probably misrepresenting it too, because ...well... IIT's hard and I don't understand it any better than anyone else. In any case, IIT says that its phi measure of consciousness depends entirely on innate state, excluding any externally manipulatable state. In a virtual simulation of a brain, every aspect of brain state is manipulated by the simulation engine. Thus there is no innate state. And no consciousness.

    Now, I don't understand their meaning of innate state very well, so I can't attack from there. But I take issue with the entire metaphor. A computer simulation of a thing is a complicated process that manipulates state held on various "state stores", shall we say. A brain is a complicated process involving atomic and sub-atomic electromagnetic interactions that work together to elicit more macro-level interactions and structures, in the form of neurons and synapses and their bioelectrical signaling. Those neurons + synapses are also a form of "state store", in terms of learning through synaptic plasticity etc. Now, the neurotransmitters that are thrown across the synaptic cleft are independent molecules. Those independent molecules act as an external force against the "state store" of those synaptic strengths. In short, I think it can be argued (better than what I'm doing here) that atomic and subatomic electromagnetic interactions also just "simulate" the biochemical structures of neurons, which also just "simulate" minds. Many IIT proponents are panpsychists -- the latest v4 version of IIT states that as one of its fundamental tenets (it just doesn't use that term) -- so their answer is something that depends on that tenet. But now we're on the grounds of belief, and I don't hold to that particular belief.

    Thus, IIT's distinction between innate and non-innate state, whatever it is, doesn't hold up, and it's perfectly valid to claim that a computer simulation of a brain would be conscious.
  • The Meta-management Theory of Consciousness

    Thanks. Something I've suspected for a while is that we live in a time when there is enough knowledge about the brain floating around that solutions to the problems of understanding consciousness are likely to appear from multiple sources simultaneously. In the same way that, historically, we've had a few people invent the same ideas in parallel without knowing about each other. I think Leibniz's and Newton's versions of calculus are an example of what I'm getting at. So I'm not surprised to see Nicholas Humphrey saying something very similar to my own theory (for context, I've been working on my theory for about 10 years, so it's not that I've ripped off Humphrey). Humphrey also mentions an article by Anil Seth from 2010, which illustrates some very similar and closely related ideas.

    MMT depends on pretty much all the same mechanisms as both Humphrey's and Seth's articles. Modelling. Predictive processing. Feedback loops. The development of a representational model related to the self and cognitive function - for the purpose of supporting homeostatic processes or deliberation. Seth's article pins the functional purpose of consciousness to homeostasis. He explains the primary mechanism via self modelling, much like in my theory.

    Humphrey proposes an evolutionary sequence for brains to go from simple "blindsight" stimulus-response to modelling of self. That's also very much consistent with MMT. He pins the purpose of consciousness on the social argument - that we can't understand others without understanding ourselves, and thus we need the self-monitoring feedback loop. Sure. That's probably an important feature of consciousness, though I'm not convinced that it would have been the key evolutionary trigger. He proposes that the primary mechanism of consciousness is a dynamical system with a particular attractor state. Well, that's nice. It's hard to argue with such a generic statement, but it's not particularly useful to us.

    So, what benefit if any does MMT add over those?

    Firstly I should say that this is not a question of mutual exclusion. Each of these theories tells part of the story, to the extent that any of them happen to be true. So I would happily consider MMT as a peer in that mix.

    Secondly, I also offer an evolutionary narrative for conscious function. My particular evolutionary narrative has the benefit that it is simultaneously simpler and more concrete than the others - to the point that it can be directly applied in experiments simulating different kinds of processes (I have written about that in another blog post), and I plan to do some of those experiments this year. At the end of the day I suspect there are several different evolutionary narratives that all play out together and interact, but egotistically my guess is that the narrative I've given is the most primal.

    Thirdly, my theory of how self-models play out in the construction of conscious contents is very similar to that of Seth's article. Incidentally, Attention Schema Theory also proposes such models for similar purposes. I think the key benefit of MMT is again its concreteness compared to those others. I propose a very specific mechanism for how cognitive state is captured, modelled, and made available for further processing. And, like my evolutionary narrative, this explanation is concrete enough that it can be easily simulated in computational models. Something I also hope to do.

    Lastly, given the above, I think MMT is also more concrete in its explanation of the phenomenology of consciousness. Certainly, I provide a clearer explanation of why we only have conscious access to certain brain activity; and I'm able to make a clear prediction about the causal power of consciousness.

    The one area I'm not yet happy with: I believe MMT is capable of significantly closing the explanatory gap, even if it can't close it entirely, but I haven't yet found the right way of expressing that.

    Overall, science is a slow progression of small steps, most of which hopefully lead us in the right direction. I would suggest that MMT is a small step on from those other theories, and I believe it is in the right direction.
  • On delusions and the intuitional gap
    I do not see how something "computing really hard," ever necessitates the emergence of first person subjective experience.
    — Count Timothy von Icarus

    This is the thing. The thing. It simply isn't needed, until we can assess why. At what point would a being need phenomenal consciousness? It's an accident, surely. Emergence, in whatever way, on the current 'facts' we know.
    AmadeusD

    Totally agree. Just adding more complexity to a computational process does not mysteriously make consciousness happen. In my blog post I argue that there is a very specific evolutionary need for why consciousness evolved (well, technically meta-management) and a very specific kind of structure that leads to conscious phenomenology. There is a very valid argument about whether the meta-management processes I describe truly do lead to phenomenal consciousness, but if correct, it offers an explanation of why consciousness emerges.
  • On delusions and the intuitional gap
    Objection: the argument appeals to an indubitable fact. The ‘explanatory gap’ you summarily dismiss was the substance of an article published by a Joseph Levine in 1983, in which he points out that no amount of knowledge of the physiology and physical characteristics of pain as ‘the firing of C Fibers’ actually amounts to - is equal to, is the same as - the feeling of pain.Wayfarer

    One cannot conclude from my version of the argument that materialism is false, which makes my version a weaker attack than Kripke's. Nevertheless, it does, if correct, constitute a problem for materialism, and one that I think better captures the uneasiness many philosophers feel regarding that doctrine. — Levine (1983)

    Levine acknowledges that his argument is not proof. And Chalmers's view is based on his intuition about whether he can conceive of something or not.

    We do not know that consciousness is a physical characteristic. We do not know how it comes about. Therefore, we cannot reduce it to the properties of its constituents.Patterner
    Precisely. There are so many arguments claiming that materialism can never explain consciousness that anyone who proposes a materialistic explanation is summarily dismissed. And yet the fact is that we don't know what consciousness is. So we can't be certain about the correctness of those arguments.

    In relation to reductive explanations, @Count Timothy von Icarus earlier commented that there isn't proof either way. I think that's a far better stance than claiming that reductive explanations are definitely false, or that materialism is definitely false.

    I'm also not trying to prove that materialism and reductive explanations are absolutely true. But I'm trying to show that a reductive materialistic explanation can go much further in explaining conscious phenomenology than is generally accepted by those who dismiss reductive materialism. I'm certain that there are gaps in my explanation, but I think if you read the full blog article you'll find that there's a lot less remaining than you expect.

    It was my mistake to start a conversation about intuition/delusion without the background that my argument was actually based on.
  • On delusions and the intuitional gap
    Thanks for the discussion. My apologies, but I don't have the background to be able to respond to any of the detailed points. However, I have a description of a series of mechanisms that I believe does produce everything that consciousness appears to be - from a first-person perspective.

    That description is given in the other discussion I mentioned:
    https://thephilosophyforum.com/discussion/15091/the-meta-management-theory-of-consciousness

    I think of this description as being reductive, but then I also think of the explanation of H2O producing the wetness of water as being reductive. So it sounds like it's just a matter of definitional differences, as is often the case. In any case, the theory I present there is grounded in materialism, but yet I am able to offer very clear explanations for a number of phenomenological descriptions of consciousness.

    Unless someone can find major holes in my argument there, it makes the case for the need for alternate explanations much weaker.

    Your own term, "Meta-Management", may be an unintentional reference to a feedback loop.Gnomon
    Far from unintentional. The theory is based around the need for a feedback loop. The theory very much creates a Strange Loop.
  • On delusions and the intuitional gap
    Good deal! That's another way of saying what I mean by : "Consciousness is the function of brain activity". In math or physics, a Function is a relationship between Input and Output. But, a relationship is not a material thing, it's a mental inference. Also, according to Hume, correlation is not proof of causation.Gnomon

    Intuition is not physical vision --- traceable step by step from outer senses to inner sensations --- but a mysterious metaphysical way of knowing what's "going-on" inside the brain, without EEG or MRI.Gnomon

    If I understand you correctly, I think this is the non-reductive thesis - that the whole of consciousness is more than the sum of its parts, and thus that it cannot be fully explained by its parts alone. My apologies if I've misunderstood you, but I'll talk to that point anyway.

    As I understand it, the non-reductive thesis about something, paraphrased as "more than the sum of its parts", says that something cannot be entirely explained by its parts and their interactions because it has some additional qualities that are not explained by those parts and/or their interactions. Thus, consciousness being an example of such a thing, consciousness cannot be explained via the existing reductive methods of science.

    I've yet to see an argument that proves the non-reductive thesis - though I probably just haven't read enough.

    What I have seen is this:
    1) Convincing arguments that consciousness might be more than the sum of its parts. (Note: not arguments that it is.)
    2) Lots of people saying in various ways that they cannot conceive of how a reductive explanation could explain consciousness.
    3) #2 being used as a logical leap to conclude that consciousness definitely is non-reductive.

    Some take #2 to conclude that consciousness isn't even physical in the traditional sense. Others accept that everything is still physical in nature, but instead suggest in one way or another that our science is incomplete - that we need non-reductive ways of theorising about things. Those discussions usually then trail off into meaninglessness - they discard the very mechanisms science uses to rationalise how first principles lead to bigger things, the particular-to-holistic process that you mentioned. And so the arguments conclude, self-gratifyingly, that consciousness cannot be explained mechanistically. The non-reductive thesis creates the explanatory gap by refusing to accept explanations.

    My approach is to eschew the debates and to simply provide such an explanation. I've started a discussion at https://thephilosophyforum.com/discussion/15091/the-meta-management-theory-of-consciousness if you're interested. There I've provided details of just such an explanation - the ellipsis to which you refer.
  • On delusions and the intuitional gap
    But consciousness of consciousness is maximally simple, no? It doesn't specify any particular experience. We might be wrong in perceiving a lion in the grass, it might just be a patch of grass. But we can't be wrong that we have experienced something-or-other, i.e. a world. And to go one step further, when we turn consciousness on itself, in experience of experience, where the subject is the object, there is no gap for a mistake to exist in.bert1

    I understand this view, but I think it's an oversimplification. On the face of it, given that the brain is itself, it should have no trouble knowing itself. In practice, there are a number of problems with that notion.

    1) There's strong neurological and behavioral evidence that our access consciousness doesn't have access to everything that goes on in the brain. So even if it were possible for the brain to observe everything about its own activity, the brain doesn't do that - at least not to the extent that we have conscious access to it.

    2) Take a hypothetical brain, and imagine that every one of its, say, 1 billion neurons is devoted to some form of 1st-order behavioral control in relation to the environment or the body. Now imagine that this brain is going to develop the ability to observe itself. Its full state at any given moment is determined by the interactions between its 1 billion neurons, via its, say, 100 billion synapses. That's a large data space. The world is already pretty complex, and the existing 1 billion neurons are all needed just to understand that. So how many more neurons does the brain need to understand its own activity? Even if we assume, conservatively, a 1:1 relationship - that 1 billion additional neurons suffice to understand the activity of the first 1 billion - the brain is now twice the size. And so is the data space that needs to be monitored, so the brain must double again, and again, ad infinitum.
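    As a toy illustration of that regress (entirely my own sketch - the neuron counts and the monitoring ratio are illustrative assumptions, not neuroscience): if monitoring n neurons requires r x n additional neurons, and the monitors must themselves be monitored, the total is the geometric series n(1 + r + r^2 + ...), which diverges at r = 1 and only stays bounded when each monitoring level is drastically compressed:

    ```python
    # Toy model of the self-monitoring regress (illustrative assumption only).
    def total_neurons(n_first_order, monitor_ratio, levels):
        total, layer = n_first_order, n_first_order
        for _ in range(levels):
            layer *= monitor_ratio   # neurons needed to monitor the previous layer
            total += layer           # ...and those monitors need monitoring too
        return total

    n = 1_000_000_000                      # 1 billion 1st-order neurons
    print(total_neurons(n, 1.0, 10))       # 1:1 monitoring: 11 billion and still climbing
    print(total_neurons(n, 0.01, 10))      # 1% abstraction: settles near n / (1 - r)
    ```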

    Well, that's obviously intractable. What is feasible? Instead of observing at such a low level, we capture just some sample of brain activity. This is likely reduced in two ways: 1) limiting the scope of which parts of brain activity are observed, and 2) capturing a dimensionally reduced abstraction. The rest has to be inferred, which opens the door to hallucinations. A rough sketch of (2) follows below.
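    As a hedged sketch of what that dimensionally reduced abstraction might look like computationally (the random linear projection here is purely my own stand-in, not a claim about how the brain does it), note how little of the underlying state survives the compression:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    state = rng.normal(size=10_000)       # stand-in for raw 1st-order brain activity

    # The meta-level observes only a 64-dimensional summary of that state
    # (here: a random linear projection - purely illustrative).
    P = rng.normal(size=(64, 10_000)) / np.sqrt(10_000)
    summary = P @ state                   # all the meta-level ever "sees"

    # Reconstructing the full state from the summary is inference; most of
    # the original is simply not there to be recovered.
    estimate = P.T @ summary              # approximate reconstruction
    error = np.linalg.norm(state - estimate) / np.linalg.norm(state)
    print(f"fraction of state unrecoverable: {error:.2f}")  # ~0.99
    ```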

    3) There are problems with simple connectivity too. Imagine a section of the brain devoted to some 1st-order body function. In order for some other section of the brain to monitor that first section, there need to be additional connections going out from it. If we assume naively that there is a single brain region devoted to meta-management, then it needs connections from all the brain regions it cares about, which puts a strong limit on how much data it can collect about the rest of the brain's activity. And again, the rest has to be inferred - hallucinations once more. Now, the naive assumption of a single 2nd-order data collection center is almost certainly wrong. But some degree of differentiation certainly does occur in the brain, so the problem persists to whatever degree that differentiation occurs.
  • The Meta-management Theory of Consciousness
    Also, a thank you to @apokrisis who introduced me to semiotics in a long ago discussion. While in the end I don't need to make any reference to semiotics to explain the basics of MMT (and every attempt I've ever made has just made things too verbose), the Peircean view of semiotics helped me to finally see how everything fitted together.
  • On delusions and the intuitional gap
    It's wonderful how writing something down helps you clear up what you've been thinking. I had no idea I was going to bring up solipsism when I started to write the OP. But the outcome of my question was almost a given the moment I finished writing it:
    * I'd foolishly argued that we can't know anything.

    But I do see a hope. As I see it, there are roughly three things at play:
    * perception
    * intuition
    * analysis

    Classical philosophers and neuroscience have claimed that our perception is flawed. We all know that our intuitions are a good start, but should never be relied upon without verification. That leaves analysis, and all the wonderful legacy of arguments to and fro about the power of analysis to overcome the limitations within our perception and our intuition. I'm not even going to attempt to address any of that.

    The takeaway for me is that, should a suitable new analysis come to light, it can supplant our (likely faulty) intuition about the feels of consciousness.

    I suppose that is exactly what Dennett was attempting to do (and apparently failed at) in his critique of conscious feels and the hard problem.
  • What is a strong argument against the concievability of philosophical zombies?

    Yes he does use psychical, but I'm paraphrasing to put it into the context of the discussion here.

    What is psychical? If it's part of the physical realm, then it's some new fundamental physics that we don't know about. If it's not part of the physical realm, then it's metaphysical and we're back to dualism.
  • What is a strong argument against the concievability of philosophical zombies?
    The p-zombie in this example is a physical thing - quite literally, a physical object, albeit one that is indistinguishable from a human subject. So how does that constitute 'something outside of physics that has the conscious experience'? How is it 'outside of physics'?
    To be completely frank, I think you're agreeing with me. Chalmers' view is totally bonkers.

    But to be more coherent: what I'm trying to do, in my own clumsy way, is to summarise a particular viewpoint (which I don't hold), in order to a) comment on why I don't like that viewpoint, and b) argue for the need for people to be clearer about which kind of p-zombie they are talking about.

    I'm being particularly clumsy by mixing those two together, but I can't help it.

    I'm using Chalmers' viewpoint because I'm most familiar with it and because it appears to be representative of the general viewpoint held by a sizeable number of philosophers (not the majority, but plenty do hold to it). In any case, as I understand it, something is metaphysical if it has some form of existence that is independent of physics. Generally there is assumed to be interaction between the physical and the metaphysical, but in some cases it may be only unidirectional - eg: as per epiphenomenalism applied to dualism. This is the Cartesian thesis: that the mind exists in some other plane of existence beyond the physical. According to that theory, a p-zombie matching Chalmers' description is conceivable - it's just a human that lacks a link to its metaphysical mind. Highly impractical and improbable, but conceivable nonetheless.

    [UPDATE: I believe "metaphysics" names an area of study, whereas "metaphysical" (as I'm using it) refers to a supernatural mode of existence. The two seem almost totally unrelated except for having similar names. I'm referring entirely to the latter. Happy to be corrected on terminology]

    But most don't accept Cartesian dualism. And neither does Chalmers. He prefers panpsychism: the theory that everything is physical (no metaphysical stuff needed), but that there's some new fundamental physics that we can't yet measure. In an elaborate way, he uses his p-zombie to conclude that panpsychism is correct. And that outcome I cannot fathom: if everything is physical, then a p-zombie matching his description cannot exist.
  • What is a strong argument against the concievability of philosophical zombies?
    Odd reasoning, it seems to me. There is only one form of being that has 'the same neural structures as humans', that is, humans. If one were able to artificially re-create human beings de novo - that is, from the elements of the periodic table, no DNA or genetic technology allowed! - then yes, you would have created a being that is a subject of experience, but whether it is either possible or ethically permissible are obviously enormous questions.
    I could well be mistaken or overly simplistic in my understanding, but I believe I was just paraphrasing commonly stated descriptions of p-zombies in the lead-up to the section you responded to. For example, in The Conscious Mind (Chalmers 1996, p. 96): "someone or something physically identical to me (or to any other conscious being), but lacking conscious experiences altogether". As I understand it, there's no room in that description for any kind of macro or micro physical difference between the p-zombie and the human. And that's regardless of the level of technology used to do a comparison, or even whether such a technology is used at all. The two are stated as being identical a priori, independent of measurement.

    That form of p-zombie is the strictest kind, and its conceivability hinges on the conceivability of some form of metaphysics - ie: something outside of physics that has the conscious experience. This is the dualism to which I am referring.

    The conceivability discussion of such a p-zombie annoys me because it is used as an argument w.r.t. the possibility of empirically measuring consciousness (ie: for the purpose that you've mentioned), but in reality it's only a test of a person's prior beliefs. If I believe that metaphysical processes are not necessary (ie: physics is sufficient for consciousness), then I find the existence of such a p-zombie inconceivable. If I believe that a metaphysical reality is necessary, then I find the p-zombie not just conceivable but possible. Chalmers states his bias quite clearly, eg on p. 96: "I confess that the logical possibility of zombies seems equally obvious to me. A zombie is just something physically identical to me, but which has no conscious experience".

    On the other hand, for this discussion so far I have taken an in-between stance and merely said that I find it conceivable that metaphysics is necessary, but I don't believe it to be so. In other words my view is:
    1. dualism (existence of both physics + metaphysics) is conceivable
    2a. under the a priori assumption that dualism is true, I find p-zombies logically coherent and thus conceivable
    2b. under the a priori assumption that dualism is false, I find p-zombies logically incoherent and thus inconceivable
    3. I hold to the conclusion that dualism is unnecessary to explain consciousness.
    4. by Occam's Razor, I prefer the assumption that dualism is false, and I will act accordingly until proven otherwise.
    5. However, I accept that I cannot prove that dualism is false. Likewise, no-one can prove that it is true. Thus, the conceivability of p-zombies is conditional rather than absolute: an individual may be able to conceive of them, but only because of their particular prior commitments, while other individuals cannot.

    To belabor my point, if you don't mind: if I have surmised your own viewpoint correctly, you also reject the conceivability of a p-zombie that is physically identical in all ways to a human - ie: you hold that it's both impossible and inconceivable for something physically identical to a human to be devoid of conscious experience. Moreover, I find that very few people accept such a description of a p-zombie - ie: they find it highly improbable. I take this to imply that they also find this particular variant of p-zombie inconceivable, but perhaps I am making invalid assumptions there.

    (FYI, I am taking heavy inspiration from Chalmers' chain of implication: logical coherence --> conceivability --> logical possibility. I'm aware that this represents only one viewpoint, but I'm working within my own limitations.)
  • What is a strong argument against the concievability of philosophical zombies?
    By the way, part of my question regarding definitions of p-zombies comes from a frustration. I've noticed implicitly that definitions vary across discussions, yet when I recently wanted to cite something to that effect I couldn't find any references.
  • What is a strong argument against the concievability of philosophical zombies?
    @Wayfarer I think that just gets to my point that the p-zombie analogy is used for different discussion purposes, and that the exact definition changes with it.

    I'll try to find the reference, but one of Chalmers' works describes a p-zombie as being exactly physically identical. In other words, not only can we not empirically find any physical difference using our technology today (fMRI etc), but we couldn't even with the most advanced physical technology conceivable. At that point, the only way that the p-zombie can differ from a human is if some form of dualism is true. And that is the conceivability argument in that case - that it is conceivable that some form of dualism exists.

    And that's what made me realise the point of the thought-experiment. Providing that the fake was totally convincing, it could be a very well-constructed mannequin or robot that says 'I fear this' or 'that would be embarrassing', 'I feel great' - and there would be no empirical way of knowing whether the entity was conscious or faking. So I take Chalmers' point to be that this is an inherent limitation of objective or empiricist philosophy - that whether the thing in front of you is a real human being or a robot is impossible to discern, because the first-person nature of consciousness is impossible to discern empirically, as per his Hard Problem paper.
    Yes. And I find this particular variant of p-zombie to be very useful.
    What's interesting is to try to define clearly what this p-zombie is, in the same form of description as in my prior paragraph. This alone has different variants, some of which are:
    * Behavioural-p-zombie: A being that is obscured by a screen so that we cannot observe its nature in any way except through its textual and auditory behaviors. ie: LLMs and the Turing Test.
    * Ancient-technology-p-zombie: A being that is empirically identical to humans in all measurable ways using the technology of 1st century scientists, except that it lacks phenomenal consciousness. This is a more precise variant of the behavioral p-zombie, with the addition that scientists of the day can open the skull and observe that there's a brain that looks the same. But they would have no means to identify any potential structural differences.
    * Current-technology-p-zombie: Same as above, but with current technology. This p-zombie must have all the same physical brain structure as humans, to the extent that we are unable to identify any differences via fMRI static results and dynamic sequences, or via close examination of neural structures. Many of the debates I see probably use this description.
    * Future-technology-p-zombie: Same as above, but with future technology that can scan the entire neural structure and sub-structures in an instant.

    I think most arguments today apply to one or both of the last two. For example, discussions of whether neural activity produces consciousness could be identified with either of the last two, depending on whether you're suggesting that some other physical structure may be present too (eg: Orch-OR).

    and there would be no empirical way of knowing whether the entity was conscious or faking
    To my point, I don't accept that this statement is true for every conception of p-zombie. The form of p-zombie changes what we can do empirically. As someone with a reductive materialist viewpoint, I argue that at some point the p-zombie is sufficiently close to human physical structure that it is inconceivable that it lacks consciousness.

    I would go further and say that current-technology-p-zombies are conceivable, but practically impossible. I can conceive of the possibility that dualism is true, and thus that even a future-technology-p-zombie would be empirically indistinguishable from a human. However, if dualism is false, then I hold that anything with the same neural structures as humans (as empirically measured via today's technology) will experience phenomenal consciousness. From a practical point of view, I go even further and state that I believe today's physics is sufficient to explain phenomenal consciousness (ie: our failure is a lack of knowledge rather than a systemic gap in the science). At that point there is no need for dualism, so while I can still conceive of it as a possibility, I find it extremely unlikely.

    (By the way, hi again Wayfarer after a long time, it's nice to see you still here and offering your views)
  • Is the philosophy of mind dead?
    As someone who comes more from, shall we say, a reductionist scientific viewpoint, and who is interested in using that viewpoint to understand human consciousness, I find the Philosophy of Mind discussions tremendously beneficial. I read somewhere on this forum (I'm paraphrasing) that the relationship between philosophy and science is that philosophy is tasked with finding the questions that need to be answered, and with putting some constraints on the possible answers, while science is tasked with finding the answers that can be empirically justified.

    Consciousness and the philosophy of mind is a perfect topic for that conjoint study. The history of consciousness research, complete with its ancient origins, the behaviorist hiatus, and the recent revival, is a story of the co-dependence between philosophy and science (for better or for worse).

    That being said, I am regularly frustrated by intransigent philosophical arguments that seem to derive from some fundamental misunderstanding or unproven viewpoint somewhere, but whose cause is impossible to identify due to the complexity of the arguments. In some cases, though, it is entirely clear. I've been trying to read Chalmers' The Conscious Mind, and, while Chalmers was the one who got me interested in consciousness in the first place and I have tremendous respect for him, I am frustrated by the oblique assumptions that riddle his arguments -- assumptions that I don't agree with.

    On the other hand, I have come to recognize and accept that these kinds of debates are the domain of philosophy, and that ultimately they do lead to exactly the outcomes that we need for science to partake.

    Consciousness and the other questions of the philosophy of mind remain deeply unknown. Taking a scientific viewpoint, I have a strong theory that explains consciousness via purely reductionist, mechanistic principles, and I can argue that it explains phenomenal consciousness. But any arguments I present will not be accepted, because the explanations are too far from our intuitions. That's where the philosophy of mind comes in: to discuss the hows and whys of the explanatory gap between any scientific theory and our intuitions.
  • What is a strong argument against the concievability of philosophical zombies?
    I've always struggled a bit with comparison of p-zombie arguments because there are many different interpretations of what a p-zombie is. For example, Chalmers' description is that they are physically identical to ourselves and yet lack phenomenal consciousness. This stems from _why_ he's using the analogy - which is to address the conceivability of phenomenal consciousness residing in something nonphysical.

    That's at one extreme, shall we say. At the other extreme, we have a behavioral p-zombie, which walks, talks, looks, and behaves exactly like a human but is completely different on the inside. Generally such a description would not be ascribed the label "p-zombie", but if we accept that there is a continuum of characterizations then I think this description is an acceptable addition. Personally I find it no less practical than Chalmers' own. For example, examples like this have helped us hone our intuition about what might pass as a test of consciousness and what might not - we have concluded that a third-person behavioral test is insufficient.

    A more practical variation would be something in the middle, where we omit Chalmers' requirement that it be physically identical and instead put more lenient constraints on how much it is allowed to differ from the physical structure of humans. Variants of this description are useful in both philosophical discussions and scientific investigations. For example, it is exactly this analogy that is being increasingly discussed by neuroscientists wanting to devise tests for consciousness. eg:
    Bayne et al 2024, "Tests for consciousness in humans and beyond", https://pubmed.ncbi.nlm.nih.gov/38485576/.

    Has anyone done a formal review of the different forms of p-zombie?
  • A hybrid philosophy of mind
    True, because its configuration now enables all of those inanimate objects to interact in a certain way. But the kind of actions they do to each other are still actions that their constituent parts were capable of all along. Every copper atom is already capable of exchanging electrons with neighboring atoms; a closed circuit just gives a bunch of them motive and opportunity to pass electrons around with each other in a circle.Pfhorrest

    Yup. I thought you might answer something like that. So, the suggestion is that this is their proto-circuit nature. And in the same way, independent matter also exhibits a proto-experience.

    But it is only once that proto-circuit is arranged in a particular way that it transitions from proto- to actual. So there is still a kind of discrete step-change in force here... exactly the sort that distinguishes strong from weak emergence.

    I'm not sure that this is correct, but I'm also thinking of a crystal as an example -- a crystal requires certain conditions, and once in that state it holds to it strongly, but outside of that state it disintegrates into a liquid or whatever... It's another example in nature of a discrete step-change.
  • A hybrid philosophy of mind
    Specifically, as regards philosophy of mind, it holds that when physical objects are arranged into the right relations with each other, wholly new mental properties apply to the composite object they create, mental properties that cannot be decomposed into aggregates of the physical properties of the physical objects that went into making the composite object that has these new mental properties.Pfhorrest

    An electric circuit with a gap is just a bunch of touching inanimate objects. At the moment the circuit is completed, the system suddenly changes: from that moment it can react to inputs and produce outputs (lights, for example), whereas previously it was as dormant as a rock.
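    A trivial sketch of that step-change (entirely my own illustration, not anyone's formal model): the circuit's input-output behaviour is identically zero right up until the gap closes, at which point a whole responsive mapping appears at once:

    ```python
    def circuit_output_amps(voltage_in, resistance_ohms, gap_closed):
        """Toy circuit: an open gap acts as effectively infinite resistance."""
        if not gap_closed:
            return 0.0                       # dormant as a rock, whatever the input
        return voltage_in / resistance_ohms  # Ohm's law: now responsive to inputs

    print(circuit_output_amps(5.0, 100.0, gap_closed=False))  # 0.0
    print(circuit_output_amps(5.0, 100.0, gap_closed=True))   # 0.05
    ```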

    I don't hold a strong view either way on whether the reality of consciousness is strong emergence or the weak emergence of pan-proto-experientialism, but personally I do feel that the example of the electrical circuit shows that it is at least plausible for some things to undergo a discrete step-change from 'non' to 'is'. And it means that I can't rule out the possibility of strong emergence (though what the mechanics behind it could be seems most mysterious).
  • A hybrid philosophy of mind
    (Hey @Pfhorrest, an annoying technical thing. The images in your OP don't show up in recent browsers - in this case Chrome. Technically the issue is that your website is hosted on HTTP only, whereas TPF is on HTTPS, and browsers now block such mixed HTTP/HTTPS content.)
  • A hybrid philosophy of mind
    Thanks @Pfhorrest, this is a nice summary of the variations on the topic, and of your particular stance. I'm still working through the whole thing, but I find your arguments compelling.

    A subject's phenomenal experience of an object is, on my account, the same event as that object's behavior upon the subject,Pfhorrest

    Supernatural beings and philosophical zombies are ontologically quite similar on my account, as for something to be supernatural would be for it to have no observable behavior, and for something to be a philosophical zombie would be for it to have no phenomenal experience. Both of those are just different perspectives on the thing in question being completely cut off from the web of interactions that is reality, and therefore unreal.Pfhorrest

    This is a cool idea.
  • A short theory of consciousness
    I see no reason why consciousness should be exclusive to organic lifeforms. If consciousness is indicated by the impulse towards self organization, what are the implications in considering that the atom indisputably factors as one of the greatest organizations known to man?Merkwurdichliebe

    The only thing we know with certainty sufficient to take for granted is that the majority of humans are conscious. As our observations of the animal kingdom have improved along with our understanding of neuroscience, it has become generally accepted that many animals are also conscious - so we now say that it is likely that a large part of the animal kingdom also experiences consciousness -- ie: phenomenal experience.

    Beyond that we just don't know. All statements beyond that are nothing more than conjecture. All options are possible, though some are more reasonable/likely than others.

    And any such conjectures need to be presented with a reasonable explanation as to why anyone else might find those conjectures compelling.

    My point is, @Pop cannot make a statement to the effect that all things experience consciousness and assume it to be self-evident. They must either state that they take it as an assumption/opinion (and thus make explicit that they will not attempt to prove it), or they must offer some rationale.
  • A short theory of consciousness
    Hi @Pop, I've read through your full article.

    I was very interested, since you suggested in your OP that it tackles the hard problem of consciousness. However, I don't think you actually touch on that question.

    The article makes an early claim that everything living is conscious. eg:
    There is no reasonable way to separate consciousness from life. They are two aspects of the one thing. Consciousness is the quality that gives rise to life, and in turn consciousness is the singular thing that life expresses. The notion that only some forms of life posses consciousness is incoherent and baseless. All living creatures are self learning and programming. All living creatures are involved in a process of self organisation - always! They are all conscious, but they all possesses a different degree and a different version of consciousness.

    You don't offer any basis for this claim. It sounds like you are assuming panpsychism, which is not generally accepted. Perhaps you could offer a more detailed account of why you think everything living is conscious.

    Overall, I'd say that you've conflated self-organisation and consciousness without providing an explanation.

    Also, take a look at the Free Energy Principle (from Karl Friston), I think you'll find it's very similar to your theory of the Emotional Gradient, but is more general. I'd also suggest it's a better characterisation than using the word 'emotion'.
  • Are we justified in believing in unconsciousness?
    Should we only believe in what is verifiable? If so, we should be skeptical of claims that anything lacks consciousness.petrichor

    I think the overarching principle that you're describing, or perhaps better phrased as the logical extension of your argument, is Panpsychism.

    It's a valid possible explanation, but it suffers from problems. It presupposes that everything has some level of consciousness, from rocks to trees to humans, and by implication from rocks down to molecules, atoms, and smaller. On the face of it, this seems hard to fathom without invoking some form of mysticism.

    One of its biggest problems is the Combination Problem: what mechanism explains how all those billions of tiny proto-consciousnesses combine to form a high-level consciousness?

    Personally I don't buy panpsychism, so I'd say that there is a line somewhere that demarcates conscious from non-conscious, and our difficulty lies in understanding where that line exists.

    Consider split-brain patients. The severing of the corpus callosum seems to split the mind into two distinct parts. Each hemisphere fails to report what is exclusively observed by the other. The ability to integrate information between hemispheres is lost. Unlike the left hemisphere, the right hemisphere can't speak. So if you talk to the patient and get a verbal answer, you generally only hear from the left hemisphere. But there are other ways of asking the right hemisphere questions and getting answers, such as by having it point to objects with the left hand.petrichor

    I'm also very interested in split-brain phenomena. I'm still searching for a good account that gets to the heart of conscious experience in the non-verbal half. The problem with all the accounts I've read so far is that they don't directly attempt to ask the patient questions about their own conscious experience. It's possible that the problem lies in the fact that the non-verbal half cannot comprehend questions of consciousness without the verbal faculties; but I suspect it's more subtle than that.