• Malcolm Lett
    I've been developing my own theory of consciousness in the background for some time, and I've finally completed a manuscript that summarises that theory. So I'd like to present that theory now in the hope that some will find it enlightening, and because I know I'll learn a lot from the rebuttals that come back.

    The theory presents a mechanistic account of consciousness. I've been learning a lot about alternative views since I've been here, which will distill in my mind over time, but as yet they are not incorporated into the theory.

    So, here it is, what do you think?

    https://github.com/toaomalkster/conscious-calculator/wiki/A-Theory-of-Consciousness

    (oh, by the way, it's not short)
  • Outlander
    Would it be possible to add a short summary in the form of bullet points? Oh, that's right. Yes. It is. Could you please do so? Thanks!
  • JerseyFlight
    Would it be possible to add a short summary in the form of bullet points?Outlander

    I don't think so, though I shouldn't answer for someone else. I'm almost finished with this intricate paper. It would be very hard to reduce what he has here to bullet points because the argument builds on itself stage by stage. This is a sweeping overview and contribution to theory. The author proves that he is not an amateur in this area. Anyone wanting to directly engage this thesis is going to have to have intricate knowledge of the subject. It's very likely the author has indeed "produced something that will contribute significantly to the fields of artificial consciousness and artificial general intelligence, and perhaps even to the fields of cognitive science and neuroscience..."
  • Outlander


    Well, do your best.

    Describe the first stage then the next. From your perspective. Neat thing is your interpretation may be completely independent of the intent of the author.

    What's his "point" or points, basically?

    What do you know or have reason to believe now that you didn't before reading?
  • JerseyFlight


    You just need to read the paper if you're interested in it. I have refrained from comment because the things I want to comment on do not directly interact with his main thesis, "to offer a more complete and concrete theory of the mechanisms behind consciousness and to show how consciousness and high-level thought relate to each other. Secondly, to attempt show that those physical mechanisms are sufficient for the subjective experience of consciousness. And thirdly, to form a basis for future research into artificial general intelligence."

    This paper deserves serious replies. It provides the foundation for an up-to-date discussion. My interest is more in the direction of cognitive science, and this paper did touch on things I consider to be relevant: "High-level thought involves multiple steps and intermediate states. Working memory serves to hold those intermediate states."

    This is damn important because it tells us that high-level thought is not the result of will power; it's a matter of cognitive equipment functioning properly and working at a high level. This matters, because we keep on dealing with intelligence as though it were simply a matter of will power or greater effort on the part of the student. What one's State Machine is, becomes that way through a concrete material social process, there is no way around this, and it makes a huge difference when it comes to the way we view humans and approach education.
  • Outlander
    This paper deserves serious replies.JerseyFlight

    Yeah so does the lowliest person asking how you're doing. If you're responding on a basis of betterment of the human condition.

    It provides the foundation for an up-to-date discussion.JerseyFlight

    Well, maybe it does, dude, but if you can't even describe to me here what it's about, I mean... how do we know you're not just bamboozled by mentally satisfying jargon? Why is it so foundational? A six-year-old can describe in relevant enough detail why something is interesting. Why won't you?
  • Outlander
    What one's State Machine is, becomes that way through a concrete material social process, there is no way around this, and it makes a huge difference when it comes to the way we view humans and approach education.JerseyFlight

    So, nurture vs. nature. All of a sudden I just feel all world suffering and poverty fading into oblivion due to this revelation... if only we thought of this sooner. Come on. What does he say beyond that timeless argument?
  • JerseyFlight

    This is my point that I derived from the paper. It is not the point of the paper or the author. Please stop derailing this thread. Either read the paper or move on to another thread. Thanks.
  • Outlander
    Can someone summarize any new/relevant ideas presented in a sensical fashion without avoiding any potential admittance they have no idea what they read? Thanks. Just trying to learn here.

    Personally, I don't read works from modern day philosophers. Any ideas I have I can safely call my own, or at least derived from the masters. No modern rehashes. I imagine many use philosophy forums in the same fashion.

    Edit: The OP stated he is not only expecting but eager to learn from rebuttals. Just looking for a bare bones assessment of the premise(s) upon which action can be easily done. Sigh. Guess I'll read it. But for the record. This doesn't count. :grin:

    Also, any mod reading may consider not only what's his name's sentiment along with my explicit acknowledgment to go ahead and remove our entire interaction for sake of relevant discussion. Thanks! You're welcome for the bumps.
  • Francis
    I browsed it, and you claim your theory does not explain the existence of qualia (phenomenal experience). You do not deny the existence of phenomenal experience, but you believe that much of consciousness can be explained in terms of elaborate patterns of firing neurons.

    Certainly some of the brain's processes can be explained in such a way, and have been by science. The good thing about the easy problems of consciousness is that it is much easier to uncover through scientific experiment how the brain performs such operations. This is why I personally am more interested in theories that specifically center around phenomenal experience, because that's where the crux of the mystery lies.

    You certainly put a lot of effort into this, I may read more of your theory if I find time in the future, but out of curiosity, how much comparison and reconciliation have you done between your theory and the current easy problem theories held by modern neuroscience?
  • apokrisis
    I admit I only skimmed the intro stuff. But it's all familiar material to me. So I will plunge in with my immediate reaction - apologies for the blunt response.

    I see two instant failings.

    The first is to treat consciousness as the product of "brain machinery". No matter how much you talk about neural nets or feedback loops, it just sets you on the wrong path. Consciousness can only start to make sense - scientifically - when approached in terms of biological and ecological realism.

    Life is a non-mechanical phenomenon. It is not a form of computation, but fundamentally about the dissipation of entropy. A nervous system has that job to do. That is the foundation from which a scientific account has to build its way up. So if you don't start with the correct view of the biology, you can't arrive at a correct view of the neurobiology.

    The second key criticism is that "consciousness" as you are talking about it here also conflates a biological level of awareness - that which all large brained animals would share due to the great similarity of their neurobiology - with the language-enhanced mentality of Homo sapiens.

    We are very different as we evolved a capacity for syntactic speech. And that new level of semiosis is what allows a socially-constructed sense of self-awareness.

    In short, animals are extrospective - "trapped in the present". They can't introspect. There is no inner world in the sense we understand it where we consider our "selves" to be in charge and "experiencing the experiences".

    Speech is what transforms humans so that we have a rational self-regulation of our emotions, a narrative and autobiographical structure to our memories, a sense of personal identity, an ability to imagine and daydream in "off-line" fashion rather than being tied to a simple anticipatory form of forward planning and mental expectation.

    So any scientific "theory of consciousness" has to be grounded in biological explanations with ecological validity. That is, it would have to be a scaled up version of a story in which the nervous system exists for the simple evolutionary purpose of dissipating entropy. And doing that involves being able to apply biological information to stabilise physico-chemical uncertainty.

    Computers are founded on stable stuff. Biology is founded on the ability to stabilise stuff - channel metabolic processes in desired directions, rebuild bodies that are continually falling apart, reduce the uncertainty of the world in terms of a state of experience being produced by a brain.

    And then language and culture make all the difference to the quality of human consciousness. There is this whole overlay of thought habits that we learn. Any theory of consciousness as a biological phenomenon has to also simplify its target by stripping out the psychological extras that complicate the socially-constructed mentality of us humans.

    Then as to the actual computational model you have advanced, I didn't spot anything that seemed new or different even for that "cognitive flow chart" style of theorising.

    As you say, there isn't some step that magically turns a bunch of information processing into a vivid state of experience - the step that jumps the explanatory gap. But that is what this style of theorising either has to achieve, or recognise that its lack of biological realism is why in fact it winds up creating a vast and unbridgeable gap.

    Your problem is that you can add all the mechanistic detail you want, but it never starts to come together in a way that says "this machine is conscious". Whereas a biological/ecological explanation can start off saying the whole deal is about a "biosemiotic" self~world modelling relation.

    That is how theoretical biology would explain "life" in the most general fashion - the semiotic ability of genes and other biological information to regulate the underpinning physical processes that build living bodies that fit actual environments. And then neurosemiosis follows on quite naturally as a further level of organismic "reality modelling" for the purposes of metabolic homeostasis.

    That is all brains have to do - see how to construct a stable world for ourselves in terms of achieving our basic ecological goals.

    Once you start talking about logic or thought monitoring, you have skipped over the vast chunk of the embodied facts of psychology. And even worse, you are calling them "unconscious" or "automatic". You are saying there is information processing that is just information processing - the neural network level. And then somehow - by adding levels and levels - you suddenly get information processing that is "conscious".

    A computational approach builds in this basic problem. A neurobiological approach never starts with it.

    So the question is why you would even pursue a mechanistic theory in this day and age? Why would you not root your theory in biology?

    Apologies again. But you did ask for a response.
  • PoeticUniverse
    The theory presents a mechanistic account of consciousness.Malcolm Lett

    Yes, for the contents of consciousness are compositional; that is, the parts come together into a unified whole. Consciousness's value to us is for survival, for it reveals the distinctions important to us. Consciousness is intrinsic; consciousness exists only for itself; physics only deals with extrinsic causes.

    Intelligence is what does the doing; consciousness is for being, exclusive, causing nothing but in itself, as the brain results leading to their representations in consciousness are already done and finished.

    The existence of the feedback path is the explanation for why we have awareness of our thoughtsMalcolm Lett

    'Feedback' is the key to the 'hard problem'; the conscious state maintains itself seamlessly as a non-reducible whole. It might be, too, or alternatively, that qualia are the brain's own privately developed/evolved language.

    Summary:

    'The Feeling of Life Itself'
    (There is a book out)

    Physics describes but extrinsic causes,
    While consciousness exists just for itself,
    As intrinsic, compositional,
    Informational, whole, and exclusive,

    Providing distinctions toward survival,
    But causing nothing except in itself,
    As in ne’er doing but only as being,
    Leaving intelligence for the doing.

    The posterior cortex holds the correlates,
    For this is the only brain region that
    Can’t be removed for one to still retain
    Consciousness, it having feedback in it;

    Thus, it forms an irreducible Whole,
    And this Whole forms consciousness directly,
    Which process is fundamental in nature,
    Or's the brain’s private symbolic language.

    The Whole can also be well spoken of
    To communicate with others, as well as
    Globally informing other brain states,
    For the nonconscious knows not what’s been made.
  • Malcolm Lett
    Would it be possible to add a short summary in the form of bullet points? Oh, that's right. Yes. It is. Could you please do so? Thanks!Outlander

    I considered doing that and then deleted it. I knew if I wrote a summary then people would read that and jump to conclusions without reading the whole paper.

    If you're interested, read the paper. Otherwise, ignore it.

    I don't think so, though I shouldn't answer for someone else. I'm almost finished with this intricate paper. It would be very hard to reduce what he has here to bullet points because the argument builds on itself stage by stage.JerseyFlight

    Cheers @JerseyFlight
  • Outlander


    Am reading it now mate. Don't think I'm trying to knock it either just to do so or because I think everything modern is garbage. Just, you're clearly a smart dude and so wholly capable of breaking it down for someone with a simpler mindset to comprehend or rather "get the gist" of. What conclusions could be jumped to if they don't consist entirely of bare bones logical points? What else is there to even discuss devoid of explicit logical points?
  • Malcolm Lett
    A computational approach builds in this basic problem. A neurobiological approach never starts with it.apokrisis

    I agree with you in principle that a computational approach may be a failed start. But I have a couple of points in response.

    1. The computational approach appears to have the greatest explanatory power of the various alternatives out there. My theory, for example, is testable from the point of view of the "neural correlates of consciousness", and furthermore it offers predictions about what we'll discover as neuroscience develops. It provides explicit mechanisms behind why we are aware of certain things, and not aware of others. And I could provide many other examples if I take the time.

    2. I have not seen a non-computational theory provide this level of detail.

    Perhaps it is more accurate to say that a computational theory of the brain and consciousness has the best ability to "model" the observed behaviours (internal and external), enabling us to do useful things with that modelling capability; however it may not form a "complete" theory.
  • Malcolm Lett
    We are very different as we evolved a capacity for syntactic speech. And that new level of semiosis is what allows a socially-constructed sense of self-awarenessapokrisis

    That statement assumes that semiosis only applies to language. But Pattee showed quite convincingly that it applies to DNA (in a link you shared, if my memory is correct), as an example of semiotics applying to something other than language. I would suggest that the mechanisms I have proposed are another example of semiotics. I probably haven't got the split quite right, but as an attempt at the style of Pattee:
    * sense neurons produce a codified state having observed an object
    * other neurons interpret that codified state and use it for control
    * the codified state has no meaning apart from what the system interprets of it
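
    As a toy illustration of that split (made-up code, purely to show the shape of the idea - the sensor, the symbols, and the control actions are all invented):

```python
# Toy sketch only: the "codified state" below is an arbitrary token whose
# meaning exists solely in the interpreter's mapping. All names are made up.

def sense(temperature):
    """Sense neurons: encode an observation into a codified state (a symbol)."""
    return "HOT" if temperature > 30.0 else "COLD"

def interpret(code):
    """Other neurons: interpret the codified state and use it for control.

    The symbol has no intrinsic meaning; only this mapping gives it one.
    """
    actions = {"HOT": "open_vent", "COLD": "close_vent"}
    return actions[code]

print(interpret(sense(35.0)))  # -> open_vent
```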
  • Augustusea
    very interesting read, well done
  • Malcolm Lett
    So the question is why you would even pursue a mechanistic theory in this day and age? Why would you not root your theory in biology?apokrisis

    I'm not actually sure where the problem is here. I see the two as complementary. As I have stated in my paper, the overall problem can be seen from multiple complementary views: mechanistic/computational view ("logical" in my paper), and biological ("physical" in my paper).
  • apokrisis
    1. The computational approach appears to have the greatest explanatory power of the various alternatives out there.Malcolm Lett

    Reading more bits in detail, my criticism remains. Even as a computational approach, it is the wrong computational approach.

    You are thinking of the brain as something that takes sensory input, crunches that data and then outputs a "state model" - a conscious representation.

    So a simple input/output story that results in a "Cartesian theatre" where awareness involves a display. But then a display witnessed by who?

    And an input/output story that gives this state model top billing as "the place where all data would want to be" as that is the only place it gets properly appreciated and experienced.

    But biology-inspired computation - the kind that Stephen Grossberg in particular pioneered - flips this around. The brain is instead an input-filtering device. It is set up to predict its inputs with the intent of being able to ignore as much of the world as it can. So the goal is to be able to handle all the challenges the world can throw at it in an automatic, unthinking and involuntary fashion. When that fails, then attentional responses have to kick in and do their best.

    So it is an output/input story. And a whole brain story.
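
    To caricature that in code (a deliberately crude toy of my own, not Grossberg's or anyone's actual model): predict the next input, ignore it while prediction error stays small, and spend attention only when prediction fails.

```python
# Illustrative sketch of an "input-filtering" system: the default is to
# ignore the world; attention is the exception triggered by surprise.

def filter_inputs(inputs, prediction=0.0, threshold=1.0):
    events = []
    for x in inputs:
        error = x - prediction
        if abs(error) <= threshold:
            events.append("ignored")   # world matched expectation: no attention needed
        else:
            prediction = x             # attention kicks in: update the forward model
            events.append("attended")
    return events

# A stable world is mostly ignored; one surprise triggers one attentional update.
print(filter_inputs([0.1, 0.2, 5.0, 5.1, 5.0]))
# -> ['ignored', 'ignored', 'attended', 'ignored', 'ignored']
```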

    The challenge in every moment is to already know what is going to happen and so have a plan already happening. The self is then felt as that kind of prepared stability. We know we are going to push the door open and exactly how that is going to feel. We feel that embodied state of being.

    And then the door turns out to be covered in slime, made of super-heavy lead, or it's a projected hologram. In that moment, we won't know what the fuck is going on - briefly. The world is suddenly all wrong and we are all weird. Then attention gets to work and hopefully clicks things back into place - generating a fresh state of sensorimotor predictions that now do mesh with the world (and with our selves as beings in that world).

    But this attentional level awareness is not "consciousness" clicking in. It is just the catch-up, the whole brain state update, required by a failure to proceed through the door in the smooth habitual way we had already built as our own body image.

    Consciousness is founded on all the things we don't expect to trouble us in the next moment as much as the discovery that there is almost always something that is unexpected, novel, significant, etc, within what we had generally anticipated.

    That is why I say it is holistic. What you managed to ignore or deal with without examination - which is pretty much everything most of the time - is the iceberg of the story. It is the context within which the unexpected can be further dealt to.

    As I say, this is a well developed field of computational science now - forward modelling or generative neural networks. So even if you want to be computational, you haven't focused on the actually relevant area of computer science - the one that founded itself on claims of greater biological realism.

    furthermore it offers predictions about what we'll discover as neuroscience develops. It provides explicit mechanisms behind why we are aware of certain things, and not aware of others.Malcolm Lett

    Errm, no. You would have to show why you are offering a sharper account than that offered by a Bayesian Brain model of attentional processing for instance.

    2. I have not seen a non-computational theory provide this level of detail.Malcolm Lett

    And you've looked?

    Besides....

    We appear to perceive certain external and internal senses and data sources, while not perceiving others. For example, we don't have direct access to arbitrary long term memories and it seems quite reasonable to assume that access to long term memory requires some sort of background lookup

    ....is an example of the sketchiness of any "level of detail".

    Basic "psychology of memory" would ask questions like are we talking about recognition or recollection here? I could go on for hours about the number of wrong directions this paragraph is already headed in from a neurobiological point of view.

    Perhaps it is more accurate to say that a computational theory of the brain and consciousness has the best ability to "model" the observed behaviours (internal and external), enabling us to do useful things with that modelling capability; however it may not form a "complete" theory.Malcolm Lett

    Sure. But I've seen countless cogsci flow chart stories of this kind - back in the 1980s, before thankfully folk returned to biological realism.

    That statement assumes that semiosis only applies to language.Malcolm Lett

    I said language was a "new level" of semiosis. So I definitely was making the point that life itself is rooted in semiosis - biosemiosis - and mind, in turn, is rooted in neurosemiosis, with human speech as yet a further refinement of all this semiotic regulation of the physical world.

    I would suggest that the mechanisms I have proposed are another example of semiotics. I probably haven't got the split quite right, but as an attempt at the style of Pattee:
    * sense neurons produce a codified state having observed an object
    * other neurons interpret that codified state and use it for control
    * the codified state has no meaning apart from what the system interprets of it
    Malcolm Lett

    That's not it.

    But look, if your interest is genuine, then stick with Pattee.

    I think I said that I did all the neurobiology, human evolution, and philosophy of mind stuff first. I was even across all the computer science and complexity theory.

    But hooking up with Pattee and his circle of theoretical biologists was when everything fully clicked into place. They had a mathematical understanding of biology as an information system. A clarity.

    I'm not actually sure where the problem is here. I see the two as complementary. As I have stated in my paper, the overall problem can be seen from multiple complementary views: mechanistic/computational view ("logical" in my paper), and biological ("physical" in my paper).Malcolm Lett

    In some sense, the machinery is complementary to the physics. But that is what biosemiosis is about - the exact nature of that fundamental relationship.

    So it is not just about having "two views" of the phenomenon - "pick whichever, and two is better than one, right?"

    My claim here is that the only foundationally correct approach would be - broadly - biosemiotic. Both life and mind are about informational constraints imposed on dynamical instability. Organisms exist because they can regulate the physics of their environment in ways that produce "a stable self".

    And (Turing) computation starts out on the wrong foot because it doesn't begin with any of that.
  • SaugB
    Hello, I read your interesting paper and did have a question for you. It is a question I have myself thought about for some time [and I really hope it is not singularly relevant to my own mental processes], and maybe it is relevant here, but I might botch some of your terminology a little! You think of the 'state model' in relation to thought, and my question is on visual memory, bordering somewhat on the phenomenon of imagination.

    Let's say you are actively remembering the face of a beautiful girl you met this afternoon when you are in bed and almost asleep---because you find her face attractive. How come, when you are consciously or actively recalling that girl's face, the dress she wore also features in your mental picture, without you having to consciously recall it? For it would be strange if it were just a face floating without context, but it usually isn't. What aspect of consciousness would explain that inadvertent inclusion of a detail such as a dress, almost a bit of unconscious imagination, but which is such that, if you turn your attention to it, it is fully clear and apparent to your 'mind's eye' in much the same way as the girl's beautiful face is?

    I feel like focusing on the 'forcefully' thought thoughts that a conscious mind can think is not sufficient to answer this question convincingly. But with this example I am not talking about full-fledged dreams, which many noteworthy theorists posit fully on the side of the unconscious. Perhaps a bit outrageously, I am suggesting that the divide between conscious and not-conscious, intentional and unintentional, is difficult to actually define, even in the case of a seemingly singular mental memory image, as in the example I have given. But I think the most important practical point is that, if there is a bit of unintentional content [like the dress] in our visual memories/thoughts, then can we ever expect to 'artificially program' that into any AI, assuming we want to make that AI very similar to humans?

    In any case, these might sound like particular questions in response to your theory, but I still think it is food for thought! Please do let me know what your responses are, including if I am incorrect somewhere regarding what you are going for. Good paper though, I enjoyed it!
  • SophistiCat
    I considered doing that and then deleted it. I knew if I wrote a summary then people would read that and jump to conclusions without reading the whole paper.Malcolm Lett

    Well, that's the point of summaries, in a way: to enable readers to jump to the conclusion of whether to commit to reading a largish text or to pass. But I know what you mean.
  • Malcolm Lett
    Perhaps a bit outrageously, I am suggesting that the divide between conscious and not-conscious, intentional and unintentional, is difficult to actually defineSaugB

    SaugB, I'll respond to your main question in a minute, but your mention of sleep and the unconscious caught my attention, so let me segue briefly to make an outrageous claim of my own:
    * We might be conscious during sleep.

    I've heard it claimed that the only reason we appear to 'experience' dreams is that we remember them upon waking. I've never been happy with that premise.

    With blatant dependence on my own possibly misguided theories, for me to experience consciousness doesn't require that I appear awake to an outsider. While my body may be asleep, my consciousness could be awake. In most cases the level of apparent executive control is attenuated, but as is well known we do sometimes gain awareness of being in a dream and in the act regain some of our executive control -- I've experienced this myself, though sadly only once or twice. All of that is fully supported by my theory.

    All of that could also be explained via the memory theory, so I could be totally wrong here. But I can't help but enjoy the apparent contradiction of being conscious during sleep.
  • Malcolm Lett
    How come, when you are consciously or actively recalling that girl's face, the dress she wore also features in your mental picture, without you having to consciously recall it?SaugB

    I think this comes down to our misconception of how memory and executive control work. We're convinced that we have complete control over every aspect of our thought, but I believe the reality is that we're only perceiving the high-level results of hidden low-level processes. So, while you may want to remember just a person's face, you don't have that much control over what pops into your 'mind's eye'.

    Biological memory is associative. You need some sort of anchor, like a smell, a name, or a feeling, and this triggers a cascade of activity that produces something associated with that anchor. So that's one part of why you don't have control over the exact details of what is recalled.

    The other significant part is the mechanics of recall itself. It seems to be fairly well agreed within neuroscience circles that there is no "memory" region in the human brain. Rather, events are split up into their sense components and stored within the regions of the brain that process those senses. E.g.: the event of seeing a train rolling past will have visual memories within the visual cortex, and sound memories within the auditory cortex. It is believed that the role of the hippocampus is to splice those disparate sources back together again when a memory is recalled -- basically re-constructing the experience.

    So I think that further explains why you get more than what you ask for. It also explains why, even though you think you can see all the details of the person's clothes, you're probably remembering the wrong colour :gasp:.
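
    As a crude illustration of that reconstruction story (a toy of my own invention, not a neuroscience model -- the stores and the event key are made up), fragments live in per-modality stores, and a single cue splices back everything linked to it, which is why the dress comes along uninvited:

```python
# Toy model: each sense modality keeps its own fragments of an event, keyed
# by the same cue; recall reassembles the whole bundle, not just the part
# you asked for.

visual_store = {"girl_this_afternoon": ["her face", "her dress"]}
auditory_store = {"girl_this_afternoon": ["her voice"]}

def recall(cue):
    """Reconstruct an experience by splicing fragments from each sense store."""
    fragments = []
    for store in (visual_store, auditory_store):
        fragments.extend(store.get(cue, []))
    return fragments

print(recall("girl_this_afternoon"))
# -> ['her face', 'her dress', 'her voice']
```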

    Out of interest, here's a paper on my reading list that might have some relevance here:
    • https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5840147/ . Persuh, M., LaRock, E., & Berger, J. (2018). Working Memory and Consciousness: The Current State of Play. Frontiers in human neuroscience, 12, 78. --- "In this article, we focus on visual working memory (VWM) and critically examine these studies as well as studies of unconscious perception that seem to provide indirect evidence for unconscious WM"
  • Malcolm Lett
    But biology-inspired computation - the kind that Stephen Grossberg in particular pioneered - flips this around. The brain is instead an input-filtering device. It is set up to predict its inputs with the intent of being able to ignore as much of the world as it can. So the goal is to be able to handle all the challenges the world can throw at it in an automatic, unthinking and involuntary fashion. When that fails, then attentional responses have to kick in and do their best.apokrisis

    I'm fairly confident that both the input/output and output/input viewpoints are equally accurate and correct. It's just that they are, as I say, viewpoints that start at opposite ends of a metaphorical tunnel and meet in the middle.

    So, that is to say that the output/input or input/filtering models which you're referring to are equally important. You've given me some useful keywords and names to start reading up about it, so thanks muchly.

    As I say, this is a well developed field of computational science now - forward modelling or generative neural networks. So even if you want to be computational, you haven't focused on the actually relevant area of computer science - the one that founded itself on claims of greater biological realism.apokrisis
    I may be misunderstanding which particular kind of neural network you're referring to, but it sounds like artificial neural networks such as the "deep learning" models used in modern AI. Some in the field think that these have plateaued or soon will. We have no way to extend them to the capability of artificial general intelligence. Examples like AlphaGo and AlphaZero are amazing feats of computational engineering, but at the end of the day they're just party tricks.

    My claim here is that the only foundationally correct approach would be - broadly - biosemiotic. Both life and mind are about informational constraints imposed on dynamical instability. Organisms exist because they can regulate the physics of their environment in ways that produce "a stable self".apokrisis
    I'm curious about one thing. What's your stance on the hard problem of phenomenal experience? If I'm understanding you correctly, you're suggesting an explanation that is just as materialist as my own (i.e. not metaphysical). So it should suffer the same hard problem. You suggested in another post that a "triadic" model somehow avoids both the hard problem and the need to resort to metaphysics, but it isn't clear to me how that works.

    (Perhaps that should be a question for another post)
  • Malcolm Lett
    41
    @apokrisis

    I just wanted to say that I really appreciate your comments. I always find new avenues for learning that come from them. So thanks a lot for that.
  • apokrisis
    5.1k
    I'm fairly confident that both the input/output and output/input viewpoints are equally accurate and correct.Malcolm Lett

    But only one of them theorises that the brain predicts its inputs. And that view also happens to accord with the facts.

    This is Friston’s now classic paper on the principles - https://www.uab.edu/medicine/cinl/images/KFriston_FreeEnergy_BrainTheory.pdf
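    For anyone wanting a concrete toy picture of what "the brain predicts its inputs" means computationally, here is a minimal sketch (my own illustration, not taken from Friston's paper): an internal estimate is repeatedly corrected by the prediction error between it and each incoming sensory sample, so the residual error, a crude proxy for "surprise", shrinks over time.

```python
# Toy predictive-coding loop (illustrative only; not from Friston's paper).
# An internal estimate mu is nudged toward each sensory sample by a fraction
# of the prediction error, so the model comes to "predict its inputs" and
# the residual error shrinks.

def predictive_update(observations, lr=0.1):
    mu = 0.0          # internal estimate of the hidden cause of the input
    errors = []
    for y in observations:
        err = y - mu          # prediction error (surprise proxy)
        mu += lr * err        # update the model to reduce future error
        errors.append(abs(err))
    return mu, errors

# A constant "world" of identical samples: the estimate converges toward 1.0
# and the per-sample prediction error steadily decreases.
mu, errors = predictive_update([1.0] * 50)
```

The real free-energy account is of course far richer (hierarchical, probabilistic, action-oriented), but the basic loop of "predict, compare, correct" is the same shape.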

    What's your stance on the hard problem of phenomenal experience? If I'm understanding you correctly, you're suggesting an explanation that is just as materialist as my own (i.e. not metaphysical). So it should suffer the same hard problem.Malcolm Lett

    My answer on that is if you understand how the brain works - how it is in an embodied modelling relation with the world - then the hard problem becomes how it wouldn’t feel like something to be modelling the world in that fashion.

    A hard problem remains when you get down to questions of why red looks red in precisely the way it does. That is a hard problem to the degree you have no counterfactual to drive a causal explanation. Red is always red and not something else, so it is not accessible to a theory that says change x and you will instead see that red is y.

    But if the brain is living a modelling relation with the world, then that claim involves a ton of counterfactuals.

    For example, it explains why we can become depersonalised in the sensory-deprivation conditions of a flotation tank. Or even in the brainstem-gated state of sleep.

    Take away the flow of real-world stimulus and our brain no longer has that world to push against - to out-guess in terms of predicting the next sensory input state. With nothing to get organised against, a clear sense of self also evaporates. You can’t feel yourself pushing against the world if the world ain’t pushing back. And so there is no relationship being formed, and no self being constructed as the “other” to the world.

    You suggested in another post that a "triadic" model somehow avoids both the hard problem and the need to resort to metaphysics, but it isn't clear to me how that works.Malcolm Lett

    Semiosis is the triadic story. Systems or hierarchical organisation are triadic stories. A modelling relation is a triadic story.

    It is a general architecture for explaining biological complexity. You have the three things of the neuronal model, the dynamical world it is aiming to regulate, and that relationship actually happening.

    I just wanted to say that I really appreciate your comments. I always find new avenues for learning that come from them.Malcolm Lett

    That’s great. It’s not an easy subject. And you can find every kind of viewpoint being marketed.
  • Pop
    425


    Hi, your theory states that some animals are conscious, but not others. I wonder where you drew the line? How and why? I also have a theory of consciousness, but could not draw this line. So I'm interested in your reasons for doing this.
  • Mww
    1.9k


    I read it, and I have some familiarity with a few of your references. However, being steeped in Enlightenment cognitive metaphysics, I’m in no position to critique the technicalities. Still, the schematic of the state/control systems fits nicely with Kantian transcendental philosophy, which is stipulated as a logical process. Names are different; functionality is generally the same.

    Bottom line.....too modern for me, but nonetheless a worthy treatise.
  • Dfpolis
    1.1k
    I just encountered your post, and began your paper. I found it well written and open to most of the problems you face.

    You write, "what do I mean by the use of the word consciousness and of its derivative, conscious?

    In simple terms, I am referring to the internal subjective awareness of self that is lost during sleep and regained upon waking."

    I think you are confusing two concepts here. One is subjective awareness of contents; the other, what might be called "medical consciousness," which is fully realized in a responsive wakeful state. Medical consciousness is objectively observable, and, I would suggest, part of Chalmers' easy problem. Subjective awareness is found also in sleep, in our awareness of dreams, and its modeling is Chalmers' hard problem.

    You might want to define, or at least exemplify, "high level" so that we have something more concrete to reflect upon.

    "As we shall see later on, the existence of phenomenal experience is unfortunately not explained by the theory presented here. However, I believe all aspects of the content of that experience are explained; which I think is a significant enough achievement to celebrate."

    Perhaps, but that does not warrant entitling it a theory of "consciousness." It is merely a theory of neural data processing, sans awareness.

    A theory of consciousness needs to address the problem discussed by Aristotle in De Anima iii, 7, i.e. how does what is merely intelligible (neurally encoded contents, Aristotle's "phantasms"), become actually known. It is not enough that we have elaborately processed representations, however extensive the domain they model. We also need to be aware of the contents so represented.

    Further, it is not clear that any purely neural model can adequately represent the contents we are aware of. Consider seeing an apple. One neural state represents not only the apple acting on our neural state, but also the fact that our neural/sensory state is modified. There are two concepts here, but only one physical representation. Yes, one may say that each concept has its own representation, but that does not explain how the initial representation gets bifurcated.

    Let me be clearer. Organisms interact with their environment, and reacting to incident changes in their physical state (input signals), respond in what evolution molds into an appropriate way. This requires no concept of an external object, of an apple seen. All that is required is that, given a set of input signals, our neural net generate an adaptive set of output signals. There are no other signals telling us that it is not a purely internal change eliciting our adaptive response, but an external object.

    So, as there is only one physical representation, how do we bifurcate it into a concept of us being modified, of us seeing, and a concept of an object modifying us, an object being seen?

    Unless your model can explain this kind of one to many mapping, I think it is unfair to say that it provides an adequate explanation of the contents of consciousness, let alone of awareness of those contents.
  • Malcolm Lett
    41
    Hi, your theory states that some animals are conscious, but not others. I wonder where you drew the line? How and why? I also have a theory of consciousness, but could not draw this line. So I'm interested in your reasons for doing this.Pop

    Hmm, I didn't mean it that way. Perhaps I need to rephrase that section of the paper a little.

    It's more that I recognise both possibilities: there may or may not be a line. At the start of my paper I summarise some points about the neocortex and thalamus and their functional equivalents, and suggest that there may be a correlation between the existence of those components and consciousness. But I don't hold strongly to that conclusion.

    Later on I suggest that there is a minimum level of intelligence required to be able to reason about one's own consciousness, and thus to reach the conclusion that one is conscious. But that doesn't preclude the possibility of being conscious without being aware of the fact -- in fact, I suggest that many creatures fall into that category.
  • Malcolm Lett
    41
    You write, "what do I mean by the use of the word consciousness and of its derivative, conscious? In simple terms, I am referring to the internal subjective awareness of self that is lost during sleep and regained upon waking."

    I think you are confusing two concepts here. One is subjective awareness of contents; the other, what might be called "medical consciousness," which is fully realized in a responsive wakeful state. Medical consciousness is objectively observable, and, I would suggest, part of Chalmers' easy problem. Subjective awareness is found also in sleep, in our awareness of dreams, and its modeling is Chalmers' hard problem.
    Dfpolis

    Yes, good point. I was trying to provide an easy introduction, but it does conflate the two concepts. Curiously, that even came out in a response I gave to @SaugB:
    While my body may be asleep, my consciousness could be awake.Malcolm Lett