Comments

  • Presenting my own theory of consciousness
    Consider seeing an apple. One neural state represents not only the apple acting on our neural state, but also the fact that our neural/sensory state is modified. There are two concepts here, but only one physical representation. -- Dfpolis

    I would rather say that there are three concepts here:
    1. Representation of sense data -- ie: the interpreted form of the apple, as a neural state.
    2. Interpretation of that representation -- ie: our modified overall mental state, or the "content" of conscious awareness.
    3. The hard problem of consciousness -- ie: why the neural state "content" is accompanied by the "existence" of conscious awareness.

    The neural state of #1 may or may not enter our conscious awareness. There is plenty of evidence for our brain processing and even acting on sense inputs without the need for us to be consciously aware of it at the time.

    #2 also has an easy neural representation, which is part of what my paper focuses on. I tend to think of the total state of the brain as a hierarchy of representations: the raw sense data has a very low level representation that is generated for each sense modality (touch vs sight etc.) and is not in any way perceived consciously. Those low level representations are slowly merged and built on as multiple layers of hierarchies are built upwards, until finally they form together into a single coherent and very high-level representation. It is somewhere towards the top of that hierarchy where the "content" of consciousness is derived. (And just to be clear, I use this as a simplistic way of thinking about the brain when it's convenient. I don't assume it's a full explanation).
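
    To make that way of thinking concrete, here's a toy sketch in code. Everything in it -- the layer sizes, the merge operation, the modality names -- is invented purely for illustration; it's a cartoon of the idea, not a claim about actual neural wiring:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def encode(raw, out_dim):
        # Stand-in for a low-level, modality-specific encoder.
        W = rng.standard_normal((out_dim, raw.size))
        return np.tanh(W @ raw)

    def merge(*reps):
        # Stand-in for a higher layer combining lower representations.
        joined = np.concatenate(reps)
        W = rng.standard_normal((32, joined.size))
        return np.tanh(W @ joined)

    # Low-level, per-modality representations: never perceived consciously.
    touch = encode(rng.standard_normal(100), 32)
    sight = encode(rng.standard_normal(100), 32)
    sound = encode(rng.standard_normal(100), 32)

    # Layers merge upwards into one coherent high-level representation --
    # roughly where I'd place the "content" of consciousness.
    mid = merge(touch, sight)
    top = merge(mid, sound)
    print(top.shape)  # (32,)
    ```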

    What I'm trying to say here is that #2, or the "fact that our neural/sensory state is modified", is that high level representation that integrates all the senses and all our current mental state, following a filtering process from attentional focus.

    I've found @apokrisis's mention of semiotics particularly helpful. I now see #1 and #2 as two components of a semiotic process. If there are neural states that represent sense inputs, what is it that perceives those neural states? Historically this has been answered by invoking the idea of a "soul". But the idea of semiotics explains that the same underlying mechanisms (in this case neurons) can also be used to interpret the sense representations.

    Now, where is the division between #2 and #3? Is there some additional state that is accompanied with conscious awareness and which is not representable as neural state? I believe the answer is "no".
  • Presenting my own theory of consciousness
    You write, "what do I mean by the use of the word consciousness and of its derivative, conscious? In simple terms, I am referring to the internal subjective awareness of self that is lost during sleep and regained upon waking."

    I think you are confusing two concepts here. One is subjective awareness of contents; the other, what might be called "medical consciousness," which is fully realized in a responsive wakeful state. Medical consciousness is objectively observable, and, I would suggest, part of Chalmers' easy problem. Subjective awareness is found also in sleep, in our awareness of dreams, and its modeling is Chalmers' hard problem.
    Dfpolis

    Yes, good point. I was trying to provide an easy introduction, but it does confuse the two concepts. Curiously, that has even come out from a response I gave to @SaugB:
    While my body may be asleep, my consciousness could be awake. -- Malcolm Lett
  • Presenting my own theory of consciousness
    Hi, Your theory states that some animals are conscious, but not others. I wonder where you drew the line? How and why? I also have a theory of consciousness, but could not draw this line. So I'm interested in your reasons for doing this. -- Pop

    Hmm, I didn't mean it that way. Perhaps I need to rephrase that section of the paper a little.

    It's more that I recognise that both are possibilities: that there may or may not be a line. At the start of my paper I summarise some points about the neocortex and thalamus and their functional equivalents, and suggest that there may be a correlation between the existence of those components and consciousness. But I don't hold strongly to that conclusion.

    Later on I suggest that there is a minimum level of intelligence required to be able to reason about one's own consciousness and thus to reach the conclusion that one is conscious. But that doesn't preclude the possibility of being conscious without being aware of the fact -- in fact, I suggest that many creatures fall into that category.
  • Presenting my own theory of consciousness
    @apokrisis

    I just wanted to say that I really appreciate your comments. I always find new avenues for learning that come from them. So thanks a lot for that.
  • Presenting my own theory of consciousness
    But biology-inspired computation - the kind that Stephen Grossberg in particular pioneered - flips this around. The brain is instead an input-filtering device. It is set up to predict its inputs with the intent of being able to ignore as much of the world as it can. So the goal is to be able to handle all the challenges the world can throw at it in an automatic, unthinking and involuntary fashion. When that fails, then attentional responses have to kick in and do their best. -- apokrisis

    I'm fairly confident that the input/output and output/input viewpoints are both valid. It's just that they are, as I say, viewpoints that start at opposite ends of a metaphorical tunnel and meet in the middle.

    So, that is to say, the output/input or input-filtering models which you're referring to are equally important. You've given me some useful keywords and names to start reading up on it, so thanks muchly.

    As I say, this is a well developed field of computational science now - forward modelling or generative neural networks. So even if you want to be computational, you haven't focused on the actually relevant area of computer science - the one that founded itself on claims of greater biological realism. -- apokrisis
    I may be misunderstanding which particular kind of neural network you're referring to, but it sounds like artificial neural networks such as the "deep learning" models used in modern AI. Some in the field think that these have plateaued, or soon will; we have no way to extend them to the capability of artificial general intelligence. Examples like AlphaGo and AlphaZero are amazing feats of computational engineering, but at the end of the day they're just party tricks.

    My claim here is that the only foundationally correct approach would be - broadly - biosemiotic. Both life and mind are about informational constraints imposed on dynamical instability. Organisms exist because they can regulate the physics of their environment in ways that produce "a stable self". -- apokrisis
    I'm curious about one thing. What's your stance on the hard problem of phenomenal experience? If I'm understanding you correctly, you're suggesting an explanation that is just as materialist as my own (ie: nothing metaphysical). So it should suffer the same hard problem. You suggested in another post that a "triadic" model somehow avoids both the hard problem and the need to resort to metaphysics, but it isn't clear to me how that works.

    (Perhaps that should be a question for another post)
  • Presenting my own theory of consciousness
    How come, when you are consciously or actively recalling that girl's face, the dress she wore also features in your mental picture, without you having to consciously recall it? -- SaugB

    I think this comes down to our misconception of how memory and executive control work. We're convinced that we have complete control over every aspect of our thought, but I believe the reality is that we're only perceiving the high-level results of hidden low-level processes. So, while you may want to remember just a person's face, you don't have that much control over what pops into your 'mind's eye'.

    Biological memory is associative. You need some sort of anchor, like a smell, a name, or a feeling, and this triggers a cascade of activity that produces something associated with that anchor. So that's one part of why you don't have control over the exact details of what is recalled.

    The other significant part is the mechanics of recall itself. It seems to be fairly well agreed within neuroscience circles that there is no "memory" region in the human brain. Rather, events are split up into their sense components and stored within the regions of the brain that process those senses. eg: the event of seeing a train rolling past will have visual memories within the visual cortex, and sound memories within the auditory cortex. It is believed that the role of the hippocampus is to splice those disparate sources back together again when a memory is recalled -- basically re-constructing the experience.

    So I think that further explains why you get more than what you ask for. It also explains why, even though you think you can see all the details of the person's clothes, you're probably remembering the wrong colour :gasp:.
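
    For what it's worth, here's a toy sketch of associative recall in the Hopfield style. The patterns and sizes are invented, and real memory is vastly more complicated, but it shows the key property: a partial anchor pulls back the whole stored pattern, extra details and all:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    patterns = rng.choice([-1, 1], size=(3, 64))  # three stored 'events'

    # Hebbian weights: each event is smeared across the whole matrix,
    # not stored in any single place.
    W = sum(np.outer(p, p) for p in patterns) / len(patterns)
    np.fill_diagonal(W, 0)

    # A partial cue (the 'anchor'): half the event, the rest blanked out.
    state = patterns[0].copy()
    state[32:] = 0

    # Recall: the network settles onto the nearest stored pattern,
    # returning far more than the cue asked for.
    for _ in range(10):
        state = np.where(W @ state >= 0, 1, -1)

    print((state == patterns[0]).mean())  # typically 1.0: the full event is back
    ```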

    Out of interest, here's a paper on my reading list that might have some relevance here:
    • Persuh, M., LaRock, E., & Berger, J. (2018). Working Memory and Consciousness: The Current State of Play. Frontiers in Human Neuroscience, 12, 78. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5840147/ --- "In this article, we focus on visual working memory (VWM) and critically examine these studies as well as studies of unconscious perception that seem to provide indirect evidence for unconscious WM"
  • Presenting my own theory of consciousness
    Perhaps a bit outrageously, I am suggesting that the divide between conscious and not-conscious, intentional and unintentional, is difficult to actually define. -- SaugB

    SaugB, I'll respond to your main question in a minute, but your mention of sleep and the unconscious caught my attention, so let me segue briefly to make an outrageous claim of my own:
    * We might be conscious during sleep.

    I've heard it claimed that the only reason we appear to 'experience' dreams is that we remember them upon waking. I've never been happy with that premise.

    With blatant dependence on my own possibly misguided theories: for me to experience consciousness doesn't require that I appear awake to an outsider. While my body may be asleep, my consciousness could be awake. In most cases the level of apparent executive control is attenuated, but as is well known we do sometimes gain awareness of being in a dream and in the act regain some of our executive control -- I've experienced this myself, though sadly only once or twice. All of that is fully supported by my theory.

    All of that could also be explained via the memory theory, so I could be totally wrong here. But I can't help but enjoy the apparent contradiction of being conscious during sleep.
  • Presenting my own theory of consciousness
    So the question is why you would even pursue a mechanistic theory in this day and age? Why would you not root your theory in biology? -- apokrisis

    I'm not actually sure where the problem is here. I see the two as complementary. As I have stated in my paper, the overall problem can be seen from multiple complementary views: the mechanistic/computational view ("logical" in my paper), and the biological ("physical" in my paper).
  • Presenting my own theory of consciousness
    We are very different as we evolved a capacity for syntactic speech. And that new level of semiosis is what allows a socially-constructed sense of self-awareness. -- apokrisis

    That statement assumes that semiosis only applies to language. But Pattee showed quite convincingly that it applies to DNA (in a link you shared, if my memory is correct) -- an example of semiotics applying to something other than language. I would suggest that the mechanisms I have proposed are another example. I probably haven't got the split quite right, but as an attempt at the style of Pattee (with a toy sketch in code after the list):
    * sense neurons produce a codified state having observed an object
    * other neurons interpret that codified state and use it for control
    * the codified state has no meaning apart from what the system interprets of it
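
    Here's that split as a toy code sketch. Every name in it is invented, and the 'neural code' is just an arbitrary integer -- which is rather the point: the code means nothing on its own; only the interpreting machinery gives it meaning.

    ```python
    def sense(obj):
        # Sense neurons: produce a codified state from an observed object.
        # The encoding is arbitrary -- any consistent mapping would do.
        return sum(map(ord, obj)) % 4

    def interpret(code):
        # Other neurons: map the codified state onto a control action.
        actions = ["approach", "flee", "eat", "ignore"]
        return actions[code]

    state = sense("apple")   # a bare number, meaningless in isolation
    print(interpret(state))  # meaning exists only relative to interpret()
    ```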
  • Presenting my own theory of consciousness
    A computation approach builds in this basic problem. A neurobiological approach never starts with it. -- apokrisis

    I agree with you in principle that a computational approach may be a false start. But I have a couple of points in response.

    1. The computational approach appears to have the greatest explanatory power of the various alternatives out there. My theory, for example, is testable from the point of view of the "neural correlates of consciousness", and furthermore it offers predictions about what we'll discover as neuroscience develops. It provides explicit mechanisms behind why we are aware of certain things, and not aware of others. And I could provide many other examples if I took the time.

    2. I have not seen a non-computational theory provide this level of detail.

    Perhaps it is more accurate to say that a computational theory of the brain and consciousness has the best ability to "model" the observed behaviours (internal and external), enabling us to do useful things with that modelling capability; however it may not form a "complete" theory.
  • Presenting my own theory of consciousness
    Would it be possible to add a short summary in the form of bullet points? Oh, that's right. Yes. It is. Could you please do so? Thanks! -- Outlander

    I did write one, but then deleted it. I knew that if I provided a summary, people would read that and jump to conclusions without reading the whole paper.

    If you're interested, read the paper. Otherwise, ignore it.

    I don't think so, though I shouldn't answer for someone else. I'm almost finished with this intricate paper. It would be very hard to reduce what he has here to bullet points because the argument builds on itself stage by stage. -- JerseyFlight

    Cheers @JerseyFlight
  • The meaning of the existential quantifier
    My understanding of ∃ is that it means exactly what you originally quoted it as: "there exists some ....".

    So a statement that ∃m(m is a man and m is Greek) means that there definitely does exist at least one instance where there is a man, and that man is Greek.

    That is indeed different to saying "some men are Greek", because this statement doesn't imply anything about the existence of men at all.

    I think what you're trying to say is that "some men are Greek" is more accurately represented as:
    * given M = the set of men: if |M| > 0 then ∃m ∈ M such that m is Greek.

    More succinctly, what I'm trying to say is that the translation from "some men are Greek" to the use of ∃ is the problem here. It's not that the definition of ∃ needs changing.
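
    To make the difference concrete, here's a toy check in code, reading any() as ∃ over a finite domain (the predicate is just a placeholder):

    ```python
    def is_greek(man):
        # Placeholder predicate, purely for the sake of the sketch.
        return man in {"Socrates", "Plato"}

    men = []  # suppose the domain of men is empty

    # Direct ∃ reading: "there exists m in M such that m is Greek".
    # False on an empty domain -- ∃ carries existential import.
    print(any(is_greek(m) for m in men))                    # False

    # Conditional reading: "if M is non-empty, then some m in M is Greek".
    # Vacuously true on an empty domain.
    print(len(men) == 0 or any(is_greek(m) for m in men))   # True
    ```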
  • How to gain knowledge and pleasure from philosophy forums
    Sadly @Wayfarer and @JerseyFlight have just epitomised a number of points that @Ansiktsburk was raising:

    • discussions that go off topic and produce no outcome
    • ego stroking
    • whatever is started, two hotshots take over the discussion

    Just saying.
  • Summarizing the theories of consciousness
    Wow. Nice summary. You have thoroughly convinced me of the need to learn more about triadicism.
  • How to measure what remains of the hard problem
    The key thing to understand here is that semiosis - as I am using the term - is all about information regulating physics. -- apokrisis

    Interesting. You don't think that term is suitable for generalising into the virtual? ie: simulated physicality?
  • How to measure what remains of the hard problem
    But I feel strongly that what is at stake here is of fundamental philosophical importance. -- Wayfarer

    Likewise. I'm equally annoyed by those who claim conscious experience isn't something of importance, just because they can't measure it or account for it in their theories. Hell, I can't account for it in my own theory, but I still think it's important - if for nothing else than the fact that it's the single biggest reason why my theory may be completely bonkers.

    But regardless, I suspect we may fall on different sides of a proverbial line.

    So, how could the meaning of a state of being be something that is ever going to be revealed in an fMRI scan? -- Wayfarer

    Hmm. Yes, I was being a little vague. There's just too much to try to put into text. But let me circle round this topic for a minute.

    Could fMRI reveal the meaning of a state? Maybe. Quite probably, after sufficient technological advances -- if it is correct that all conscious state is a result of neuronal firings.

    Is that inhumane? Forgive me if I'm reading too much into your statement, but I felt like you were coming from a perspective of hoping/assuming that there is something more to our existence than just the physical/material structures of brain/bones/blood/neurons/etc. As inhumane as it feels to many, and to myself, I've slowly come to think that there isn't any inherent meaning to life beyond the physical. So, yes, I suppose it is inhumane. But no more so than anything else.

    At this stage, fMRI doesn't reveal much about the inner workings of the mind. But I do think that all of our conscious experience will ultimately be explained through the processes of electrical firings of neurons.
  • How to measure what remains of the hard problem
    What I'd like to know is what you mean by: the problem is the assumption that 'understanding' is binary? -- TheMadFool

    I'm simply referring to the fact that different systems/individuals can have differing degrees of understanding. eg: my calculator has zero understanding of Chinese; I understand just about enough to sometimes recognise Chinese characters vs. non-Chinese characters; which is significantly less understanding than someone who can read Chinese.
  • How to measure what remains of the hard problem
    As I understand it, action comes before perception. If this is the case consciousness is not merely an image but an inter-working and synthesis of environment... it also means more than this, I cannot draw it all out. But think of this for a moment, there is no such thing as a computer without a long historical material process, the fact that one wants to separate the quality of the computer from this process, gathering of raw materials, creation, assembly, etc., only serves to manifest the limitations and distortions (obliviousness) of the one who artificiates the divisions. We are not talking about the fully developed being of a thing that miraculously popped into existence, we are, whether one likes it or not, talking about a historical process, social activity. Therefore, the mechanisms that account for this process are both historical and material. To say we are confined to representations seems to overlook the very real material process. I am not dogmatic here, but this seems like a gigantic, ignorant gap in the thinking. -- JerseyFlight

    I'm not ignoring all the history of how humans came to be. I was focusing on a particular behaviour to highlight that we can introspect ourselves - ie: the subject making objective measures about itself.

    I was responding to comments by @Wayfarer, which I took to be a reference to the suggestion that we cannot learn anything about the mechanisms behind our own consciousness because we can't use that consciousness to examine itself (like how an eye cannot see itself). I'm aware of that viewpoint but I want to free any beholders of that view from their shackles, because we can achieve so much more than that.

    (Edited, because I originally mistakenly attributed some comments to MadFool instead of Wayfarer)
  • How to measure what remains of the hard problem
    @apokrisis do you know what term would apply semiotics to cognitive computation, irrespective of the physical substrate? I like the idea behind Pattee's biosemiotics, but it sounds like it applies specifically to biological organisms. There's cognitive semiotics, but it seems to be very broad and high-level, encompassing all sorts of social aspects, eg: body language.

    I'm thinking of the sort of low-level detail that Pattee goes into with his analysis of the DNA/RNA mechanisms behind cell replication, but applied to a neural network (of undefined physical nature) that computes. I'm also thinking that the idea of a semiotic closure could apply to a system that is aware of itself.
  • How to measure what remains of the hard problem
    As I suggested, there is an intrinsic difficulty with attempting to treat the subject - the thinker, the agent who is writing and speaking - as an object of scientific analysis. -- Wayfarer

    However, in this case, the object of analysis is also the subject doing the examining. It's precisely because you can't stand outside of, or 'objectify', the object of analysis that is the cause of both the 'hard problem' and 'the explanatory gap'. This is why it is in principle outside the scope of empirical analysis. -- Wayfarer

    There is indeed some difficulty associated with the subject trying to objectively analyse themselves, or a researcher attempting to analyse the subjective experience of another. There are definitely sizable barriers there - otherwise we would have known a long time ago what kind of conscious experience animals have.

    But it isn't insurmountable, and it can be done, so long as one is aware of the limitations. This is obvious from the amount we have learned about the brain and our subjective experience from fMRI and the like.

    There's an important but not so obvious other path of investigation. I'm quite sure that the content of our conscious experience is a representational model. A 'summary', if you like, of a certain subset of data flowing through the brain. One can argue that this means we cannot introspect anything about the mechanisms behind our subjective experience, because we are confined to this representational model, and we must inherently distrust the accuracy of this model.

    But software development uses models too, usually referred to as abstractions. And every software engineer knows that abstractions leak details of the underlying implementation.
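
    A trivial example: Python's float type presents itself as 'real numbers', but the underlying binary (IEEE-754) representation shows through:

    ```python
    print(0.1 + 0.2 == 0.3)  # False -- the abstraction says 'numbers',
    print(0.1 + 0.2)         # 0.30000000000000004 -- the leak reveals
                             # the binary machinery beneath
    ```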

    The representational model leaks too. For example, what we are and are not conscious of is very informative. So too is the fact that, on close inspection, we don't actually experience our senses directly: they are always preprocessed, with meaning attached. Eg: the parsing of words heard in audible speech.

    There's a lot more to that than fits in a comment, but my point is that the subject can learn a lot about their internal workings from their own subjective experience.
  • How to measure what remains of the hard problem
    Whereas you're suggesting that nothing is outside its jurisdiction. -- Wayfarer

    Precisely.

    If science were truly restricted to what we understand, then we never would have got to where we are today. The reality is that people aren't restricted to the particular definition of a word on the day. All words are a post-hoc approximation of the reality or concept that we intuitively perceive. It helps to agree on meanings -- that's why we have a common language -- but it's a problem when those agreed definitions hamper the ability to think further.

    Sorry. I feel that's a rant off topic.
  • How to measure what remains of the hard problem
    Superb job on expressing your ideas, friend. -- JerseyFlight

    Cheers. Appreciated.
  • How to measure what remains of the hard problem
    Modern science has tended to want to see 'everything in the universe' as physical, because physical objects are amenable to the precise objectification and quantification that is central to its method. That was part of the conceptual revolution introduced by Galileo, Newton, and Descartes, among others, at the advent of modern science. -- Wayfarer

    I see only two rational possibilities:
    1. everything is physical
    2. everything is metaphysical

    Modern science takes #1 as assumed and tries to slowly eat away at the unknown, finding physical explanations, under the assumption that eventually (at the point of infinity) everything previously unknown will be explained through the physical.

    Alternatively, given the inherent difficulty with the unknown, many assume that there must be some additional non-physical aspect that is necessary to explain everything. But I find this dualistic (or is it trialistic?) theory irrational -- though I find it hard to verbalise why.

    Rather, I think the more rational alternative is that everything is metaphysical, and that the physical world is just 'imagined'. For example, as one interpretation of Descartes' ideas: the only thing that exists is the subjective "I", and I'm merely imagining the rest of you. Though I'm not presupposing a particular outcome of whether we all exist as our own subjective metaphysical beings vs. there being only me.

    But I tend to fall back into a position of preferring #1 because the physical world is just better defined than the metaphysical one - at least according to society's current understanding.
  • How to measure what remains of the hard problem
    The best general theory of mind and life is that it is a semiotic process. A modelling relation. -- apokrisis

    Yes. That looks promising. I think it offers some useful tools for "measuring" more of the explanatory gap.

    Biosemiotics basically says three things:
    1) it is not sufficient to define the living world via its physical mechanisms,
    2) you also need to consider the 'data' that the mechanisms produce - aka symbols,
    3) and the two are intrinsically linked because, as it happens in all dynamic living systems that we are aware of, you cannot have one without the other and still produce the kinds of behaviours that we expect of a dynamic living system.

    But what's most useful is that it provides a framework for measuring how effectively (and how efficiently) a system produces self-referential, conscious-like processing.

    It occurs to me that one way of using semiotics is kind of similar to Tononi's Phi theory, in that it provides a way of characterising different systems -- how well does the system follow the circular process of physical mechanics interpreting symbols and creating more physical mechanics from those symbols?

    In another view, it's a kind of (slightly open-ended) anthropic principle applied to the underlying mechanisms of living organisms. In The Necessity of Biosemiotics: Matter-Symbol Complementarity, Pattee explains that living organisms on earth use the particular DNA/RNA processes that they do because that's what works.
  • How to measure what remains of the hard problem
    We're certain that X understands Chinese. It must be then that the Chinese Room understands Chinese. -- TheMadFool

    The problem is the assumption that 'understanding' is binary.

    A calculator understands maths in much the same way as the room in the Chinese Room analogy understands Chinese. It has some non-negligible understanding of the maths that it's programmed to work with. If it had no understanding, then it wouldn't suffice as a calculator.

    We say that humans "understand" a concept because we build detailed models around that concept. We model not just the end result of how to apply a concept, but also layered theories and explanations. And we attach all sorts of context to the concept: how we "feel" about it, when/when not to apply it.

    All of that can be explained using the same underlying computational processes that the calculator uses.

    Is there something 'special' about the human understanding vs the calculator understanding that isn't just a matter of degree? Well, I personally think not, but I'll leave that as an open question for now.

    What I will suggest, though, is that the word "understand" is socially understood to mean a certain thing only because that's our human-centric definition of it.
  • How to measure what remains of the hard problem
    Well, the way I see it, all that needs to be done is, like the brain, we need to have in place hardware capable of logic and memory. After that, consciousness is simply a matter of feeding such a system with data. -- TheMadFool

    This is pretty much my view too. Almost Dennett-like, I suppose. That in the long run we'll figure out the mechanisms and we'll see all of consciousness as a mechanical process. But I also see the explanatory gap as needing explanation.

    I'm reading through Michel Bitbol's It is never known, but it is the knower (thanks @Wayfarer). He claims that scientists naively infer from our past scientific successes that we'll also succeed in explaining consciousness through physical mechanistic principles. I disagree. I think we will eventually explain it as a physical mechanistic process because the majority of evidence is that everything physical in the universe is a physical mechanistic process, and the majority of evidence is that we are physical.

    But to my mind, current theorists who propose a mechanical process and claim that it explains everything about consciousness are indeed naive. There is definitely something that needs explaining. Like Bitbol's thesis on the importance of taking subjectivity seriously, any theory on the mechanics behind a subjective conscious experience is incomplete until it explains how the objective mechanics produces the subjective.

    Ultimately, like for much of scientific discovery, we'll improve our understanding of "mechanical process" while taking the path towards the nirvana of understanding consciousness.

    (BTW, I'm not quoting you because I assume you think that's all there is. It was just a convenient starting point for making my own point)
  • Summarizing the theories of consciousness
    Artefact -- 1. an object made by a human being, typically one of cultural or historical interest ("gold and silver artefacts"). 2. something observed in a scientific investigation or experiment that is not naturally present but occurs as a result of the preparative or investigative procedure. -- Wayfarer

    Hmm, yes. I see I'm going to have a hard time picking the right words.
  • Summarizing the theories of consciousness
    The point about the Buddhist approach is that it never reifies 'consciousness' as some kind of mystical whatever

    It's interesting that it still treats consciousness as a separate thing from those other qualities.

    My own opinion is that it will ultimately be proven to be merely an artifact of those other qualities (though I have no idea how we'll get there). But it's interesting just how much every society and ancient philosophy seems to puzzle over it.
  • Summarizing the theories of consciousness
    the broad options are monism, dualism and triadicism

    Oh. I've got some reading to do. Thanks.
  • Welcome to The Philosophy Forum - an introduction thread
    Greetings.

    I've been looking for a suitable forum to share some of my own ideas on consciousness.
    I've been thinking about it on and off since 2014 and developed a fairly detailed theory. But I approach it much more from an engineering perspective than the philosophical one, so I'm not totally sure whether this is going to be the right place.

    I still haven't figured out the difference between a materialist and a physicalist, or if there is a difference. And I'm not sure about identity theorists, functionalists, and behaviourists, or whether there is some other 'ist that I could relate better to. But my philosophy probably falls somewhere in the space of a skeptical physicalist.