Search

  • A potential solution to the hard problem

    The so-called "hard problem of consciousness" arises when we attempt to understand (describe in scientific terms) how consciousness arises from the brain's activity. If a scientist, say Bob, inspects the brain activity of somebody else, say Alice, then Alice's brain and its functioning are in fact represented in Bob's brain, that is, in Bob's consciousness. Let us repeat: Alice's brain is represented in Bob's consciousness. From Bob's perspective, everything, including Alice and her brain, is represented in his consciousness. The outside world, including Alice, is, so to speak, painted in Bob's consciousness. When inspecting Alice, for instance by monitoring the functioning of her brain, Bob tries to figure out how Alice perceives the world, how she is conscious of the world around her. From Bob's perspective this is just like a picture within a picture. However hard Bob tries to understand Alice's consciousness and her perception of the outer world, it remains just a picture represented ("painted") in Alice's brain, which in turn is a picture in Bob's consciousness.

    We see that there are different levels of representation. Relative to me, at the highest level there is a representation, a "picture", of the world as perceived by consciousness. Within such a highest-level "picture" there are lower-level "pictures" associated with other observers. If we do not take this into account and do not distinguish between different levels of representation, then we have the "hard problem of consciousness". The problem lies in our failure to recognize that the lower-level representation of the world (a "picture") within a third person's brain under our scientific investigation cannot be identified with the higher level of representation (associated with the experimenter's "consciousness"). And the experimenter's "consciousness" is itself just a representation (a picture) in my consciousness.

    The highest level of representation of the experienced world is associated with consciousness. On the other hand, the world is described by a wave function. This means that there is a close relationship between consciousness and the wave function. The lower-level representation of the world in another person's brain is not consciousness, and if we wish to understand how consciousness can arise in that person's brain we have "the hard problem of consciousness". Consciousness and the associated wave function are the highest-level concepts, and cannot be derived from the lower-level concepts.
    Solipsism is avoided by postulating that the wave function (consciousness) can be localized in any brain (either within a particular Everett world or somewhere else in the multiverse). Thus, I could have been in the place of another person. Namely, a wave function is a mathematical object whose evolution is determined by its initial value, which can take one form or another. A wave function can be associated with a universe in which the "I" (the "me feeling") is in Bob's brain, seeing Alice as a representation (a "picture") in his brain. Or alternatively, a wave function can be associated with a universe such that the "I" is in Alice's brain, seeing Bob as a representation (a "picture") in her brain. In other words, the wave function of the universe can be localized in or associated with Bob's brain, or it can be localized in Alice's brain. Other possible forms of wave function can exist in principle, for instance a wave function not sharply localized in one particular brain, or within one particular Everett branch at all, but spread over a larger range of branches. Mystical experiences reported by many people can be understood as being associated with such wave functions.

    So, a wave function associated with a particular me-feeling, localized in a particular head, is just one possible collapse of a wave function embracing all possibilities. One of the many possibilities is the existence of a universe fine-tuned for life. And of course, the wave function associated with or describing my conscious experience has collapsed into such a fine-tuned world.
  • Logical proof that the hard problem of consciousness is impossible to “solve”

    That is your particular interpretation of the problem. David Chalmers's original paper doesn't say that.
    Wayfarer

    Correct, I was not quoting Chalmers. And it's not an incorrect interpretation of the problem either, set out for a layman's understanding. At the core of the hard problem, the issue is that we cannot objectively evaluate subjective experience, or what it is like to be another being. I go into more depth in some other posts here; see if my answers jibe or not.

    He never says that the problem is what it is like to be a conscious individual that isn't ourselves.
    Wayfarer

    If we were able to objectively evaluate the subjective experience of an individual, we would have no hard problem. He doesn't have to say those exact words for us to understand the reason behind his claim.

    Which he proposes as a 'naturalistic dualism'. The key point being the emphasis on 'experience' which is by nature first-person.
    Wayfarer

    Right, we cannot objectively evaluate subjective experience. So since we can't use objectivity in regard to 'what it is like to be the consciousness', we have to use non-objective terms. He can use the word dualism if he wants, but he's not implying that subjective consciousness isn't physical or some 'other'. He's just noting there's no objective way to evaluate, in physical terms, the subjective experience of being conscious, as we have no way of evaluating what it's like to be something we are not.

    The point is to hammer home that the hard problem is not, "Is our consciousness in our brains?" Yes, it is. There is no soul or other essence, as neuroscience has shown repeatedly. It just means that we cannot objectively talk about the subjective experience of being conscious, because we have no way of objectively knowing what personal experience a person is feeling when they say, "I feel pain". We can see their bodily reactions, their actions, and their brain functions, but we cannot currently understand what that 'feeling' is unless we are that person themselves. Perusing your Chalmers quotes, I don't see where I'm at odds, so we might be in agreement here.
  • The 'hard problem of consciousness'

    There's a reason why Chalmers says "arises from" rather than "is caused by." You're assuming causation here, but that's not built into the hard problem. Have you read Jaegwon Kim? His ideas on supervenience and other grounding relations are very helpful.

    This question is similar to asking why H2O is wet
    Wolfgang

    Can you explain why this is a tautological question? I would have said that there are non-tautological, chemical reasons that explain why H2O, at a certain temperature, is wet.

    Centralization in the brain brought with it the need for a feedback mechanism that made it possible to consciously perceive incoming stimuli – consciousness, understood as the ability to sense stimuli.
    Wolfgang

    But that's precisely the hard problem: Whence this "ability to sense stimuli"? Why couldn't the stimuli simply do their thing (including whatever self-correction you want to build into it) without being sensed? And don't forget that the other aspect of the hard problem is to explain its modal status: Is consciousness necessarily as it is? What is it about biological life that gives rise to this phenomenon, this "feedback mechanism," rather than some other?

    And of course this all leaves out the scientific question: Exactly how did the evolutionary process occur? What happens with neurons that leads to consciousness? Fortunately, this aspect of the hard problem is not for us philosophers to solve!
  • The 'hard problem of consciousness'

    In my posts above I'm arguing against the property dualism that is implied in the so-called hard problem of consciousness. The problem also reappears in epistemological forms of dualism, such as indirect realism, or in any philosophy in which it is assumed that consciousness is inaccessible to our knowledge.

    Those are not my problems. I'm a direct realist, and a monist, so there's no need for you to give me a lecture on the monist nature of the world. Likewise, when I'm talking of subjective and objective in their ontological and epistemological senses, I'm not trying to split the world in two. In a monist world, things can have different modes of existing, and some things are observer-dependent (e.g. money) while other things (e.g. mountains) exist regardless of observers. But thanks anyway.
    jkop
    Actually, direct realism is part of the hard problem. Asserting that you see the world as it is (as static objects and physical brains) and then comparing that to how the mind appears and is described (as non-physical and immaterial) is how the hard problem arises, because it does not account for causation, and causes are not their effects and vice versa. I have argued that the distinction between direct and indirect is incoherent. What does it even mean to directly or indirectly access something? I asked you what an observer is, and you didn't answer the question.

    So it's not just an issue of perception. It's a problem of language-use. We don't need to use terms like, "direct", "indirect", "subjective" and "objective", even in a monist sense.
  • The 'hard problem of consciousness'

    Ok, Husserl might not seem to be a dualist, but the assumption that consciousness is immaterial in the sense that it never appears as an object in a world of objects, implies an epistemological dualism, and the hard problem reappears. For if consciousness is immaterial, then it seems we have no way of knowing what it's like to be another observer, or how immaterial experiences arise in a material world.
    jkop

    Well, that's true! The whole point of the argument is to throw into stark relief a fundamental gap in the generally-accepted physical account of the world. It has been said many times that in the transition from the medieval geocentric universe to modern cosmology, the world became conceived in terms which make life itself, and human life in particular, a kind of anomaly* (per Stephen Hawking's often-quoted quip that we're a kind of chemical scum on a medium-sized planet, or Steven Weinberg's remark that 'the more the universe seems comprehensible the more it seems pointless').

    The point is, though, that the objective judgement of the minuscule dimensions of human life against the vast background of modern cosmology is existence 'viewed from the outside', so to speak. It is made from a perspective in which we ourselves are treated as objects. And that is a direct implication of modern objective science, in which the measurable attributes of objects (mass, volume, number, velocity and so on) are declared fundamental, and appearance, colour, etc. are assigned secondary or subjective status. It is a worldview tailor-made to exclude the subject so as to arrive at the putative, scientific 'view from nowhere'. And I think that is all the hard problem argument shows up - and it does so quite effectively (one reason why at any given time there are a number of threads discussing it).

    For idealists for whom everything is consciousness, the hard problem does not arise from a metaphysical or epistemological wedge. Likewise, it doesn't arise for direct realists under the assumption that we see objects directly.
    jkop

    I consider myself an idealist, but I also believe that all the objects I interact with are real objects. They're not constituted by mind, but on the other hand, they only appear and are meaningful within experience. That is what I mean by 'idealism', although perhaps it is closer to phenomenology. Whereas I have the sense that when you say 'idealism', you believe that it posits something called 'mind' which is constitutive of reality in the same way that 'matter' is for materialism. But that, I would suggest, is what Whitehead meant by the fallacy of misplaced concreteness, or the attribution of reality to abstractions (such as 'mind' and 'matter'). It's a reification.

    Which leads to:

    For example, a bird observing its environment, birdwatchers observing the bird, a prison guard observing prisoners, a solo musician observing his own playing, an audience observing the musician, scientists observing their experiments, a thinker observing his own thinking (e.g. indirectly via its effects).
    jkop

    Splendid observation! This brings up the idea of the 'lebenswelt' or 'umwelt' which is very much part of both phenomenology and embodied cognition. They refer to the 'meaning-world' in which all organisms including humans orient themselves, where 'objects' appear in terms of their use and meaning for that being. Within that context, objects are no longer abstractions, but real and felt elements of lived experience.

    Compare this passage from the phenomenologist Maurice Merleau-Ponty:

    For the player in action the football field is not an ‘object,’ that is, the ideal term which can give rise to an indefinite multiplicity of perspectival views and remain equivalent under its apparent transformations. It is pervaded with lines of force (the ‘yard lines’; those which demarcate the ‘penalty area’) and articulated in sectors (for example, the ‘openings’ between the adversaries) which call for a certain mode of action and which initiate and guide the action as if the player were unaware of it. The field itself is not given to him, but present as the immanent term of his practical intentions; the player becomes one with it and feels the direction of the ‘goal,’ for example, just as immediately as the vertical and the horizontal planes of his own body. It would not be sufficient to say that consciousness inhabits this milieu. At this moment consciousness is nothing other than the dialectic of milieu and action. Each maneuver undertaken by the player modifies the character of the field and establishes in it new lines of force in which the action in turn unfolds and is accomplished, again altering the phenomenal field. (Merleau-Ponty, 1942/1963, pp. 168–9, emphasis added) — Quoted in Précis of Mind in Life: Biology, Phenomenology, and the Sciences of Mind, Evan Thompson

    Notice that this approach undercuts the tendency to view 'consciousness' (or mind) as an object, state or thing of any kind. This is why the embodied cognition approach provides a solution, or remedy, to the hard problem, by showing up the artificial nature of the division between mind and world which is at its root.

    ------
    *This is the thrust of an early (1955) essay in the phenomenology of biology by Hans Jonas: The Phenomenon of Life.
  • Looking for suggestions on a particular approach to the Hard Problem

    Hello everyone,

    I'm a philosopher by training though not a philosopher of mind, but I don't work in philosophy or academia any more.

    I've had some thoughts about the hard problem recently, and I was wondering if anyone could match them to a particular theory on the topic and ongoing discussion.

    I was recently talking to a friend of mine, who was explaining the position of illusionism with regard to the qualitative character of mental states, i.e. that we only believe we have such states, but actually this is an illusion. A way to gloss this position is that we are all actually philosophical zombies, only most of us don't know it, and even those who do can't shake the belief that we aren't.

    I've got my own problems with this position, but leaving it aside, it seemed to bring out clearly that a physicalist account of the mind does need some kind of solution to the hard problem - i.e. how do we account for consciousness (in terms of the experience of states with qualitative character) within our best account of physics. If we can't, then we need to accept some kind of non-physicalism about the mind, in which these kinds of states don't exist within the same realm as physical stuff.

    Reading a popular coffee-table book about astronomy and modern theoretical physics got me thinking that this seems an unnecessary dichotomy at this stage. Those guys are happy to countenance all kinds of aspects of the physical universe which go well beyond what we might characterise as a kind of meat-and-two-veg Hobbesian materialism, i.e. we've got 3 dimensions of space, 1 of time, and matter is just stuff located within those coordinates.

    I then got thinking about this phenomenon of consciousness we all have. One of the things you can note about it is that it is definitely not co-extensive with everything going on inside and around our bodies. I don't have pain or sensory receptors in my brain. I don't have eyes in the back of my head, and I can't feel what exactly is going on in my pancreas right now - at least, not with very much resolution. Consciousness relates certain types of information that the nervous system of the body has access to, and not others. I remember reading recently, I think in New Scientist, that one theory of consciousness is that it has evolved to allow the organism to respond to certain problems in the environment and in itself with great nuance and feedback.

    It's an obvious point against certain kinds of physicalism that we cannot open up a person's brain (or even nervous system in general) and see a little simulacrum of what the person is experiencing. But I think that that is what physicalism (unless we want to be illusionists, as described) needs to presume it is looking for, as the qualitative aspects of consciousness need to be accounted for. If we can't locate these states somewhere in the physical world (construed as the world explorable by modern - or the best - physics), then we will have to admit that non-physicalism may well be true.

    But as mentioned, we don't need to assume that physicalism means the same as an older variety of materialism, in which it is very difficult to situate consciousness, simply because when we open up the brain, we don't see the qualitative consciousness of another person. It may simply be that the qualitative consciousness hasn't been detected yet, or can't be, using the experimental tools we have at our disposal. This is par for the course in physics - lots of phenomena are postulated but need technology or conditions to develop in order for them to be confirmed.

    For example, there is lots of new work now being done in quantum biology. It now seems that there are lots of ways in which biological entities have evolved to incorporate quantum effects into their functioning. We might not have expected this, and we may even be tempted to think "but how did the organisms know quantum physics was there to be used?" The answer is - they didn't. They randomly mutated in such and such ways that ended up bringing quantum effects into play, and some of these had evolutionarily beneficial effects.

    I think it would be worthwhile to consider moving forward with the hypothesis that consciousness is something like this. It is some kind of physical effect which displays a qualitative character, but an effect which we cannot yet directly observe in the brain. Why we cannot just observe it would be something for philosophers and scientists to develop hypotheses about - much as biologists are developing and testing hypotheses using quantum theory to try to explain biological processes they couldn't previously - and then test those hypotheses that are testable, work towards those that are testable in principle but not yet in practice, and think through the remainder. Only after all these have been shown to fail should we consider the option of non-physicalism, because only then would consciousness really be a problem which was outside the bounds of our best science.

    Can anyone tell me if there are programmes of research into consciousness, qualitative states, and the hard problem along these lines? Thanks a lot.
  • Intentional vs. Material Reality and the Hard Problem


    It is Moderate Realism, which sees universal concepts grounded in the objective character of their actual and potential instances rather than in Platonic Ideas or Neoplatonic Exemplars. Nominalism and conceptualism see universals as categories arbitrarily imposed by individual fiat or social convention.
    So, something like Aristotelian realism about universals? Well, that would make them more than a mere insignificant mental abstraction; on your take it's a real thing in the world, albeit inextricably linked to the particular. I'm not familiar with terms like 'notes of comprehension' or 'essential notes'. You say that logical distinction is predicated on the fact that intentional objects like concepts are different from materiality not ontologically but by virtue of not sharing these notes of comprehension. Can you unpack this term?

    No. Notice that we run all the original instructions. Any program that simply runs an algorithm runs it completely. So, your 'atmospheric sampler' program does everything needed to complete its computation.
    I mentioned in the post that it poses a problem for programs which require continual looping or continual sampling. In this instance the program would cease being an atmospheric sampler if it lost the capability of iteratively looping, because it would then lose the capability to sample [i.e. it would cease being a sampler]. As soon as the instruction is removed, it thus ceases being a sampler, and it suddenly becomes a sampler again [because it now has the capacity to sample] once the instruction is re-introduced. Even though it runs through the entire program in the thought experiment, during the period when the instruction is removed the program is in a state where it no longer has the looping/iterative-sampling capacity, hence the fact that it is not a sampler during that period.

    The problem is, we have no reason to assume that the generation of consciousness is algorithmic. Algorithms solve mathematical problems -- ones that can be presented by measured values or numerically encoded relations. We have no such representation of consciousness. Also, data processing operates on representations of reality, it does not operate on the reality represented. So, even if we had a representation of consciousness, we would not have consciousness.
    What do you mean they solve mathematical problems only? There are reinforcement learning algorithms out now which can learn your buying and internet surfing habits and suggest adverts based on those preferences. There are learning algorithms which, from scratch and without hard-coded instruction, can defeat players at high-level strategy games, without using mathematical algorithms.

    Also I don't get the point about why operating on reality representations somehow makes data-processing unable to be itself conscious. The kind of data-processing going on in the brain is identical to the consciousness in my account. It's either that or the thing doing the data processing [i.e. the brain] which is [has the property of] consciousness by virtue of the data processing.

    In the computational theory of mind, consciousness is supposed to be an emergent phenomenon resulting from sufficiently complex data processing of the right sort. This emergence could be a result of actually running the program, or it could be the result of the mere presence of the code. If it is a result of running the program, it can't be the result of running only a part of the program, for if the part we ran caused consciousness, then it would be a shorter program, contradicting our assumption. So, consciousness can only occur once the program has completed -- but then it is not running, which means that an inoperative program is causes consciousness.
    These choices are not exhaustive. Take an algorithm which plays movies, for instance. Any one iteration of the loop outputs one frame of the movie. The movie, here, is made by viewing the frames in sequential order. It's okay for some of the frames to be skipped, because the viewer can infer the scene from the adjacent frames. In this instance the program is a movie player not because of the mere presence of the instructions, nor because of the output of one or another frame [be it the middle frame or the last frame]. Nor could it result from only some of the instructions running: it requires them all to run properly for at least most [a somewhat arbitrary, viewer-dependent number] of the iterations, so that enough frames are output for the viewer to see some semblance of a movie. In this case it's not the output of one loop that results in consciousness, nor the output of some pre-specified number of sequential iterations that results in the program being a movie player. Instead it is a combination of a working program and some number of semi-arbitrary and not-necessarily-sequential outputs which result in the program being a movie player. This is not even a far-out example; it's easy to imagine a simple, early American projector which operates by taking film-strip. Perhaps sections of the film-strip are damaged, which leads to inadequate projection of those frames. Would you say this projector is not a movie player if you took out one of its parts before it reached the step where it's needed, and that it then impossibly becomes a movie player once the part is re-introduced right before it is needed?
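    To make the loop structure of that example concrete, here is a minimal sketch in Python; the frame count, drop rate, and the render() placeholder are hypothetical illustrations of mine, not anything specified in the original thought experiment:

    # Minimal sketch of the movie-player example: each loop iteration outputs
    # one frame, and a few iterations may fail, as with a damaged film-strip.
    # TOTAL_FRAMES and DROP_RATE are arbitrary, assumed values.
    import random

    TOTAL_FRAMES = 1000   # assumed length of the film
    DROP_RATE = 0.05      # fraction of damaged/skipped frames (arbitrary)

    def render(frame):
        pass              # stand-in for actually drawing the frame

    def play(total_frames):
        """Loop over the frames, skipping the occasional damaged one."""
        shown = 0
        for frame in range(total_frames):
            if random.random() < DROP_RATE:
                continue  # this frame is skipped; the viewer infers it
            render(frame)
            shown += 1
        return shown / total_frames

    fraction = play(TOTAL_FRAMES)
    print(f"Rendered {fraction:.0%} of the frames")

    Whether such a run counts as "playing a movie" is a matter of degree: no single iteration makes the program a movie player, and neither does the mere presence of the code; it is the fact that most iterations run and output frames.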

    We are left with the far less likely scenario in which the mere presence of the code, running or not, causes consciousness. First, the presence of inoperative code is not data processing, but the specification of data processing. Second, because the code can be embodied in any number of ways, the means by which it effects consciousness cannot be physical. But, if it can't be physical, and it's not data processing, what is the supposed cause?
    I don't think the multiple realization argument holds here. It could just be something like a case of convergent evolution, where you have different configurations independently giving rise to the same phenomenon - in this case consciousness. E.g. a cathode ray tube TV vs a digital TV vs some other TV operate under different mechanisms and yet result in the same output phenomenon - an image on a screen.

    No, not at all. It only depends on the theorem that all finite state machines can be represented by Turing machines. If we are dealing with data processing per se, the Turing model is an adequate representation. If we need more than the Turing machine model, we are not dealing with data processing alone, but with some physical property of the machine.

    I agree that the brain uses parallel processing, and might not be representable as a finite state machine. Since it is continually "rewiring" itself, its number of states may change over time, and since its processing is not digital, its states may be more continuous than discrete. So, I am not arguing that the brain is a finite state machine. I am arguing against those who so model it in the computational theory of mind.
    I am not in the field of computer science, but just from this site I can see there are at least three different kinds of abstract computational models. Is it true that physical properties of the machine are necessary for all the other models described? Even if consciousness required certain physical features of hardware, why would that matter for the argument, since your ultimate goal is not to argue for the necessity of certain physical properties for consciousness but instead (1) for consciousness as being fundamentally intentional and (2) that intentionality is fundamentally distinct from [albeit co-present with] materiality? I actually think my personal view is not that different from yours, but I don't think of intentionality as so distinct as to not be realized by [or be a fundamental property of] the activity of the physical substrate. My view is essentially that of Searle, but I don't think consciousness is limited only to biological systems.
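    As an aside on the theorem being appealed to above (that any finite state machine can be simulated by a Turing machine), the point is easy to picture: a finite state machine is just a transition table, and anything that can look entries up in such a table can realize it, whatever its physical substrate. The toy two-state parity machine below is my own illustrative example, not something from the thread:

    # A minimal sketch: a finite state machine as a plain transition table.
    # The hypothetical "parity" machine tracks whether an even or odd number
    # of 1s has been seen; any general-purpose computer (hence any Turing
    # machine) can simulate it by table lookup, regardless of substrate.
    TRANSITIONS = {
        ("even", 0): "even",
        ("even", 1): "odd",
        ("odd", 0): "odd",
        ("odd", 1): "even",
    }

    def run_fsm(inputs, state="even"):
        """Feed a sequence of 0/1 symbols through the parity machine."""
        for symbol in inputs:
            state = TRANSITIONS[(state, symbol)]
        return state

    print(run_fsm([1, 0, 1, 1]))   # prints "odd": three 1s have been seen

    This is "data processing per se" in the sense used above; whether that exhausts what brains do is exactly the point in dispute.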

    This assumes facts not in evidence. David Chalmers calls this the "Hard Problem" because not only do we have no model in which a conglomerate of neurons operate to produce consciousness, but we have no progress toward such a model. Daniel Dennett argues at length in Consciousness Explained that no naturalistic model of consciousness is possible.
    I don't understand why a neuron not being conscious but a collection of neurons being conscious automatically leads to the hard problem. Searle provides a clear intuitive solution here, in which consciousness is an emergent property of a physical system in the same way viscosity or surface tension are emergent from lower-level interactions: it's the interactions [electrostatic attraction/repulsion] which summatively result in an emergent phenomenon [surface tension]. In this case it's the relations between the parts which result in the phenomenon, and these cannot be reduced to simply the parts. I'd imagine there's some sort of way you can account for consciousness by the interactions of the component neurons in the system.

    I also haven't read Dennett's arguments so I can't comment on them.

    It is also clear that a single physical state can be the basis for more than one intentional state at the same time. For example, the same neural representation encodes both my seeing the cat and the cat modifying my retinal state.
    Well, the retinal state is encoded by a different set of cells than the intentional state of 'seeing the cat' - the latter would be encoded by neurons within a higher-level layer of cells [i.e. cells which receive iteratively processed input from lower-level cells], whereas the raw visual information is encoded in the retinal cells and the immediate downstream areas of early visual cortex. You could have two different 'intentional states' encoded by different layers of the brain or different sets of interacting cells. The brain processes in parallel and sequentially.

    "Dichotomy" implies a clean cut, an either-or. I am not doing that. I see the mind, and the psychology that describes it, as involving two interacting subsystems: a neurophysical data processing subsystem (the brain) and an intentional subsystem which is informed by, and exerts a degree of control over, it (intellect and will). Both subsystems are fully natural.

    There is, however, a polarity between objects and the subjects that are aware of them.
    Okay, but you seem to imply in some statements that the intentional is not determined by or realized by activity of the brain. I think this is the only difference we have. I would say an intentional state can be understood as some phenomenon that is caused by / emerges from a certain kind of activity pattern of the brain.

    Please rethink this. Kant was bullheaded in his opposition to Hume's thesis that there is no intrinsic necessity to time ordered causality. As a result he sent philosophy off on a tangent from which it is yet to fully recover.

    The object being known by the subject is identically the subject knowing the object. As a result of this identity there is no room for any "epistemic gap." Phenomena are not separate from noumena. They are the means by which noumena reveal themselves to us.

    We have access to reality. If we did not, nothing could affect us. It is just that our access is limited. All human knowledge consists in projections (dimensionally diminished mappings) of reality. We know that the object can do what it is doing to us. We do not know all the other things it can do.

    We observe everything by its effects. It is just that some observations are more mediated than others.
    I'm not entirely familiar with the Kantian thesis here, but I think the fact that our physical models [and the entities within the models] change with updated evidence, and the fact that fundamental objects seem to hold contradictory properties - their wave-particle nature - imply that theoretical entities like the 'atom' etc. are constructs. Of course the measurables are real and so are their relations, which are characterized in equations; but the actual entities may just be theoretical.

    This is very confused. People have learned about themselves by experiencing their own subjectivity from time immemorial. How do we know we are conscious? Surely not by observations of our physical effects. Rather, we know our subjective powers because we experience ourselves knowing, willing, hoping, believing and so on.
    I was trying to say that introspection is not the only way to get knowledge of conscious experience. I'm saying it will be possible [one day] to scan someone's brain, decode some of their mental contents and figure out what they are feeling or thinking.
  • Sleeping Through The Hard Problem of Consciousness

    TMF!

    If the subjective experience explains the nature of reality, what would explain the information that acts upon all matter (or emergent matter, as it were)? In other words, how do we reconcile informational energy acting upon all matter within our consciousness?

    I think that would be one of the missing pieces there, as it relates to your notion that the nature of reality (consciousness) is subjective.

    Otherwise TMF, I agree with your Subjective Epistemological/Ontological Problem. This is the problem associated with “subjective” versus “objective” perspectives on being in the world. Of course the way to think about this is that subjective experiential consciousness is fully "contained" within the individual. This containment results in two important sub problems, which are mirror images of each other. The first is the problem of directly knowing another’s subjective experience—the problem being it cannot be done. This is the problem of: “How do I know that you see red the way I see red?” This problem also relates to our knowledge of consciousness in other animals, which we can only know indirectly. This is also related to the philosophical problem of zombies.

    Indeed, all subjective experiences can only be inferred via behavior from an objective perspective. The second issue is the inversion of this problem. This is the problem that, as individuals, we are trapped in our subjective perceptual experience of the world. That is, the only way I can know about the world is through my subjective theater of experience.
    3017amen

    Since brain activity correlates with everything mental, inclusive of so-called subjective mental experiences, it goes without saying that these so-called subjective mental experiences are explicable just with brain activity and since this is exactly what the hard problem of consciousness claims is impossible, I'd say there really is no hard problem of consciousness.
  • Solution to the hard problem of consciousness

    Thank you for these tutorials in the philosophy of science. But you might want to check your facts.
    apokrisis

    This is not clear: "And do the tests claim the theory is true? Or do they make the more modest epistemic claim that the theory seems pragmatically reliable in terms of the purposes you had in mind?"

    What does "they" refer to? Tests? Tests don't make claims.

    Of course. In the same way that all theories have to be motivated by a counterfactual framing - one which could even in principle have a yes/no answer.

    So are all minds the result of a mush of complicated neurology found inside skulls? As a first step towards a natural philosophy account of consciousness, does this feel 99% certain to you?

    If not, why not? Where is your evidence to the contrary?
    apokrisis

    I'm strongly in favor of idealism. I think the explanatory gap is evidence that science can't solve the hard problem. The gap will grow and grow and people will eventually abandon the scientific approach to "solving" consciousness. There's nothing to be solved because matter doesn't exist.

    Does poking this delicate mush with a sharp stick cause predictable damage to consciousness? Well ask any lobotomy patient.

    And so we can continue - led by the hand - to where neuroscience has actually got to in terms of its detailed theories, and the evidence said to support them.
    apokrisis

    There's an idealist explanation for why poking a brain causes changes to mental states. If you poke a dream brain, the dreamer alters the dream. That's clunky, I admit, and begs the question of why a dreamer would modify their dream when their dream brain is poked, but it IS a non-materialist explanation for why changes to brains result in changes to minds: it's all part of the dream. And as evidence for my assertion that it's all a dream, I'll keep pointing out that we keep running into the hard problem and science keeps not solving it. It's not even close to solving it. There's not even a coherent framework for what an explanation for consciousness will look like. Neuroscience can keep piling up neural correlates to mental states, but that hasn't solved the hard problem, and it won't in the future. There's not going to be an Aha! moment where we get x amount of neural state-mental state correlations, and suddenly grasp the answer to the mind-body problem.

    All good moral questions. How do you answer them?
    apokrisis

    I can't. But then, idealists are very much the minority. Nobody expects idealism to solve anything. I think there's a day of reckoning for physicalism, though. Science has been solving these technical problems for a long time now. But it's going to fail people in this area (machine consciousness), and it's really going to come as a shock to a lot of people. This is still a society very much in love with scientism.

    I thought it was because we all act the same way. Roughly. Within engineering tolerances.

    You might need a neuroscience degree, along with an MRI machine, to tell if a person is indeed built the same way.

    You know. Verified scientific knowledge and not merely social heuristics.
    apokrisis

    Maybe. But we all also have brains, hearts, lungs, etc. A liquid-nitrogen-cooled computer the size of a room that passes a Turing test is not going to resemble a person at all. People aren't going to assume it has a mind. Getting people to go along with machine rights isn't going to be easy, especially when the scientists just shrug when people ask them if the machines are conscious.
  • The 'hard problem of consciousness'.

    What we perceive, feel, and think is experienced from a unique internal perspective. According to the 'hard problem of consciousness', some of these mental states are separate from and not reducible to physical systems in the human body.
    Brock Harding

    I do not believe this is the hard problem. We know that all experiences of ourselves reduce to the brain. That's really not in question. The hard problem is understanding exactly what a person feels when a certain brain state is triggered.

    For example, I'm imagining a field of grass. We can see the brain states that are triggered. But we can't see the image of me imagining the field of grass. I can tell you what I feel. I can tell you what I imagine. But there's no objective way to measure this; it comes purely from my subjective communication. We can't say, "Brain state X for certain causes every person to objectively imagine a field of grass." We know there IS a brain state that is doing it. We know it's a physical response. But because we don't have the "image" itself in front of us, we can't really objectively test or reproduce it. We have to rely on your personal communication, which might be wrong, biased, or not descriptive enough.

    To help think this through further, imagine the color green. How do I know that the thing you call green is the same image in my head? It's basically that sort of problem. We need some objective measurement, like light wavelength, to determine what "green" really is between you and me. Until we discover some outside way of measuring thoughts besides personal subjective experience, we cannot duplicate the issue.

    But it is not, at all, ever, a denial that our brain is what makes us think.
  • How to answer the "because evolution" response to hard problem?

    Regarding Quote 02 above, I answer by declaring we humans, unlike the automatons, possess a self who, described functionally, maintains a personal POV of events as reported via the senses & the cogitating mind.
    ucarr

    That is the problem. Where is the physical evidence for consciousness? What does ‘consciousness’ do? This is in light of understanding that it is perfectly possible for a philosophical zombie to exist (without disrupting our understanding of nature).

    This and what you say after leads directly to Husserl:

    How does our scientific process, based mainly within objectivism, render an objective profile of subjectivity? In facing The Hard Problem, have we arrived at the limit of scientific objectivism?
    ucarr

    Generally there are attempts made by cognitive neuroscientists adopting phenomenological approaches (Husserl’s phenomenology). I believe Husserl was on the only rational track, but it by no means extinguishes the Hard Problem; it just frames it in a different light that allows some form of possible approach to aspects within it or related to it.

    My personal view is that it is more likely a problem of definitions and/or category errors. Subjectivity cannot be ‘given’ to another, as someone else cannot be someone different. Piecing together the intersubjectivity does allow us to shed some light, but I think it is ridiculous to believe we can ‘know’ in any complete sense, and so the Hard Problem is more or less an extension where epistemic questions can play out.
  • Why is the Hard Problem of Consciousness so hard?

    If instead the semantics of scientific concepts were perspectival and grounded in the phenomenology and cognition of first-person experience, for example in the way in which each of us informally uses our common natural language, then inter-communication of the structure of scientific discoveries would be impossible, because everyone's concepts would refer only to the Lockean secondary qualities constituting their personal private experiences, which would lead to the appearance of inconsistent communication and the serious problem of inter-translation. In which case, we would have substituted the "hard problem of consciousness" that is associated with the semantics of realism for a hard problem of inter-personal communication that can be associated with solipsism and idealism.
    sime

    But phenomenology is not about first-person experience. This is a notion that issues from the very scientific perspective in question: here is a perceiving agent, there is a stone, and if phenomenology ruled our thinking in this, the perceiving agent would never leave her private phenomenal space. Phenomenology does not think like this. It takes appearance as Being. I am there and stones are there, and their existence is fully acknowledged as other than myself. My scientific conceptual relations with them do not change at all. All that has changed is that now we are freed from the absurd ontology of physical materialism that makes it, not hard, but impossible to describe epistemic relations, which are THE biggest embarrassment of analytic philosophy's naturalism. What is left for philosophy is clearer analysis of what makes appearance possible.
  • Why is the Hard Problem of Consciousness so hard?

    That's not what you said I said. You said:

    Your statement implies the belief commonplace subjective experiences should be easily accessible to the objectivist methodologies of science. It also implies the subjective/objective distinction is a trivial matter and should therefore be no problem for science.
    — ucarr

    I didn't say or imply either of those things.
    T Clark

    Don't confuse "easily accessible to the objectivist methodologies of science" with "easily solvable with the objectivist methodologies of science." I know you know neuroscience is hard work.

    ...As far as I can see, there's no reason to think that consciousness can't be understood in terms of principles we already are aware of. I don't see any hard problem.
    T Clark

    By my account, you trivialize the subjective/objective distinction when, firstly, you declare the (objectivist) "principles we already are aware of" are good enough to cover both the objective and the subjective, and secondly, when you deny the hard problem without argument.

    ad ho·mi·nem | ˌad ˈhämənəm |
    adjective
    (of an argument or reaction) directed against a person rather than the position they are maintaining: vicious ad hominem attacks.

    Your insult, as I said, was directed against me, not against my argument. You confirm the truth of this with your following statement,

    ...it was an insult.
    T Clark

    Well, an insult is a personal attack having nothing to do with a debate about ideas.

    The fact you don't recognize the difference tells me everything I need to know about whether or not to take you seriously.
    T Clark

    You make a lot of declarations unsupported by arguments. In this conversation you refuse to answer a central question about your assumptions. I always support my declarations with arguments. Usually I honestly answer tough questions that threaten my argument with implosion. By these standards, I'm much more serious than you are.
  • Why is the Hard Problem of Consciousness so hard?

    have an understanding of the hard problem.
    Moliere

    This was your understanding:

    So, whatever that is -- why my red is my red -- that's what the hard problem of consciousness is about. It's the feeliness of the world. And the thought, so my memory of what I was led to believe at least, is that there is as yet no scientific explanation for why my red is my red (or, perhaps another way to put it, there's no scientific way to tell what my red is -- whether it is your blue or not -- yet I certainly see red)
    Moliere

    I see where these speculations are coming from, but the hard problem is more basic. It's: why do you experience orgasms? Why doesn't that neural activity happen without any associated experience of it?

    It's not about why your orgasms are your own and not someone else's. See the difference?
  • Why is the Hard Problem of Consciousness so hard?



    I read Chalmers as breaking from the Cartesian theater, where the duality of a first person being separated from the rest of the movie is the explanation itself. ... The question is not whether we are only physical beings but whether the methods to establish what is only physical will explain experience. Chalmers is introducing a duality that is recognized through the exclusion of a phenomenon instead of accepting the necessity for an agency beyond phenomena.
    Paine

    I like Zahavi’s critique of Chalmers’ position:

    “Chalmers's discussion of the hard problem has identified and labeled an aspect of consciousness that cannot be ignored. However, his way of defining and distinguishing the hard problem from the easy problems seems in many ways indebted to the very reductionism that he is out to oppose. If one thinks that cognition and intentionality is basically a matter of information processing and causal co-variation that could in principle just as well go on in a mindless computer–or to use Chalmers' own favored example, in an experienceless zombie–then one is left with the impression that all that is really distinctive about consciousness is its qualitative or phenomenal aspect. But this seems to suggest that with the exception of some evanescent qualia everything about consciousness including intentionality can be explained in reductive (computational or neural) terms; and in this case, epiphenomenalism threatens.

    To put it differently, Chalmers's distinction between the hard and the easy problems of consciousness shares a common feature with many other recent analytical attempts to defend consciousness against the onslaught of reductionism: They all grant far too much to the other side. Reductionism has typically proceeded with a classical divide and rule strategy. There are basically two sides to consciousness: Intentionality and phenomenality. We don't currently know how to reduce the latter aspect, so let us separate the two sides, and concentrate on the first. If we then succeed in explaining intentionality reductively, the aspect of phenomenality cannot be all that significant. Many non-reductive materialists have uncritically adopted the very same strategy. They have marginalized subjectivity by identifying it with epiphenomenal qualia and have then claimed that it is this aspect which eludes reductionism.

    But is this partition really acceptable, are we really dealing with two separate problems, or is experience and intentionality on the contrary intimately connected? Is it really possible to investigate intentionality properly without taking experience, the first-person perspective, semantics, etc., into account? And vice versa, is it possible to understand the nature of subjectivity and experience if we ignore intentionality. Or do we not then run the risk of reinstating a Cartesian subject-world dualism that ignores everything captured by the phrase “being-in-the-world”?”
  • Why is the Hard Problem of Consciousness so hard?

    I like Zahavi’s critique of Chalmers’ position:

    “Chalmers's discussion of the hard problem has identified and labeled an aspect of consciousness that cannot be ignored. However, his way of defining and distinguishing the hard problem from the easy problems seems in many ways indebted to the very reductionism that he is out to oppose. If one thinks that cognition and intentionality is basically a matter of information processing and causal co-variation that could in principle just as well go on in a mindless computer–or to use Chalmers' own favored example, in an experienceless zombie–then one is left with the impression that all that is really distinctive about consciousness is its qualitative or phenomenal aspect. But this seems to suggest that with the exception of some evanescent qualia everything about consciousness including intentionality can be explained in reductive (computational or neural) terms; and in this case, epiphenomenalism threatens.
    Joshs

    That's clever.

    I find myself less engaged in this matter as the pages pile up. The options seem to be:

    1) There is a hard question. Insert explanation - generally something about metacognition and qualia.

    2) There is not a hard question. Insert explanation - generally something about a category mistake or eliminativism.

    Why does it matter? Is it mainly down to the role each perspective plays in supporting a contested ontology? Either 1) a physicalist monism (therefore keeping atheism safe from woo) OR 2) an ontological dualism allowing for more traditional forms of Western theism OR 3) a non-physicalist monism (idealism), mysticism and the East? 4)?

    Is this ever just about consciousness?
  • A potential solution to the hard problem

    Right, and as I said if there were no experiential dimension there would be nothing else either, so putting the question as to why there is experience is really equivalent to putting the question as to why there is anything at all, or why there is something rather than nothing.
    Janus

    The question "why does phenomenal experience exist" may seem analogous to the question "why does anything exist". I agree that the question can be viewed in this way. I still tend to view it this way myself occasionally. But I believe that the hard problem can be expressed in a way that distinguishes these two seemingly identical questions.

    As defined by the IEP article, "the hard problem of consciousness is the problem of explaining why any physical state is conscious rather than nonconscious. It is the problem of explaining why there is “something it is like” for a subject in conscious experience, why conscious mental states “light up” and directly appear to the subject."

    Expressed in this way, it could be viewed as a question that is not solely about the existence of conscious experience, but about why some physical states are conscious and others are not.
  • Why is the Hard Problem of Consciousness so hard?

    I'm just directly responding to the question of evidence.

    I think the hard problem of consciousness IS a hard problem. I don't disagree with you that it's a hard problem.
  • Why is the Hard Problem of Consciousness so hard?

    I don't see it as pessimistic at all or that anything is lost. What does a solution to the hard problem look like? I can't think of a good one which doesn't imply some sort of dualism, which I fundamentally disagree with.
    Apustimelogist

    You're right. Dualism is hopeless. The solution would be nondualism.

    I am not suggesting looking for a fundamental ontology based on computation, but an explanation for why knowing about fundamental ontologies is out of reach.

    I'm suggesting such knowledge is not out of reach. To show that it is out of reach would require ignoring all the people who claim to have such knowledge, or proving they do not...

    I think the explanation is actually already there, it just has to be articulated and demonstrated. Like you said, experiences are primitive.

    Ah. I didn't say this and would argue against it. You're conflating consciousness and experience, but I'm suggesting that the former is prior to the latter. Bear in mind that experience-experiencer is a duality that must be reduced in order to overcome dualism...

    We know experiences are related to the functional architecture of our brains. We can transfer or demonstrate the concept of this kind of primitiveness into the architectures and functional repertoires of A.I. We use A.I. to demonstrate the limits of what kinds of information are transferable from the environment, what kinds of concepts are created and what information they don't or can't include, and then see what kind of metacognitive consequences this has. Does an A.I. come up with primitive phenomenal concepts on a purely functional basis that it cannot explain, similarly to our hard problem? This is a totally plausible research program even if it may not be possible right at this moment.

    There are no primitive concepts or experiences. This was shown by Kant. For a solution one would have to assume a state or level of consciousness free of all concepts and prior to information. Don't forget that information theory requires an information space, and the space comes before the information...

    Not sure what you mean here but functionally, yes we are just intelligent machines. We are just brains.

    If you believe this you will never have a fundamental theory and will have to live with the 'hard' problem forever. I wonder what leads you to believe this when it is just a speculation. If you believe this then much of what I'm saying will make no sense to you. I would advise against making such assumptions, or indeed any assumptions at all...
  • Why is the Hard Problem of Consciousness so hard?

    In a nutshell: because correlation doesn’t explain consciousness.
    Art48
    I just received my copy of Bernardo (BK) Kastrup's 2020 book, Science Ideated. He doesn't discuss the "Hard Problem" directly, but the subject matter seems to be pertinent to this thread. So, I'll mention a few first-glance quotes & comments here.

    A. BK approaches the Science vs Philosophy controversy from a position of Analytic Idealism*1. "AI" (pardon the unintentional sentient-computer implication) sounds like a succinct description of Modern (post-17th century) philosophy : forced --- by the successes of physical science --- to focus primarily on the metaphysical aspects of Nature : e.g. Ideas ; Self-Consciousness. It accepts the material facts provided by modern physics, but interprets (analyzes) the data as it applies to the immaterial functions (conceptualization ; semiotics) of the human brain.

    B. BK says that modern Science "began attributing fundamental reality only to quantities". Then, "we began cluelessly replacing reality with its description, the territory with the map." And notes that "we now face the so-called "hard problem of consciousness" : the impossibility of explaining qualities in terms of quantities." So, he concludes that we "managed to lose touch with reality altogether".
    Note --- "Reality" as a whole system, including both Mind & Matter.

    C. He defines Analytic Idealism as "the notion that reality . . . . is fundamentally qualitative." Thus denying the basic principle of Materialism. Idealism views the world through the lens of subjective Consciousness, while Materialism views it through the lens of objective Technology.
    Note --- Qualia :the internal and subjective component of sense perceptions, arising from stimulation of the senses by phenomena. Hence, Reality converted to Ideality via physical senses, and metaphysical symbol synthesis.

    D. BK says that "Panpsychism ultimately implies universal consciousness". But then he dismisses that theory as "a halfway compromise between materialism and idealism". Instead, BK seems to favor full-on Idealism, devoid of the contamination of Physicalism. Paradoxically, it's difficult to even talk about metaphysical topics without getting entangled with the physicality embedded in common languages.
    Note --- Kastrup wrote "Why Panpsychism is Baloney", perhaps to complement his book Why Materialism is Baloney. https://iai.tv/articles/bernardo-kastrup-why-panpsychism-is-baloney-auid-2214

    Comments :
    We humans are only able to communicate the Qualia of our sensory Experience by asking : "do you see what I see?". The response must be translated from private Ideas into public Words, by following the rules of conventional language. Yet, that's where the Hard Problem begins. Our public language is necessarily built upon the material foundation of our common human sensory apparatus that we share with apes. Even apes, such as Koko, seem to be able to communicate feelings/ideas in sign language, which can only express abstract concepts in concrete gestures. Yet, the implication that ape sentience is comparable to human consciousness has been criticized as anthropomorphic interpretation*2.

    The Science-based metaphysics of Materialism is supposed to be dealing directly with physical Reality. But, since the subject "matter" is immaterial, BK says their arguments are based on hypothetical conjectures (maps), not empirical (territory) observations. So, their boo-hiss criticism of Consciousness queries on this forum is a case of the pot calling the kettle a "woo-monger". Consciousness is inherently subjective, hence not objectifiable under a microscope.

    My own theory of Consciousness has a "defect" similar to Panpsychism : jumbling Matter together with Mind. That's because the fundamental element of our real world is neither a physical thing, nor a metaphysical entity, but the not-yet-real Potential for both. Terrence Deacon calls it "constitutive absence", but I call it "causal information" (EnFormAction). Materialism & Spiritualism typically view Mind & Brain as incompatible opposites. But the BothAnd principle*3 allows us to see both sides of reality, where Mind & Matter are parts of a greater whole system : the enminded universe.

    I may have more to say about Science Ideated later, after I finish the book. This is just a taste, to give us some ideas to argue about in a thread on Consciousness in a material world. :smile:


    *1. Analytic Idealism in a Nutshell :
    While being a realist, naturalist, rationalist, and even reductionist view, Analytic Idealism flips our culture-bound intuitions on their head, revealing that only through understanding our own inner nature can we understand the nature of the world.
    https://www.collectiveinkbooks.com/iff-books/our-books/analytic-idealism-nutshell

    *2. Koko the Impostor :
    The apes taught sign language didn't understand what they were doing. They were merely "aping" their caretakers.
    https://bigthink.com/life/ape-sign-language/

    *3. Both/And Principle :
    My coinage for the holistic principle of Complementarity, as illustrated in the Yin/Yang symbol. . . . Conceptually, the BothAnd principle is similar to Einstein's theory of Relativity, in that what you see ─ what’s true for you ─ depends on your perspective, and your frame of reference; for example, subjective or objective, religious or scientific, reductive or holistic, pragmatic or romantic, conservative or liberal, earthbound or cosmic. Ultimate or absolute reality (ideality) doesn't change, but your conception of reality does, as you re-frame the question.
    https://blog-glossary.enformationism.info/page10.html



  • Why is the Hard Problem of Consciousness so hard?

    While I can’t know what the subjective experience of a given something is, it seems probable that most things don’t have any. I assume you agree with this. So we’re just trying to draw the most likely line as to consciousness. You say with some assurance that AI programs already have limited consciousness. Is there any evidence for this beyond their behaviors? A purely functionalist argument can’t resolve this, since it begs the question.J

    It all boils down to understanding how we know about consciousness today. It's not through the subjective experience of something. It's through its behavior. There is a thought experiment called a philosophical zombie that has the behavior of consciousness, yet lacks the subjective experience that we would associate with consciousness. The hard problem shows that it's irrelevant. Since we cannot know the subjective experience of anything, we can only go by behavior. The subjective experience of something that behaves consciously is currently outside of our reach.

    Not quite sure why the hard problem rules out denying consciousness to computers at some future date, or why you describe the hard problem as “true.”J

    Mostly because I don't give weight to what 'may' happen. Ask someone 30 years ago what they thought 2024 would be like and they wouldn't even be close. So all we have to go on is today. Today, we do not have the scientific means to experience another consciously behaving entity's subjective experience. Meaning that if we have an AI that ticks all the behaviors of consciousness, we cannot claim that it does, or does not, have a subjective experience. It's impossible for us to know. Since we cannot objectively evaluate a subjective experience, all we can do is measure consciousness through another being's behavior.

    Do I think that any non-living thing can be conscious? No, I’m strongly inclined, on the evidence, to believe that consciousness is exclusively a biological property.J

    How about we reword this a bit? Can a non-biological entity have the subjective experience of a biological entity? No. They are two different physical mediums. I can play a song on a harp or a keyboard, and the fundamental experience will have an inseparable difference in physical expression. So if an AI is conscious, its subjective experience is that of a non-biological being, not a biological being.
  • Why is the Hard Problem of Consciousness so hard?

    The circularity begins when you promise that “ when we describe ourselves as being conscious we're describing that non-physiological aspect of ourselves”, and when asked which non-physical aspect of ourselves we’re describing, you answer “consciousness”.NOS4A2

    I don't think it's circular. If you asked me what "physiology" describes, the answer is physiology.

    I’m only arguing that if consciousness does not apply to the physiology, there is no other object to which it can apply.NOS4A2

    What does physiology apply to? The question doesn't make sense. Physiology is just its own thing. Similarly, if dualism is correct then consciousness is just its own thing.

    The reason I would say no such aspects exist is because there is no indication such aspects exist.NOS4A2

    There's certainly something peculiar about consciousness given that a "hard" problem of consciousness is even considered. We don't consider a "hard" problem of electricity or water after all. Of course, that might just be because consciousness is significantly more complicated than every other natural phenomenon in the universe. Or it might be because consciousness really is non-natural and that there really is a "hard" problem.
  • Why is the Hard Problem of Consciousness so hard?

    did a good job of answering these materialistic challenges to the Hard Problem, with philosophical argumentation. But scientific evidence carries more weight on this forum. So, I'd like to give it a shot, with a focus on the distinction between Physics and Meta-Physics, as postulated in my own amateur Enformationism thesis. You may not agree with all of my arguments or evidence. :smile:

    You have to understand, if you accept the hard problem as true, you can NEVER state, "Computers do not have a subjective experience." You don't know. Can you be a computer processing AI algorithms? Nope. So if we create a machine and program that exhibits all the basic behaviors of consciousness, you have no idea if it has a subjective experience or not.Philosophim
    We determine that computers-do-not-experience-subjectively in the same way we "know" that other humans do experience the world in a manner similar to our own : by rational inference from behavior. So, the Hard Problem is not about the behavioral evidence of Consciousness, but about its lack of material properties. :smile:

    1. Consciousness is able to exist despite a lack of physical capability to do so.Philosophim
    For my thesis, Consciousness (C) is an immaterial state of awareness that arises from a physical process, not an entity that exists as an independent thing. I compare it to the mysterious emergence of physical Phase Transitions, such as water to ice*1. Some ancient thinkers, lacking a notion of physical energy, imagined the living & thinking & purposeful Soul as a human-like agent, or as something like the invisible breath or wind that you can feel, and can see moving matter around. Modern Materialism seems to criticize attempts to explain C based on the assumption that the explainer is referring to a Soul that can walk around as a ghost.

    However, if you think of C as a noumenal form of Energy, or EnFormAction as I call it, then its existence is physical only in its causal consequences, not as a material object. We can't see or touch Energy, so we infer its immaterial existence from its effects on matter : changes of form or state. Those transformations are noumenal inferences instead of phenomenal sensations. Consequently, C doesn't function like a machine, but more like magic; hence the difficulty of explaining it in terms of mechanisms.

    The "physical capability" of Energy to exist is taken for granted, because we can detect its effects by sensory observation, even though we can't see or touch Energy with our physical senses*2. Mechanical causation works by direct contact between material objects. But Mental Causation works more like "spooky action at a distance". So, Consciousness doesn't work like a physical machine, but like spooky gravity, or metaphysical intention. :smile:

    *1. New research details water's mysterious phase transitions :
    https://phys.org/news/2018-03-mysterious-phase-transitions.html

    *2. Evidence of Energy :
    Therefore, although energy itself isn't visible, you can detect evidence of energy.
    https://www3.uwsp.edu/cnr-ap/KEEP/Documents/Activities/EvidenceofEnergy.pdf

    2. Demonstrate a conscious entity that has no physical or energetic correlation.Philosophim
    Again, in my thesis, Consciousness is defined as a process or function of physical entities. We have no knowledge of consciousness apart from material substrates. But since its activities are so different from material Physics, philosophers place it in a separate category of Meta-Physics. And religious thinkers persist in thinking of Consciousness in terms of a Cartesian Soul (res cogitans), existing in a parallel realm.

    Despite Life After Death interpretations, there is no verifiable evidence of C manifesting apart from an animated physical body*3. But my thesis postulates that both Physical Energy and Malleable Matter are emergent from a more fundamental element of Nature : Causal EnFormAction*4 (EFA). The Big Bang origin state was completely different from the current state, in that there was no solid matter as we know it. Instead, physicists imagine that the primordial state was a sort of quark-gluon Plasma, neither matter nor energy, but with the potential (EFA) for both to emerge later. And ultimately for the emergence of Integrated Information as Consciousness. :smile:

    *3. Consciousness after death :
    From a strictly scientific viewpoint, we don't know. There is certainly no verifiable, repeatable evidence that the consciousness continues to exist. Nor is there any particular scientific reason to believe it does.
    https://www.quora.com/What-happens-to-our-consciousness-after-we-die-Does-it-simply-cease-to-exist-or-does-it-continue-on-in-some-form

    *4. Mass & Energy are forms of Information :
    the mass-energy-information equivalence principle, stating that information transcends into mass or energy depending on its physical state.
    https://www.sci.news/physics/information-fifth-state-matter-10638.html

    3. If consciousness is not matter and/or energy, please demonstrate evidence of its existence without using a God of the Gaps approach.Philosophim
    The existence of Matter & Energy is taken for granted, due to evidence of the senses, but the origin of the material world remains a mystery : is it self-existent, or contingent? The Big Bang theory is based on physical evidence observed 14 billion years after the hypothetical event. We now grudgingly accept that our world is temporary, only because the math sputters out at T=0/∞. Is that more like 12am or 12pm on the clock? The evidential Gap, beyond the evidence, can be filled with speculation of Creation, or a Tower-of-Turtles hypothesis.

    Unlike the material world, we require no math or theories to provide evidence of Self-Consciousness. It's self-evident ; mental ideas are all we know about anything. But Consciousness in other beings is not so obvious. Neurologists look for sensory signs of Awareness, such as verbal behavior, arousal, brain activity and purposeful movement. So, it's obvious that Consciousness does not exist in isolation, but is dependent on a> material body, b> neural complexity, and c> animation of body. But what is Life, and how do we know it exists? Schrödinger postulated that Life could be defined as 'negative entropy' : something that keeps from falling into chaos and from approaching 'the dangerous state of maximum entropy, which is death'. Negentropy is positive Energy (or EFA), animating the material world.
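
    For reference, a bare formula behind that phrase (paraphrasing the relation Schrödinger uses in What is Life?, with D standing for 'disorder'; treat it as a sketch, not a quotation):

        entropy = k log D,   so   negative entropy = k log (1/D)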

    Similarly, Tononi's Integrated Information Theory quantifies Consciousness in terms of Complexity and Wholeness of living systems. Thereby, he hopes to provide quantitative evidence of its existence, and perhaps of its relative quality. My own thesis defines Consciousness in terms of Energy (EnFormAction), and of Holistic Integration of sub-systems. Yet, our sensory evidence still requires physical inputs, just as any other form of Information reception. That's why, for behavioral observations, we require rational inferences.
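
    For what it's worth, here is a toy numerical illustration of 'integration', using plain multi-information (total correlation) rather than Tononi's actual Phi calculus; the three-unit XOR system and every name in it are invented for the example:

        # Toy "integration" measure: sum of single-unit entropies minus the
        # whole-system entropy. This is NOT Tononi's Phi, just a simple statistic
        # that is zero for independent units and positive for integrated ones.
        from collections import Counter
        from math import log2

        def entropy(samples):
            """Shannon entropy (in bits) of a list of hashable states."""
            counts = Counter(samples)
            n = len(samples)
            return -sum((c / n) * log2(c / n) for c in counts.values())

        def multi_information(samples):
            """Per-unit entropies summed, minus the joint entropy of the whole system."""
            k = len(samples[0])
            marginal = sum(entropy([s[i] for s in samples]) for i in range(k))
            return marginal - entropy(samples)

        # Hypothetical 3-unit binary system: unit 2 holds the XOR of units 0 and 1,
        # so the triple carries 1 bit that no single unit carries on its own.
        states = [(a, b, a ^ b) for a in (0, 1) for b in (0, 1) for _ in range(25)]
        print(f"integration: {multi_information(states):.2f} bits")  # prints 1.00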

    Therefore, Philosophical questions about Mind & Consciousness depend on personal reasoning; logical deduction from the meta-physical evidence of intentional activities. If you can't make that computation from available evidence, then you live in a matterful but mindless & meaningless world. And the mystery of Consciousness is dispelled, as a ghost, with a wave of dismissal. :smile:
  • How to answer the "because evolution" response to hard problem?

    So when presenting the hard problem to someone who is not familiar with it, or who has never really grasped it (and is not of a mystical bent), they will quickly answer: "Because evolution has created it!" when asked, "Why is it we have sensations, thoughts, feelings associated with physical processes?".

    How does one actually get the point across why this is not an acceptable answer as far as the hard problem is concerned? Can this be seen as answering it, or is it just inadvertently answering an easier problem? If so, how to explain how it isn't quite getting at the hard problem?
  • Sleeping Through The Hard Problem of Consciousness

    Subjective substance collapsed with objective substance resolves the explanatory gap.Enrique

    Except that it doesn't. It means you're not addressing the problem. Here are some examples of what David Chalmers calls 'the easy problems of consciousness':

    • the ability to discriminate, categorize, and react to environmental stimuli;
    • the integration of information by a cognitive system;
    • the reportability of mental states;
    • the ability of a system to access its own internal states;
    • the focus of attention;
    • the deliberate control of behavior;
    • the difference between wakefulness and sleep.

    All of the suggestions made above, including yours, address the 'easy problems'. That's because you are still dealing with the problem in objective terms.

    Chalmers uses the word 'experience' to differentiate the hard from the easy problem, as follows:

    The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience.

    Here, I don't agree with his terminology. 'This subjective aspect' is broader than experience - it is being. A being is the subject of experience; one of the hallmarks of beings is that they are subjects of experience. And it is the 'nature of the subject' that eludes objective description - for reasons that really ought to be obvious.
  • Have we invented the hard problem of consciousness?



    I view "the hard problem" as not really a "problem". All its really doing is stating, "Figuring out how your subjective consciousness maps to your brain in an exact and repeatable model is hard."

    Well, yeah. I think some people draw the wrong conclusions from it. It doesn't mean consciousness doesn't originate from your brain. We all know it does. But do we have a model that states, "If I send 3 nanos of dopamine to cell number 1,234,562 in quadrant 2 you'll see a red dog?" Not yet.

    The hard problem is simply a prediction of that difficulty. Part of philosophy's job is to form ideas and examine whether they are rational to pursue. If we consider the complexity of the mind, and the fact that we would have to rely on subjective experience to create an exact model of consciousness for all things, its complexity is likely outside of our current technical and scientific know-how.

    This is why people are trying to construct different models of consciousness that can avoid this problem of "exactness". Which is pretty normal: when people meet limits, they build what they can regardless.
  • Does the "hard problem" presuppose dualism?

    I'm not sure I quite understand the distinction between first person and third person perspectivesTom1352

    To illustrate by way of example: third person perspective is 'witnessing an accident'. First person perspective is 'being in an accident'.

    From Daniel Dennett:

    In Consciousness Explained, I described a method, heterophenomenology, which was explicitly designed to be 'the neutral path leading from objective physical science and its insistence on the third-person point of view, to a method of phenomenological description that can (in principle) do justice to the most private and ineffable subjective experiences, while never abandoning the methodological principles of science'.

    Dennett's point being that there's no ineliminable difference between first- and third-person perspectives.

    The dualist would however need to explain why qualia warrants departing from physicalism, which is my understanding of the 'leap' involved in answering the hard problem.Tom1352

    'Qualia' is a word that was introduced to this argument by the 'eliminativists', and you only ever read the word in this context. I regard it as jargon, and as a term Chalmers manages to make his case without needing:

    From David Chalmers' Facing Up to the Problem of Consciousness:

    The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience.

    The way I interpret it is that he's simply talking about 'being' - as in 'human being'. 'Being' is the precondition of any statement or theory whatsoever, including Dennett's 'objective physical science'. Dennett has spent his entire career contriving to deny that this has any particular significance. That is what Chalmers says has to be 'faced up to'.
  • Solution to the hard problem of consciousness

    after a bit more reflection on questions like why does consciousness, this universe, or even existence "exists", I began to think that maybe it's our understanding of consciousness that makes the problem seem "hard".Flaw

    The point of David Chalmers' essay, Facing Up to the Problem of Consciousness, is that first-person experience is not within the scope of objective, third-person descriptive analysis. So it deflates the expectation that the mind is something which can be explained by or reduced to scientific explanation, because scientific analysis is always conducted in the third person.

    The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel ('What it is like to be a Bat') has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience.David Chalmers

    And the issue is, experience is had by subjects. And subjects, in this sense, can't be made an object, in the sense that brain and cognitive functionality can be. It's too near to us for us to know it. This leads to the 'blind spot of science' argument.
  • How to answer the "because evolution" response to hard problem?


    How does one actually get the point across why this is not an acceptable answer as far as the hard problem is concerned? Can this be seen as answering it, or is it just inadvertently answering an easier problem? If so, how to explain how it isn't quite getting at the hard problem?
    schopenhauer1
    Evolutionary theory can be a step in the direction of dissolving the hard problem, but only if we go beyond classical Darwinism and conceive of organic processes not in terms of causal concatenations and re-arrangements of elements under external pressure but in terms of a more radical notion of reciprocal differences of forces.
  • Why is the Hard Problem of Consciousness so hard?

    Isn't this what they call the hard problem - How does manipulating information turn into our experience of the world? The touch, taste, sight, sound, smell?
    — T Clark

    No.
    frank

    This from "Facing Up to the Problem of Consciousness."

    The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience.David Chalmers
