
  • Hard problem of consciousness is hard because...

    Perception is something more than just raw phenomenal experience. To truly perceive, in the way that humans do, requires quite a bit of processing, sure. And even perception like that isn't enough to count as consciousness in the ordinary sense of the word; that takes even more processing, and reflexive processing at that.

    But all of that is access consciousness, the subject of the easy problem of consciousness; which is really quite hard in practice, because you have to do empirical science to explain it, but it's philosophically easy because we can say "the rest is just empirical science" in the way that mathematicians can say "the rest is just calculation" after all the abstract work is done.

    The philosophically hard problem about phenomenal consciousness asks what exactly it is, besides all of that functional stuff, that gives us the subjective, first-person experience of all of that happening; and if you built a machine to do all of the same functionality, would it lack that subjective, first-person experience, or would it have one just like ours, and if so, where does that come from and why?

    The contemporary panpsychist answer is that there isn't anything special that gives us subjective first-person experience, there just is a subjective first-person experience to everything. But what that subjective first-person experience is like varies with the function of the thing, such that only complicated reflexive systems like us have an "internal life" (because we are experiencing our own self-interaction as well as just reacting to the world). So if you built a machine to do all the same functions that human brains do, it would automatically have the same kind of subjective first-person experience that humans do, and there's no need to explain where that came from, because it's not something that just popped into existence when you built that functionality into it: the experience is built out of simpler experiences that were always there, right alongside building the function out of simpler functions.
  • Hard problem of consciousness is hard because...

    On the one hand, fields are real and modeled mathematically:Mww
    And on the other, fields are completely abstract and quantitatively incommensurable directly:Mww
    Yes. The intrinsic Either/Or aspect of our apparently dual "Reality" is what Einstein was talking about in his Theory of Relativity. What's real depends on who's looking. That's also why my personal worldview is based on a complementary Both/And perspective. For all practical purposes (science), what we perceive as concrete objects and physical effects is what is Real. But for theoretical purposes (philosophy), our perceptions of those objects are mental constructs. So discussions about Consciousness must make that distinction clear, or else, by reifying Consciousness, we run into the paradoxical "hard problem".

    Like all mammals, the human species has evolved to trust its perceptions as reliable guides to survival in the "real" world. But, unlike other mammals, humans have also evolved a rational extension of perception (conception), which allows us to see aspects of the world that do not exist in space-time. For example, we can make survival decisions for now, based on the past or the future. We can build instruments to extend our natural perception into aspects of space-time that are otherwise invisible and intangible, hence unreal. We can create abstract concepts, such as Unicorns and Hobbits, and act as-if they are real.

    Unfortunately, our cleverness leads us into seeing counter-intuitive and paradoxical "realities", such as quantum "wavicles". Thence, the question arises, "are they tangibly real, or merely useful ideas like mathematics?" For example, can we see or touch a magnetic field, or do we reify the field in order to explain otherwise inexplicable effects? Ancient people saw the effects of invisible Energy, and imagined invisible Spirits or Gods as the cause. Modern people see the effects of Magnetism on matter, and imagine a Force Field as the cause. Yet that field can be described, not in terms of material properties (redness, solidity, liquidity), but only in terms of mathematical relationships (positive or negative).

    The world that rational humans live in is both concrete (real) and abstract (ideal). Moreover, abstract ideas can have real effects, as in Memetics. So we have difficulty drawing a hard line between real & ideal. Which is why my worldview is BothAnd, until it's necessary to draw a distinction, as in theories of Consciousness.


    Memetics : Memetics describes how an idea can propagate successfully, but doesn't necessarily imply a concept is factual. https://www.google.com/search?client=firefox-b-1-d&q=memetics

    BothAnd Principle : Conceptually, the BothAnd principle is similar to Einstein's theory of Relativity, in that what you see ─ what’s true for you ─ depends on your perspective, and your frame of reference; for example, subjective or objective, religious or scientific, reductive or holistic, pragmatic or romantic, conservative or liberal, earthbound or cosmic. Ultimate or absolute reality (ideality) doesn't change, but your conception of reality does. Opposing views are not right or wrong, but more or less accurate for a particular purpose. http://blog-glossary.enformationism.info/page10.html
  • Hard problem of consciousness is hard because...

    And this very scientist says that, and again I quote [ ... ] There genuinely, really is 'a hard problem of consciousness', ...Wayfarer

    Argumentum ab auctoritate ... :roll:

    ... but it's almost beyond doubt that you don't actually comprehend what it is. — Wayfarer

    Well, I have no doubt whatsoever, Wayf, that you don't comprehend the demarcation problem at all (re: your penchant for 'so much wooooo, so little defeasible corroboration'). Or elementary informal logic for that matter (since you can't help making fallacious utterances). Anyway. You get the last word - link & quote away with your badd self ...
  • Hard problem of consciousness is hard because...

    No. "The hard problem ... ", like e.g. æther, is an empty concept180 Proof

    It's the problem of explaining subjectivity.
  • Hard problem of consciousness is hard because...



    Get human thought and belief right. That's where to start. The purported 'hard problem' is dissolved - as are many other so-called 'problems' - when we quit using utterly inadequate frameworks to talk about stuff.
  • Hard problem of consciousness is hard because...


    The purported 'hard problem' is dissolved - as are many other so-called 'problems' - when we quit using utterly inadequate frameworks to talk about stuff.

    Maybe if you are a robot and wish to claim qualia are an illusion. But what is it you want to say, anyway? You forgot to explain.
  • Hard problem of consciousness is hard because...

    Technically those are different things. Non-contradiction says it can’t be both true and false. Excluded middle says it can’t be anything but true or false. The two together are the principle of bivalence.
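
    One way to put those three principles in standard propositional notation, with $P$ standing for an arbitrary proposition:

    Non-contradiction: $\neg(P \land \neg P)$ (P is not both true and false).
    Excluded middle: $P \lor \neg P$ (P is true or P is false, with no third option).
    Bivalence: $P$ takes exactly one of the two truth values, which is what the two principles jointly express.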


    When consciousness, as a mysteriously emergent property in itself, morphs into the subconscious, that creates part of the unexplained hard problem. The daydreaming-while-driving example is one such phenomenon. Two brains are acting as one to create the same, or 'one', sense of awareness.

    Hence a person could crash because they think they're not driving at all. So I'm driving, and yet not driving.
  • Hard problem of consciousness is hard because...

    If you are ascribing some kind of independence to subconscious phenomena that's a pretty large leap.Pantagruel

    How did you arrive at that conclusion? I was merely talking about the hard problem of consciousness. Or, the unexplained illogical nature of same.

    Now 'independent existence' is another question. For example, the metaphysical Will in nature (Schopenhauer), or the 'independent' language of mathematics, and/or other metaphysical phenomena that we experience/perceive in life... is that what you mean?
  • Hard problem of consciousness is hard because...

    It absolutely does address the hard problem of consciousness. The solution is called "biperspectivism". It is quite neat. I've read a number of books on systems theory and systems philosophy in the last few months, I can't remember if it was Laszlo or von Bertalanffy that had the really concise description. I actually mentioned it in another thread about neurophilosophy here:
    https://thephilosophyforum.com/discussion/6692/does-neurophilosophy-signal-the-end-of-philosophy-as-we-know-it-/p1
  • Hard problem of consciousness is hard because...

    It absolutely does address the hard problem of consciousness. The solution is called "biperspectivism". It is quite neat.Pantagruel
    I'm familiar with Laszlo, but not with that abstruse theory. However, the term sounds like Cartesian Dualism to me. His solution was "neat", in that it got the church off his back, by arbitrarily defining Non-Overlapping Magisteria. And materialistic Science has flourished for centuries since cutting itself off from Philosophy and Metaphysics. But since the Quantum revolution in Science, the overlap between Mind & Matter has become ever harder to ignore. Anyway, I'll check it out, because the notion of Complementarity is essential to my own abstruse thesis. :smile:
  • Hard problem of consciousness is hard because...

    You keep making empty statements. How does that have anything to do with this thread and what I said in the opening post?Zelebg
    The statements you refer to are empty (meaningless) to you, because you don't understand the unconventional worldview that the assertions are derived from. That's why I provide links for those who are interested enough to investigate a novel way of looking at the world.

    In the OP, you stated, as-if a matter of fact, that "At the bottom of it all is just plain mechanics, . . ." My replies have denied that assertion, and offered an alternative to the Mechanical worldview of Classical Materialism. I suppose you think the opposite of Materialism is Spiritualism. But my BothAnd philosophy accepts both the Materialism (Quanta) of Science, and the Spiritualism (Qualia) of Religion, while noting that they each exclude or ignore the other side of reality. When you can see the world as a whole, the Hard Problem of Consciousness vanishes as an illusion. :cool:


    PS__Unfortunately, my worldview has some features in common with New Age philosophy. Which is why I spend a lot of verbiage to distance myself from the NA merging of science and magic. Whatever seems like supernatural magic is actually either obfuscation or natural phase changes.

    Note : Richard Feynman quipped "If you think you understand quantum mechanics, you don't understand quantum mechanics." I believe that's because Quantum Mechanics is not mechanical at all, it's emergent. Physicist Carlo Rovelli titled his new book Reality Is Not What It Seems. . . . from the conventional classical scientific perspective.

    BothAnd Principle : http://blog-glossary.enformationism.info/page10.html
  • Hard problem of consciousness is hard because...

    So you intend a falsification of A = A, insofar as some occasions permit A = not-A? I submit that if you’re daydreaming you’re not drivingMww

    Mww, precisely! As far as our consciousness is concerned, we are not driving, which is why we have the potential to crash and kill ourselves.

    Cognitive science says that our subconscious is driving. Hence, I'm driving and not driving at the same time. Therefore, consciousness is beyond our logical understanding.


    Same with metaphysical truths, per se: the principles of them may be found in reason a priori, and the possible objects given from those principles may be exemplified by experience, but that is not sufficient in itself to allow truths of any kind to reside in consciousness. Truth is where cognition conforms to its object, and no cognition is possible that is not first a judgement. Therefore, it is the case that truth resides in judgement, and if there is such judgement we are then conscious of that which is cognized as true.Mww

    I'm saying two things: 1. forms of qualia are essentially Kantian innate noumena, that are fixed properties in consciousness a priori. (Or metaphysical phenomena/existential phenomena that just are, and cannot be explained.) They can be described, beyond ineffable phenomena, but their nature can't be explained, particularly in the context of ex nihilo.

    2: I believe you are essentially saying intellect precedes the (Metaphysical) Will. And I'm saying that the Will precedes intellect. In either case, both are insoluble. Yet another hard problem with consciousness.

    Why would it have one?Mww

    Consciousness would have a nature to its existence. I use the word nature because its existence lies beyond our ability to logicize it. The nature of our existence is unknown.
  • Hard problem of consciousness is hard because...

    All the stuff about ethics and spirituality is besides any of this. This is just descriptive; any prescriptions could be paired with this. Accepting this description of the world doesn’t say anything about what is or isn’t valuable or good or etc.Pfhorrest

    My point, exactly. There’s the hard problem in a nutshell.
  • Hard problem of consciousness is hard because...

    P-zombie" is an incoherent construct because it violates Leibniz's Indentity of Indiscernibles without grounds to do so. To wit: an embodied cognition that's physically indiscernible from an ordinary human being cannot not have "phenomenal consciousness" since that is a property of human embodiment (or output of human embodied cognition). A "p-zombie", in other words, is just a five-sided triangle ...— 180 Proof

    Why would an entity that has the appearance of a regular human necessarily have phenomenal consciousness?

    That's a strong claim. It would require strong evidence.
    frank

    I neither claimed nor implied anything about "appearances" in relation to "phenomenal consciousness"; I used the concept of embodied cognition to point out that a 'p-zombie' with the same embodied cognition as a human being necessarily has the same phenomenal consciousness as a human being, that being the (reflexive) output of human embodied cognitive processes. I'm unaware of any explanation to the contrary (re: p-zombie sans phenomenal consciousness), and in all likelihood there isn't one; thus, I don't think the 'p-zombie' construct is conceptually coherent enough to do the thought-experimental work as advertised, namely to reify the illusion of mind-body duality (pace Spinoza et al) viz. qualia, etc., aka "the hard problem of ..."
  • Hard problem of consciousness is hard because...

    For all practical purposes (science), what we perceive as concrete objects and physical effects is what is Real. But for theoretical purposes (philosophy), our perceptions of those objects are mental constructs. So discussions about Consciousness must make that distinction clear, or else, by reifying Consciousness, we run into the paradoxical "hard problem".Gnomon

    Yep. Paradoxical indeed: we think of consciousness as that which belongs to us because of our nature, then attribute to it qualities we can’t figure out how it has.
    ——————-

    what Einstein was talking about in his Theory of Relativity. What's real depends on who's looking.Gnomon

    Exactly right. In Einstein (1905), the relativity of simultaneity depends for its direct explanation on a third observer for the two participants in the events relative to each other. The relativity can only be immediately witnessed by an observer outside both, even if each participant can afterwards compare information.

    Good stuff. Fun to think about. ‘Preciate the references; mine would be different, but close enough to see each other.
  • Hard problem of consciousness is hard because...

    Or are you saying you can't imagine the p-zombie at all?frank

    I'm saying the concept is incoherent, and therefore as a counterfactual premise it renders the "hard problem" argument invalid.
  • Hard problem of consciousness is hard because...

    Instead of ad homs against the scientist, see if you can come to grips with the actual argument. But I'm not holding my breath.

    I want to understand the way the eye works and how that corresponds to the sound of a breeze through the leaves of a tree.ovdtogt

    No time like the present for being able to study such things. Why not start with Facing Up to the Problem of Consciousness.
  • Logical proof that the hard problem of consciousness is impossible to “solve”


    What Chalmers thinks Experience Is
    Chalmers says “It is widely agreed that experience arises from a physical basis”. This is true. Chalmers speaks for an orthodoxy. He speaks not just for himself, but for an age, for a zeitgeist.

    Chalmers says that experience cannot be characterised by reducing it to physical terms. This, I believe, is a valid point.
    However his analysis of experience is hobbled from the start by a misunderstanding of experience and a mischaracterisation of it.
    And whilst he overtly states that experience may not be reduced to the physical, many remarks that he makes indicate that in fact he does think of experience as something physical.
    His list of “relatively easy problems of consciousness” straightaway reveals an unclear grasp of what he is talking about. His list fails to identify the critical distinction between our sense of the body or brain as something physical and our sense of it as something that has a will. He divides up mental categorisation into two separate problems. He divides up cause and effect issues into two, and puts one of them in with one of the categorisation problems. He includes within this list a problem that goes right to the heart of the hard problem.
    So I would divide up these problems in the following way:
    • The ability to discriminate, categorise and integrate information in the way a piece of filter paper in a funnel set over a beaker divides up dirty water into dirt and water. It discriminates between dirt and water, integrates the separate parts of water with one another, in the beaker below, and integrates the bits of dirt with one another on the top of the piece of filter paper, and it categorises things into the categories, Pure Water and Pure Dirt.
    • The ability to focus attention in the way a blast furnace extracts iron from iron ore. Just as only the most intense stimulus gets through to the brain from the senses, so only the heaviest material in the blast furnace, iron—because it blocks the way for all other lighter material because of its weight—passes out of the trap door at the base of the blast furnace.
    • The ability to react to environmental stimuli in the way a stone reacts to the sun, by warming up. “React” is a Newtonian word. For every action there is an equal and opposite reaction. A dead frog is galvanised by electricity. Something is prodded and it yells. This is cause and effect.
    • The difference between wakefulness and sleep in the way a smoke detector behaves differently in the presence of smoke whether or not it is wrapped in a material that partially prevents smoke from getting to it.
    • The ability to choose to discriminate, integrate and categorise in a way that you please. The ability to choose to focus your attention in a way that you please. The ability to choose a certain behaviour over another.
    • The ability of a system to access and report on its mental states.

    The first four identify experience with physical processes. That Chalmers calls these problems relatively easy consciousness problems shows that he thinks of consciousness as physical—in spite of his overt avowal that it may not be understood as something physical. The second two of these identify experience with cause and effect, which is to identify consciousness with a physical process (but, by the by, there is nothing even relatively easy about the topic of cause and effect, and so even if it is a consciousness problem, it is not a relatively easy one).
    The fifth is to do with the relationship of consciousness to the will. Either Chalmers thinks we are just mechanisms, and that free will is illusory, in which case again, he is identifying consciousness with something physical, or he thinks there is such a thing as free will, in which case the fact that will is associated with consciousness makes this certainly not a relatively easy problem.
    The last of these I am going to talk about in a minute. It is, I believe, not separate from his “hard” problem but goes right to the heart of it.
    In other remarks it is revealed that Chalmers in fact thinks experience is something physical.
    Experience “arises” from the physical. Though it seems likely that he chose this word particularly so as not to identify the relationship between experience and the physical as that of causation, it does suggest a relationship of dependence, one-way dependence, of which causation is one version, indeed perhaps the paradigmatic version. Something, Y, arises from something else, X, when, whilst Y is non-identical to X, nevertheless somehow all the ingredients for Y are already contained within X. Y is independent of X in that the removal of Y would not also be the removal of X—you could take away experience but the physical would remain—but Y is not independent of X in that the removal of X would also be the removal of Y—if you took away the physical you would take away experience too. So within X, within the physical, you have everything you need to make Y, experience.
    Experience is something based on a “whirr of information processing”. In other words it is based on something that is like a computer.
    Experience is something that organisms have, which are physical things. It is located where organisms are located.
    Experience is a “fundamental property”. “Property” is a word we use for things like mass, charge, spin, colour, texture, hardness, etc.: physical things. Of course a person can have the “property” of being lazy, or spontaneous, or conscientious, but we don’t usually use the term “property” in these contexts.
    Experience is a bit like electromagnetism in that it is just a given. It is not something to be explained in the way that matter is not something to be explained. It just is. It is at the end of the line of explanation; it is an explanans, not an explanandum.
    However at the same time Chalmers argues that experience may not be explained and understood by reference to the Physical, and that it is something that is to be solved, that it is a problem, indeed a hard problem—in the way that the existence of matter say, is not something to be solved, is not a problem.
    So what is Chalmers saying? Is he saying experience is a problem? Or is he saying that—having identified experience as something fundamental, like electromagnetism, only different—he has in fact solved the essence of the problem?

    What Chalmers Thinks the “Hard” Problem of Consciousness Is
    So what is the “hard” problem, according to Chalmers?
    Experience is a kind of stuff, that has an identity with brains, or with the information processing of brains. It is in the same place as the brain. It is a mode of presentation of the physical; it is of the physical, but not the physical itself.
    Chalmers would want to say that of the two, the physical and experience, only experience is directly accessible to us. There is something underlying experience, the Physical, but we can only really infer it.
    I think however that what is going on in Chalmers’ mind, when he is identifying his problem, is this. Take his example of the sound of a clarinet. He has two processes that he is comparing. One of them is what you might see through a microscope, of the tympanum vibrating, the transmission of this vibration through the bones of the ear, the pulse of electricity that travels from them to the aural region of the brain (with attendant releases of tiny quantities of chemicals)—then further, and more complex, pulses of electricity (and releases of tiny quantities of chemicals). Etc., etc. —Then, contrasted with this, there is the sound of the clarinet that we hear. All we can really say about that sound is that it is clarinet-sound-like.
    These two ideas of what is going on when we hear a clarinet, which appear so different, are in fact, puzzlingly, the same thing. Why is there this extra thing, the clarinet-sound-like sensation—over and above what is at its basis, the neurological information processing? How do they seem so different when there is some identity between them? (They are in the same place; they happen at the same time, you can correlate elements from the one —the various notes, for example—with the other; if you take away the information processing, you take away the clarinet sound too.) How is this one thing, me, both a physical thing and a thing that is made of experience? And why should there be this extra, inessential thing: experience?
    Chalmers is asking: How are these two stuffs, one of which is physical stuff, brains, or electricity, or information processing (such as what goes on inside computers), and the other of which is experience stuff—which are different—also the same, in that they are in the same place and happen at the same time?

    Why Chalmers Misidentifies the “Hard” Problem
    Chalmers’ identification of the “hard” problem derives from his mischaracterisation of experience.
    (A separate, tangential point: the hard problem that he is trying to get at is only one of the hard problems, the other of which concerns the will.)
    Chalmers asks how two non-identical stuffs are yet identified with one another, in that they are in the same place (and at the same time).
    I think this is quite wrong. I think the hard problem is the following: How is some stuff that is in a different place from my brain (whatever I am perceiving, or thinking about) also in my brain? How are these different stuffs, the stuff that is my brain, and the stuff I am looking at (say the shed at the bottom of my garden I am looking at now) apparently in the same place, in my brain? How do I know the world? How is my brain about my shed? —Or, if you don’t believe that the brain is “about” the world, how do we account for this apparent “aboutness”?
    Of course the shed can’t be in the same place as my brain. The literal, physical, shed, cannot be in the same place as my brain. That is one fact that seems certain. And yet it seems to be. This perception that I am having, this thought, seems to be shed-like. There is something of the outside world, inside me, inside this brain. How can this be?
    The problem Chalmers is addressing is: How is experience related to the physical? His framing of the question assumes that experience is a kind of stuff that overlays the brain. But this is a misunderstanding of the problem of experience. The problem of experience is this age-old problem, though it is expressed in a number of ways. How do I know the world? How can I, inside this brain, have access to the outside world? How are my thoughts about the world? What is the relationship of the subject to the object? How does a name mean what it means?
    To reiterate, and to characterise the difference between what Chalmers identifies as the hard problem and what I think is the hard problem, in the baldest, and the simplest way:
    Chalmers asks how can two different stuffs, physical stuff and experience stuff, be in the same place.
    I ask how two different physical objects, my brain and the shed, can make one stuff: experience. How are these non-identical things, my brain and the shed, somehow forced together, like two north poles of two magnets, into one thing? They want to spring apart. The situation is not resolved. There seems to be a contradiction.
    That is to say that I think the hard problem of consciousness—and indeed the hard problem that is at the basis of Chalmers’ hard problem—is the problem of representation.

    The Actual Hard Problem
    What I identify above as the real hard problem is really only one half of the hard problem, or one version of it. That version is, in shorthand, How is the world in my brain? How is the world somehow in here, when it can’t be in here?
    The other version is the mirror image of it. How am I out there? How is my brain where the world is? How is it that I, this brain, reaches out, is spread out, to what I am looking at, to the world all around me? How is my perspective on the world, the appearance of the world to me, the two-dimensionality of objects, as they are to me—identified with the world itself, of matter, of things that are nothing to do with me? How is the me-like version of the world identified with the non-me-like version, the world itself? How is the surface of things, which is a plane, and which is my version of Reality, identified with a volume, which is the world’s version of Reality? How is the gossamer-thin surface of things identified with the substance of things? How is this, over here, this perspective that is centred on this brain, or this eye, identified with that, over there, out there? The objects of the world seem to be ranged around my brain in a circle, or rather, in a sphere, and I go right up to the outer surface of these objects; what I know is whatever there is up to the surface of those objects, but no further: whatever is beyond that surface, or inside it, is forever mysterious to me. That is my limit, my knowledge, or whatever it is that I sense. But how is this, this sphere, that is distinct from the objects of the world, also identified with them? How is this perspective identified with the perspectiveless world? How is a view identified with something that isn’t a view, that is just matter? Or the problem can be put like this: How do I know these objects, in that I perceive them, in that I can touch them, or touch them with my eyes—and also do not know them, in that what they really are is the substance that is behind the appearance? Or the problem can be put like this: How is the appearance of things to me quite different from the appearance of things to you—they are non-identical, as non-identical as the two different points from which the world is viewed—and yet each is identified with one and the same thing, the world? How is the greenness of that leaf, not the electromagnetic waves, but the phenomenological stuff, the stuff associated with me—out there, identified with colourless matter?
    There isn’t, in addition to the world (every object that is not the object that is my brain), and in addition to my brain, a third thing, a stuff, that is wrapped around the substances of the world, Appearance, identified with the world, but not the world itself. The question, the puzzle, is not how there is, in addition to substance, also appearance, surface, in the same place as substance. The question is, the puzzle is: how do I reach right up to the substances of the world, when in fact I am distinct from the world?
    This is the essence of the problem that Plato and Aristotle are addressing. Their version of Chalmers’ hard problem is: There seem to be these two things, appearance (forms, universals) in the same place as substance, and yet they are different things. They are asking: how is there subjectivity out there, in the world? Or, if it is not a problem, for Aristotle, then it is just a statement of how things are: The world is made of things, and those things are a combination of substance and form. Plato and Aristotle take this version of the problem as central. We (our age, of which Chalmers is a prominent proponent) take the mirror image of their question as central: there seem to be these two things, experience and brain, and they are in the same place and yet they are different things. Or, if you like, and this seems the basis of Chalmers’ solution, and he is like Aristotle in this: the world is made of many things, some of which are people, and these things are a combination of brains (or information-processing, or electricity and chemicals) and experience.

    How the Actual Hard Problem is really not different from the Problem of Representation
    The Actual Hard Problem has two halves: How is the world in my brain? And how is my brain out there in the world?
    This question is really the same question of how there is representation. How is it that you can have two distinct things, say a picture, and what that picture is of, that yet have some sort of identity? How can one represent the other? What is it about a picture that enables it to represent that which it represents? How can one be represented by the other? How can that which is represented be picked out, identified, by something other than it? Imagine a painting, on canvas, of a horse. How does that painting reach out across air and space and take us to the horse? How does it point to the horse? How is it the case that if you travel to the painting, you also travel to the horse? How too is there something in a place other than the horse that somehow contains that horse? How is there a thing, quite other than the horse itself, that is also identified with the horse?
    Compare this problem with the problem of how a word represents a thing in the world. Consider the word “Escargot” (which was the name of a famous racehorse). How does that word refer to the horse Escargot? How is the horse Escargot referred to by the word “Escargot”? This is a different problem for the following reason. If you look at the picture of the horse, whilst it is a different object from the horse itself—it is in a different place; it is made of canvas and pigment—there is nevertheless something that might account for the identity. The two look the same. The horse has a head and a tail; the picture of the horse has a head and tail. The horse is brown. The picture of the horse is brown. The horse has four legs; the picture of the horse has four legs. They share a form, at least, though not a substance. But now look at the word “Escargot” and compare it to the horse Escargot. These two things seem to have nothing in common. You will look in vain amongst the letters, at the chemical constitution of the ink and of the paper pulp: you will find nothing of the horse Escargot, in that word. And yet the one “Escargot” does indeed seem to represent the other, Escargot. How is this possible? And how are these two different cases both seemingly examples of one and the same concept: Representation?
    The problem with this account of representation is that it doesn’t matter how identical two things are in form: this is not sufficient for Representation. Two identical twins are very similar, indeed far more similar than the canvas painting and the horse. Not only do they look the same, they are also made of the same kind of stuff (flesh and bones). And yet one twin is not the representation of the other.
    Or what about this: a pinhole camera and a horse. There is an image of the horse inside the pinhole camera. Light comes into the aperture in the front of the camera and terminates on the screen of the pinhole camera. The image of the horse represents the horse, and the horse is represented by the image inside the pinhole camera. These two things, the screen inside the camera, made of paper, and a horse-shaped pattern of light—and the horse, are distinct from one another, two separate things, and yet there is an identity, of sorts between them. How is this possible? Well, the image on the screen of the camera is caused by the horse. Light, emitted by the horse travels through the air and impresses itself on the screen. One and the same light is both initially identified with the horse, and then finally with the camera. There is a certain stuff, light, that is both identified with the horse and with the camera. Thus there are two things, two non-identical things, and a third thing, the light, that is identified with both. Thus representation is possible.
    The problem with this account of representation is that being the effect of a cause doesn’t make the one a representation of the other. A stone heats up in the sun. The warmth of the stone might have been caused by the sun but it is not a representation of the sun. You might claim it is, but first you would have to stretch the definition of “Representation” so widely that it incorporates all instances of causation. And secondly you would either have to rule out the case of words representing things, or, alternatively, you would have to trace a very tortuous and elaborate chain of causation from the thing to the word.
    Compare these cases to the case of the brain representing some object in the world (an object that is not the brain). This seems to be a case like the painting of the horse or like the pinhole camera image of the horse. If you inspect the word “Escargot” you will find nothing of the horse Escargot in it. If you inspect the brain however, you will find something of the object that that brain is perceiving (say). If I am looking at a horse there is an image of that horse on my retina, and, more than that, there is a correlate of that image in the electrical or chemical impulses that transmit this image to the visual part of the brain, and there, in the visual part of the brain, there is a correlate to those impulses. Indeed—though it doesn’t look like the image on the retina, nor indeed like the horse—it is a sort of picture of the horse. This sort of picture of the horse in the brain (though written in a language opaque to us) is caused by the horse (through a chain of causation that passes from the horse, to the eye, to the brain) and it also has some formal identity with a horse: there is a something in the brain that correlates to the head of the horse, its tail, its four legs, its brownness, etc. These perceptions, visual, aural and all the rest, are the basis of the contents of the mind. These are chopped up, reassembled in different ways, etc., filtered, processed, categorised, etc. But the model is really an Empiricist one, a Lockean or Humean one, of things in the world writing on the tabula of the brain, or “impressing” themselves on the soft tissue of the brain.
    All these accounts of representation, the painting, the pinhole camera and the brain, are wrong. They fundamentally misunderstand representation. They are wrong because they fail to account for how a word represents.
    What is absolutely required for representation is that there is, as well as identity between that which represents and that which is represented (otherwise there is no link between the things) non-identity.
    A thing cannot represent itself. Nothing represents itself. In fact anything—except itself—in the entire universe, can represent a thing—there are no other restrictions whatsoever. Any supposed counterexample doesn’t work. Examples you might see in the philosophical literature in fact, clandestinely, split up—and must split up—that which is supposedly one thing that represents itself into two things. A critical factor in how the painting of the horse represents the horse, is that the horse is one thing, a thing in one place, and the painting is another thing, in another place. What enables the pinhole camera to represent the horse is that the pinhole camera is not the horse. What enables the brain to perceive the horse, indeed think about the horse, know the horse, is that the brain is a different lump of matter from the lump of matter that is the horse. In order that I know something I must be separate from that thing. Knowledge is a relationship, a relationship between a minimum of two things. What enables me to know the world (or rather to know everything that is not me, the knower) is that I am separate from the world. The world can’t know the world, because the world is the world. The world cannot represent the world because the world is the world. There may well be something within my skull wall that has some identity of some kind with the horse (an identity of form or an effect of which the horse is the cause, or both) but what is also critical is that there is something that is entirely separate from the horse, and indeed entirely separate from whatever is inside my skull wall that has an identity with the horse, the image on my retina, or the electricity in my visual cortex.
    That I perceive the world, know the world, is that there is both some identity between me and the world, and some non-identity. Indeed that I perceive or know anything at all is that there must be both this identity and non-identity. That therefore I introspect, that I have knowledge, not of the world, but of my experience, that I may access my experience or report on my experience (in Chalmers’ words) is that there is both some identity between that which knows and that which is known (and this condition is satisfied in that both of these are me, are inside my skull)—and also some non-identity. There must be a part of my brain that is not my experience, that looks at my experience, that accesses and that reports. That is why I divide myself into my experience and my brain. Another version of this division is my body or outer half, my senses, and then my inner half, my brain. Another version is a homunculus inside my brain. Another is a pineal gland inside my brain. Another is the “global workspace” inside the brain.

    How Representation is Impossible
    But all of the foregoing assumes that there is such a thing as Representation, that Representation is possible. That there is a me, in the centre of the world, registering the world, representing the world, that inside those other skulls (that belong to my family and friends, and the people I see on the street) there are representations of the world.
    That looking out into the garden, as I do now, that there is something that is doing the looking, in addition to what is being looked at. That there is a me in addition to the world.
    That this greenery, those trees, that wall and that shed, that sky—are inside my skull, and that there is something that they represent. That there is a world in addition to me.
    That Reality is two things, me and the world.
    That there is identity between those things, in that the one represents the other, and that there is non-identity between them too.
    But none of this is possible. A thing can’t be both identical to something else and also non-identical to it.

    The Resolution of the Hard Problem
    The actual hard problem, not the hard problem according to Chalmers, arises as the consequence of conceiving Reality to be two things, subject and object, Inner and Outer, me and the world, me in the world, Phenomenon and Thing-in-Itself (in Kantian terminology), Appearance and what appearance is of.
    The resolution of the problem is very simple to state: it is a denial of the premises, a denial that Reality is dual in this way, a denial that Reality really is a division into me and the world, a recognition that the Inner and the Outer are the same, that there is no world apart from me, that there is no me apart from the world, that the world is not independent of me, and that I am not independent of the world, that indeed, as was said long ago, Atman is Brahman.
    The resolution of the problem is very simple to state, but very hard to believe, very hard to trust in its truthfulness, very hard to imagine. This belief, that there is me and what is not me, is the foundation of all our thoughts about Reality. We cannot trust that it is true without also jettisoning everything we believe.
  • The Hard Problem of Consciousness & the Fundamental Abstraction

    Unfortunately, "consciousness" is an analogous term, and using this definition, when I define consciousness differently (as "awareness of intelligiblity"), is equivocation. If you want to criticize my work, then you must use technical terms as I use them. In saying this, I am not objecting to ypur definition in se, only to its equivocal use.Dfpolis

    -Ok I get what you mean by that word, but there is a huge practical problem in that definition.
    You define consciousness as "awareness of intelligibility", to be aware of our ability to understand. What about our ability to be aware in the first place... known in Science as Consciousness! (The ability to be aware of internal or environmental stimuli, to reflect upon them with different mind properties through the connections achieved by the Central Lateral thalamus, i.e. intelligibility, and thus creating conscious content during a mental state.)

    It looks like we have the practice of cherry-picking a specific secondary mind property, known as intelligibility or Symbolic thinking or Meaning, and assigning it the word Consciousness, which is already in use for a far more fundamental property of the mind.

    To be honest I am with you on that. I always found our ability to produce meaning far more "magical" than our ability to consciously attend to stimuli in the first place. After all we have a huge sensory system constantly feeding signals to our brains.

    Is this the Hard problem for you? Because if that is the case, a simple search will provide tons of known mechanisms on how the brain uses symbolic language and learning (previous experience) to introduce meaning to stimuli (internal or external).
    i.e. How neurons make meaning: brain mechanisms for embodied and abstract-symbolic semantics
    https://www.sciencedirect.com/science/article/pii/S1364661313001228
    Huge database
    https://neurosciencenews.com/?s=how+the+brain+meaning

    Either way the practical problem of the suggested definition remains. We already have labels for that mind property, and we face an ambiguity issue since we already use the term Consciousness for a more fundamental mind property than Symbolic thinking.

    Then you will have no problem in explaining how this hypothesis, which I am calling the Standard Model (SM), conforms to the facts I raised against it.Dfpolis
    I will agree with you that it is a Working Hypothesis, since we don't already have a Theory, mainly because we have too many competing frameworks at this point.
    Are the facts you raised the following?
    (1) The Fundamental Abstraction of natural science (attending to the object to the exclusion of the subject);
    (2) The limits of a Cartesian conceptual space.
    If yes I have already answered that they are irrelevant to the phenomenon. We can elaborate more if you verify those facts.
    This hypothesis is the Conclusion we arrive at after 35 years of systematic study of the functions of the brain.
    To be clear, this is not a metaphysical claim. After all, I reject all metaphysical worldviews, Physicalism/Materialism included.
    I am a Methodological Naturalist and, like science, my frameworks and gaps of knowledge are shaped by our Scientific Observations and Logic, solely based on Pragmatic Necessity, not on an ideology.
    When we don't know, we admit we don't. We shouldn't go on and invent extra entities which are in direct conflict with the current successful Paradigm of Science.


    Please note that I fully agree that rational thought requires proper brain function. So, that is not the issue. The issue is whether brain function alone is adequate.Dfpolis
    -Yes, a healthy functioning brain is a necessary and sufficient explanation for any property of mind known to us. We may miss many details on how specific properties correlate to specific brain functions, but that's not a reason to overlook the huge body of knowledge that we've gained over the last 35 years.
    The question "whether brain function alone is adequate." sounds more of a begging the question fallacy based on an general argument from Ignorance fallacy.
    Again our data and logic (Parsimony) doesn't really allow us to introduce unnecessary entities we are unable to test or verify as a solution to our current problems.
    This is a really easy way to pollute our epistemology with unfalsifiable "artifacts" (it's Phlogiston, Miasma, the Philosopher's Stone, Orgone Energy all over again).

    That may well be true. I do not know what neuroscientists consider hard, nor is that what I am addressing in my article. As I made clear from the beginning, I am addressing the problem Chalmers defined. That does not prevent you from discussing something else, as long as you recognize that in doing so you are not discussing my article or the problem it addresses. In saying that, I am not denigrating the importance of the problems neuroscientists consider hard -- they're just not my problem.Dfpolis
    -An important question that comes to mind is: "Is your problem relevant to our efforts to understand?"
    As I explained, Chalmers's problem is a fallacious teleological one. It's like me trying to find intention and purpose behind an unfortunate event... i.e. my house is destroyed by an earthquake.
    Those types of questions are a distraction.

    I want to focus on a specific issue common to almost all philosophers I talk to.
    You stated: ". I do not know what neuroscientists consider hard, nor is that what I am addressing in my article."
    I find this to be a serious problem for any discussion. How can you be sure about the epistemic foundations of your ideas and positions when you are not familiar with the latest epistemology on the topic? How can you be sure that we haven't answered those questions when your philosophy is based on ideas and knowledge of the past?

    In defining the Hard Problem, you quote a reputable secondary source (Scholarpedia), but I quoted a primary source. So, I will stick with my characterization.Dfpolis

    - I find my source pretty accurate because I have watched Chalmers asking the same "why" questions, plus Anil Seth shares the same opinion with me. But by all means please share your primary source and I will retract my characterization "Teleological fallacy".


    There are many senses of "why." Aristotle enumerates four. I suppose you mean "why" in the sense of some divine purpose. But, I did not ask or attempt to answer that question. The question I am asking is how we come to be aware of neurally encoded contents. So, I fail to see the point you are making.Dfpolis
    -Agreed. But if Chalmers wanted answers to his "why" questions in a different sense, he should have been studying Cognitive Science. I.e. for his first why question, "Why are physical processes ever accompanied by experience?", the answer is simple: evolutionary principles. Making meaning of your world adds an advantage for survival and flourishing (avoiding suffering, managing pleasure etc).
    The answer to the other two why questions is equally simple: "because it does" (the example of the electron).

    The question I am asking is how we come to be aware of neurally encoded contents. So, I fail to see the point you are making.Dfpolis
    In my opinion you fail because as you said yourself, you ignore the latest work and the hard questions tackled by Neuroscience.

    However, if you wish to call something "pseudo philosophical" or claim that it "create unsolvable questions," some justification for your claims would be courteous. Also, since I solved the problems I raised, they are hardly "unsolvable."Dfpolis
    -I was referring to Chalmers's pseudo-philosophical "why" questions. Questions like "Why is there something instead of nothing" are designed to remain unanswered.
    Now, what problems did you raise and how were they solved? I will wait for a clarification on that interesting claim.


    I have never denied that the SM is able to solve a wide range of problems. It definitely is. The case is very like that of Newtonian physics, which can also solve many problems. However, I enumerated a number of problems it could not solve. Will you not address those?Dfpolis
    -Sure, there are many problems we haven't solved (yet). Why do you think that the SM won't manage to finally provide a solution, and how are you sure that some of them aren't solved already? After all, as you stated, you are not familiar with the current Science on the topic.
    Is it ok if I ask you to put all the problems in a list (bullets) so I can check them?

    Again, this does not criticize my work, because you are not saying that my analysis is wrong, or even that reduction is not involved.Dfpolis
    Well I don't know if it was a critique of your work. I only addressed the paragraph in the article on Reduction and Emergence: "Does the Hard Problem reflect a failure of the reductive paradigm?"
    Also I addressed the following statement in your OP.
    "Yet, in the years since David Chalmers distinguished the Hard Problem of Consciousness from the easy problems of neuroscience, no progress has been made toward a physical reduction of consciousness. "
    The right answer is: yes, there has been huge progress toward the emerging physical nature of consciousness.

    Maybe you use "reduction" in a different sense and if you do that is a poisoning the well fallacy imho. By default we know,can verify and are able to investigate only one realm, the Physical.
    As far as we can say there are details in the physical system that we don't know or understand. Assuming extra realms is irrational without direct evidence and objective verification of their existence.

    Rather, you want me to look at a different problem. Further, with respect to that different problem, you do not even claim that the named methods have made progress in explaining how awareness of contents comes to be. So, I fail to see the cogency of your objection.Dfpolis
    -As I explained, if you are pointing to a different problem then you are committing a logical error. Science, and every single one of us, is limited within a single realm. The burden is not on Science to prove the phenomenon to be physical, but on the side making the claim for an additional substrate. The two justified answers are "we currently don't know" or "this mechanism is necessary and sufficient to explain the phenomenon".
    In my academic links you can find tons of papers analyzing which (and how) mechanisms enable the brain to introduce content in our conscious states. I can list them in a single post if you like.

    It is a definition, specifying how I choose to use words, and not a claim that could be true or false.bert1
    -My objection was with the word "prove", since in science we don't prove anything.
    Sorry, I just don't think you've grasped the distinction between definition and theory.bert1
    -ok I think we are on the same page on that.



    I agree. I did not say that science proved frameworks, but that we use their principles to deduce predictions. That is the essence of the hypothetico-deductive method.Dfpolis
    -Because the hard problem... is a made-up problem (Chalmers's teleological questions).

    If you read carefully, you would see that I criticized Chalmers' philosophy, rather than basing my argument on it.Dfpolis
    -Yes you did, but you also accept a portion of it... right? In retrospect you did state that your questions seek the "how", and I pointed out that Science has addressed many "how" questions on brain functions and meaning/Symbolic thinking.

    -"Then you will have no difficulty in showing how my specific objections about reports of consciousness, one-to-many mappings from the physical to the intentional, and propositional attitudes, inter alia, are resolved by this theory -- or how neurally encoded intelligible contents become actually known. Despite the length of your response, you have made no attempt to resolve these critical issues"
    - Have you looked into our latest epistemology and failed to find answers?
    Can you give me an example for every single problem?

    This is baloney. I am asking how questions. The SM offers no hint as to how these observed effects occur. In fact, it precludes them.Dfpolis
    -Sure, you clarified that, and I pointed out the problem with your "how" questions. Many "how" questions have already been addressed, and if some haven't been, that is not a justification to reject the whole model (the Quasi-Dogmatic Principles protect the framework at all times). After all, it's a dynamic model in progress that yields results, and the only one that can be applied and tested, produce causal descriptions, and deliver Technical Applications!

    Obviously, you have never read Aristotle, as he proposes none of these. That you would think he does shows deep prejudice. Instead of taking the time to learn, or at least remaining quiet when you do not know, you choose to slander. It is very disappointing. A scientific mind should be open to, and thirsty for, the facts.Dfpolis
    -Strawman, I never said he did. I only pointed out the main historical errors in our Philosophy: teleology in nature (Chalmers's hard problem) and agency with properties pretty similar to the properties displayed by the phenomenon we are trying to explain (your claim on the non-physical nature of Consciousness).
    I only wish philosophers would take half of the Neuroscience courses I have before talking about the unanswered mysteries of consciousness. Btw, I am Greek; studying Greek philosophers is my hobby.

    Nor am I suggesting that we do. I am suggesting that methodological naturalism does not restrict us to the third-person perspective of the Fundamental Abstraction. That you would think that considering first-person data is "supernatural" is alarming.Dfpolis
    -Abstract concepts do not help complex topics like this one. On the contrary, they introduce more ambiguity into the discussion. Plus, you strawmanned me again with that "supernatural" first-person data.

    Mario Bunge's Ten Criticisms of contemporary academic philosophy highlighted this problem.
    Here is the list.

    • Tenure-Chasing Supplants Substantive Contributions

    • Confusion between Philosophizing & Chronicling

    • Insular Obscurity / Inaccessibility (to outsiders)

    • Obsession with Language over Solving Real-World Problems

    • Idealism vs. Realism and Reductionism

    • Too Many Miniproblems & Fashionable Academic Games

    • Poor Enforcement of Validity / Methodology

    • Unsystematic (vs. System Building & Ensuring Findings are Worldview Coherent)

    • Detachment from Intellectual Engines of Modern Civilization (science, technology, and real-world ideologies that affect mass human thought and action)

    • Ivory Tower Syndrome (not talking to experts in other departments and getting knowledge and questions to explore from them or helping them)


    Science tells us that the brain is necessary and sufficient to explain the phenomenon, even if we have loads of questions to answer.
    You see an issue in brain function being sufficient to explain the phenomenon.
    So here is my question: let's assume that our current model never manages to reduce consciousness to a physical system. Does that point to a non-physical function? If yes, please elaborate.
  • The Hard Problem of Consciousness & the Fundamental Abstraction

    I recently published an article with the above title (https://jcer.com/index.php/jcj/article/view/1042/1035). Here is the abstract:Dfpolis

    I will try to break down every single claim in the OP (and some in your article) and ultimately try to explain why most of those "memes" in philosophy are either epistemically outdated or in direct conflict with our latest scientific understanding of the phenomenon.

    Before starting the deconstruction, I always find it helpful to include the most popular general Definition of Consciousness in Cognitive Science, so we can all be on the same page:

    "Consciousness is an arousal and awareness of environment and self, which is achieved through action of the ascending reticular activating system (ARAS) on the brain stem and cerebral cortex (Daube, 1986; Paus, 2000; Zeman, 2001; Gosseries et al., 2011). "
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3722571/

    With the above description in mind, and the tons of Neuroscientific publications found in the huge online database (https://neurosciencenews.com/?s=how+the+brain), the conclusion that brain function is responsible for human behavior and thought processes is way more than an assumption.
    It's an established epistemology, part of our Academic curriculum for more than 35 years.

    Yet, in the years since David Chalmers distinguished the Hard Problem of Consciousness from the easy problems of neuroscience, no progress has been made toward a physical reduction of consciousness.D. F. Polis
    -That is only true for the advances in Philosophy. Almost all the breakthroughs made by the relevant Scientific disciplines never make it into Neurophilosophy, mainly because Philosophical frameworks that are based on the latest epistemology are part of Cognitive Science.

    Now, Chalmers's attempt to identify the Hard problem of Consciousness had nothing to do with the actual Hard problems faced by the field. In fact, the set of questions were pseudo-philosophical "why" questions.
    I quote:

    "The hard problem of consciousness (Chalmers 1995) is the problem of explaining the relationship between physical phenomena, such as brain processes, and experience (i.e., phenomenal consciousness, or mental states/events with phenomenal qualities or qualia).
    1. Why are physical processes ever accompanied by experience?
    2. And why does a given physical process generate the specific experience it does --
    3. why an experience of red rather than green, for example?"
    http://www.scholarpedia.org/article/Hard_problem_of_consciousness

    Searching for meaning in natural processes is a pseudo-philosophical attempt to project Intention and purpose onto nature (Agency) and to create unsolvable questions. Proper questions capable of helping us understand consciousness should begin with "how" and "what", not "why" (how something emerges, what is responsible for it, etc.).
    For those who are interested in the real Hard Problems of Neuroscience, Anil Seth, a professor of cognitive and computational neuroscience, explains in extensive detail why Chalmers's "why" questions fail to grasp the real difficulties of the puzzle and identifies the real hard problems he and his colleagues are facing, mainly due to the complexity of the systems they are dealing with.

    https://aeon.co/essays/the-hard-problem-of-consciousness-is-a-distraction-from-the-real-one
    https://www.psychologytoday.com/us/blog/consciousness-deep-dive/202110/the-real-problem-consciousness

    This, together with collateral shortcomings Chalmers missed, show that the SM is inadequate to experience.D. F. Polis

    - I will make some points now that include some ideas in your article. To keep it short, they will be presented in bullets; feel free to ask for additional info.
    1. Chalmers (as I already explained) failed to identify the real hard problems, misleading people into a conversation on purpose and intention, which is fallacious when dealing with Nature.

    2. The current Working Hypothesis (SM) is more than adequate to explain the phenomenon. It even allows us to make predictions and produce Technical Applications that can directly affect, alter or terminate the phenomenon. It establishes Strong Correlations between lower-level systems (brain function) and higher-level systems (Mental states and properties).

    3. The Hard Problem doesn't reflect a failure of the reductive paradigm, because this paradigm (a tool of science) is not that relevant to the methods we use to study Mental properties. Complexity Science and Scientific Emergence are the proper tools for the job.

    4."Epistemological emergence occurs when the consequences of known principles cannot be
    deduced. We often assume, but cannot prove, that system behavior is the result of isolated com-
    ponent behavior"
    -Thats not quite true. There is a general misconception about Strong Emergence in philosophy. First of all in science we don't "prove" frameworks, we falsify them and we accept them for their Descriptive and Predictive power. Strong Emergence is an observer relative term. Its describes a causal mechanism with unknown parameters that affect a system plus it accepts the properties of a phenomenon without asking "Why" they exist the way they do. All the Philosophical Hard Problem does is ask "why" this mechanism gives rise to that qualities. That is not a scientific or a Philosophical question.
    (Here is a great video that explains the different types of Emergence: https://www.youtube.com/watch?v=66p9qlpnzzY&t=)
    In my opinion, the whole "Hard Problem" objection is nothing more than an Argument from Ignorance and, in many cases, a Personal Incredulity Fallacy.

    I could go in depth challenging the rest of the claims in the paper, but it seems like it tries to draw its validity from Chalmers' bad philosophy.
    What I constantly see in philosophical discussions is the lack of references to the latest epistemology of the respective scientific fields.
    The Ascending Reticular Activating System, the Central Lateral Thalamus and the latest Theories of Consciousness on Emotions as the driving force (Mark Solms, founder of Neuropsychoanalysis) leave no room for a competing non-naturalistic theory in Methodological Naturalism and in Philosophy in general. Those attempts to use Quantum Physics (metaphysics in essence) in an effort to debunk the natural ontology of a Biological Phenomenon are just wrong.
    We might use the same tools (Complexity Science) to understand Consciousness and QM, but that doesn't mean that our current Hypotheses on Quantum physics apply to a biological system.

    The honest answer on things we currently can't explain is "We don't know yet". We shouldn't say "let's suggest the existence of an advanced entity/substance/agent" just because we either ignore the latest epistemology of science or cannot answer a "why" question.

    The current and most successful Scientific Paradigm doesn't accept made-up entities as "carriers" of the phenomenon in question. This is intellectual laziness. It takes us back in bed with Aristotle. Are we going to resurrect Gods, Phlogiston, Miasma, Panacea, Orgone Energy all over again???
    Of course not, because this practice offers zero Epistemic Connectedness, Instrumental Value, Predictive power, etc. (all nine aspects of the systematic nature of science listed by Paul Hoyningen-Huene in Systematicity: The Nature of Science).

    We don't have the evidence (yet) to use Supernatural Philosophy (rejecting the current Scientific paradigm of Methodological Naturalism) in our explanations just because we are missing pieces from our puzzle. We cannot go back to assuming the existence of Advanced properties independent of low-level mechanisms. This is what kept our epistemology from growing for centuries.
  • Solution to the hard problem of consciousness

    The explanatory gap" is misinterpreted by many philosophers as an "unsolvable problem" (by philosophical means alone, of course) for which they therefore fiat various speculative woo-of-the-gaps that only further obfuscate the issue.
    — 180 Proof

    Not at all.

    In philosophy of mind and consciousness, the explanatory gap is the difficulty that physicalist theories have in explaining how physical properties give rise to the way things feel when they are experienced. It is a term introduced by philosopher Joseph Levine.[1] In the 1983 paper in which he first used the term, he used as an example the sentence, "Pain is the firing of C fibers", pointing out that while it might be valid in a physiological sense, it does not help us to understand how pain feels.

    The explanatory gap has vexed and intrigued philosophers and AI researchers alike for decades and caused considerable debate. Bridging this gap (that is, finding a satisfying mechanistic explanation for experience and qualia) is known as "the hard problem".
    — Wikipedia

    As I've shown already in this thread, the hard explanatory problem has scientific validation, namely, that of the subjective unity of consciousness and how to account for it in neurological terms. This is one aspect of the well-known neural binding problem, which is how to account for the way all of the disparate activities of the brain and body culminate in the obvious fact of the subjective unity of experience.

    As is well known, current science has nothing to say about subjective (phenomenal) experience and this discrepancy between science and experience is also called the “explanatory gap” and “the hard problem” (Chalmers 1996). There is continuing effort to elucidate the neural correlates of conscious experience; these often invoke some version of temporal synchrony as discussed above.

    There is a plausible functional story for the stable world illusion. First of all, we do have a (top-down) sense of the space around us that we cannot currently see, based on memory and other sense data—primarily hearing, touch, and smell. Also, since we are heavily visual, it is adaptive to use vision as broadly as possible. Our illusion of a full field, high resolution image depends on peripheral vision—to see this, just block part of your peripheral field with one hand. Immediately, you lose the illusion that you are seeing the blocked sector. When we also consider change blindness, a simple and plausible story emerges. Our visual system (somehow) relies on the fact that the periphery is very sensitive to change. As long as no change is detected it is safe to assume that nothing is significantly altered in the parts of the visual field not currently attended.

    But this functional story tells nothing about the neural mechanisms that support this magic. What we do know is that there is no place in the brain where there could be a direct neural encoding of the illusory detailed scene (Kaas and Collins 2003). That is, enough is known about the structure and function of the visual system to rule out any detailed neural representation that embodies the subjective experience. So, this version of the Neural Binding Problem really is a scientific mystery at this time.
    — Jerome S. Feldman, The Neural Binding Problem(s)

    Your continual invocation of 'woo of the gaps' only illustrates that you're not grasping the problem at hand. It's a hard problem for physicalism and naturalism because of the axioms they start from, not because there is no solution whatever. Seen from other perspectives, there is no hard problem; it simply dissolves. It's all a matter of perspective. But seen from the perspective of modern scientific naturalism, there is an insuperable problem, because its framework doesn't accommodate the reality of first-person experience, a.k.a. 'being', which is why 'eliminative materialism' must insist that it has no fundamental reality. You're the one obfuscating the problem, because it clashes with naturalism - there's an issue you're refusing to see which is as plain as the nose on your face.

    'Speculative woo-of-the-gaps' is at bottom simply the observation that there are things about the mind that science can't know, because of its starting assumptions. It's a very simple thing, but some guy by the name of Chalmers was able to create an international career as an esteemed philosopher by pointing it out.
    Wayfarer

    Great post! :up: Clarified some of my doubts on the hard problem of consciousness, specifically that it's about the explanatory gap between physical theories and consciousness.

    The way I see it, materialistic explanations are of 2 kinds:

    1. Explanatory materialism: A phenomenon/object is explained in terms of materialism e.g. lightning is an electric discharge between and from clouds.

    2. Eliminative materialism: This is my area of interest. Depending, probably, on the way a theory is crafted, certain questions/concepts stop making sense or are nonsensical. Daniel Dennett's claim that consciousness is an illusion is of particular interest to me. I haven't read his original work on that topic, but the videos I saw of him conveying this point of view are more beating around the bush than a clear-cut statement with an argument to back it up.

    An example of the eliminative method would be category-mistake kinds of dismissals - what does the bark of a dog taste like? This question is declared nonsensical. Similarly, consciousness may not be amenable to a physicalist/materialistic description and so might be rejected as meaningless. This, you might already notice, is the hard problem of consciousness - forget an explanation, we can't even translate consciousness into materialistic/physicalist terms.

    One intriguing facet of the problem is Wittgensteinian. His beetle-in-a-box gedanken experiment suggests that pure subjective experiences (consciousness, for example) are such that we may simply be engaged with the issue at a syntactic level - we can formulate grammatically correct sentences on consciousness - but when it comes to semantics (what we mean by "consciousness"), all bets are off.

    Wittgenstein claims, rightly so in my opinion, that not only is it possible that there are different things in each one of our boxes but that it's possible that our personal, private boxes could actually be empty (eliminative materialism, p-zombies).
  • The Hard Problem of Consciousness & the Fundamental Abstraction

    "Consciousness is an arousal and awareness of environment and self, which is achieved through action of the ascending reticular activating system (ARAS) on the brain stem and cerebral cortexNickolasgaspar
    Unfortunately, "consciousness" is an analogous term, and using this definition, when I define consciousness differently (as "awareness of intelligibility"), is equivocation. If you want to criticize my work, then you must use technical terms as I use them. In saying this, I am not objecting to your definition in se, only to its equivocal use.

    the conclusion that brain function is responsible for human behavior and thought processes is way more than an assumption.Nickolasgaspar
    Then you will have no problem in explaining how this hypothesis, which I am calling the Standard Model (SM), conforms to the facts I raised against it. Please note that I fully agree that rational thought requires proper brain function. So, that is not the issue. The issue is whether brain function alone is adequate.

    Now, Chalmers's attempt to identify the Hard problem of Consciousness had nothing to do with the actual Hard problems faced by the field.Nickolasgaspar
    That may well be true. I do not know what neuroscientists consider hard, nor is that what I am addressing in my article. As I made clear from the beginning, I am addressing the problem Chalmers defined. That does not prevent you from discussing something else, as long as you recognize that in doing so you are not discussing my article or the problem it addresses. In saying that, I am not denigrating the importance of the problems neuroscientists consider hard -- they're just not my problem.

    In defining the Hard Problem, you quote a reputable secondary source (Scholarpedia), but I quoted a primary source. So, I will stick with my characterization.

    Searching meaning in natural processes is a pseudo philosophical attempt to project Intention and purpose in nature (Agency) and create unsolvable questions. Proper questions capable to understand consciousness should begin with "how" and "what" , not why. (how some emerges, what is responsible for it etc).Nickolasgaspar
    There are many senses of "why." Aristotle enumerates four. I suppose you mean "why" in the sense of some divine purpose. But, I did not ask or attempt to answer that question. The question I am asking is how we come to be aware of neurally encoded contents. So, I fail to see the point you are making.

    Stepping back, you are more than welcome to answer the questions you choose to answer and ignore those you choose not to deal with. The same applies to me. However, if you wish to call something "pseudo philosophical" or claim that it "create unsolvable questions," some justification for your claims would be courteous. Also, since I solved the problems I raised, they are hardly "unsolvable."

    The current Working Hypothesis (SM) is more than adequate to explain the phenomenon. It even allow us to make predictions and produce Technical Applications that can directly affect, alter or terminate the phenomenon. It establishes Strong Correlations between lower level system(brain function) and high level systems(Mental states and properties).Nickolasgaspar
    I have never denied that the SM is able to solve a wide range of problems. It definitely is. The case is very like that of Newtonian physics, which can also solve many problems. However, I enumerated a number of problems it could not solve. Will you not address those?

    the Hard Problem doesn't reflect a failure of the reductive paradigm because this paradigm (tool of science)is not that RELEVANT to the methods we use to study Mental properties. Complexity Science and Scientific Emergence are the proper tools for the job.Nickolasgaspar
    Again, this does not criticize my work, because you are not saying that my analysis is wrong, or even that reduction is not involved. Rather, you want me to look at a different problem. Further, with respect to that different problem, you do not even claim that the named methods have made progress in explaining how awareness of contents comes to be. So, I fail to see the cogency of your objection.

    "Epistemological emergence occurs when the consequences of known principles cannot be
    deduced. We often assume, but cannot prove, that system behavior is the result of isolated com-
    ponent behavior"
    -Thats not quite true.
    Nickolasgaspar
    It is a definition, specifying how I choose to use words, and not a claim that could be true or false.

    First of all in science we don't "prove" frameworks, we falsify them and we accept them for their Descriptive and Predictive power.Nickolasgaspar
    I agree. I did not say that science proved frameworks, but that we use their principles to deduce predictions. That is the essence of the hypothetico-deductive method.

    Strong Emergence is an observer relative term.Nickolasgaspar
    It is also a term that I did not employ.

    In my opinion the whole "Hard Problem" objection is nothing more than an Argument from Ignorance and in many cases, from Personal Incredulity Fallacies.Nickolasgaspar
    I am not sure how a problem, of any sort, can be a fallacy. It is just an issue that bothers someone, and seeks resolution. It may be based on a fallacy, and if it is, then exposing the fallacy solves it.

    I could go in depth challenging the rest of the claims in the paper, but it seems like it tries to draw its validity from Chalmers' bad philosophy.Nickolasgaspar
    If you read carefully, you would see that I criticized Chalmers' philosophy, rather than basing my argument on it.

    The Ascending Reticular Activating System, the Central Lateral Thalamus and the latest Theories of Consciousness on Emotions as the driving force (Mark Solms, founder of Neuropsychoanalysis) leave no room for a competing non-naturalistic theory in Methodological Naturalism and in Philosophy in general.Nickolasgaspar
    Then you will have no difficulty in showing how my specific objections about reports of consciousness, one-to-many mappings from the physical to the intentional, and propositional attitudes, inter alia, are resolved by this theory -- or how neurally encoded intelligible contents become actually known. Despite the length of your response, you have made no attempt to resolve these critical issues.

    because we can not answer a "why" question.Nickolasgaspar
    This is baloney. I am asking how questions. The SM offers no hint as to how these observed effects occur. In fact, it precludes them.

    It takes us back in bed with Aristotle. Are we going to resurrect Gods, Phlogiston, Miasma, Panacea, Orgone Energy all over again???Nickolasgaspar
    Obviously, you have never read Aristotle, as he proposes none of these. That you would think he does shows deep prejudice. Instead of taking the time to learn, or at least remaining quiet when you do not know, you choose to slander. It is very disappointing. A scientific mind should be open to, and thirsty for, the facts.

    We don't have the evidence (yet) to use Supernatural Philosophy (reject the current Scientific paradigm of Methodological Naturalism) in our explanations just because we miss pieces from our puzzle.Nickolasgaspar
    Nor am I suggesting that we do. I am suggesting that methodological naturalism does not restrict us to the third-person perspective of the Fundamental Abstraction. That you would think that considering first-person data is "supernatural" is alarming.

    Thank you for the time you devoted to reading my work and the effort that went into your response.
  • Why is the Hard Problem of Consciousness so hard?

    Subjective consciousness is not empirically observable. Behavioral consciousness is.Philosophim
    Behavior is not consciousness. That's stimulus and response. How do you behave when something sharp pokes into your back? How do you behave when your energy levels are depleted? These are not questions of consciousness.


    The only reason its a mystery is you think that its impossible for consciousness to come out of physical matter and energy. Why? It clearly does.Philosophim
    It's a mystery because nobody can explain it. Christof Koch can't, try though he does. You are not even offering speculations. You only say it happens in the brain. That's obviously where my consciousness is. But what is the mechanism?


    Is it some necessary desire that we want ourselves to be above physical reality?Philosophim
    Not for me. I don't care what the answer is. I just want to know what it is.


    Because if you eliminate that desire, its clear as day that consciousness is physical by even a cursory glance into medicine and brain research. I just don't get the mystery or the resistance.Philosophim
    If it was not a mystery, we would have the answer. We don't. The resistance, in my case, is that the answer of "It just does" to the question of "How does the physical brain produce consciousness?" is no answer at all. Just as we wouldn't accept that answer to "How does eating food give us energy?", we shouldn't accept it here.


    That is the Hard Problem. "Through our physical brain" is a where, not a how. "In the sky" does not tell us how flight is accomplished. "In our legs" does not tell us how walking is accomplished. "In our brain" does not tell us how consciousness is accomplished. The details are not insignificant. They are remarkably important. And they are unknown.
    — Patterner

    Sure, but its not the hard problem.
    Philosophim
    Yes it is. That's what is meant when people refer to the Hard Problem of Consciousness.

    From Chalmers' The Conscious Mind:
    Many books and articles on consciousness have appeared in the past few years, and one might think that we are making progress. But on a closer look, most of this work leaves the hardest problems about consciousness untouched. Often, such work addresses what might be called the “easy” problems of consciousness: How does the brain process environmental stimulation? How does it integrate information? How do we produce reports on internal states? These are important questions, but to answer them is not to solve the hard problem: Why is all this processing accompanied by an experienced inner life?

    From wiki:
    In philosophy of mind, the hard problem of consciousness is to explain why and how humans and other organisms have qualia, phenomenal consciousness, or subjective experiences. It is contrasted with the "easy problems" of explaining why and how physical systems give a (healthy) human being the ability to discriminate, to integrate information, and to perform behavioral functions such as watching, listening, speaking (including generating an utterance that appears to refer to personal behaviour or belief), and so forth. The easy problems are amenable to functional explanation: that is, explanations that are mechanistic or behavioral, as each physical system can be explained (at least in principle) purely by reference to the "structure and dynamics" that underpin the phenomenon.

    From Internet Encyclopedia of Philosophy:
    The hard problem of consciousness is the problem of explaining why any physical state is conscious rather than nonconscious. It is the problem of explaining why there is “something it is like” for a subject in conscious experience, why conscious mental states “light up” and directly appear to the subject.

    From Scholarpedia:
    The hard problem of consciousness (Chalmers 1995) is the problem of explaining the relationship between physical phenomena, such as brain processes, and experience (i.e., phenomenal consciousness, or mental states/events with phenomenal qualities or qualia). Why are physical processes ever accompanied by experience? And why does a given physical process generate the specific experience it does—why an experience of red rather than green, for example?
  • The 'hard problem of consciousness'.

    Your OP fails to correctly identify what makes the 'hard problem of consciousness' hard, and why David Chalmers wrote the paper Facing Up to the Hard Problem of Consciousness in the first place.

    As far as the complex processes of the body that spark a consciousness go, I suspect that activated matrices of neurons and electromagnetic (EM) fields play a part in activating dispersed areas of the brain to form coherent qualitative conscious responses.

    This would somewhat explain our preoccupation with consciousness being an ethereal non-physical thing, as EM fields are essentially invisible to human perception.
    Brock Harding

    So, here you're claiming that the motivations for even considering the hard problem of consciousness are in reality physical! What this amounts to is saying that the hard problem really is just another of the easy problems, and that if it's difficult, it's only because electromagnetic fields are mysterious. But again you're not describing what the 'hard problem' is.

    The key section from Chalmers's paper is, in my opinion, this one:

    The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel (What is it Like to be a Bat,1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience.

    It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing.
    — David Chalmers

    In my analysis, the reason that this is intractable for objective analysis is that it is a matter of subjective experience. That persons, and probably other animals, are subjects of experience is what is at issue. But you can never get outside of, or 'objectify', the 'experience of being' that comprises the core of that state, because it is your very being. Because that can't be objectified, it can't be dealt with by naturalism. That's what makes it a hard problem. So, at least try and describe it properly if you're wanting to explain it away.
  • The Hard Problem of Consciousness & the Fundamental Abstraction

    I started reading this carefully with some quotes and counters, then got to about section 4 and started skimming.

    First, this paper needs more focus. About half way through I forgot what you were even trying to show. You jump from this idea, to that idea from this philosopher, to over here, and I don't see a lot of commonality between them. You could probably cut your paper by quite a bit and still get to the point that you want.

    Second, maybe you do understand what the hard problem is, but I had a hard problem in seeing that.

    "I shall argue that it is logically impossible to reduce consciousness, and the intentional realities
    flowing out of it, to a physical basis."

    First, are you a neuroscientist? This is an incredibly bold claim. A neuroscientist will tell you, "We don't understand everything about the brain yet." Second, what about the easy problem of consciousness? We know if we give you some drugs, we can alter your conscious state. A man caught a disease and can no longer see in color due to physical brain damage.

    There is more than enough evidence that consciousness results from a physical basis. The hard problem really boils down to "What is it like to be another conscious being?" We can look at a brain, but we can't experience the brain from the brain's point of view. Does that mean that we don't need a physical medium for consciousness to exist? No, we do. We can see the physical combination of factors that consistently result in certain conscious experiences for individuals. This is how brain surgery works. What a brain surgeon cannot do is BE you. No one can, as of yet, make some alteration of the mind and suddenly experience what it is like to experience exactly what you do.

    "Does the Hard Problem reflect a failure of the reductive paradigm?"

    No, not at all. The hard problem reflects the failure in our ability to experience what it is like to be another conscious being. We can reduce plenty of conscious experiences to brain states. But we can't be that brain state. We can reduce that brain state to its physical components, but its subjective experience is outside of our ability to understand. Reductionism does not fail in what it does. Reductionism does not attempt to claim what a subjective experience is like. Reductionism is a ruler that measures a mile, but it cannot tell you, nor try to tell you, what it is like to be that mile having the experience of being measured.

    "I define ‘emergence’ as a logical property, viz. the impossibility of deducing a phenomenon from
    fundamental principles, especially those of physics. Emergence can be physical, epistemological,
    or ontological."

    This is not what emergence means. "Emergent properties are the characteristics gained when an entity at any level, from molecular to global, plays a role in an organized system."
    https://study.com/academy/lesson/emergent-properties-definition-examples.html

    "However, absent a solution to the Hard Problem, believing consciousness to be
    purely neural requires an act of faith."

    I can give you one better example. Plants do not have neurons. And yet we find plants react to the world in a way that we consider to be conscious. A wiki article for you https://en.wikipedia.org/wiki/Plant_perception_(physiology)#:~:text=Plants%20do%20not%20have%20brains,computation%20and%20basic%20problem%20solving.

    It has long been concluded that neurons are not needed for consciousness. Almost certainly AI will inevitably, if not somewhere already, be labeled as conscious. We'll be able to look at the program of an AI and go, "That right there is needed for the AI to be conscious." Will we know what it's like to feel like a conscious AI? No. That is the hard problem, not that its consciousness can't be reduced to the physical processes it runs.

    If the point was to show that we should describe consciousness through potency and act, I confess not understanding how you got there. You kept referencing so many different philosophers and their viewpoints that I was unable to really glean your own. So many of the references just don't seem needed, and got in the way of the overall point I feel you were trying to make. I can tell you're well learned, and I know a lot of hard work went into that. I just don't feel it's very clear in making its point, it seems to have some questionable assumptions and definitions, and it ultimately feels like it loses its focus with a poor finish.
  • Intentional vs. Material Reality and the Hard Problem

    Please excuse the delay, I've had some sort of a tiring "bug."

    So, something like aristotelian realism about universals?aporiap

    Exactly.

    I'm not familiar with terms like 'notes of comprehension' or 'essential notes'.aporiap

    You might think of an object's notes of intelligibility as things that can be known and predicated of the object. Notes of comprehension would be those actually understood and constituting some abstraction. "Essential notes" would be notes defining an object -- placing it into a sortal.

    You say that logical distinction is predicated on the fact that intentional objects like concepts are different from materiality not ontologically but by virtue of not sharing these notes of comprehension.aporiap

    Yes. Most of what we think about are ostensible unities. We can "point them out" in some way, and they have some intrinsic integrity; these are Aristotle's substances (ousia). Examples are humans, galaxies, quanta, societies, etc. Clearly some are more unified than others, but all have some dynamic that allows us to think of them as wholes.

    Extended wholes can be divided and so their potential parts are separable. Logical distinction does not depend on physical separability, but on having different notes of comprehension. The material and form of a ball are inseparable, but they are distinct, because the idea of form abstracts away the object's matter and that of matter abstracts away its form. So, that we can think of humans as material and intentional does not mean that they are composed of two substances, any more than balls are.

    I mentioned in the post that it poses a problem for programs which require continual looping or continual sampling. In this instance the program would cease being an atmospheric sampler if it lost the capability of iteratively looping because it would then loose the capability to sample [i.e. it would cease being a sampler.]aporiap

    This is incorrect. Nothing in my argument prevents any algorithm from working. Another way of thinking about the argument is that it shows that consciousness is not algorithmic. In this particular case, if we want to sample every 10 ms and removing and replacing the instruction takes 1 ms (a very long time in the computer world), all we need to do is speed up the clock by 10%.

    The critical question is whether it is the presence or the operation of the program that would cause consciousness. It is difficult to believe that the non-operational presence of the algorithm could do anything. It is also hard to think of a scenario in which the execution of one step (the last step of the minimal program) could effect consciousness.

    Let's reflect on this last. All executing a computer step does is effect a state transition from the prior state S1 to a successor state S2. So if the program is to effect consciousness, all we need to do is start the machine in S1 and effect the transition to S2. Now it is either the S1-S2 transition itself that effects consciousness, or it is being in S2 that effects consciousness. If it is being in S2 that effects consciousness, we do not need a program at all; we only need to start the machine in S2 and leave it there. It is hard to see how such a static state could model, let alone effect, consciousness.

    So, we are left with the possibility that a single step, the one that effects the S1-S2 transition, magically causes consciousness. This is the very opposite of the original idea that a program of sufficient complexity might produce consciousness. It shows that complexity is not a fruitful hypothesis.

    What do you mean they solve mathematical problems only? There are reinforcement learning algorithms out now which can learn your buying and internet surfing habits and suggest adverts based on those preferences. There are learning algorithms which -from scratch, without hard coded instruction- can defeat players at high-level strategy games, without using mathematical algorithms.aporiap

    They do use mathematical algorithms, even if they are unclear to the end user. At the most fundamental level, every modern computer is a finite state machine, representable by a Turing machine. Every instruction can be represented by a matrix which specifies, for every state, that if the machine is in state Sx it will transition to state Sy. Specific programs may also be more or less mathematical at higher levels of abstraction. The internet advertising programs you mention represent interests by numerical bins and see which products best fit your numerical distribution of interests. Machine learning programs often use mathematical models of neural nets, generate-and-test algorithms, and a host of other mathematical methods, depending on the problem faced.
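
    To make the state-transition picture concrete, here is a minimal sketch, in Python, of the kind of finite state machine being described. The state names and transition table are invented purely for illustration; the point is only that the whole "program" is a lookup table, and that running it is nothing over and above replacing one state with its successor.

        # Minimal finite-state-machine sketch (illustrative only).
        # The "program" is just a transition table; executing a step
        # does nothing but replace the current state with its successor.

        TRANSITIONS = {
            "S1": "S2",
            "S2": "S3",
            "S3": "HALT",
        }

        def step(state):
            """Execute one 'instruction': look up the successor of the current state."""
            return TRANSITIONS.get(state, "HALT")

        def run(state):
            """Run the machine until it halts, printing each transition."""
            while state != "HALT":
                successor = step(state)
                print(state, "->", successor)
                state = successor
            return state

        run("S1")

    Nothing in such a run is anything over and above the succession of states, which is the sense in which the S1-S2 transition discussed above exhausts what executing a step does.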

    Also I don't get the point about why operating on reality representations somehow makes data-processing unable to be itself conscious. The kind of data-processing going on in the brain is identical to the consciousness in my account. It's either that or the thing doing the data processing [i.e. the brain] which is [has the property of] consciousness by virtue of the data processing.aporiap

    It does not mean that machines cannot be conscious. It is aimed at the notion that if we model the processes that naturalists believe cause consciousness, we would generate consciousness. An example of this is the so-called Simulation Hypothesis.

    Take an algorithm which plays movies for instance. Any one iteration of the loop outputs one frame of the movie... The movie, here, is made by viewing the frames in a sequential order.aporiap

    I think my logic is exhaustive, but I will consider your example. The analogy fails because of the nature of consciousness, which is the actualization of intelligibility. While much is written about the flow of consciousness, the only reason it flows is because the intelligibility presented to it changes over time. To have consciousness, we need two factors: contents, and awareness of contents. There is no need for the contents to change to have consciousness so defined. The computational and representational theories of mind have a good model of contents, but no model of awareness.

    But, if it can't be physical, and it's not data processing, what is the supposed cause?

    I don't think the multiple realization argument holds here.. it could just be something like a case of convergent evolution, where you have different configurations independently giving rise to the same phenomenon - in this case consciousness. Eg. cathode ray tube TV vs digital TV vs some other TV operate under different mechanisms and yet result in the same output phenomenon - image on a screen.
    aporiap

    Convergent evolution generally occurs because certain forms are best suited to certain ends/niches and because of the presumably limited range of expression of toolkit genes. In other words because of physical causal factors.

    Still, I don't think your response addresses my question which was if the cause of hypothetical machine consciousness is not physical and it is not data processing, what is it?

    What makes different implementations of TV pictures equally TV pictures is not some accident, but that they are products with a common design goal. So, I have two questions:
    1. What do you see as the explanatory invariant in the different physical implementations?
    2. If the production of consciousness is not a function of the algorithm alone, in what sense is this (hypothetical) production of consciousness algorithmic?

    I am not in the field of computer science but from just this site I can see there are at least three different kinds of abstract computational models. Is it true that physical properties of the machine are necessary for all the other models described?aporiap

    Yes, there are different models of computation. Even in the seminal days of computation, there were analogue and digital computers. Physical properties are not part of the computation models in the article you cite. If you read the definitions of the model types, you will see that, after Turing Machines, they are abstract methods, not even (abstract) machine descriptions.

    I have been talking about finite state machines, because modern computers are the inspiration of computational theories of mind, and about Turing machines because all finite state machine computations can be done on a Turing machine, and its simplicity removes the possibility of confusing complex machine design with actual data processing. I think few people would be inspired to think machines could be conscious if they had to watch a Turing machine shuttle its tape back and forth.

    Even if consciousness required certain physical features of hardware, why would that matter for the argument since your ultimate goal is not to argue for the necessity of certain physical properties for consciousness but instead for consciousness as being fundamentally intentional and (2) that intentionality is fundamentally distinct from [albeit co-present with] materiality.aporiap

    All the missing instruction argument does is force one to think though why, in a particular case, materiality cannot provide us with intentionality. It moves the focus from the abstract to the concrete.

    I actually think my personal thought is not that different to yours but I don't think of intentionality as so distinct as to not be realized by [or, a fundamental property of] the activity of the physical substrate. My view is essentially that of Searle but I don't think consciousness is only limited to biological systems.aporiap

    We are indeed close. The problem is that there are no abstract "physical substrates." The datum, the given, is that there are human beings who perform physical and intentional acts. Why shoehorn intentionality into physicality with ideas such as emergence or supervenience? Doing so might have social benefits in some circles, but neither provides an explanation or insight into the relevant dynamics. All these ideas do is confuse two logically distinct concepts.

    Naturalists would like to point to an example of artificial consciousness, and say "Here, that was not so hard, was it? We don't need any more than a good understanding of (physics, computer science, neural nets, ...) {choose one}." Of course, there is no example to point to, and if there were one, how could we possibly know there was?

    If you want a computer to tell you it's self-aware, I can write you a program in under five minutes that will do so. If you find that too unconvincing, I could write you one that outputs a large number of digits of pi before outputting "I wonder why I'm doing this?" Would such "first-person testimony" count as evidence of consciousness? If not, what would? Not the "Turing test," which Turing recognized was only a game.
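
    For what it is worth, here is a sketch, in Python, of the sort of five-minute program meant here. It is purely illustrative (the digit count and the wording of the "report" are arbitrary choices of mine): it grinds out digits of pi with Machin's formula and then volunteers a first-person-sounding remark, and nothing about it suggests awareness.

        # Illustrative toy: compute digits of pi, then emit a "first-person report".
        # Machin's formula: pi/4 = 4*arctan(1/5) - arctan(1/239), in integer arithmetic.

        def scaled_arctan_inv(x, digits):
            """Return arctan(1/x) * 10**(digits + 10), via the Taylor series."""
            scale = 10 ** (digits + 10)      # ten guard digits
            term = scale // x                # first term: 1/x
            total, n, sign = term, 1, 1
            while term:
                term //= x * x               # next odd power of 1/x
                n += 2
                sign = -sign
                total += sign * (term // n)
            return total

        def pi_digits(digits):
            """Return pi as a string with the requested number of decimal digits."""
            pi_scaled = 4 * (4 * scaled_arctan_inv(5, digits) - scaled_arctan_inv(239, digits))
            s = str(pi_scaled)
            return s[0] + "." + s[1:digits + 1]

        print(pi_digits(100))
        print("I wonder why I'm doing this?")

    The printed sentence is exactly the kind of "testimony" in question, and the program that produces it is nothing more than the sort of state-transition machinery described earlier.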

    I don't understand why a neuron not being conscious but a collection of neurons being conscious automatically leads to the hard problem.aporiap

    I don't think it does. I think Chalmers came to the notion of the "Hard Problem" by historical reflection -- seeing the lack of progress over the last 2500 years. I am arguing on the basis of philosophical analysis that it is not a problem, but a chimera.

    Searle provides a clear intuitive solution here in which it's an emergent property of a physical system in the same way viscosity or surface tension are emergent from lower-level interactions- it's the interactions [electrostatic attraction/repulsion] which, summatively result in an emergent phenomenon [surface tension] .aporiap

    The problem is that consciousness is not at all emergent in the sense in which viscosity and surface tension are. We know the microscopic properties that cause these macroscopic properties, and can at least outline how to calculate them. They are not at all emergent in the sense of appearing de novo.

    We understand, fairly well, how neurons behave. We know the biomechanics of pulse propagation and understand how vesicles burst to release neurotransmitters. We have neural net models that combine many such neurons to provide differential responses to different sorts of stimulation, and we understand how positive and negative feedback can be used to improve performance -- modelling "learning" in the sense of useful adaptation.

    None of this gives us any hint as to how any combination of neurons and/or neural nets can make the leap into the realm of intentionality -- for the simple reason that none of our neuroscientific knowledge addresses the "aboutness" (reference) relevant to the intentional order.

    There is an equivocation on "emergence" here. In the case of viscosity and surface tension, what "emerges" is known to be potential at the microlevel. In the case of consciousness, nothing in our fairly complete understanding of neurons and neural nets hints at the "emergence" of consciousness. Instead of the realization of a known potential, we have the coming to be of a property with no discernible relation to known microstructure.

    Well the retinal state is encoded by a different set of cells than the intentional state of 'seeing the cat' - the latter would be encoded by neurons within a higher-level layer of cells [i.e. cells which receive iteratively processed input from lower-level cells] whereas the raw visual information is encoded in the retinal cells and immediate downstream area of early visual cortex. You could have two different 'intentional states' encoded by different layers of the brain or different sets of interacting cells. The brain processes in parallel and sequentiallyaporiap

    Let's think this through. The image of the cat modifies my retinal rods and cones, which modification is detected by the nervous system in whatever detail you wish to consider. So, every subsequent neural event inseparably carries information about both my modified retinal state and about the image of the cat, because they are one and the same physical state. I cannot have an image of the cat without a modification of my retinal state, and the light from the cat can't modify my retinal state without producing an image of the cat.

    So, we have one physical state in my eye, which is physically inseparable from itself, but which can give rise to two intentional states: <the image of the cat> and <the modification of my retinal state>.

    Of course, once the intellect has distinguished the diverse understandings into distinct intentional states and we start to articulate them, the articulations will have different physical representations. But, my point is that no purely physical operation can separate one physical state into two intentional states. Any physical operation will be performed equally on the state as the foundation for both intentional states, and so cannot separate them.

    Okay but you seem to imply in some statements that the intentional is not determined by or realized by activity of the brain.aporiap

    That is because I hold, as a matter of experience and analysis, that the physical does not fully determine the intentional. I first saw this point pressed by Augustine in connection with sense data not being able to force itself on the intellect. Once I saw the claim, I spent considerable time reflecting on it.

    Consider cases of automatic processing, which show that we can respond appropriately to complex sensory stimuli without the need for intellectual intervention. Ibn Sina gives cithara players as his example, Lotze offers writing and piano playing as his, Penrose points to people who carry on conversations without paying attention, and J. J. C. Smart proffers bicycle riding. So, clearly, sensory data and its processing do not force themselves on awareness.

    The evidence for "the unconscious mind" similarly shows that data processing and response can occur without awareness. Most of us have been exposed to Freudian case studies at some point. Graham Reed has published studies of time-gap experiences in which we become aware of the passage of time after being lost in thought. Jacques Hadamard provides us with an example of unconscious processing in Poincare's solution to the problem of Fuchsian functions.

    In Augustine's model, rather than the physical forcing itself on the intellect, we do not become aware until the will turns the intellect's attention to the intelligible contents. This seems to me to best fit experience.

    I would say intentional state can be understood as some phenomenon that is caused by / emerges from a certain kind of activity pattern of the brain.aporiap

    What kind?

    Of course the measurables are real and so are their relations- which are characterized in equations; but the actual entities may just be theoretical.aporiap

    While I know what theoretical constructs are, I am unsure what you mean by the measurables if not the "actual entities." How can you measure what does not exist?

    I was trying to say that introspection is not the only way to get knowledge of conscious experience. I'm saying it will be possible [one day] to scan someone's brain, decode some of their mental contents and figure out what they are feeling or thinking.aporiap

    I never give much weight to "future science."
    ------------

    The more accurate thing to say is that there are neurons in higher-level brain regions which fire selectively to seemingly abstract stimuli.aporiap

    I have no problem with this in principle. Neural nets can be programmed to do this. That does not make either subjectively aware of anything.

    That seems to account for the intentional component no?aporiap

    How? You need to show that this actualizes the intelligible content of the conscious act.
  • Does the "hard problem" presuppose dualism?

    My question is if dualism isn't correct, would there be a need for two problems of consciousness?Wheatley

    I can see where you're coming from. The fact of the matter is that Dualism implies and is implied by The Hard Problem Of Consciousness. It's, in logical terms, a double implication or a biconditional: Dualism <--> The Hard Problem Of Consciousness. Another way of expressing this biconditional relationship would be: Dualism is true if and only if there's The Hard Problem Of Consciousness.

    Suppose D = Dualism and H = The Hard Problem Of Consciousness

    D <--> H = (D --> H) & (H --> D)

    We know, for certain, that D --> H (This is why you're saying dualism presupposes the hard problem of consciousness and you're correct). When we assume dualism, the hard problem of consciousness is true. However, we can't prove dualism with D --> H. All we can do with the statement D --> H is to falsify dualism when ~H is true using modus tollens.

    However, we can prove dualism using the other half of the biconditional relationship, viz. H --> D, and applying modus ponens. If The Hard Problem Of Consciousness is true, then Dualism must be true, and that's what David Chalmers is up to.
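
    Purely as an illustration of the two inference patterns being used (and not as an endorsement of either premise), here is a small Python sketch that checks them by brute force over every truth assignment: modus tollens on D --> H, and modus ponens on H --> D.

        # Truth-table check of the two inferences above.
        # D = Dualism, H = The Hard Problem Of Consciousness, as bare propositions.

        from itertools import product

        def implies(p, q):
            return (not p) or q

        # Modus tollens on D --> H: whenever (D --> H) holds and H is false, D is false.
        assert all(not D
                   for D, H in product([True, False], repeat=2)
                   if implies(D, H) and not H)

        # Modus ponens on H --> D: whenever (H --> D) holds and H is true, D is true.
        assert all(D
                   for D, H in product([True, False], repeat=2)
                   if implies(H, D) and H)

        print("Both inference patterns hold in every truth assignment.")

    Of course, the check only shows that the inferences are valid; whether either conditional is actually true is the philosophical question.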

    My two cents...
  • Solution to the hard problem of consciousness

    So after a bit more reflection on questions like why does consciousness, this universe, or even existence "exists", I began to think that maybe it's our understanding of consciousness that makes the problem seem "hard".Flaw

    It's a kind of distortion or forgetting of history that this is called the "hard problem". During the Enlightenment, when Descartes, Hume, Kant and the like were producing masterpieces, the hard problem was "motion", that is, the movement of objects. Newton was astonished that he could not give a physicalistic account of gravity.

    For whatever reason, the "hard problem" of motion has been forgotten in terms of people even knowing it used to be a problem at all. Gravity's inconceivability has just been accepted. Now we have this specific articulation of the hard problem, which at the time of the 18th century had to be admitted, by some anyway: that matter thinks.

    Yes Chalmers pointed to a hard problem, but we should not forget that gravity, electromagnetism, free will, causality and indeed a great portion of philosophy are hard problems too. Perhaps by contextualizing this issue, it will seem less specifically puzzling.

    After all, we are acquainted with experience much better than the world out there.
  • The 'hard problem of consciousness'.

    Perhaps your opinion is that we only need to solve the 'easy' problem of consciousness, and that we don't need to take the 'hard' problem seriously. I don't mind that. It sounds pragmatic.pfirefry

    Not quite. In my opinion, the hard problem of consciousness simply doesn't exist.

    Chalmers does not present any reasonable arguments for the existence of such a hard problem. His entire theory appears to me to be based on a gut feeling. His main concern seems well summed up in this quote:

    "Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does."

    Do note that at no point does Chalmers elaborate in what way and why this would be "unreasonable", or how it is "objectively" so. Quite the contrary, I believe this to be a deeply subjective insight: "I cannot wrap my head around the idea that I, this sentient human being, became sentient from something that isn't sentient."

    And once you simply refuse to accept this as a possibility, as Chalmers does, suddenly everything becomes an "easy problem", because no answer will ever suffice for a problem that doesn't exist.

    Another quote from Chalmers:
    The critical common trait among these easy problems is that they all concern how a cognitive or behavioral function is performed. All are ultimately questions about how the brain carries out some task-how it discriminates stimuli, integrates information, produces reports and so on. Once neurobiology specifies appropriate neural mechanisms, showing how the functions are performed, the easy problems are solved. The hard problem of consciousness, in contrast, goes beyond problems about how functions are performed. Even if every behavioral and cognitive function related to consciousness were explained, there would still remain a further mystery: Why is the performance of these functions accompanied by conscious experience? It is this additional conundrum that makes the hard problem hard.

    The phrase "accompanied by conscious experience" contains the big conceptual mistake. Chalmers thinks these functions are accompanied by experience. The "easy problem" accounts that Chalmers dismisses essentially claim something different: these functions are experience itself.

    We can see this most readily in microorganisms. They possess no brain, no cognitive abilities, no central nervous system - and yet most of them are capable of receiving sensory stimuli in some form and reacting to their environment accordingly. These are simple chemical and electrical mechanisms - but they are enough to make a microorganism come to "life", acting and reacting in all kinds of ways: sustaining itself, avoiding threats, reproducing.

    Now jump a couple of billion years into the future, and we realize we consist of trillions of these cells, some more sophisticated than others, working together to sustain the entire cluster. In this regard, it's no surprise that Chalmers can't wrap his head around the process. Which of us can? It has been developing and refining itself for billions upon billions of years, unparalleled in complexity.
  • Why is the Hard Problem of Consciousness so hard?

    Another way to express the Hard Problem is : "how does physical activity (neural & endocrinological) result in the meta-physical (mental) functions that we label "Ideas" and "Awareness"?Gnomon

    I still see that as the easy problem, since it has a very clear approach: eventually, after research, we find that X leads to Y. It's a problem, and I'm not saying it's 'easy'; it's easy only in contrast to the hard problem. It's called a hard problem because there's no discernible path or approach towards finding the answer. If you shape a question about consciousness that has a clear path forward to attempt to solve it, that is an easy problem.

    The word 'how' can easily allow the implicit 'why' to slip in where it shouldn't. "Why do we have subjective experience?" is a hard problem. We know how to influence and access consciousness in the brain, for example in brain surgery: you can poke certain areas of the brain and ask the patient what they experience, and it will cause changes in their subjective experience. That's the how.

    'Why' is an entirely different question. Why does matter, if organized a particular way, create consciousness? My point is that this is no harder a problem than asking why matter behaves in any way at all. Why do hydrogen and oxygen make water? Not how; we know that. But why does it do that at all? It's simply a narrower version of the big question, "Why does anything exist at all?" People tend to fold the 'why' of consciousness into the 'how', which causes confusion. That's why philosophers and scientists are very pointed in showing what the easy problem entails. The easy problem is the 'how'; the hard problem is the 'why'.

    But, like Gravity, we only know what it does physically, not what it is essentially.Gnomon

    True. Part of human reasoning is limiting the types of questions we chase, given the resources and understanding we have. There are plenty of times when we reach a limit in how to proceed toward further understanding of a particular phenomenon. So we take what we understand as it is, and use it going forward. What we do understand is that gravity comes from mass. What we don't do is assume that, because we cannot answer the details, there is some unidentified third property that must be responsible for it. That's a "God of the gaps" argument.

    It is not that I have an issue with people speculating that consciousness is caused by something besides the brain. By all means, speculate away! The issue is when people assert that, because we can speculate, that speculation has the standing to override the only reasonable conclusions we can make at this time. If someone said, "Well, it just doesn't make sense to me why mass creates gravity, therefore 'massicalism' is inadequate to express what's really going on, and gravity is somehow separate from mass and energy," that would be ridiculous.

    The scientific fact, as of today, is that consciousness is caused by the brain. There is zero evidence otherwise. The idea that consciousness is not caused by the brain is pure speculation, and speculation has no weight to assert anything beyond the fact that it is merely speculation.

    Recent scientific investigations have found that Information is much more than the empty entropic vessels of Shannon's definition. Information also is found in material & energetic forms.Gnomon

    Of course. If the brain is physical, this is the only reasonable conclusion. Further, computers have clearly shown that information can be stored and manipulated with matter and energy.

    The "physical capability" of Energy to exist is taken for granted, because we can detect its effects by sensory observation, even though we can't see or touch Energy with our physical senses*2. Mechanical causation works by direct contact between material objects. But Mental Causation works more like "spooky action at a distance". So, Consciousness doesn't act like a physical machine, but like a metaphysical person.Gnomon

    The only disagreement I have with you is that I believe we act exactly like physical machines, only more advanced. I do not see anything about humanity that is separate from the universe; we are one of the many expressions of the universe.

    Again, in my thesis, Consciousness is defined as a process or function of physical entities. We have no knowledge of consciousness apart from material substrates. But since its activities are so different from material Physics, philosophers place it in a separate category of Meta-Physics. And religious thinkers persist in thinking of Consciousness in terms of a Cartesian Soul (res cogitans), existing in a parallel realm.Gnomon

    Fantastic breakdown! The only addendum I would make is "But since its activities are not fully understood in terms of material physics".

    But my thesis postulates that both Physical Energy and Malleable Matter are emergent from a more fundamental element of Nature : Causal EnFormAction*4(EFA). The Big Bang origin state was completely different from the current state, in that there was no solid matter as we know it. Instead, physicists imagine that the primordial state was a sort of quark-gluon Plasma, neither matter nor energy, but with the potential (EFA) for both to emerge later. And ultimately for the emergence of Integrated Information as Consciousness. :smile:Gnomon

    I also have no problem with constructing other terms to describe consciousness. The only problem is when someone believes that a change in language undermines the fact of its underlying physical reality. Also, my understanding is that this primordial state is still matter and energy. It is a 'thing', and until we find a state of a thing that exhibits itself differently from matter and/or energy, it fits in one of those two categories.

    The evidential Gap, beyond the evidence, can be filled with speculation of Creation, or a Tower-of-Turtles hypothesis.Gnomon

    This is true, as long as that speculation does not forget it is speculation and assert that it must be so.

    However, Philosophical questions about Mind & Consciousness depend on personal reasoning (Inference) from that physical evidence. If you can't make that deduction from available evidence, then you live in a matterful but mindless & meaningless world. And the mystery of Consciousness is dispelled, as a ghost, with a wave of dismissal.Gnomon

    Again, fantastic contribution. Agreed.
