• Malcolm Lett
    76
    Most of the philosophical stances w.r.t. the nature of consciousness are derived from faulty intuition. What we call the explanatory gap is only an intuitional gap.

    At least, that's an argument that I want to test the waters on.

    I have a materialistic theory of consciousness that I believe provides a good in-road into explaining phenomenal consciousness. I'll give the details of that theory in another post soon. For now, I wanted to understand how such a claim would be taken - its strength. Let me explain...

    I take a materialistic stance w.r.t. the nature of both access and phenomenological consciousness. I believe they can be explained through physical processes that we already understand. The usual argument against such a stance is that it leaves an explanatory gap - that consciousness "feels" a certain way that cannot be explained mechanistically / representationally / reductively / and other variations on the theme.

    Point number 1:
    • Our intuition is the source of that complaint. We have no logical grounding to claim yay or nay to whether computational physical processes could yield conscious feels. I think it was Ned Block in his "The Harder Problem of Consciousness" that argued strongly that we have no grounding for such discussions. So why do we hold so strongly to such views? Clearly it's our intuition. It just "seems wrong".

    There's nothing inherently wrong with using our intuitions. Most of philosophy and science is driven by it. I'm a software engineer by trade, and my intuition is usually the first thing I rely on to identify the cause of a bug. But intuition is only good as a starting point, and must be followed up by analysis, empiricism, or both.

    It's worth asking why our intuition suggests what it does w.r.t. conscious feels and mechanistic explanations. And to what extent should we trust it? We have only one source of information about conscious experience - our own. Not even yours, or theirs, just my own. A data point of one. We have no knowledge of any sort w.r.t. the experience or lack thereof of any other process. Thus, it's natural that our intuition should suggest that the only other kinds of things that experience anything like what we experience are those things that are very closely the same as ourselves. Our intuitions should be critiqued in the same way that we critique the ChatGPTs of the world - they're just extrapolating and mimicking what they've already seen. When the IntuitionGPT has only ever seen one example, it'll assume that that's the only outcome.

    How trustworthy is that intuition? It's not. Kant's explanation that we cannot objectively know anything about the world is proving ever more prescient. Neuroscience has confirmed that our perceptions are hallucinations designed to approximate certain aspects of the outside world - they're optimized to feed us the information we need for our survival - they're not optimized to accurately represent the world (see Hoffman's Interface Theory of Perception for one particular explanation).

    Point number 2:
    • Our perception of consciousness is equally subject to the same perceptual hallucinations as all other perceptions.

    This second point will likely not be accepted by all. I am biased here by my own theories of the mechanisms involved. But at least you must accept that the truth of this point is both possible and likely, given our more recent understanding of the sheer fallibility of almost everything that goes on in the brain.

    In brief, I hold that the content of consciousness is a high-level summary of the general "goings on" within the brain. A Higher Order Thought (HOT), or a Higher Order Perception (HOP), if you will, but without most of the prematurely-decided details of the variants of those theories. Exteroceptive and interoceptive senses provide perceptual state to the brain, the brain processes that in certain ways, resulting in both brain state (at any given moment in time) and brain behavior (brain dynamics over time). I believe that the content of consciousness is a dimensionally reduced summary of all of that. And that conscious content is "for" the same kind of thing that externally-focused perceptions are "for" - for providing sufficient information about the (external or internal) world in order to aid survival. Thus, just like other perceptions, perception of consciousness is a hallucinated approximation of what's going on in the brain. The brain thinks it knows what it's doing, but in reality it has hardly any clue.
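    As a loose illustration of the "dimensionally reduced summary" idea - a toy linear projection, not a claim about actual neural mechanisms, and with made-up dimensions - consider how much detail any low-dimensional summary of a high-dimensional state necessarily discards:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-in for moment-to-moment "brain state": 10,000 dimensions.
    state = rng.normal(size=10_000)

    # A fixed random projection stands in for whatever summarising process
    # produces conscious content: 10,000 dims down to 50.
    W = rng.normal(size=(50, 10_000)) / np.sqrt(10_000)
    summary = W @ state  # the "high-level summary" in this analogy

    # Best linear guess at the full state from the summary alone
    # (via the pseudo-inverse); the residual is everything the summary lost.
    reconstructed = np.linalg.pinv(W) @ summary
    lost = np.linalg.norm(state - reconstructed) / np.linalg.norm(state)

    print(summary.shape)  # (50,)
    print(lost > 0.9)     # True - almost all of the detail is unrecoverable
    ```

    Any summary cheap enough for the brain to actually compute and act on will be like this: useful for guiding behavior, but a drastic approximation of the underlying state.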

    I could well be wrong, but I'm probably not. Not given everything else we know.

    There's another point in favor of such a view. If you ignore any practical details for a moment, you could easily argue that the brain must surely have direct access to its own state. But that's impractical. The state of the brain is huge. Seriously huge. And most of it is there already for the purpose of considering the outside world. It's too huge for the brain to process it for the purpose of considering itself. Any attempt for the brain to consider 100% of its own state would lead to an infinite regress in the size of the brain. Sorry Lucy, I loved that movie, but it's totally unrealistic.
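    To put the size problem in back-of-envelope terms: suppose the brain's self-model has to represent some fraction of the brain's total state, where that total includes the model itself. The numbers below are made up; only the shape of the relationship matters:

    ```python
    def self_model_size(base_state: float, fraction: float) -> float:
        """Storage a self-model needs in order to cover `fraction` of a
        system's total state, where the total includes the model itself:
            model = fraction * (base_state + model)
        Solving for model gives: fraction * base_state / (1 - fraction)."""
        if fraction >= 1.0:
            return float("inf")  # 100% self-coverage: the regress diverges
        return fraction * base_state / (1.0 - fraction)

    # Modelling half of yourself is affordable; modelling all of yourself isn't.
    print(self_model_size(100.0, 0.5))   # 100.0
    print(self_model_size(100.0, 0.75))  # 300.0
    print(self_model_size(100.0, 1.0))   # inf
    ```

    As the coverage fraction approaches 1, the model swallows the whole brain. Full self-transparency isn't an engineering option, which is partly why I expect the self-summary to be heavily compressed.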

    Point number 3:
    • We are delusional when it comes to our perception of consciousness.

    Kant, the Interface Theory of Perception (a la Hoffman), the Predictive Perception theory (a la Friston and others), all say the same thing: our perceptions of the external world are only guesses. But there's one significant thing that differentiates perception of the outside world from perception of consciousness. The outside world gives a reality check. If our perceptual approximation is too different from the actual outside world, we won't survive. We'll stub our toes against rocks that we didn't perceive. We'll fall into holes when we perceived a flat land. We'll waste energy running away from imaginary dragons.

    Conscious perception (i.e. perception of consciousness) doesn't have the same kind of reality check. Everything in the brain is states flowing backwards and forwards across populations of neurons. There are no solid rocks. If a conscious perception hallucinates the existence of a particular brain state, and leads to a certain brain behavior as a consequence, this just leads to different state flows. Sometimes those state flows will be so bad that they harm survival, but many others will just flow away. Brain states are soft. They're more lenient than the outside world. There's just more room for variations of brain states and brain behaviors. I might also be wrong about this point. I'm on seriously tenuous ground here. But any argument otherwise is equally tenuous.

    The most we can say for certain is that our perception of consciousness may be completely delusional.

    What does this mean?

    It means that the way it "feels" to be conscious is a result of our delusional conscious perception. The accuracy with which such a feeling conveys information about what is truly happening inside is highly questionable. I.e. the accuracy of conscious perception should be treated with the same Kantian spectacles as all other perceptions. This is the complaint against introspection, on steroids. Not only is introspection highly suspect for judgements about brain function in general, it is also highly suspect for judgements of introspection itself.

    Point #1 was that our discomfort with some theories of consciousness is driven by our intuition, but that our intuition is guided by a data sample of 1. Point #3 says that, not only is the data size 1, but that even that 1 sample is not to be trusted.

    Again, what does this mean?

    Well, there's the solipsistic takeaway - we should stop talking about this because we can't know anything.
    My personal takeaway is somewhat more selfish - I want to use this as an argument for why materialism can be accepted as an explanation for consciousness.

    In any case, what do you think about the argument overall?
  • Joshs
    5.7k


    the way it "feels" to be conscious is a result of our delusional conscious perception. The accuracy that such a feeling has to convey information about what is truly happening inside is highly questionable. ie: accuracy of conscious perception should be treated with the same Kantian spectacles as with all other perceptions. This is the complaint against introspection on steroids
    Malcolm Lett

    Kant recognized that the fundamental organizing principles
    making the material world intelligible to science are not located in materiality itself but are given beforehand. One can apply a Kantian approach to questions concerning the embodied nature of cognition and the organizing role of affectivity and subjective point of view in formulating empirical concepts about the world. Doing so leads to
    the recognition that empirical knowledge of ‘materiality’ is inextricably tied to what ‘matters’ to an embodied organism
    relative to its ways of interacting pragmatically in its physical and social environment. Subjective valuation and point of view cannot be split off from material facts; such ‘feeling’-based frames of reference define the qualitative meaning of our concepts. A fact, like a tool, is meaningless outside of what we want to do with it, what larger purposes and goals we are using it for. Every fact ( the definition of a point) can be understood within an indefinite array of potentially incommensurable accounts. Which account is true depends on what we are using the account for.

    I’m not saying there are no real facts in the world. I’m saying that embodied human practices are crucial part of what it means to know the real world.
  • Mww
    4.9k
    In any case, what do you think about the argument overall?
    Malcolm Lett

    Overall, not too bad, except for the false attributions of Kantian metaphysics. It would have been better to go your own way and leave him out of it.

    Which is merely a friendly way of saying my opinions would have been happier….
  • NotAristotle
    384
    I take a materialistic stance w.r.t. to the nature of both access and phenomenological consciousness. I believe they can be explained through physical processes that we already understand. The usual argument against such a stance is that it leaves an explanatory gap - that consciousness "feels" a certain way that cannot be explained mechanistically / representationally / reductively / and other variations on the theme.
    Malcolm Lett

    I think consciousness is constituted by physical processes, but then I also think the explanatory gap is reputable. I do not see why these two views are at odds. I think Chalmers believes consciousness is constituted by physical processes but (to my knowledge) he also proposed the explanatory gap. So again I am not sure what is at stake in denying an explanatory gap.

    Can you say how physical processes would explain consciousness? That is, can you bridge the explanatory gap?
  • Corvus
    3.3k
    The most we can say for certain is that our perception of consciousness may be completely delusional.
    Malcolm Lett

    Thoughts can be delusional too.
  • Count Timothy von Icarus
    2.8k
    A good explanation shows in some way why something is necessary. I do not see how something "computing really hard," ever necessitates the emergence of first person subjective experience.

    I am not particularly convinced by eliminativist lines of argument. They seem to be a sort of bait and switch, or a fundamental misdiagnosis of the problem. They show all the ways in which consciousness is not what folk psychology takes it as, and provide a lot of information about current thinking in neuroscience, but I don't think any of this actually gets at the fundamental question of "why does subjective experience exist?"

    My response would be: "ok, my thoughts are not what they seem. Ok, there are lots of plausible theories in neuroscience, global workspaces make sense, recursion and "high level summary," make sense. That's all good. But how does this explain how something mechanistically produces first person subjective experience? That my experience might be different from how I describe it doesn't really say anything about why it necessarily exists given x, y, z, etc."

    So then we see the next move: "well, because consciousness is so different from what it seems to be, it turns out that your need for an explanation in terms of necessity is just a bad hunch. There is no reason for you to trust that what you think is an incomplete explanation is actually incomplete."

    But then you could literally apply this to any explanation of any phenomena. "Actually, the explanation is perfect, it just seems bad because your thoughts don't work the way you think they do," undermines all claims about the world.

    If our core intuitions can be this wrong, and there is "nothing to explain," then I have no idea why we should be referring to neuroscience for explanations in the first place. We only have a good reason to think science tells us anything about the world if our basic intuitions have some sort of merit.

    Epiphenomenalism adds another wrinkle. If mechanism is understood in current terms then it follows that mental life can have no causal powers. But then, if what we experience and think has absolutely no effect on how we behave then there is no reason for us to think our perceptions and thoughts have anything to do with the real world. Why would natural selection ever select for accuracy? What we think or experience is completely irrelevant to survival given the causal closure principle, mental events never determine physical outcomes and so the accuracy of mental experience can never be something selected for.

    Hoffman, who you mention, doesn't touch on this problem, but it's particularly acute. He just assumes that the way things "seem to us" on our "dashboard" plays a causal role in survival. The causal closure principle denies this. Of course, Hoffman ends up rejecting mechanistic explanations for other reasons, but he could have just stopped here with this disconnect.

    If epiphenominalism is true, then we have no grounds for our faith in science, mathematics, etc. and no good grounds for the mechanistic view that leads to epiphenomenalism in the first place. Epiphenomenalism is self defeating.

    Now if we don't assume epiphenomenalism, then we appear to have something like strong emergence. But if we have strong emergence, then we need to explain how it works. Yet Kim's work suggests this will likely be impossible in the current mechanism-substance framework, so there does seem to be an explanatory gap here, in that some sort of paradigm shift seems needed to resolve this issue.
  • NOS4A2
    9.3k


    It’s a good start to a good argument. My only quibble is the brain/body dualism.

    It’s common to situate consciousness in the brain, but because we’re not brains, nor are we disembodied, consciousness cannot be reduced to states of the brain. Consciousness would be fundamentally different without bones, for instance, and the phenomena available to those who are standing are markedly different from those available to people lying down. Most of the body is required in order to live, let alone be conscious, so all of it needs to be included in a materialist conception of consciousness, lest he fall victim to the same dualism he accuses dualists of.
  • unenlightened
    9.2k
    Well, there's the solipsistic outtake - we should stop talking about this because we can't know anything.
    Malcolm Lett

    Well this is the fundamental difficulty of such arguments: "How come you know so much about how deluded we all are?" If the world we see is not the world, how can you talk about the world? It looks like some esoteric wisdom you have to claim there.

    Now me, I claim that I am real and the world is real, and I don't know everything, but I know how many beans make five and that shit smells.
  • Patterner
    1k
    Point number 1:
    Our intuition is the source of that complaint. We have no logical grounding to claim yay or nay to whether computational physical processes could yield conscious feels. I think it was Ned Block in his "The Harder Problem of Consciousness" that argued strongly that we have no grounding for such discussions. So why do we hold so strongly to such views? Clearly it's our intuition. It just "seems wrong".
    Malcolm Lett
    You can explain the mechanics of walking in all the detail you want. Every muscle fiber; every muscle group; every blood vessel; every nerve. What every tiny component is doing at every moment; what every large component is doing at every moment. When you're done, you can demonstrate how it all comes together, producing walking.

    Then, you can take a lightbulb out of your pocket, hold it in your hand, and begin walking. As you walk, you can explain that all of these things you just described in such detail also produce electricity, as the lightbulb starts to glow.

    I have a problem with this. Everything you said about the mechanics of walking accounts for the walking. I'm going to need a heck of an explanation as to how all of those things are doing the double duty of producing walking and generating electricity. Do we notice any of the events that you said are part of walking doing other things at the same time - things that have nothing to do with producing walking? From my understanding of electricity, nothing you described accounts for it. I'm no electrician, but I know the general idea. And I can Google all day long, but I'm not finding anything that suggests the mechanics that produce walking also explain the production of electricity.

    I'm going to start looking for another explanation. Maybe you're wearing something under your clothing that generates electricity with the movement of your body. Maybe there's a wire coming out of the back of your shirt that's plugged in somewhere, and goes to your hand where you hold the bulb. Maybe it turns out you're a robot, and have batteries inside your body.

    You can also explain the workings of the brain in as much detail as you want. And that can be pretty extensive. This is from Darwin's Black Box, by Michael Behe:
    When light first strikes the retina a photon interacts with a molecule called 11-cis-retinal, which rearranges within picoseconds to trans-retinal. (A picosecond is about the time it takes light to travel the breadth of a single human hair.) The change in the shape of the retinal molecule forces a change in the shape of the protein, rhodopsin, to which the retinal is tightly bound. The protein’s metamorphosis alters its behavior. Now called metarhodopsin II, the protein sticks to another protein, called transducin. Before bumping into metarhodopsin II, transducin had tightly bound a small molecule called GDP. But when transducin interacts with metarhodopsin II, the GDP falls off, and a molecule called GTP binds to transducin. (GTP is closely related to, but critically different from, GDP.)
    That's the beginning of the beginning of the beginning of the beginning of the series of events and processes that explains/describes how we detect a certain range of the electromagnetic spectrum. Add some other events, and we can distinguish different frequencies within that range. More events explain how patterns of our perceptions are stored in our brain. And still more explain how what is stored becomes part of the algorithm that chooses which action we take when stored patterns are perceived again.

    Then you explain that all of these things you just described in such detail also produce my subjective experience of red. I have the same problem I had with your lightbulb. Everything you said about the mechanics of vision accounts for vision. How are all of those things doing the double duty of giving us vision and generating subjective experience?

    This problem is, in fact, more difficult than the walking/electricity problem. Walking is a physical process, ultimately dependent on the physical properties of particles and laws of physics. Electricity is electrons. Physical things. Particles. The defining property of these particular particles, their negative charge, accounts for different things in different circumstances. In some circumstances, it accounts for electricity.

    Consciousness, on the other hand, is not an obviously physical process, clearly built up from the physical properties of particles and the laws of physics. We can't point to any property or event in the whole process of vision and say, "There! That is redness." Whatever we point to will be a physical thing that plays a part in the explanation of perceiving part of the spectrum, distinguishing wavelengths, remembering what we've seen, using memories to help make decisions...
  • Wayfarer
    22.6k
    I have a materialistic theory of consciousness that I believe provides a good in-road into explaining phenomenal consciousness.
    Malcolm Lett

    An explanation comprises explanans and explanandum. The explanandum is what it is that needs to be explained, and the explanans is that which provides the explanation. But then, any act of explanation, including the explanation of consciousness, is a conscious act. This means that consciousness is also a part of the explanans. When we articulate a theory or a model to explain consciousness, we are doing so using our conscious understanding, reasoning, and cognitive faculties.

    This dual role of consciousness leads to a sort of circularity: we use consciousness (as part of explanans) to explain consciousness itself (the explanandum). It's akin to trying to illuminate a light bulb with its own light—it's both the source and the object of the inquiry.

    Eliminative materialism doesn't address this problem. Instead, it ignores it, which is why Daniel Dennett's first book on the subject, Consciousness Explained, was parodied by many of his peers as Consciousness Ignored.

    Dennett asks us to turn our backs on what is glaringly obvious—that in consciousness we are immediately aware of real subjective experiences of color, flavor, sound, touch, etc. that cannot be fully described in neural terms even though they have a neural cause (or perhaps have neural as well as experiential aspects). And he asks us to do this because the reality of such phenomena is incompatible with the scientific materialism that in his view sets the outer bounds of reality. He is, in Aristotle’s words, “maintaining a thesis at all costs.” — Thomas Nagel, Review of From Bacteria to Bach and Back
  • goremand
    91
    If our core intuitions can be this wrong, and there is "nothing to explain," then I have no idea why we should be referring to neuroscience for explanations in the first place. We only have a good reason to think science tells us anything about the world if our basic intuitions have some sort of merit.
    Count Timothy von Icarus

    The problems of phenomenal consciousness are to begin with the result of tension between different intuitions. It's like you have a bunch of witnesses and their testimonies don't add up to a coherent story, one of them has to be wrong. It's no good saying "if you doubt one, you have to doubt them all, so let's just not".
  • Jack Cummins
    5.3k
    The concept of intuition, in the gap between rational knowledge and the sensory basis of empiricism, may be a wide area opened by Kantian thinking. Intuition may be important but open to scrutiny, especially in relation to clarity. It may be a way of going beyond logic, but it may also be clouded by factors in socialisation, as well as fears and fantasies. So, it may be an important aspect in the conception of ideas and ideals but require a more substantial basis in knowledge and evidence.
  • Count Timothy von Icarus
    2.8k



    The problems of phenomenal consciousness are to begin with the result of tension between different intuitions
    goremand

    Not sure what you mean here. For most of the history of philosophy it wasn't really much of an issue. There are things. Of these, some are living. Of the living things, some are animals and have sensation. That's just part of their essence.

    The Hard Problem only slowly comes into focus with the attempt to reduce all things to extension in space and motion. It even sort of goes back under the radar again with Newton, because now you have fundamental forces that can act at a distance, which led to people positing a similarly sui generis "life force" to explain consciousness.

    But "things are only extension in space and motion," or "all that exists can be explained in terms of mathematics and computation," are not basic intuitions.

    Nor is "how do things being very 'complex' or involving lots of integrated information processing result in a first person perspective?" a question of a violation of a basic intuition; it's a question of the explanation being extremely murky, with no specific causal mechanism identified.
  • goremand
    91
    But "things are only extension in space and motion," or "all that exists can be explained in terms of mathematics and computation," are not basic intuitions.
    Count Timothy von Icarus

    I'm not sure exactly how you make the distinction between "basic/core" and "regular" (historical popularity maybe?), but those ideas of space and motion are certainly products of intuition.

    It's obvious that if you frame something as "intuition vs X", then X will always lose. But the neuromaniac eliminativist perspective is also the product of intuition, intuition isn't a big happy family to be collectively dismissed or embraced.
  • Count Timothy von Icarus
    2.8k


    I'm using "intuitive" the way it is generally used throughout philosophy. Something is intuitive, a noetic "first principle," if we cannot conceive of it being otherwise. 2+2 is intuitively 4. It is intuitive that a straight line cannot also be a curved line, that a triangle cannot have four sides, etc.

    There is nothing intuitive about the statement "when lots of information gets processed in a very complex way the result produces first person subjective experience." This is not intuitive in the way 2+2=4 is, so it requires demonstration, showing how the claim follows from first principles or empirical observations based on these same intuitive inference rules.

    To say, "well I can't demonstrate it in a way that makes sense, but this is just because your intuition is broken," undermines virtually all truth claims, because now we can no longer feel certain about the principle of non-contradiction, inference rules, mathematics, etc. The workaround of claiming "x is true, it just seems to not be because your reason is broken," can be applied equally to any claim, e.g. that we are actually light from the Pleroma trapped in a material prison, that 2+2=7, etc.

    That everything is extension and motion is not an intuition. It is not intuitive that "color isn't real," for instance. People don't say, "color isn't real, this is obvious and could not possibly be otherwise." It's rather an inference from atomism/corpuscularism, which itself is created as a solution to the apparent unity of the universe and its equally apparent multiplicity and change (The One and the Many problems).
  • goremand
    91
    Something is intuitive, a noetic "first principle," if we cannot conceive of it being otherwise. 2+2 is intuitively 4. It is intuitive that a straight line cannot also be a curved line, that a triangle cannot have four sides, etc.
    Count Timothy von Icarus

    I didn't realize the bar was set so high. So then all it takes is for someone to claim that they can conceive of something being false, and it ceases to be intuitive? Presumably the eliminativist has already done this, so are the claims they deny then dethroned? Or are they not included in this "we"?
  • Malcolm Lett
    76
    It's wonderful how writing something down helps you clear up what you've been thinking. I had no idea I was going to bring up solipsism when I started to write the OP. But the outcome of my question was almost a given the moment I finished writing it:
    * I'd foolishly argued that we can't know anything.

    But I do see a hope. As I see it, there's approximately three things at play:
    * perception
    * intuition
    * analysis

    Classical philosophers and neuroscience have claimed that our perception is flawed. We all know that our intuitions are a good start, but should never be relied upon without verification. That leaves analysis, and all the wonderful legacy of arguments to and fro about the power of analysis to overcome the limitations within our perception and our intuition. I'm not even going to attempt to address any of that.

    The takeaway for me is that, should a suitable new analysis come to light, it can supplant our (likely faulty) intuition about the feels of consciousness.

    I suppose that is exactly what Dennett was attempting to do (and apparently failed at) in his critique of conscious feels and the hard problem.
  • Count Timothy von Icarus
    2.8k


    I am not really sure what you're trying to get at here. What counts as intuitive might be debated, but certain statements like "a line of points cannot be simultaneously continuous and discrete," or "2+2=4," can largely be agreed upon. Are you claiming we lack good warrant for believing these sorts of things?

    Eliminativism, in its most extreme form, does violate these sorts of intuitions. This would be the claim that "you don't actually experience anything, see blue, hear sounds, etc." But does anyone actually advocate this? Dennett himself calls this type of eliminativism "ridiculous" in "Consciousness Explained."

    The problem with the claims of more plausible forms of eliminativism isn't necessarily that they are counterintuitive; it's that they claim that consciousness has been adequately explained when it hasn't been.

    Ok, well we might debate what counts as adequate explanation here. But what is not a good response is to say, "yes, it does seem inadequate, but that's only because human reason is ultimately deficient." This essentially amounts to saying "I do not need to offer a convincing explanation or demonstration, because such a thing is not possible, but you should still accept the truth of what I'm saying."

    This is like Luther's response to Erasmus. Erasmus says "a God who predestines — forces — man to sin, and then punishes him for it seems evil."

    To which Luther responds: "yes, but it only seems evil because our reason is deficient due to the fall." This is not an explanation though.
  • schopenhauer1
    10.9k
    Ok, well we might debate what counts as adequate explanation here. But what is not a good response is to say, "yes, it does seem inadequate, but that's only because human reason is ultimately deficient." This essentially amounts to saying "I do not need to offer a convincing explanation or demonstration, because such a thing is not possible, but you should still accept the truth of what I'm saying."

    This is like Luther's response to Erasmus. Erasmus says "a God who predestines — forces — man to sin, and then punishes him for it seems evil."

    To which Luther responds: "yes, but it only seems evil because our reason is deficient due to the fall." This is not an explanation though.
    Count Timothy von Icarus

    Killed two birds with one stone there. :snicker:.
  • bert1
    2k
    Our perception of consciousness is equally subject to the same perceptual hallucinations as all other perceptions.
    Malcolm Lett

    There is a difference, it seems to me. Perceptual hallucinations are complex, we construct a model which turns out to be contradicted by further data. Loads of stuff going on, plenty of room for error. But consciousness of consciousness is maximally simple, no? It doesn't specify any particular experience. We might be wrong in perceiving a lion in the grass, it might just be a patch of grass. But we can't be wrong that we have experienced something-or-other, i.e. a world. And to go one step further, when we turn consciousness on itself, in experience of experience, where the subject is the object, there is no gap for a mistake to exist in.
  • goremand
    91
    I am not really sure what you're trying to get at here. What counts as intuitive might be debated, but certain statements like "a line of points cannot be simultaneously continuous and discrete," or "2+2=4," can largely be agreed upon. Are you claiming we lack good warrant for believing these sorts of things?

    Eliminativism, in its most extreme form, does violate these sorts of intuitions.
    Count Timothy von Icarus

    What is "these sorts" referring to here? Eliminativists do not reject 2+2=4 or other mathematical a priori stuff, that sort of thing is not in doubt here. It seems you are bunching some intuitions together into a group, but I don't understand the criteria for membership.

    This would be the claim that "you don't actually experience anything, see blue, hear sounds, etc." But does anyone actually advocate this?Count Timothy von Icarus

    In my opinion, any eliminativist worth the name would of course advocate this. And why not?

    Dennett himself calls this type of eliminativism "ridiculous," in "Consciousness Explained."Count Timothy von Icarus

    I don't know that Dennett is an eliminativist; if so, I think he is in the closet about it. I've always found him to be strangely diplomatic and "soft-selling" in expressing his views, so it makes sense to me that he would disavow what you describe as "extreme". Maybe this partly explains his success; his books do seem to sell.
  • Gnomon
    3.8k
    In brief, I hold that the content of consciousness is a high-level summary of the general "goings on" within the brain.Malcolm Lett
    Good deal! That's another way of saying what I mean by : "Consciousness*1 is the function of brain activity". In math or physics, a Function*2 is a relationship between Input and Output. But, a relationship is not a material thing, it's a mental inference. Also, according to Hume, correlation is not proof of causation.

    So, simply noting the correlation between low-level "goings-on" and high-level awareness-of-what's-happening is still a leap over the Intuitive Gap. The "Hard Problem" remains : physically, how do you get from neural Inputs (energy) to mental Outputs (awareness)?

    Because of that Causal Gap, some have dismissed Consciousness as a "delusion", in the sense that there is nothing physical in the output. However, as you noted, we could say that we get from IN to OUT by intuition*3, in the sense of metaphysical In-sight or Inner-vision. But that's not a material explanation of the steps between Input and Output.

    Intuition is not physical vision --- traceable step by step from outer senses to inner sensations --- but a mysterious metaphysical way of knowing what's "going-on" inside the brain, without EEG or MRI. Unfortunately, that still doesn't suit your preference for a "materialistic theory". Do you have any ideas about how to fill the particular-to-holistic Intuition Gap? What's the "rule" for correlating impersonal sensory inputs to personal meaningful outputs? I'm still working on that ellipsis myself. :smile:

    *1. Content of Consciousness :
    Consciousness, at its simplest, is awareness of internal and external existence. ___Wikipedia
    https://en.wikipedia.org › wiki › Consciousness
    Note --- Where is Awareness or Meaning or Cognition in the material substrate? Could those functional features exist, potentially, within the Energy that transforms into Mass : E=mc^2? If so, that would provide a Physical, not Material, agency to explain the high-level manifestation of the power of Intuition to "summarize" (from concrete matter to abstract ideas) what can't otherwise be seen.

    *2. Function :
    A function (f) consists of a set of inputs, a set of outputs, and a rule for assigning each input to exactly one output.
    https://www.utrgv.edu/cstem/utrgv-calculus/functions/definition-of-function/index.htm

    *3. Intuition :
    Often referred to as “gut feelings,” intuition tends to arise holistically and quickly, without awareness of the underlying mental processing of information. ___Psychology Today
    https://www.psychologytoday.com › basics › intuition
  • Malcolm Lett
    76
    But consciousness of consciousness is maximally simple, no? It doesn't specify any particular experience. We might be wrong in perceiving a lion in the grass, it might just be a patch of grass. But we can't be wrong that we have experienced something-or-other, i.e. a world. And to go one step further, when we turn consciousness on itself, in experience of experience, where the subject is the object, there is no gap for a mistake to exist in.bert1

    I understand this view. But I think it's an oversimplification. On the one hand, given that the brain is itself, it should have no trouble knowing itself. In practice, there are a number of problems with that notion.

    1) Firstly, there's strong neurological and behavioral evidence that our access consciousness doesn't have access to everything that goes on in the brain. So, even if it were possible for the brain to observe everything about its own activity, the brain doesn't do that - at least not to the extent that we have conscious access to it.

    2) Take a hypothetical brain, and imagine that every neuron of its, say, 1 billion neurons is devoted to some form of 1st-order behavioral control in relation to the environment or the body. Now, imagine that this brain is going to develop the ability to observe itself. Its full state at any given moment is determined by the interactions between its 1 billion neurons, via its, say, 100 billion synapses. That's a large data space. The world is already pretty complex, and its existing 1 billion neurons are all needed just to understand that. So how many more neurons does the brain need to understand its own activity? Even if we conservatively assume a 1:1 relationship - that 1 billion additional neurons are enough to understand the activity of the first 1 billion - the brain is now twice the size. Oh, and also, the data space that needs to be monitored is now twice the size. So the brain needs to double again, and again, to infinity.

    Well, that's obviously intractable. What is feasible? Instead of observing at such a low level, we capture just some sample of brain activity. This reduction is likely achieved by 1) limiting the scope of which parts of brain activity are observed, and 2) capturing a dimensionally reduced abstraction. The rest has to be inferred, which opens the door to hallucinations.
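    The scaling argument above can be put into toy arithmetic. This is purely my own illustration - the 1:1 monitoring ratio and the compression ratio are assumed parameters, not claims about real neuroanatomy:

    ```python
    # Toy arithmetic for the self-monitoring regress described above.
    # Both functions and all numbers are illustrative assumptions.

    def total_neurons_full_monitoring(base: int, levels: int) -> int:
        """Total neurons if everything is monitored 1:1.

        Each new monitoring layer enlarges the state that must itself
        be monitored, so the brain doubles at every step: base * 2**levels.
        """
        total = base
        for _ in range(levels):
            total += total  # new layer matches everything built so far, 1:1
        return total

    def total_neurons_lossy_summary(base: int, ratio: float) -> float:
        """Total neurons if each layer keeps only a compressed summary.

        With compression ratio r < 1, the layer sizes form a geometric
        series base * (1 + r + r^2 + ...) converging to base / (1 - r).
        """
        return base / (1.0 - ratio)

    base = 1_000_000_000  # the hypothetical 1 billion 1st-order neurons
    print(total_neurons_full_monitoring(base, 10))  # explodes: base * 1024
    print(total_neurons_lossy_summary(base, 0.01))  # converges near base
    ```

    The point of the contrast: full-fidelity self-observation diverges, while a fixed-ratio lossy summary converges - which is why sampling plus inference (and with it the possibility of hallucination) looks unavoidable.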

    3) There are problems with simple connectivity too. Imagine a section of the brain that is devoted to some 1st-order body function. In order for some other section of the brain to monitor this first section, there need to be additional connections going out from the first. If we assume naively that there is one brain region devoted to meta-management, then it needs to receive connections from all the brain regions that it cares about, which puts a strong limit on how much data it can collect about the rest of the brain's activity. And again, the rest has to be inferred, which means hallucinations. Now, the naive assumption of a single 2nd-order data collection center is almost certainly wrong. But some degree of differentiation certainly does occur in the brain, so the problem still exists to the degree that such differentiation occurs.
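    Points (2) and (3) can be caricatured in a few lines of code. This is a minimal sketch of my own - every name and number is a made-up illustration - showing a monitor that directly reads only a handful of channels of first-order state and fills in the rest from a prior, so its report can be confidently wrong:

    ```python
    import random

    random.seed(0)  # deterministic toy run

    N_REGIONS = 100   # first-order brain regions (illustrative number)
    N_CHANNELS = 10   # connections the meta-monitor can afford

    def first_order_state():
        # each region is either active (1) or quiet (0)
        return [random.randint(0, 1) for _ in range(N_REGIONS)]

    def meta_observe(state, channels):
        # the monitor directly sees only a fixed sample of regions...
        observed = {i: state[i] for i in channels}
        # ...and infers everything else from a crude prior
        # ("regions are usually quiet")
        return [observed.get(i, 0) for i in range(N_REGIONS)]

    channels = random.sample(range(N_REGIONS), N_CHANNELS)
    state = first_order_state()
    belief = meta_observe(state, channels)

    errors = sum(1 for s, b in zip(state, belief) if s != b)
    print(f"regions misreported by the monitor: {errors}/{N_REGIONS}")
    ```

    Nothing here depends on the specific numbers: any monitor with fewer channels than regions must infer the remainder, and the inferred portion is exactly where misreports live.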
  • Malcolm Lett
    76
    Good deal! That's another way of saying what I mean by : "Consciousness is the function of brain activity". In math or physics, a Function is a relationship between Input and Output. But, a relationship is not a material thing, it's a mental inference. Also, according to Hume, correlation is not proof of causation.Gnomon

    Intuition is not physical vision --- traceable step by step from outer senses to inner sensations --- but a mysterious metaphysical way of knowing what's "going-on" inside the brain, without EEG or MRI.Gnomon

    If I understand you correctly, this is the non-reductive thesis - that the whole of consciousness is more than the sum of its parts, and thus that it cannot be fully explained by its parts alone. My apologies if I've misunderstood you, but I'll speak to that point anyway.

    As I understand it, the non-reductive thesis about something, paraphrased as "more than the sum of its parts", says that something cannot be entirely explained by its parts and their interactions because it has some additional qualities that are not explained by those parts and/or their interactions. Thus, consciousness being an example of such a thing, consciousness cannot be explained via the existing reductive methods of science.

    I'm yet to see an argument that proves the non-reductive thesis - though I probably just haven't read enough.

    What I have seen is this:
    1) Convincing arguments that consciousness might be more than the sum of its parts. (Note: not arguments that it is.)
    2) Lots of people saying in various ways that they cannot conceive of how a reductive explanation could explain consciousness.
    3) #2 being used as a logical leap to conclude that consciousness definitely is non-reductive.

    Some take #2 to conclude that consciousness isn't even physical in the traditional sense. Others accept that everything is still physical in nature, but instead suggest in one way or another that our science is incomplete - that we need non-reductive ways of theorising about things. Those discussions usually then trail off into meaninglessness - they eliminate the rationalisation mechanisms in science to understand how first-principles lead to bigger things, the particular-to-holistic process that you mentioned. And so the arguments conclude self-gratifyingly that consciousness cannot be explained mechanistically. The non-reductivity thesis creates the explanatory gap, by refusing to accept explanations.

    My approach is to eschew the debates and to just provide such an explanation. I've started a discussion in https://thephilosophyforum.com/discussion/15091/the-meta-management-theory-of-consciousness if you're interested. There I've provided details of just such an explanation. The ellipsis to which you refer.
  • Wayfarer
    22.6k
    Those discussions usually then trail off into meaninglessnessMalcolm Lett

    The problem I see with reductive materialism is really pretty simple. It is that the scientific approach that it assumes is defined entirely in terms of objectivity. It is what I describe as 'objective consciousness'. It is, of course, fantastically successful in an objective sense, but not necessarily in an existential sense. There is a vast scope of issues which are amenable to objective analysis, but the problems of philosophy, which are essentially existential in nature, may not be among them.

    This goes back to the founding paradigm of modernity, which is Galilean objectivity and the universal reach of physical laws, combined with Cartesian geometry. That forms the basic paradigm of the materialism you're advocating. But as I explained elsewhere, it is analogous to a two-dimensional description of a three-dimensional shape, in that there is a dimension missing. By assigning reality to what is objectively material, the role of the perceiving subject, which synthesises and combines the information about the objective to generate what we understand as 'reality', is omitted or overlooked. But then, as the only criteria that are deemed acceptable are objective in nature, there is no way to demonstrate what, exactly, has been omitted or left out, which is a hard problem.
  • bert1
    2k
    On the one hand, given that the brain is itself, it should have no trouble knowing itself. In practice, there are a number of problems with that notion.Malcolm Lett

    If consciousness were a brain process, then I would agree with you: the brain knowing itself would be riddled with opportunities for mistakes, illusions, etc. I'm just pretty sure consciousness is not a brain activity.
  • Gnomon
    3.8k
    I'm yet to see an argument that proves the non-reductive thesis - though I probably just haven't read enough.Malcolm Lett
    After centuries of debates on the provenance of Consciousness, I doubt that you will find a slam-dunk argument either way. In most such discussions, the debater tends to end up at his own starting point. Materialism begins from the assumption that Matter is all there is, hence Mind must be a kind of matter. Idealism assumes that Mind is all that exists, so Matter must be a form of Mind. But my non-authoritative hypothesis, as an amateur philosopher, is that both Mind and Matter are forms of primordial Energy/Information (the power to transform). In other words, Consciousness is caused by Causation, not Substance.

    What you call "Non-Reductive", I call "Holism" or "Systems Thinking". And your linked thread has a diagram showing a Feedback Loop, which is a major factor in the operation of multi-part Systems. Self-recursive flows of Information/Energy are the key to the novel features & functions of a complex System: features that emerge from the inter-operation of parts which do not individually possess that never-before-seen characteristic. A common example is Water, an inter-operative system of atomic oxygen & hydrogen, neither of which displays the molecular properties of fluidity and wetness. But, working together, those atoms undergo Phase Transitions (transformations) from Gas to Liquid to Solid, due to energy inputs & outputs.

    If you are interested in reading more along the lines of non-reductive Holism, I'll suggest two ground-breaking books : A & B below. They will not prove anything empirically, but then Mind is not an empirical topic, it's a philosophical subject. :smile:

    Note --- "Emergence" is a dirty word for Reductionist thinkers. They seem to think it means "magic". But it simply refers to physical transformation, such as a new species, with different physical & behavioral features, stemming from the lineage of an older species.

    A. Holism and Evolution (1925), by naturalist Jan Smuts, is mostly about how Life (a novel property) emerged from eons of evolutionary transformations of Matter. His technical term "holism" was quickly adopted by New Agers, so a different term, "Systems Theory", was coined by scientists, to avoid the "woo" factor of meditating hippies.
    B. I Am a Strange Loop (2007), by Douglas Hofstadter -- cognitive & computer scientist -- is about how feedback loops (self-reference) in a dynamic cyclic structure may eventually produce the novel quality of Self-consciousness. It's a strange, but compelling and multi-disciplinary, exploration of Mind/Matter, by the author of the profound but bizarre Goedel, Escher, Bach. Your own term, "Meta-Management", may be an unintentional reference to a feedback loop.


    Embracing Systems Thinking :
    A Holistic Approach to Problem Solving
    https://www.linkedin.com/pulse/embracing-systems-thinking-holistic-approach-problem-solving-brewton/

    I Am a Strange Loop :
    One of our greatest philosophers and scientists of the mind asks, where does the self come from — and how our selves can exist in the minds of others. Can thought arise out of matter?
    https://valsec.barnesandnoble.com/w/i-am-a-strange-loop-douglas-r-hofstadter/1100299015?ean=9780465030798

    EVOLUTIONARY EMERGENCE due to sequential trans-form-actions :
  • Patterner
    1k
    As I understand it, the non-reductive thesis about something, paraphrased as "more than the sum of its parts", says that something cannot be entirely explained by its parts and their interactions because it has some additional qualities that are not explained by those parts and/or their interactions. Thus, consciousness being an example of such a thing, consciousness cannot be explained via the existing reductive methods of science.

    I'm yet to see an argument that proves the non-reductive thesis - though I probably just haven't read enough.
    Malcolm Lett
    I try to find analogies. If I saw a skyscraper made entirely of liquid H2O, I'd know something was up. The properties of liquid H2O cannot explain a skyscraper. I know something else is at work.

    Consciousness is a very different situation. H2O and skyscrapers both have physical properties, with no suggestion of non-physical properties that present a mystery and need explanation. Even processes like flight, metabolism, and vision can be seen to come from purely physical foundations. Subjective experience cannot. The properties of matter that we know of, and have measured to an amazing degree, do not suggest subjective experience.

    The argument for reductionism I hear most often is: just because we haven't figured it out with our sciences yet doesn't mean we won't. My opinion is that the fact that we haven't should not be considered evidence that we will. Nor is it evidence that the things we are aware of because of our sciences are the only things that exist, or the only things involved. The different nature of subjective experience, on the other hand, suggests something different is involved.
  • Wayfarer
    22.6k
    The argument that 'the whole is more than the sum of its parts' goes back to Greek philosophy, Aristotle in particular. He observes that living organisms embody a principle of self-organisation which is lacking in non-living things. The pattern of organisation that artifacts exhibit is due to an external cause, namely, the design of the manufacturer. But in living organisms, that pattern is intrinsic to it, and cannot be explained with reference to any particular organ, or to its constitutive elements. I think that is still an issue that is recognised in current biology, as I understand it, and it tends to undercut the reductionist idea that a whole is nothing but the addition of its parts. It is constituted by its parts, and by the organising principle that orders it.
  • Count Timothy von Icarus
    2.8k


    I'm yet to see an argument that proves the non-reductive thesis - though I probably just haven't read enough.Malcolm Lett

    I don't think there is anything like proof for either case. However, I do think there are very strong arguments for not assuming the reductionist view is true until decisively proven otherwise. For the following reasons:

    Smallism is the idea that "facts about large things are reducible to facts about smaller parts." Wholes are defined by their parts, rather than vice versa. Whatever is "fundamental" in the universe must exist on the smallest scales. It preferences "bottom-up" explanations over "top-down" ones.

    Certainly, smallism has its appeal and some empirical support. A common way we are able to understand things better is by breaking complex things down into constituent parts.

    However, there is no prima facie reason that smallsim or reductionism should be a preferred "default" in the sciences or metaphysics. "Bigism," the preferencing of the universal and "top-down" arguments, parts being defined in terms of wholes, is just as supportable.

    Further, the empirical support and track record of reductionism is simply not that strong. Chemistry is a mature field and quantum mechanics has been around for a century now. Yet molecular structure has not been reduced to physics, and there are arguments that it will never be. 1 Indeed, even within the realm of physics there are ongoing debates regarding the nature of apparent emergence in quantum level phenomena.

    The waters are further muddied here because exactly how to define "scientific reductions" is an area of much debate. Additionally, scientific unifications (the explanation of disparate phenomena in terms of more general principles) are often misunderstood as reductions. Unifications though, would tend to support a sort of "bigism," and there have been many of these.

    The whole idea of fundamentality adds another wrinkle. For example, in quantum field theory the fields that fill the entire universe are more fundamental than particles - the whole more essential than the part. Indeed, the Italian physicist G.M. D'Ariano likens "particles" to the shadows on the walls of Plato's cave, claiming that fields and relational information hold a higher ontological ground. 2 There are good arguments that computation isn't decomposable in the way assumed by smallism either ("more is different"), and there is a lot of support for pancomputationalism in the physics community. 3 If the pancomputationalist view is correct in certain major respects, it would seem that smallism is simply a bad presupposition, a useful view for understanding some sorts of problems, but flawed as metaphysical doctrine. At the very least, this would seem to caution against common views that seem to assume reductionism and smallism are true until decisively proven otherwise.

    Neuroscience tends to be very bottom up, particularly because we lack good top-down theories for major phenomena like consciousness. Physics tends to have a lot of top-down explanations. Although I am not aware of any polling on this, I would not be surprised to learn that reductionism is more popular in the special sciences, and among non-scientists, than with physical scientists themselves.


    I would add that Jaegwon Kim's arguments against the possibility of strong emergence, given current reductionist accounts of physicalism, make it extremely difficult for anything like strong emergence to exist. But, assuming we are conscious, and assuming panpsychism isn't true, I would take this to suggest that even if something like smallism is true, it will nonetheless require some sort of major paradigm shift that allows for some sort of "emergence-like" phenomena to occur to resolve this impasse. That is, something like what Einstein did for physics, reshaping our fundamental conceptions instead of trying to make the world fit into them.
  • bert1
    2k
    I would take this to suggest that even if something like smallism is true, it will nonetheless require some sort of major paradigm shift that allows for some sort of "emergence-like" phenomena to occur to resolve this impasse.Count Timothy von Icarus

    Smallism I think is probably false for this reason, and others.