Okay, well Dennett's view is that we don't need to understand the hard problem, i.e. it's not a separate problem that will remain once all the easy problems are solved, but rather a conceptual problem arising from ignorance. — Kenosha Kid
But you know that is a stance he (and you) are taking on this, not necessarily the case, right? I mean it isn't a foregone conclusion that there is not a hard problem. — schopenhauer1
But my main point, further, is that Dennett certainly isn't even coming close to answering it by criticizing certain theories about the physical mechanisms and the "illusory" aspects of their subjective equivalents, as reported by individuals. — schopenhauer1
If there's no question to answer, it would be odd to answer it. — Kenosha Kid
Why the hard question is seldom asked
One explanation for the neglect of the hard question is that science in this area proceeds from the peripheries towards the interior, analysing the operation of transducers and following their effects inwards. Start with the low hanging fruit; it is a matter of proximity, non-invasiveness and more reliable manipulability—we can measure and control the stimulation of the peripheral transducers with great precision. Research on the efferent periphery, the innervation of muscles and the organization of higher-level neural motor structures, can be done, but is more difficult for a related reason, which has more general implications: controlled experiments are designed to isolate, to the extent possible, one or a few of the variable sources of input to the phenomenon—clamping the system, in short—and then measuring the dependent variables of output. Accomplishing this requires either invasive techniques (e.g. stimulation in vivo of motor areas) or indirect manipulation of subjects' motivation (e.g. ‘press button A when you hear the tone; press button B when you hear the click; try not to move otherwise’). In the latter case, researchers just assume, plausibly, that conscious subjects will understand the briefing, and be motivated to cooperate, and avoid interfering activities, mental or skeletal, with the result that they will assist the researcher in setting up a transient virtual machine in the cortex that restricts input to their motor systems to quite specific commands.
Similarly, working on the afferent side of the mountain, researchers brief subjects to attend to specific aspects of their sensory manifold, and to perform readily understood simple tasks (usually, as quickly as possible), with many repetitions and variations, all counterbalanced and timed. The result, on both the afferent and efferent fronts, is that subjects are systematically constrained—for the sake of science—to a tiny subset of the things they can do with their consciousness. Contrast this with non-scientific investigation of consciousness: ‘A penny for your thoughts’, ‘What are you looking at now?’, ‘What's up?’
This is all obvious, but it has a non-obvious side effect on the science of consciousness: it deflects attention from what is perhaps the most wonderful feature of human consciousness: the general answer to the hard question, ‘And then what happens?’ is ‘Almost anything can happen!’ Our conscious minds are amazingly free-wheeling, open-ended, protean, untrammelled, unconstrained, variable, unpredictable, … . Omni-representational. Not only can we think about anything that can occur to us, and not only can almost anything (anything ‘imaginable’, anything ‘conceivable’) occur to us, but once something has occurred to us, we can respond to it in an apparently unlimited variety of ways, and then respond to those responses in another Vast [11, p. 109] variety of ways, and so forth, an expanding set of possibilities that outruns even the productivity of natural languages (words fail me). Of course, on any particular occasion, the momentary states of the various component neural systems constrain the ‘adjacent possible’ [12] to a limited selection of ‘nearby’ contents, but this changes from moment to moment, and is not directly in anybody's control. It is this background of omnipotentiality that we take for granted, and cordon off accordingly in our experimental explorations. It is worth noting that we have scant reason to think that simpler nervous systems have a similar productivity. Most are likely to be ‘cognitively closed’ [13] systems, lacking the representational wherewithal to imagine a century or a continent, or poetry, or democracy, … or God. The famous four Fs (feed, fight, flee and mate) may, with a few supplements (e.g. explore, sleep) and minor suboptions, exhaust the degrees of freedom of invertebrates.
Probably even our closest relatives, the chimpanzees and bonobos, have severely constricted repertoires of representation, compared with us. Here is a simple example: close your eyes right now and imagine, in whatever detail you like, putting a plastic wastebasket over your head and climbing hand-over-hand up a stout rope. Easy? Not difficult, even if you are not strong and agile enough—most of us are not—to actually perform the stunt yourself. Could a chimpanzee engage in the same ‘imagining’ or ‘mental time travel’ or ‘daydreaming’? I chose the action and the furnishings to be items deeply familiar to a captive chimp; there is no doubt such a chimp could recognize, manipulate, play with the basket, and swing or climb up the rope, but does its mind have the sort of self-manipulability to put together these familiar elements in novel ways? Maybe, but maybe not. The abilities of clever animals—primates, corvids, cephalopods, cetaceans—to come up with inventive solutions to problems have been vigorously studied recently (e.g. [14–16]), and this research sometimes suggests that they are capable of trying out their solutions ‘off line’ in their imaginations before venturing them in the cruel world, but we should not jump to the conclusion that their combinatorial freedom is as wide open as ours. For every ‘romantic’ finding, there are ‘killjoy’ findings [17] in which clever species prove to be (apparently) quite stupid in the face of not so difficult challenges.
One of the recurrent difficulties of research in this area is that in order to conduct proper, controlled scientific experiments, the researchers typically have to impose severe restrictions on their animals' freedom of movement and exploration, and also submit them to regimes of training that may involve hundreds or even thousands of repetitions in order to ensure that they attend to the right stimuli at the right time and are motivated to respond in the right manner (the manner intended by the researcher). Human subjects, by contrast, can be uniformly briefed (in a language they all understand) and given a few practice trials, and then be reliably motivated to perform as requested for quite long periods of time [18]. The tasks are as simple as possible, in order to be accurately measured, and the interference of ‘mind-wandering’ can be minimized by suitable motivations, intervals of relaxation, etc.
The effect, in both speaking human subjects and languageless animal subjects, is to minimize the degrees of freedom that are being exploited by the subjects, in order to get clean data. So, huge differences in the available degrees of freedom are systematically screened off, neither measured nor investigated.
This explains the relative paucity of empirical research on language production in contrast with language perception, on speaking in contrast with perceiving, parsing, comprehending. What are the inputs to a controlled experiment on speaking? It is easy to induce subjects to read passages aloud, of course, or answer ‘Yes’ and ‘No’ to questions displayed, but if the experimenter were to pose a task along the lines of ‘tell me something of interest about your life’ or ‘what do you think of Thai cuisine?’ or ‘say something funny’, the channel of possible responses is hopelessly broad for experimental purposes.
Amir et al. [19] attempted to find an fMRI signature for humour in an experiment that showed subjects simple ‘Droodle’ drawings [20–22] that could be simply described or given amusing interpretations (figure 1).
— Facing up to the hard question of consciousness, Daniel C. Dennett, published 30 July 2018
That's two different conversations. I would like to know Dennett's straight-ahead answer to it. — schopenhauer1
This seems akin to building a house. Someone comes along and says, 'Hey, nice foundations, but when are you going to build a house?', then later, 'Hey, nice walls, but when are you going to build a house?' Then up goes the roof and voilà, a house. — Kenosha Kid
Non-reductive physicalism is pretty standard in philosophy of mind. Is that what you're describing here? — frank
For instance, seeing a car as a car rather than some generic smudge of colour in a background of smudges of colour is an important aspect of the disputed qualia of 'this car'. As Isaac described, we already know much about how the brain recognises objects, so the hard aspect of this is pushed back to purely the subjective appraisal of the quale and not the derivation of any of its properties: a hard problem of the gaps. Likewise other shapes, colour, orientation, distance, name, and everything else that makes up the contents of our subjective experiences. What we're left with is a question of how a particular part of the brain does one particular thing, out of all the almost countless other things the brain is doing to construct our subjective experience that are becoming clear. — Kenosha Kid
That's how I'm describing Dennett's position, — Kenosha Kid
At the moment, we don't fully understand how the brain works... our understanding has no roof, maybe some missing walls so to speak, and that's used by mystics as an excuse to separate out the hard problem and insist it's not being answered. — Kenosha Kid
a hard problem of the gaps — Kenosha Kid
That word, "irreducible", has a very particular connotation for a lot of us, and it's not a nice one. — Srap Tasmaner
However, the easier questions aren't even approaching the answer, so how can it "close off" the hard problem when it never ventured into the realm of answering it? — schopenhauer1
Now YOU have to be charitable enough to realize that hard questioners AREN'T denying the science of the findings of cognitive neuroscience. — schopenhauer1
It's ignoring it and then pointing to some other line of thought. — schopenhauer1
you are in trouble — schopenhauer1
No, that's not a theory. That's a hypothesis, a postulate, a proposal. — Philosophim
Let's clarify then. First, a "convincing argument" means a rational argument concluded with deduction. Deductions must then be applied and tested against reality to ensure we had the entire picture, and that the deduction holds when faced with other people, or use in reality. — Philosophim
For example, we could deduce in physics that if force Y is applied to object X along a given vector, it will accelerate at rate Z. So we go outside, we do that, but it doesn't work. We think about it for a moment and we realize we didn't take into account the wind. So we go indoors, where there is no wind, and it turns out our deduction works. We just forgot to take wind into account as a factor. — Philosophim
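The wind example above can be sketched as a toy calculation. This is a minimal illustration of the point, not anything from the discussion itself: the function names, numbers, and the specific wind force are all made up for the sake of the example.

```python
# Toy illustration of Philosophim's point: a deduction from Newton's second
# law (a = F / m) only matches observation once confounders like wind are
# accounted for. All values here are illustrative.

def predicted_acceleration(force_n: float, mass_kg: float) -> float:
    """The deduced acceleration from Newton's second law, a = F / m."""
    return force_n / mass_kg

def observed_acceleration(force_n: float, mass_kg: float,
                          wind_force_n: float) -> float:
    """What we actually measure: an unmodelled wind force adds to F."""
    return (force_n + wind_force_n) / mass_kg

# Outdoors, a 2 N headwind makes the deduction appear to fail...
assert predicted_acceleration(10.0, 2.0) != observed_acceleration(10.0, 2.0, -2.0)
# ...but indoors, with no wind, the same deduction holds exactly.
assert predicted_acceleration(10.0, 2.0) == observed_acceleration(10.0, 2.0, 0.0)
```

The deduction itself was never wrong; the test against reality simply revealed a factor the model had omitted, which is the sense in which claims about reality must be tested against reality.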
If you make a claim about reality, you must test it against reality. — Philosophim
We have not discovered any application of "deduction or rational argument" that consciousness exists apart from the brain. — Philosophim
Finally, I am not a logical positivist. I am not accusing you of holding any particular philosophy, — Philosophim
But that again is merely your insistence that the hard problem is separable and distinct. You're not demonstrating that Dennett isn't answering the question; you're disputing the grounds on which he answers it, just as he disputes the grounds on which you ask it. — Kenosha Kid
Now, yes. But the answers that cognitive neuroscience yields were once thought to be inseparable aspects of that hard problem. Now they're not, hence: hard problem of the gaps. — Kenosha Kid
It's not a distinct question, so it's not some unrelated line of thought either. It's what people who are actually interested in the phenomenon are doing while people who are interested in their own belief systems wet themselves. — Kenosha Kid
No I'm not. Human beings are made of the same stuff as other animals and the medium-sized dry goods in our environment — Srap Tasmaner
we are the sort of animals we are because of exactly the same processes of evolution that result in other animals being the way they are. — Srap Tasmaner
And when we're not unconscious, we're conscious. — Srap Tasmaner
That fact doesn't trouble me in the least. Why on earth should it? It's exactly as interesting as the rest of natural science, but it's not shocking or troubling in some way. I honestly have no idea why people think it is. — Srap Tasmaner
Not recognizing the legitimacy of the other side. — schopenhauer1
Again, you have to at least recognize that "hard problemers" are recognizing this too. — schopenhauer1
The trouble is it depends on a dualist—and ultimately unworkable—theory of consciousness. The underlying intuition is that consciousness is an added extra—something additional to and different from the physical processes on which it depends. Searching for the NCCs relies on this difference. On one side of the correlation you measure neural processes using EEG, fMRI or other kinds of brain scan; on the other you measure subjective experiences or 'consciousness itself'. — Blackmore
The sort of viewpoint I gather you're espousing is that, no, these will always be interpreted as merely correlates of the thing, but never the thing itself, god forbid. So while all of the content of an experience might be accounted for by neurological correlates, and the start of an experience might be preceded by neurological correlates, these correlates cannot constitute the having of an experience itself; they can only be little helpers. — Kenosha Kid
In other words, hard problemers have it back to front. Dennett agrees with the above: there's no separable hard problem to answer. NCCs aren't correlates but the thing itself, not individually but as a messy whole. The likes of Strawson misrepresent this as a claim that 'consciousness does not exist', but in fact it's an affirmative claim that consciousness is real, not an added sprinkle of magic on top of real stuff. — Kenosha Kid
Hard problemers wouldn't even discount that the neurological correlate is the thing itself. Rather, the question would be why this metaphysical state of affairs exists, in which the neurological underpinnings are experiential. — schopenhauer1
Yep, it "causes" experience. Not debated. How it is metaphysically the same as experience is the question. — schopenhauer1
That's not a definition of the hard problem I have heard of before. The formulation I've always come across is the one that might admit correlates of consciousness in neurology, but never consciousness itself. — Kenosha Kid
Neurology is a physical discipline. It is not its job to satisfy metaphysicists any more than it's its job to satisfy creationists or dualists. — Kenosha Kid
If you're in principle satisfied that the science can isolate what consciousness is, not just correlates (including causal) of consciousness, but want a deeper understanding of why a thing that is something is that thing, which is not a question specific to consciousness at all, you ought to look to other metaphysicists, surely? Is there a specific aspect to consciousness that makes this special? — Kenosha Kid
It fixes the conceptual problem at issue. Hacker makes a concrete proposal that doesn't assume dualism. — Andrew M
Irreducible Consciousness. — Srap Tasmaner
NCCs aren't correlates but the thing itself, not individually but as a messy whole. — Kenosha Kid
'Irreducible Mind' — Wayfarer
True, but Dennett is a philosopher, to be fair, and not a strict neuroscientist. It would not be out of the realm of possibility for other philosophers to engage him on these kinds of (philosophical) questions. And while I recognize this might be a legitimate neuroscience question, it is also a legitimate philosophical question. — schopenhauer1
It's when a philosopher handwaves it and then narrowly focuses on the correlates, when clearly the question is not about the mechanisms by which the correlates integrate but about how it is that this correlation exists in the first place, that the question gets continually ignored and people talk past each other. — schopenhauer1
So how could the idea itself be identified with anything physical, when the physical representation is arbitrary? You could invent a whole arbitrary system of symbols, but if it followed the rules, it would be valid even if nobody else understood it. And those rules are real, but I can't see how they're physical in nature. — Wayfarer
A brain is tangible (to a consciousness); a consciousness is not tangible (to any consciousness).
Therein lies a, or maybe the, pivotal ontological difference—even when eschewing the issue of whether a consciousness can hold non-epiphenomenal, hence top-down, effects upon its own substratum of brain. — javra
Tangentially, I’ll add that this thread's persistent reference to brains overlooks the fact that even amoebas hold an awareness of other: such as in an amoeba’s capacity to discern what is, relative to itself, a predator from what is prey. — javra
Nonetheless, the physical brain and all it does will forever be tangible percepts which we perceive as other relative to us as the consciously aware observers. — javra
If, simplistically put, a living brain is identical to a consciousness, they then should both be either tangible or, else, intangible. But they hold different ontological properties in this respect; they are not identical. — javra
I’m sometimes accused of ducking questions; obviously I’m not alone. — Wayfarer
Well, you could see this thread for an example of taking the idea further: even electrons have awareness of each other. As an intermediary point: even trees are aware of one another. The point befits the fact that human consciousness is a sophisticated kind of mammalian consciousness, which is a sophisticated kind of animal consciousness, which is a sophisticated kind of biological reactivity, which is a sophisticated set of chemical reactions, which are sophisticated sets of electromagnetic particle interactions. — Kenosha Kid
If I'm reading you right, you're talking about the third-person/first-person barrier. That is true. If you want to know what consciousness is, that is a third-person question. — Kenosha Kid
Likewise an explanation for consciousness doesn't need to feel like consciousness. — Kenosha Kid
There's a difference between substance and function. There is a difference, for instance, in an electron and the movement of an electron. There is a difference between a computer and an executing program. You can't just look at the object, you have to look at what it does if you want to explain e.g. electric current, a machine learning algorithm, or consciousness. — Kenosha Kid