And what are the consequences of dismantling qualia? What happens to our manifest image? Does pain no longer hurt? Does red no longer look red? The absurdity of this leads me to believe that the entire project to dismantle qualia is fundamentally flawed. — darthbarracuda
I feel and think and act in certain ways, and these ways are not always quantifiable; they are often qualitative — mcdoodle
But then, at the moment I am much more excited in opposing the 'causal closure of the physical', which people of a natural-scientific inclination seem to presume awfully easily. They can't provide me with a forecast-successful empirical model of the little spider that keeps crawling across my desk - but say that physics is causally closed - it beggars belief. — mcdoodle
Both the scientist's and the robot's blue have qualities that are unique to the blue phenomenon at their respective terminuses. The robot's blue isn't a quale, because we reserve that word for a conscious experience of blue, which we don't believe the robot has. There's no reason to believe at this point that conscious experience isn't a peculiarity of the exact matter, structures and processes of functioning brains. — Terrapin Station
But where is the "terminus"? — tom
We know, via computational universality, that consciousness *cannot* be a peculiarity of an exact state of matter. — tom
Also, depending on how you define consciousness, there exist conscious entities that don't possess qualia - e.g. all non-human animals. — tom
The fact remains that qualia are unpredictable and indescribable — tom
In my view there's no good reason to buy that there's an unconscious mind per se. I certainly buy that there are unconscious brain processes, but unconscious thoughts, concepts, desires, etc. are essentially an "other minds" problem, with the added complication that we can't even know (by acquaintance) the suggested phenomena from a first-person perspective. — Terrapin Station
If we hit someone over the head and knock them out, we say they are "unconscious", and to claim they are still conscious without any empirical evidence is either semantic hair-splitting or mystical metaphysical mumbo jumbo along the lines of saying everything is consciousness. — wuliheron
It seems like you just didn't read my response that closely. I said, "I certainly buy that there are unconscious brain processes." So yeah, the person who is knocked out is unconscious, and their brain is still active. What I disagree with is that there's any good reason to say that those unconscious brain processes amount to mentality, so that they're having thoughts etc. that they're simply not aware of. — Terrapin Station
I don't think we know any such thing, especially not via "computational universality." For one, that would surely rest on a mistaken ontology of mathematics. — Terrapin Station
Computational universality is a principle of physics. It has nothing to do with mathematics or its ontology. All known laws of physics respect this principle, and any future laws will too.
We know that a universal computer can be put in 1-to-1 correspondence with the human brain (or any other finite physical system). — tom
In my view there's no good reason to buy that there's an unconscious mind per se — Tom
The curious case of the robot and the scientist... — Tom
Why would the robot need to experience blue in order to know what blue represents? Why couldn't it represent a 475nm wavelength of EM energy with some other symbol in its memory and still "know" what the scientist "knows"?

The curious case of the robot and the scientist.
Consider a faulty scientist and a faulty robot. The scientist is an expert in light, but was born with a rare condition affecting her optic nerve, which makes it unable to transmit blue light signals. The robot has a loose wire, so it too is unable to transmit blue light signals from its camera. The scientist is fixed by a doctor, and the robot is fixed by an engineer.
So, what has changed? Both the robot and the scientist can now recognise blue and are able to use that recognition to perform certain tasks. Both the robot and scientist experience blue.
But only the scientist now *knows* what it is like to experience blue; the robot does not. There are also a couple of curious aspects of this experience that she notices: despite her extensive knowledge, she could not predict what the experience was going to be like, and she can't describe it either.
Only the scientist possesses the quale of blue.
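As an aside on the question above about symbols, here is a minimal, purely illustrative sketch (my own addition, with hypothetical names, not anyone's actual robot) of how a robot might tie a 475nm reading to a stored symbol and act on it, with nothing that looks like an experience of blue:

```python
# Purely illustrative: a robot that "recognises" blue as a stored symbol.
# All names here are hypothetical.

def classify_wavelength(nm: float) -> str:
    """Map a measured wavelength in nanometres to a colour symbol."""
    if 450 <= nm < 495:
        return "BLUE"
    if 495 <= nm < 570:
        return "GREEN"
    return "OTHER"

memory = {}                              # the robot's working memory
reading = 475.0                          # wavelength reported by the camera
memory["last_colour"] = classify_wavelength(reading)

if memory["last_colour"] == "BLUE":      # use the symbol to perform a task
    print("sorting object into the blue bin")
```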
It seems a bit easy just to deny qualia exist, rather than recognise there is a potentially deep philosophical problem to solve. — tom
I think it is anthropocentric to claim that all non-human animals don't possess consciousness to some degree. When we share nearly 99% of our genes with chimps, what is it about that 1% that prevents the chimp from having consciousness? How do you explain the fact that when a chimp sees another chimp staring in a particular direction, it looks in that direction too? This must mean that chimps can model other chimps' mental activity - that they know other chimps have access to information that they might not have until they look in the same direction.

But where is the "terminus"? Both robot and scientist are being affected by blue light - i.e. some atoms are being affected.
We know, via computational universality, that consciousness *cannot* be a peculiarity of an exact state of matter.
Also, depending on how you define consciousness, there exist conscious entities that don't possess qualia - e.g. all non-human animals.
The fact remains that qualia are unpredictable and indescribable - very odd indeed! — tom
The most recent evidence in physics is that the brain maximizes entropy — wuliheron
Computational universality is a principle of physics. It has nothing to do with mathematics, or its ontology. — tom
What would that even mean? (And physics research on brains?) Maybe if you'd link to the research you have in mind, it would make more sense than your attempted paraphrasing of it.
Anyway, nothing you said after that in any way implies (and it doesn't even seem to have the slightest bit to do with) the idea of mental content that we're not aware of. — Terrapin Station
The curious case of the robot and the scientist.
Consider a faulty scientist and a faulty robot. The scientist is an expert in light, but was born with a rare condition affecting her optic nerve, which makes it unable to transmit blue light signals. The robot has a loose wire, so it too is unable to transmit blue light signals from its camera. The scientist is fixed by a doctor, and the robot is fixed by an engineer.
So, what has changed? Both the robot and the scientist can now recognise blue and are able to use that recognition to perform certain tasks. Both the robot and scientist experience blue.
But only the scientist now *knows* what it is like to experience blue; the robot does not. — Tom