## Flaw in Searle's Chinese Room Argument

• 599
Half-way reductionism. The argument picks an arbitrary level of abstraction. If you're gonna reduce it, reduce it all the way down. So computation is not just manipulation of symbols; computation is just interaction between electromagnetic fields, like everything else.

But, computation can give rise to virtual realities, and so while other physicalist theories meet their explanatory end at the bottom of reductionism, the theory of virtual consciousness stands at the door to a realm of increasing complexities and almost unlimited possibilities.

We are virtual, people, I’m telling you.
We are not only relatives of monkeys, we are also relatives of Pac-Man!!
• 1.6k
But, computation can give rise to virtual realities

What does that even mean? Computation does nothing but flip bits. If a conscious observer interprets the bit flipping as a cat video, that's the contribution of consciousness. By itself, all the computer does is flip bits. In fact, to have virtual reality you have to write a program that takes in a string of bits and lights up a display screen at frequencies amenable to the human eye, in patterns the human brain can interpret and the human mind can experience. Humans write the programs that create the virtual realities out of meaningless bit patterns.

Mere bit flipping by itself does none of that. It just flips bits according to rules. If this one's on, turn that one off. Rule-based bit flipping.
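To make that concrete, here's a minimal sketch (the rules are hypothetical, chosen only to illustrate): the machine below does nothing but apply "if this one's on, turn that one off" rules to a bit array. Any meaning in the result is supplied by whoever reads the output, not by the machine.

```python
# Rule-based bit flipping: each rule (i, j) says "if bit i is on, turn bit j off".
# The machine applies the rules mechanically; the bits mean nothing to it.

def flip(bits, rules):
    """Apply every (i, j) rule to a copy of bits: if bits[i] is 1, set out[j] to 0."""
    out = list(bits)
    for i, j in rules:
        if bits[i] == 1:
            out[j] = 0
    return out

state = [1, 0, 1, 1]
rules = [(0, 3), (2, 1)]      # purely syntactic rules, no semantics
print(flip(state, rules))     # -> [1, 0, 1, 0]
```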
• 8.9k
Virtual reality is existentially dependent upon reality... which is not 'virtual'.

Meh.
• 8.9k
Mere bit flipping by itself does none of that. It just flips bits according to rules. If this one's on turn that one off. Rule-based bit flipping.

Othello for computers.

:wink:
• 822
An associated question: What if the computer tells you it is aware of itself and not simply aware to the extent it can answer questions? What would be your test for self-awareness?
• 1.6k
Othello for computers.

I'm afraid I don't understand the remark.

An associated question: What if the computer tells you it is aware of itself and not simply aware to the extent it can answer questions? What would be your test for self-awareness?

print("Hey I'm sentient in here. Send pr0n and LOLCats!")

• 10.1k
This OP is entirely nonsensical - it doesn't convey anything about the original argument, nor any insight into what might be wrong with the original argument.
• 599
But, computation can give rise to virtual realities
— Zelebg

What does that even mean?

Virtual reality is a simulated experience that can be similar to or completely different from the real world.

Humans write the programs to create the virtual realities out of meaningless bit patterns.

The virtual reality program I’m talking about was also created by humans, only unbeknownst to them and against their will, by their own brains.

Computation does nothing but flip bits.

Atoms do nothing but follow simple laws of physics, and yet here we are.

With emergent properties there are different levels of abstraction. Each one tells its own story in its own context, so a general description of interactions at one level of abstraction does not explain interactions or causation at other levels.

Imagine an actual and a simulated ball and switch. When either the actual ball falls on the actual switch or the virtual ball falls on the virtual switch, an actual light-bulb turns on. We have to look at the abstraction level of effective causation, and here it is not about flipping bits; it’s about two virtual entities interacting.

Virtual ball and virtual switch. The difference between these two virtual entities and their actual counterparts is superficial, but virtual entities are not only material like actual ones; they are better, because they are also "immaterial" in a way only virtual entities can be.
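The ball-and-switch setup can be sketched in a few lines (all names and numbers here are hypothetical, just to make the thought experiment concrete): a simulated ball falls under simulated gravity, and when it reaches the simulated switch an actual side effect fires, with a print standing in for the light-bulb.

```python
# A virtual ball falling onto a virtual switch. When the virtual switch
# trips, an actual effect occurs (here a print stands in for the bulb).

def run_simulation(height, switch_height=0.0, dt=1.0, g=1.0):
    """Simulate a ball falling from `height`; return True when it hits the switch."""
    velocity = 0.0
    while height > switch_height:
        velocity += g * dt       # simulated gravity
        height -= velocity * dt  # simulated fall
    return True                  # virtual switch closed

if run_simulation(height=10.0):
    print("actual light-bulb: ON")  # real-world effect of a virtual event
```

At the level of bits, nothing but flipping happened; at the level of effective causation, a (virtual) ball closed a (virtual) switch and a real output was produced.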
• 599
This OP is entirely nonsensical - it doesn't convey anything about the original argument, nor any insight into what might be wrong with the original argument.

Uhh. Searle postulates that computation is just symbol manipulation. I simply explain why that very first step is flawed: it’s arbitrary. See the post above for more details.
• 10.1k
The symbolic order can’t be reduced to, or explained in terms of, physical laws.
• 1.6k
Virtual reality is a simulated experience that can be similar to or completely different from the real world.

Who or what is it that's having the experience?

I don't doubt that VR can be fully immersive, either now or in the near future. But who's having the experience? Descartes thought through this question in 1641. He said, What if everything I see and experience is nothing more than an illusion created for me by an evil daemon? Even in that case, there is still an "I" that is experiencing the illusion. Your analysis doesn't begin to touch the question of who or what the experiencer is.

Here is Descartes's direct quote. Note that he's anticipated your idea by almost 400 years.

But there is a deceiver of supreme power and cunning who is deliberately and constantly deceiving me. In that case I too undoubtedly exist, if he is deceiving me; and let him deceive me as much as he can, he will never bring it about that I am nothing so long as I think that I am something. So after considering everything very thoroughly, I must finally conclude that this proposition, I am, I exist, is necessarily true whenever it is put forward by me or conceived in my mind.

https://www.shmoop.com/study-guides/literary-critics/rene-descartes/quotes

A deceiver, be it a 17th century daemon or a 21st century VR program, can create many realistic sense impressions for you to experience. But you are always the one who experiences. You always exist separately from any illusion you might be experiencing.
• 10.1k
‘The Chinese room argument holds that a digital computer executing a program cannot be shown to have a "mind", "understanding" or "consciousness", regardless of how intelligently or human-like the program may make the computer behave.’

And speaking of Descartes, he anticipates such arguments:

if there were machines that resembled our bodies and if they imitated our actions as much as is morally possible, we would always have two very certain means for recognizing that, none the less, they are not genuinely human. The first is that they would never be able to use speech, or other signs composed by themselves, as we do to express our thoughts to others. For one could easily conceive of a machine that is made in such a way that it utters words, and even that it would utter some words in response to physical actions that cause a change in its organs - for example, if someone touched it in a particular place, it would ask what one wishes to say to it, or if it were touched somewhere else, it would cry out that it was being hurt, and so on. But it could not arrange words in different ways to reply to the meaning of everything that is said in its presence, as even the most unintelligent human beings can do. The second means is that, even if they did many things as well as or, possibly, better than anyone of us, they would infallibly fail in others. Thus one would discover that they did not act on the basis of knowledge, but merely as a result of the disposition of their organs. For whereas reason is a universal instrument that can be used in all kinds of situations, these organs need a specific disposition for every particular action.
— Rene Descartes
• 1.6k
And speaking of Descartes, he anticipates such arguments:

Great quote. Descartes anticipated the Turing test. And with "... even if they did many things as well as or, possibly, better than anyone of us, they would infallibly fail in others," he's making the distinction between weak and strong AI. Single purpose versus general purpose intelligence. Smart guy that Descartes.
• 10.1k
Yes, I've always been an admirer, notwithstanding the many problems associated with ‘the Cartesian model’. But he was a genius nonetheless.
• 599
What if the computer tells you it is aware of itself and not simply aware to the extent it can answer questions? What would be your test for self-awareness?

If the program could spontaneously, without any learning or explicit programming of any philosophy- or sentience-related data, arrive at some thought along the lines of “I think, therefore I know I exist”, or a question like “Is your red the same as my red?”

Basically, observing its curiosity towards its own existence and functionality should be more revealing than anything else I can think of.
• 599
Here is Descartes's direct quote. Note that he's anticipated your idea by almost 400 years.

He’s talking about simulation in the context of epistemological uncertainty. I’m talking about simulation in the context of ‘virtual entities’ to address “explanatory gap”. https://en.m.wikipedia.org/wiki/Explanatory_gap

Who or what is it that's having the experience?

A virtual machine, a kind of program, called ego, self, or consciousness. The machine called subconsciousness hosts or emulates a virtual machine called consciousness inside itself, so for the consciousness it is practically existing in a simulation created by its “master-machine”, which can feed it sensations, emotions, and cognition signals by direct transfer, by faked signal, or by modulated signal, or it can invent whatever new context with signal-meaning pairs, like colors or sounds, and present it to the “slave-machine” as actual reality.
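A toy sketch of that host/guest relationship (all names are hypothetical, purely illustrative): the "guest" function only ever receives what the host chooses to feed it, so from the guest's point of view the host's signals simply are reality.

```python
# A host process feeding signals to a hosted ("guest") process. The guest
# has no access to anything except what the host hands it, so the host's
# signals are, for the guest, the whole of its reality.

def host(raw_input):
    # The host may pass a signal through, fake it, or modulate it.
    signal = raw_input * 2            # a "modulated" signal (toy example)
    return guest(signal)

def guest(signal):
    # The guest treats whatever it receives as actual reality.
    return f"experienced: {signal}"

print(host(21))   # the guest never sees the raw input, only the host's version
```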
• 366
I agree.

We are like machines.

I'm not saying we're toys, but we're petty life that progressively gets smarter, or dumber. A machine can age quickly, and has a greater half-life.

I believe that natural machines exist.

A natural machine can calculate who's focusing on me in their thoughts - an example of its type of math.

I think that "natural machines" collect data from the universe by using living things as a Turing Machine.

I don't see the point in making self-conscious or intelligent machines. I think that, beyond aesthetics, it's a waste of time.

It's probably more beneficial to make a good 'dumb-robot', such as one that expels fire and rolls really fast at a target it fixates on. It may become more effective than a human-like one wielding any weapon.
• 599
The Chinese room argument holds that a digital computer executing a program cannot be shown to have a "mind", "understanding" or "consciousness", regardless of how intelligently or human-like the program may make the computer behave.

It’s impossible to even begin with it, because you would first need to know what "mind", "understanding" or "consciousness" are. And as I said, the reduction of computation to “symbol manipulation” is an arbitrary point of view: incomplete, flawed and misleading.

And speaking of Descartes, he anticipates such arguments:

His conclusion is already shown to be false by the work done with AI. In any case, I haven’t made any arguments yet; they have almost nothing to do with what Descartes was musing about.
• 10.1k
His conclusion is already shown to be false by the work done with AI.

I'm no expert, but I worked at an AI startup for 3 months recently, organising their documentation. And Descartes' argument - made in 1630! - has direct bearing on the issues they're facing. I would try and explain it to you, but from past experience with you, any attempt will be met with 'but you don't understand English', so I'll pass.

I believe that natural machines exist.
— Qwex

I believe in Puff the Magic Dragon and anti-gravity.
• 366

You're entitled to that belief; however, I wouldn't say it applies to this thread.
• 591
signal-meaning pairs,

Whatever flaws you might ever turn up, the point is Searle caught cognitive scientists confusing semantics with syntax. Signal-meaning pairs, as you put it, with signal-signal pairs. Understanding, with signal-processing. Intentionality, with script-reading, or program following.

If that's what you are doing too, as I expect, you are in the respectable company of nearly everybody. It's a catastrophically tempting confusion.

The semantic ability that humans alone (so far) excel in, and start delighting in in infancy, is kind of like a game of word-fetch. Understanding how words are tossed into the world and predicting where they have landed. When AI robots can interact with the world with enough facility to find a ball in a garden, they might be in the position to start learning to sniff out meanings.

You seem to be hoping to by-pass that evolution and achieve results merely by suggestive labelling of the modules of an obviously non-conscious computer system. So Searle's argument clearly hasn't alerted you to any important difference between semantics and syntax. And, unfortunately, it isn't guaranteed to have that effect.
• 599
If that's what you are doing too, as I expect, you are in the respectable company of nearly everybody. It's a catastrophically tempting confusion.

I said Searle’s reduction of computation to “symbol manipulation” is an arbitrary point of view, incomplete and misleading. It’s like arguing chemistry is just stupid atoms following laws of physics, so they can not possibly give rise to things like biology, language or consciousness. Where is the confusion?
• 591
It’s like arguing chemistry is just stupid atoms following laws of physics, so they can not possibly give rise to things like biology, language or consciousness. Where is the confusion?

Oh, so right there. Searle doesn't say that symbol manipulation can not possibly give rise to consciousness. Only that it needs to at least produce meaning within the system (have a proper semantics). And showing symbol manipulation isn't showing a semantics.
• 599

[edit:fixed]
That’s worse. Then Searle's argument makes a wrong presupposition that it is an adequate model of how understanding works. It’s like first postulating little gremlins make the world go around, just to deny it - but there are no little gremlins, so the world stands still.
• 591
That’s worse. Then it makes a wrong presupposition

What does? My post or Searle's argument?

... that his example

What, the Chinese Room?

... is an adequate model of how understanding works.

So - not the Chinese Room?

Can you be a bit clearer?
• 599

Searle's argument.
• 10.1k
It’s like arguing chemistry is just stupid atoms following laws of physics, so they can not possibly give rise to things like biology, language or consciousness. Where is the confusion?

So you think that atoms do "give rise" to language and consciousness?

The problem with all your posts is that they contain many unstated premisses which, going on the evidence of what you do actually say, are quite contentious and problematical in themselves. It's as if you're having a conversation in your mind with an imaginary opponent, and then you post your side of the conversation on the forum. But nobody else here has been having that conversation with you, so we don't know what your argument is, or what you're opposing.

Your OP in this case was extremely sketchy, as if you assume not only that everyone reading understands Searle's Chinese Room argument, but also that they must be able to see what your objections to it are. But in this case even the second sentence of the OP appears to state something which I believe is contentious, so I will try and spell out exactly what.

There used to be a poster here who was an expert in biosemiotics 1, and he posted a link to a very good article on this subject, The Physics and Metaphysics of Biosemiosis by Howard Pattee, which I will quote from here.

The concept of Biosemiotics requires making a distinction between two categories, the material or physical world and the symbolic or semantic world. The problem is that there is no obvious way to connect the two categories. This is a classical philosophical problem on which there is no consensus even today. Biosemiotics recognizes that the philosophical matter-mind problem extends downward to the pattern recognition and control processes of the simplest living organisms where it can more easily be addressed as a scientific problem. In fact, how material structures serve as signals, instructions, and controls is inseparable from the problem of the origin and evolution of life. Biosemiotics was established as a necessary complement to the physical-chemical reductionist approach to life that cannot make this crucial categorical distinction necessary for describing semantic information. Matter as described by physics and chemistry has no intrinsic function or semantics. By contrast, biosemiotics recognizes that life begins with function and semantics.

Biosemiotics recognizes this matter-symbol problem at all levels of life from natural languages down to the DNA. Cartesian dualism was one classical attempt to address this problem, but while this ontological dualism makes a clear distinction between mind and matter, it consigns the relation between them to metaphysical obscurity. Largely because of our knowledge of the physical details of genetic control, symbol manipulation, and brain function these two categories today appear only as an epistemological necessity, but a necessity that still needs a coherent explanation. Even in the most detailed physical description of matter there is no hint of any function or meaning.

The problem also poses an apparent paradox: All signs, symbols, and codes, all languages including formal mathematics are embodied as material physical structures and therefore must obey all the inexorable laws of physics. At the same time, the symbol vehicles like the bases in DNA, voltages representing bits in a computer, the text on this page, and the neuron firings in the brain do not appear to be limited by, or clearly related to, the very laws they must obey. Even the mathematical symbols that express these inexorable physical laws seem to be entirely free of these same laws.

The article then goes on to discuss foundational issues in the philosophy of biology and the nature of living things, which are not relevant here. The point this passage makes is that the 'symbolic or semantic' order is not continuous with, or explicable in terms of, the laws which describe physical entities. In other words, you can't assume that life and mind can be understood through the perspective of the laws of physics and chemistry alone. There's something else at work which can't be explained in those terms.

But this actually undermines the kind of materialism which I think you're trying to argue for. I mean, it's natural nowadays to assume that science has a basic grasp of how physical processes give rise to life and mind - but that is what I'm questioning here. I don't think it's a clear-cut scientific matter at all; it's not as if science has the basic outline worked out and it's just a matter of filling in the gaps. There's something the matter with the basic assumption about the nature of life, mind, reality, and so on, and that something is 'materialism'.

------
1. Biosemiosis is a field of semiotics and biology that studies the prelinguistic meaning-making, or production and interpretation of signs and codes in the biological realm ~ Wikipedia.
• 599
So you think that atoms do "give rise" to language and consciousness?

Yes. Why do you ask, what do you think?

The problem with all your posts is that they contain many unstated premisses which, going on the evidence of what you do actually say, are quite contentious and problematical in themselves.

So you say, can you show? Pick one issue, quote me, and actually point out what you think is the problem.
• 10.1k
So you say, can you show? Pick one issue, quote me, and actually point out what you think is the problem.

I did that already. Look at your OP again. It's like, oh, a comment made in a philosophy tutorial when you're discussing Searle as part of a conversation. It kind of makes sense, but it's like a stream of thought rather than an argument.

Why do you ask, what do you think?

Read the passage I quoted and think about it. It relates to the topic of the OP.

Philosophy asks us to question the very things we generally take for granted. The things that 'everyone knows' are true, that we just take for granted as being obvious facts. Some of them might not turn out to be so obvious.
• 1.6k
He’s talking about simulation in the context of epistemological uncertainty. I’m talking about simulation in the context of ‘virtual entities’ to address “explanatory gap”.

You haven't explained the explanatory gap; you've only waved your hands at it.