• debd
    42
    The Chinese room argument runs as follows:
    Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.
    Now consider the room to be our brain, with the person replaced by a chain of neurons. The visual symbols of the Chinese script are converted into a series of action potentials, which are transmitted by a chain of interconnected neurons. This gives rise to conscious understanding in our brain. But no individual neuron has any understanding of Chinese or any idea what these symbols mean; it is just opening and closing its ion channels in response to a neurotransmitter and passing the action potential on to the next neuron. The same thing occurs even in the whole network of neurons. This is analogous to the person following a set of instructions.

    Hence it appears to me that consciousness is a property of whole systems and not of their isolated parts; this has already been put to Searle as the systems reply.
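
    The "program" in the thought experiment can be pictured as a purely syntactic lookup. A minimal sketch in Python (the symbols and replies below are hypothetical stand-ins, not part of Searle's original):

    ```python
    # Toy "Chinese room": the rule book maps input symbols to output symbols.
    # Nothing in the table encodes meaning; it is pure symbol manipulation,
    # like a neuron passing on an action potential it does not understand.
    RULE_BOOK = {
        "你好吗？": "我很好。",          # "How are you?" -> "I am fine."
        "今天天气怎么样？": "很好。",    # "How is the weather today?" -> "Nice."
    }

    def room(symbols: str) -> str:
        """Follow the instructions: match the symbols, pass out the answer."""
        return RULE_BOOK.get(symbols, "请再说一遍。")  # fallback: "Please say that again."
    ```

    The lookup produces correct Chinese answers without anything in the system possessing semantics, which is exactly the intuition the thought experiment trades on.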
  • god must be atheist
    5.1k
    I don't know if this makes a difference, but the guy following instructions could be a machine. He does not need a mind.

    A person who has a mind, and speaks Chinese and reads and writes in Chinese too, has an AI program. It processes not only simple input-output instructions, but makes decisions based on other available data.

    For instance: you pass in the Chinese question "How is the weather today?" The instruction set may direct the dumb actor to say "nice" or "awful", but it will never say "12 kilograms." When it's a choice between "nice" and "awful", the program alone can't decide. It needs a third input into the instruction set: "check the weather and answer accordingly."

    If it's a simple translation set of instructions, the machine will be stuck, not knowing whether to say "nice" or "awful". A human who has access to the third piece of information can pick the proper symbol.
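
    The point about the third input can be sketched the same way: a fixed translation table stalls between the two candidate answers, while a rule that consults outside data can choose (the question string and the weather flag here are hypothetical stand-ins):

    ```python
    # A fixed table can only list the candidate answers; it cannot pick one.
    CANDIDATES = {"How is the weather today?": ["nice", "awful"]}

    def answer(question: str, weather_is_good: bool) -> str:
        """Choose among the candidates using a third input: the weather itself."""
        options = CANDIDATES[question]
        return options[0] if weather_is_good else options[1]
    ```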

    I don't know what this proves or disproves, because, frankly, I don't follow the logic that brings you to the conclusion that consciousness is the whole thing, not one of its parts. That conclusion absolutely escapes my understanding.
  • god must be atheist
    5.1k
    Okay, I finally read the second-last paragraph. In the human mind, according to my belief anyway, there is a conceptualization of what "weather" is, and what "nice" and "awful" are. There may not be one single neuron responsible for the conceptualization, but there is differentiation of concepts, and they can't all involve all the neurons, nor can they all involve the same, albeit limited, set of neurons.

    Hence, I reject the concept that the Chinese room purports to prove according to the example you brought up.
  • debd
    42
    In the human mind, according to my belief anyway, there is a conceptualization of what "weather" is, and what "nice" and "awful" are.god must be atheist

    This conceptualization of "nice", "awful", etc. is due to the activity of one or more neuronal networks in our brain. But no single neuron within the neuronal network is aware of what it is conceptualizing.
  • Outlander
    1.8k
    You don't just jump from "a single neuron" to "full human consciousness" like that.
  • debd
    42
    You don't just jump from "a single neuron" to "full human consciousness" like that.Outlander

    Yes, I agree. I am trying to draw an analogy in which a neuron is the man in the Chinese room and our whole brain is the room itself. Both the man and the neuron have no understanding of Chinese, yet the brain will understand Chinese; hence the room should too.
  • RogueAI
    2.4k
    That begs the question: how does a system of X (neurons, switches, q-bits, whatever) become conscious? And we're back to the Hard Problem.
  • Harry Hindu
    4.9k
    Both the man and the neuron have no understanding of Chinese yet the brain will understand Chinese, hence the room should too.debd
    It seems to me that to solve this riddle, we need a concise definition of "understanding".

    The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.
    Then the Turing Test isn't very good at determining some system's understanding of some symbol-system.

    Ironically, the instructions themselves are a symbol-system, and the man in the room understands the instructions, but not Chinese; so the instructions are not for understanding Chinese, but for what to do when you see a certain scribble.

    So the man in the room does understand something, but not Chinese. This leads one to posit that understanding is possessing instructions for interpreting some symbol.
  • Banno
    23.1k
    In A nice derangement of epitaphs Davidson argues that language is not algorithmic.

    Searle is arguing much the same thing with the Chinese room.

    How would the Chinese room deal with nonsense? How would it translate A Spaniard in the works?
  • Kenosha Kid
    3.2k
    Hence it appears to me that consciousness is the property of whole systems on not of its isolated part, this has already been posited out as systems reply to Searle.debd

    Am I reading this right..? Are you using the Chinese room argument to suggest that individual neurons aren't conscious?
  • Harry Hindu
    4.9k
    Am I reading this right..? Are you using the Chinese room argument to suggest that individual neurons aren't conscious?Kenosha Kid
    Am I reading this right..? Are you suggesting that we have billions of conscious entities within one brain? I wonder, which neuron in my brain is my consciousness?
  • Harry Hindu
    4.9k
    How would the Chinese room deal with nonsense? How would it translate A Spaniard in the works?Banno
    Is the problem that the sentence is actually nonsense, or that there are no instructions for translating such an arrangement of scribbles? What does it mean for some string of scribbles to be nonsense?
  • debd
    42
    I think that consciousness or understanding or perception at a particular point in time is a function of the structural and physiological state of the neuronal network at that point in time. I confess that I have no idea about the actual architecture of the neural network, but I do think such state functions of neuronal networks, or even of interconnected neuronal networks, give rise to perception or understanding.

    Oliver Sacks in his book The River of Consciousness wrote about patients with Parkinsonism and bradykinesia who had altered perception of time. Time flowed more slowly for these patients in their Parkinsonian state. They were able to recognize this change in their temporal perception only when they were relieved of the Parkinsonian state by medication or deep brain stimulation. The region of the brain responsible for this change in temporal perception has been broadly localized to the basal ganglia and substantia nigra. So a directed electrical stimulation in the brain can change the perception of time.
  • SophistiCat
    2.2k
    In A nice derangement of epitaphs Davidson argues that language is not algorithmic.

    Searle is arguing much the same thing with the Chinese room.
    Banno

    I think Searle's thought experiment was rather a reaction to reductive takes on consciousness, particularly computational, functionalist ones:

    I think that consciousness or understanding or perception at a particular point in time is a function of the structural and physiological state of the neuronal network at that point in time.debd

    Now consider the room to be our brain and the person is replaced by a chain of neurons.debd

    There are other variants of the thought experiment that are an even better fit for this, such as Ned Block's Chinese Nation thought experiment, where a large group of people performs a neural network computation simply by calling a list of phone numbers. The counterintuitive result here is that a functionalist would have to say that the entire system thinks, understands language, feels pain, etc. - whatever it is that it is functionally simulating - even though it is very hard to conceive of e.g. the Chinese nation as a single conscious entity.

    But I think this people-as-computer-parts gimmick is a red herring. Of course a part of a system is not equivalent to the entire system - that was never in contention. A wheel spoke is not a bicycle either. The real contention here is whether something that is not a person - a computer, for example - can have a functional equivalent of consciousness.
  • RogueAI
    2.4k
    There are other variants of the thought experiment that are an even better fit for this, such as Ned Block's Chinese Nation thought experiment, where a large group of people performs a neural network computation simply by calling a list of phone numbers. The counterintuitive result here is that a functionalist would have to say that the entire system thinks, understands language, feels pain, etc. - whatever it is that it is functionally simulating - even though it is very hard to conceive of e.g. the Chinese nation as a single conscious entity.

    But I think this people-as-computer-parts gimmick is a red herring. Of course a part of a system is not equivalent to the entire system - that was never in contention. A wheel spoke is not a bicycle either. The real contention here is whether something that is not a person - a computer, for example - can have a functional equivalent of consciousness.SophistiCat

    Another issue is that the contents of a computer's mind (if it has one) are immune from discovery using scientific methods. The only access to knowledge of computer mental states would be through first-person computer accounts, the reliability of which would be impossible to verify. Whether machines are conscious will forever be a mystery. This suggests that consciousness is unlike all other physical properties.
  • TheMadFool
    13.8k
    This is a well-considered statement. I second it, but that would mean, if taken only a step further, that an actual Chinese Room is, well, conscious - is itself a mind capable of understanding and all that jazz that mind/consciousness is about. Yet that seems an extraordinary claim to make - thinking here about superorganisms.

    Your take on this also implies, if we flip it around, that consciousness may be an illusion; after all, if one is under the impression that a Chinese Room is incapable of understanding, then we too must be incapable of doing so.
  • SophistiCat
    2.2k
    Another issue is that the contents of a computer's mind (if it has one) are immune from discovery using scientific methods. The only access to knowledge of computer mental states would be through first-person computer accounts, the reliability of which would be impossible to verify. Whether machines are conscious will forever be a mystery. This suggests that consciousness is unlike all other physical properties.RogueAI

    How is this issue different from not having a first-person experience of another person's consciousness? Unless your real issue is that it's a computer rather than a person - but that is the same issue that Chinese Room-type thought experiments try to capitalize on (confusingly, in my opinion).
  • debd
    42
    There are other variants of the thought experiment that are an even better fit for this, such as Ned Block's Chinese Nation thought experiment, where a large group of people performs a neural network computation simply by calling a list of phone numbers. The counterintuitive result here is that a functionalist would have to say that the entire system thinks, understands language, feels pain, etc. - whatever it is that it is functionally simulating - even though it is very hard to conceive of e.g. the Chinese nation as a single conscious entity.SophistiCat

    Yes, that's why I used the analogy of the brain and its constituent neurons which we consider to be conscious.

    The real contention here is whether something that is not a person - a computer, for example - can have a functional equivalent of consciousness.SophistiCat

    Consider a neuronal network that is responsible for the perception of time, as in the case of the Parkinson's patients I described earlier. Now suppose a biological neuron in this network is replaced by an artificial one. This is already being done, although not at the level of individual neurons but more crudely, with deep brain stimulation and responsive neurostimulation. Hopefully, technology will advance sufficiently to let us do this at the neuronal level. Now, these patients don't notice any otherness in their perception, except that it normalizes from the diseased state, even when they know that implants are present within their brain. And now we keep replacing biological neurons with electronic ones - sort of like the ship of Theseus. I would argue that since this part-biological, part-electronic construct retains its time perception, if we ultimately replace the whole network, it will retain the same perception. We can extend this to consciousness itself, although the network will be much more complicated.
  • debd
    42
    Thanks, this is what I was trying to articulate.
  • debd
    42
    How is this issue different from not having a first-person experience of another person's consciousness? Unless your real issue is that it's a computer rather than a person - but that is the same issue that Chinese Room-type thought experiments try to capitalize on (confusingly, in my opinion).SophistiCat

    I second this. If we consider the statement of a supposedly conscious computer to be unreliable, then the same should apply to any other person, and everyone other than ourselves may be a zombie.
  • TheMadFool
    13.8k
    Thanks, this is what I was trying to articulate.debd

    :up:

    It's perfect as you wrote it. All I did was gild the lily.
  • RogueAI
    2.4k
    That's true. We assume other people are conscious because they look like us, and are biological organisms, like ourselves. But we don't know for sure. How can we? That does not, however, change my point about the internal mental states of computers forever being a mystery.
  • Caldwell
    1.3k
    The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.
    Now consider the room to be our brain and the person is replaced by a chain of neurons.
    debd
    So are we just gonna ignore the fact that the person in the room passed by following the program's instructions, and not by understanding the Chinese language?
    Sometimes I feel that thought scenarios like this are more like a sleight of hand in a logical argument -- as long as the reader keeps losing track of what is being said, the argument keeps its force.
  • Harry Hindu
    4.9k
    I think that consciousness or understanding or perception at a particular point in time is a function of the structural and physiological state of the neuronal network at that point in time.debd
    What makes a neuronal network conscious but not a silicon network? Sounds like biological bias to me.

    Also, this seems to be a 3rd-person view of understanding. What is the 1st-person view of understanding or consciousness or perception? I know I'm conscious, understanding, and perceiving by different means than you would know I'm conscious, understanding, and perceiving. Why?

    This suggests that consciousness is unlike all other physical properties.RogueAI
    That does not, however, change my point about the internal mental states of computers forever being a mystery.RogueAI
    Only because thinking of mind and body in conflicting dualistic terms creates the problem in the first place.
  • RogueAI
    2.4k
    OK, how would you go about verifying that a computer is conscious?
  • Harry Hindu
    4.9k
    Step one would be to define consciousness in a way that addresses why first person evidence of consciousness is different than third person evidence of consciousness.
  • RogueAI
    2.4k
    That begs another question: why don't we have an agreed-upon scientific definition of consciousness yet? Maybe 100 years ago that would have been asking too much, but at this stage in the game? It's remarkable that we still can't define what consciousness is, and it's yet another sign that the phenomenon is outside the "realm" of science.
  • Banno
    23.1k
    I think Searle's thought experiment was rather a reaction to reductive takes on consciousness, particularly computational, functionalist ones:SophistiCat

    Well, yes. That's what an algorithm is.

    Minds, Brains, and Programs

    This version has replies to critics.

    My response to the systems theory is quite simple: let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn't anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him. If he doesn't understand, then there is no way the system could understand because the system is just a part of him.
  • Banno
    23.1k
    Is the problem that the sentence is actually nonsense, or that there are no instructions for translating such an arrangement of scribbles?Harry Hindu

    The problem is more that "A nice derangement of epitaphs" could not be translated into Chinese without losing the joke. Hence, there are aspects of language that are not captured by such an algorithmic translation process.
  • debd
    42
    So are we just gonna ignore the fact that the person in the room passed the program instruction, and not the understanding of the Chinese language?Caldwell

    No, just that the person following the instructions is replaced by a neuron or a network of neurons passing the instructions along without having any intrinsic understanding of Chinese, even though the brain as a whole does.
  • debd
    42
    This version has replies to critics.

    My response to the systems theory is quite simple: let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn't anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him. If he doesn't understand, then there is no way the system could understand because the system is just a part of him.
    Banno

    If he internalizes all the rules in his head (brain), then effectively he is learning and understanding Chinese; this is what we do when we learn a new language. Merely memorizing syntactic rules would not allow me to answer questions like "How do you feel today?" or "What are you grateful for today?". This has been posited as a reply to Searle by David Cole.