• TheMadFool
    13.8k
    I've already asked them to do that as well as define understanding, but they only seem willing to keep asserting their unfounded notions.

    They also ignore the fact that the man in the room still understands the language the instructions are written in, and the question of how the man learned THAT language. Add their failure to define understanding and consciousness, and this thread is just a bunch of smoke and mirrors. Interesting how you can learn another language using your own language, hmmm?
    Harry Hindu

    I think the person in the Chinese Room, his knowledge of language, any language for that matter, isn't important. If I recall correctly, he doesn't know Chinese at all. All that this person represents is some mechanical computer-like symbol manipulation system that spits out a response in Chinese to a Chinese interlocutor and that's done so well that it appears the Chinese Room understands Chinese.

    Perhaps this isn't the right moment to bring this up, but the issue of Leibniz's identity of indiscernibles seems germane. The Chinese Room is indistinguishable from a Chinese person - they're indiscernible - but does that mean they're identical in that the Chinese Room is ontologically a Chinese person? The issue of Nagel's and others' idea of an inner life as part of consciousness crops up.
  • Harry Hindu
    5.1k
    I think the person in the Chinese Room, his knowledge of language, any language for that matter, isn't important. If I recall correctly, he doesn't know Chinese at all. All that this person represents is some mechanical computer-like symbol manipulation system that spits out a response in Chinese to a Chinese interlocutor and that's done so well that it appears the Chinese Room understands Chinese.TheMadFool
    First you say that knowledge of any language isn't important, then you go on to explain whether some entity knows Chinese or not.

    Seems like we need to know how the "mechanical computer-like symbol manipulation system that spits out a response in Chinese" learned how to do just that.

    Perhaps this isn't the right moment to bring this up, but the issue of Leibniz's identity of indiscernibles seems germane. The Chinese Room is indistinguishable from a Chinese person - they're indiscernible - but does that mean they're identical in that the Chinese Room is ontologically a Chinese person? The issue of Nagel's and others' idea of an inner life as part of consciousness crops up.TheMadFool
    The difference is that the instructions in the room are not the same instructions that a Chinese person used to learn Chinese. People are confusing the instructions in the room with instructions on how to use Chinese. Since the man in the room already knows a language - the one the instructions are written in - he would need something that shows the Chinese symbol and then the equivalent in his language - you know, like how you use Google Translate.
  • Harry Hindu
    5.1k
    Seems to me that the missing component here is memory. You need a space to store the symbolic relationships between the scribble/sound of a word and what it points to. The man in the room possesses memory. This is how he understands the language the instructions are written in.

    The memory of what to do when a Chinese symbol enters the room is on the paper with the instructions. It retains the information of what those symbols mean - namely, write this symbol when you see that symbol - which is not the same instruction set that's in a Chinese person's memory for interpreting these symbols. This is because symbol use is arbitrary, as you can use any symbol to point to anything. Limitations do arise, however, when you want to use those symbols to communicate. You have to not only remember how you are using the symbols, but how others use the same symbols.
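
    To make that concrete, here's a toy sketch of the room's kind of instruction set (the symbols and rules are made up purely for illustration; a real rulebook would be vastly larger):

        # Toy "Chinese Room" rulebook: pure "write this symbol when you see
        # that symbol" rules. The operator never needs to know what either
        # side means - contrast this with a translation dictionary that maps
        # a Chinese symbol to its equivalent in the operator's own language.
        RULEBOOK = {
            "你好吗?": "我很好, 谢谢.",      # made-up input/reply pairs
            "你会说中文吗?": "会, 一点点.",
        }

        def room_reply(incoming: str) -> str:
            # Match the incoming shapes, copy out the listed shapes.
            return RULEBOOK.get(incoming, "对不起, 我不明白.")

        print(room_reply("你好吗?"))   # prints the scripted reply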
  • TheMadFool
    13.8k
    First you say that knowledge of any language isn't important, then you go on to explain whether some entity knows Chinese or not.

    Seems like we need to know how the "mechanical computer-like symbol manipulation system that spits out a response in Chinese" learned how to do just that.
    Harry Hindu

    I'm no linguist, but I believe there are syntactic rules that govern all languages - these are computable, i.e. they can be reduced to an algorithm.

    Semantics is, forgive my ignorance here, of two types: 1. concrete and 2. abstract. By concrete meanings I refer to ostensive definitions, which are basically an exercise in matching words with objects. Abstract meanings are patterns extracted from, among other things, concrete meanings. Computers are fully capable of both assigning names to objects and recognizing patterns.

    All in all, computers are capable of handling both the syntactic and semantic aspects of language. What this means is that language can be reduced to computation. If one wants to make the case that consciousness is something special, then one can't do it using language.
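
    Something like this toy sketch is all I have in mind - ostensive definition as word-object matching, abstract meaning as a pattern extracted from the concrete cases (the objects and features here are made up purely for illustration):

        # Concrete meaning: ostensive definitions as a direct word -> object
        # mapping (an "object" here is just a set of observed features).
        ostensive = {
            "apple":  {"red", "round", "edible"},
            "ball":   {"red", "round", "bounces"},
            "tomato": {"red", "round", "edible"},
        }

        # Abstract meaning: a pattern extracted from the concrete cases -
        # here, simply the features shared by all the named objects.
        def extract_pattern(words):
            return set.intersection(*(ostensive[w] for w in words))

        print(extract_pattern(["apple", "ball", "tomato"]))   # {'round', 'red'}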
  • Harry Hindu
    5.1k
    If one wants to make the case that consciousness is something special, then one can't do it using language.TheMadFool
    :up:
  • debd
    42
    At this point I'd like you to consider the nature of consciousness, specifically the sense of awareness, particularly self-awareness. The consciousness we're all familiar with comes with the awareness of the self, recognition of one's own being and existence, which unfortunately can't be put into words as far as I'm concerned. It's quite clear that the Chinese Room is, from the way it operates, aware, albeit in a very limited sense, of its external environment in that it's speaking Chinese fluently but is it self-aware?TheMadFool

    Let us consider our sense of awareness, as I'm most familiar with that. There are multiple levels of self-awareness in humans, the lowest being simply not being in a coma or vegetative state and the highest being metacognition. How would you know if I am self-aware or not? You can only do that by looking to a comparator, yourself. At the crudest level we can do that by comparing behaviours - you compare my behaviour with your own, and as it is fairly similar, you assume with a certain degree of probability that I must be self-aware too.

    With advancing technology, you can examine more closely and reduce the degree of uncertainty in your assumption. Along with behaviour, our EEG patterns are also similar; along with behaviour and EEG, our fMRI scans are similar; along with behaviour, EEG and fMRI, our MEG patterns are also the same. Hence the uncertainty goes down. Of course you can never be completely sure, but with the increase in resolution/dimensions with which we can look into our brain, the uncertainty decreases. There are multiple neurological disorders in which there are varying degrees of loss of awareness, and they are diagnosed in a similar way.

    What is the underlying mechanism of self-awareness? I concede it is not yet known. Areas within the brain, and the interconnections between them, whose lesions lead to loss of self-awareness have been identified, but we still lack a functional description of how they do so. However, this does not mean we will never have one. Our brain is much more complex than was previously assumed, and only now has the Human Connectome Project been able to map the anatomical connections between different areas of the brain. And that is just the anatomical description. A functional description will be more difficult because we don't have a non-invasive way of obtaining it.

    There are multiple large-scale networks within our brain, a special one being the default mode network. Anatomically this has been identified with self-awareness. What is actually going on inside the network is difficult to know, but we will get there eventually.
  • TheMadFool
    13.8k
    That's quite an elaborate description of how brain function has been studied. Thanks.

    My problem is this: the brain is a Chinese Room, for just as the person inside the Chinese Room doesn't understand Chinese, the neurons too don't understand Chinese. With the Turing test employed, we'd have to conclude that the Chinese Room is a Chinese person. However, is the Chinese Room conscious in the sense we are, in that direct, immediate, non-inferential, self-evident sense? Is the Chinese Room a p-zombie in that it lacks that inner life philosophers of consciousness talk about?
  • creativesoul
    12k
    Hence, there are aspects of language that are not captured by such an algorithmic translation process.Banno

    Meaning being the most important one.
  • debd
    42
    We don't know. Unless we actually know how consciousness occurs within ourselves, we won't be able to judge the presence of inner life in anything else. Evidently there is some sort of information processing going on within us which is responsible for all this, but we only have broad anatomical descriptions, not detailed or functional enough either to replicate or, in my opinion, to base a theoretical framework on.
  • TheMadFool
    13.8k
    We don't know. Unless we actually know how consciousness occurs within ourselves, we won't be able to judge the presence of inner life in anything else. Evidently there is some sort of information processing going on within us which is responsible for all this, but we only have broad anatomical descriptions, not detailed or functional enough either to replicate or, in my opinion, to base a theoretical framework on.debd

    Too bad. Thanks!
  • Harry Hindu
    5.1k
    How would you know if I am self-aware or not? You can only do that by looking to a comparator, yourself.debd
    Seems to me that I have to first know that I am self-aware. What does that mean? What is it like to be self-aware? Is self-awareness a behaviour, feeling, information...?
  • debd
    42
    I believe it is all of them. We are not born self-aware; we learn to be self-aware, and we can lose it just as easily. It begins with the basics, the identification of the self as distinct from our environment, and gradually increases in complexity until we have metacognition. We are born with a basic sensory and motor map of our body, but most of the rest is learnt through repetition, experience and memory. Without these there would not be a sense of self. Oliver Sacks described a patient who lived only in the present. He had no idea of causality and could not draw inferences, as he could not form any memories. Nor did he have metacognition.

    Gamma-frequency oscillations in the paralimbic network and the default mode network of our brain have been identified as causing self-awareness in us.
  • debd
    42
    Even if someone is actually able to construct a Chinese Room, I believe it is impossible to ascribe to it the notion of “understanding” the language in the sense that we understand language. Our understanding of language is intimately associated with our particular neural structure, developed through evolution. Our sense of “understanding” language comes from neural function in Wernicke’s area of the brain. People with damage to Wernicke’s area suffer from Wernicke’s aphasia – they have great difficulty with, or are completely unable to understand, spoken or written language. Two computers might communicate or even understand, but it will always be different from the understanding of language that we perceive, because ours is forever tied to our particular neural structure. It is the functioning of Wernicke’s area that gives us this “understanding”.
  • Caldwell
    1.3k
    @debd
    so
    he has understood Chinese

    And yet...
    Memorizing all the rules does not allow me to answer questions like "How do you feel today?", "What are you grateful for today?".
    ...so he has not understood Chinese
    This doesn't strike you as problematic?
    Banno

    Yes, this is a good way of putting it. That's why, early on in the thread, I tried to make a distinction between saying that the person "passed the program instruction" and that same person understanding the Chinese language. There is a big difference, and the way the scenario is worded glosses over this distinction.
  • SophistiCat
    2.2k
    I wonder if functionalism with respect to the mind in general might fail for a similarly banal reason? Might we be overly optimistic in assuming that we can always replicate the mind's (supposed) functional architecture in some technology other than the wetware that we actually possess? What if this wetware is as good as one can do in this universe? We might be able to do better in particular tasks - indeed, we already do with computers that perform many tasks much better than people can do in their heads. But, even setting aside the qualia controversy, it is a fact that nothing presently comes close to replicating the mind's function just as it is in actual humans, in all its noisy, messy reality. What if it can't even be done, other than the usual way?
  • Saphsin
    383
    I think analyzing the mind from the angle of a complex biological phenomenon is the right way to go. This is taken by some critics as backpedaling to some metaphysical essentialism inherent in biological systems, like the old vitalists, but I don’t see a reason for that charge if we haven’t yet settled on a unifying principle shared with other physical substrates.
  • debd
    42
    That's because we are replacing the Chinese Room with the brain, and the person inside the room is being replaced by a neural network.
  • debd
    42
    Well, the neural network and the full connectome of the nematode Caenorhabditis elegans have been fully mapped. Simulating this neural network produces an identical response to different experiments when compared to a biological worm. If we are able to do this for our own brains we can expect similar results. But C. elegans has only 302 neurons, orders of magnitude fewer than humans, and the network complexity does not even come close. However, it is possible and hopefully we will be able to achieve this.
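
    Just to illustrate the general idea of simulating a mapped network - this is a made-up toy network, not the actual C. elegans model, which uses the real 302-neuron wiring and carefully fitted synapse and gap-junction parameters:

        import numpy as np

        # Toy network: the "connectome" is a small random weight matrix and
        # the dynamics are a simple leaky rate-neuron update. Once the wiring
        # is known, simulation is just applying the same rule over and over.
        rng = np.random.default_rng(0)
        n = 8                                      # neurons in the toy net
        W = rng.normal(0, 0.5, size=(n, n))        # synaptic weights ("wiring")
        state = np.zeros(n)                        # activity of each neuron
        stimulus = np.zeros(n)
        stimulus[0] = 1.0                          # poke one "sensory" neuron

        for t in range(50):
            # Each neuron integrates its weighted input plus the stimulus.
            state = 0.9 * state + 0.1 * np.tanh(W @ state + stimulus)

        print(np.round(state, 3))                  # the network's settled response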
  • Harry Hindu
    5.1k
    That's because we are replacing the Chinese Room with the brain, and the person inside the room is being replaced by a neural network.debd
    You're missing an important component - the instructions. The instructions are in the room, along with the man, but they are two separate entities inside the room. What "physical" role do the instructions play inside the brain if the human is the entire neural network? And isn't the entire neural network really the brain anyway? So you haven't coherently explained all the parts and their relationships with each other.
  • debd
    42
    The instructions within the room and the person, along with any other paraphernalia, form the information-processing part inside the Chinese Room, analogous to a neural network within our brain. You are treating the instructions as a separate entity within our brain, something that the neural networks must follow to interpret the Chinese symbols. But there is no such separate instruction set that the neurons follow, at least not for learning Chinese. Instead, the particular anatomical and physiological state of our neurons allows us to learn Chinese in this particular case, or to learn to drive in another case.
  • Harry Hindu
    5.1k
    But there is no such separate instruction set that the neurons follow, at least not for learning Chinese.debd
    Neural networks weren't born knowing Chinese, English or any other language. The neural network had to learn those instructions, which means that the instructions were initially external to the neural network. How does a neural network acquire instructions for learning a language, and where do the instructions go when they are learned, understood, or known?

    How did a neural network learn to do what it does? It doesn't perform the same function as other cells in the body. What allowed it to do what it does and not some other job that some other type of cell does?
  • debd
    42
    How did a neural network learn to do what it does? It doesn't perform the same function as other cells in the body. What allowed it to do what it does and not some other job that some other type of cell does?Harry Hindu

    Cellular differentiation is a result of evolution. There are multiple different cell types within our body, each performing specialized functions.

    Neural networks weren't born knowing Chinese, English or any other language. The neural network had to learn those instructions, which means that the instructions were initially external to the neural network. How does a neural network acquire instructions for learning a language, and where do the instructions go when they are learned, understood, or known?Harry Hindu

    For a sufficiently complex neural network, the basic underlying physiology and anatomy remains the same whether it is learning a language or estimating a trajectory and throwing a ball. Take the example of the C. elegans neural network. It has been shown to learn to balance a pole. No separate instruction set was provided; only the reward was specified - in the natural environment this reward will ultimately be the survival of the organism.
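
    A minimal sketch of that reward-only style of learning - the task here is a made-up stand-in, not the actual pole-balancing setup; the point is only that the "agent" is never given instructions, it just keeps whatever random tweak earns more reward:

        import numpy as np

        rng = np.random.default_rng(1)

        def reward(weights):
            # A hidden task the agent is never told about: match a secret
            # target. Only the scalar reward leaks information about it.
            target = np.array([0.5, -1.0, 2.0])
            return -np.sum((weights - target) ** 2)

        weights = np.zeros(3)
        best = reward(weights)
        for step in range(2000):
            candidate = weights + rng.normal(0, 0.1, size=3)   # random tweak
            if reward(candidate) > best:       # keep it only if reward improves
                weights, best = candidate, reward(candidate)

        print(np.round(weights, 2))            # ends up close to the unseen target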
  • SophistiCat
    2.2k
    Well, the neural network and the full connectome of the nematode Caenorhabditis elegans have been fully mapped. Simulating this neural network produces an identical response to different experiments when compared to a biological worm. If we are able to do this for our own brains we can expect similar results. But C. elegans has only 302 neurons, orders of magnitude fewer than humans, and the network complexity does not even come close. However, it is possible and hopefully we will be able to achieve this.debd

    Sure, but can this technology scale up many orders of magnitude to simulate a human brain? And just as importantly, is such a neural net simulation fully adequate? It may reproduce some behavior, modulo a time-scaling factor, but not so as to make the simulation indistinguishable from the real thing - both from outside and from inside (of course, the latter would be difficult if not impossible to check).

    I am not committed to this view though - just staking out a possibility.
  • debd
    42
    It is difficult, but possible. If we are able to fully map and simulate the brain then we can ascribe a certain degree of probability that it will have qualia similar to ours. The difficulty is in the inherently invasive nature of such mapping.
  • Isaac
    10.3k
    It is difficult, but possible. If...debd

    The start of your second sentence contains a contingency which contradicts the first.