• TheMadFool
    13.8k
    It is said that computers can't understand like we humans "do".

    I believe the Chinese Room Argument (CRA) illustrates this point. A machine may pass the Turing Test but it wouldn't be "real" conscious intelligence because it doesn't understand the semantics of, say, a conversation between it and a human being.

    What is understanding anyway?

    In my opinion there are two aspects of what we call human "understanding":

    1. We have an emotional response to words. Every meaningful word elicits an emotional response. Whether that response is positive or negative doesn't matter; what is important to note is that semantics, and what we call "understanding", have an emotional dimension.

    2. Association is important to "understanding" something. The word "water" is associated with thirst, drowning, flood, etc. It is through these associations that we form a complete "understanding" of a given word or term.

    There may be other components to "understanding" which I'm not aware of at this moment.

    My question is: are these two not replicable on computers?

    Emotion is not directly replicable on computers, but Pavlov trained dogs to salivate at the ring of a bell. Can we take it further and do the same with machines?
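
    As a rough sketch, here is what that conditioning could look like in code, using the classic Rescorla-Wagner learning rule. The function name and parameter values below are my own illustrative assumptions, not an established model of any real system:

    # A minimal sketch of Pavlovian (classical) conditioning via the
    # Rescorla-Wagner rule: association strength grows with each
    # bell+food pairing and decays when the bell rings alone.
    # All names and numbers here are illustrative assumptions.

    def rescorla_wagner(trials, alpha=0.3, lam=1.0):
        """Return the bell->food association strength after each trial."""
        v = 0.0                                      # current associative strength
        history = []
        for food_present in trials:
            target = lam if food_present else 0.0    # lam = maximum conditioning
            v += alpha * (target - v)                # learn from the prediction error
            history.append(round(v, 3))
        return history

    # Pair the bell with food ten times, then ring it alone (extinction).
    print(rescorla_wagner([True] * 10 + [False] * 5))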

    Association of words is feasible with computers, isn't it?
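
    As a toy sketch: count which words co-occur with "water" across a corpus. The three-sentence corpus below is a made-up assumption, but the same idea, applied to real text at scale, is how modern word-embedding systems pick up associations:

    # A toy sketch of word association by co-occurrence counting.
    from collections import Counter

    corpus = [
        "water quenches thirst",
        "the flood brought water everywhere",
        "people fear drowning in deep water",
    ]

    # Count how often each word appears in the same sentence as "water".
    associations = Counter()
    for sentence in corpus:
        words = set(sentence.split())
        if "water" in words:
            associations.update(words - {"water"})

    print(associations.most_common(5))   # each co-occurring word with its count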

    So, it seems, at least on preliminary examination, that we can make machines/computers "understand" like we humans "do".

    Your comments...
  • Mww
    4.5k
    I’m not all that up on computers.....maybe they’ve progressed far beyond my knowledge of them, dunno, don’t care. Still, I wonder......if I were to tell one, in no uncertain terms, “oh fercrissakes...stick a sock in it!!!!!!!!!”........what would it actually do?

    Because I have very different views of what human understanding is, its place and function in the human cognitive process, I have to say, no, we’re not going to make a machine that understands like we humans do. Which says nothing about the possibility that a machine we make that has enough computing power may very well make itself into something that understands like we humans do. Then what of the Three Laws of Robotics?

    What about the not-so-old adage that computer/artificial intelligences should have to have a form of wetware before they can be considered similar to intelligences based on pure biology?
  • Inis
    243
    So, it seems, at least on preliminary examination, that we can make machines/computers "understand" like we humans "do". – TheMadFool

    As an aside, the Chinese Room is physically impossible: a rulebook big enough to handle arbitrary conversation would be combinatorially too large to fit in any actual room.

    Computers may already understand as we do, but as you point out, human understanding seems to be associated with other aspects of the human condition, like emotions.

    There is another famous thought experiment, Frank Jackson's "Mary's Room" (the Knowledge Argument). In a version of it, we have a scientist who is an expert in light, and her robot companion. By chance the scientist cannot see red light due to a genetic defect, and the robot cannot detect red light because of a loose connection. Both are repaired, and both can now detect red light, but there is a difference. The scientist now also knows what it is like to see red, and she is struck by two peculiarities in her new knowledge: what it is like to see red is totally unpredictable, and totally inexplicable. The robot gains no such what-it-is-like knowledge.

    I think this what-it-is-like knowledge is the issue that needs to be understood and programmed if we want to create true Artificial General Intelligence.
  • TheMadFool
    13.8k
    Yes. This is another dimension of understanding that I forgot.

    Direct experience is also part of the semantic content of words/terms. We "understand" what, say, red is because we have direct experience of redness.

    This isn't too difficult for us to replicate on machines, is it?

    Emotions are missing in computers. But, as biology has informed us, they're just different chemicals acting on certain receptors in our bodies. This is possible to replicate on machines.
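
    Purely as a sketch of that thought, here is an artificial "affect" state in which numeric signals stand in for chemicals and weighted sensitivities stand in for receptors. Every name and number below is an illustrative assumption, not a biological model:

    class AffectState:
        """A scalar 'mood' that signals push around and that decays to baseline."""

        def __init__(self, baseline=0.0, decay=0.8):
            self.mood = baseline
            self.baseline = baseline
            self.decay = decay
            # Receptor sensitivities: how strongly each signal moves the mood.
            self.receptors = {"dopamine": +0.5, "cortisol": -0.7}

        def stimulate(self, signal, amount):
            self.mood += self.receptors.get(signal, 0.0) * amount

        def tick(self):
            # The mood relaxes toward baseline at each time step.
            self.mood = self.baseline + self.decay * (self.mood - self.baseline)

    state = AffectState()
    state.stimulate("dopamine", 1.0)   # a rewarding stimulus
    print(round(state.mood, 2))        # 0.5
    state.tick(); state.tick()
    print(round(state.mood, 2))        # 0.32, decaying back toward 0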

    What is "understanding" for a human, anyway?

    We say we "understand", say, the theory of relativity. How exactly do we do that? By drawing analogies between the theory of relativity and what we already know: we say gravity deforms space-time just as a bowling ball deforms a rubber sheet.

    See?

    Understanding is based on association between the new stuff (unknown) and the old stuff (known).

    That's why, it seems to me, quantum physics is difficult to understand. Even physicists find it difficult. There are no experiences in our past that can be associated with quantum phenomena.

    So, as I said, we need a computer that knows how to draw analogies, i.e. to associate data. This would probably look like human "understanding".
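
    To make that concrete, here is a minimal sketch of analogy-drawing over toy word vectors. The four 2-d vectors are hand-picked assumptions; real systems (word2vec, for instance) learn such vectors from text, and the same arithmetic famously gives king - man + woman ≈ queen:

    import math

    # Hand-picked toy vectors: one axis loosely "gender", the other loosely "royalty".
    vectors = {
        "king":  (0.9, 0.8),
        "man":   (0.9, 0.1),
        "woman": (0.1, 0.1),
        "queen": (0.1, 0.8),
    }

    def analogy(a, b, c):
        """Solve 'a is to b as c is to ?' by vector arithmetic."""
        target = tuple(vb - va + vc
                       for va, vb, vc in zip(vectors[a], vectors[b], vectors[c]))
        # Answer with the known word closest to the computed point.
        return min((w for w in vectors if w not in (a, b, c)),
                   key=lambda w: math.dist(vectors[w], target))

    print(analogy("man", "king", "woman"))   # -> queen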
  • Mww
    4.5k
    we need a computer that knows how to draw analogies, i.e. to associate data. This would probably look like human "understanding". – TheMadFool

    While this may be necessary, is it at the same time sufficient? I have no issue with C.I./A.I. drawing from memory as its means of analogy, much the same as humans draw from what we may call memory in order to compare sense data with experience. But I think these conscious operations are constrained to an empirical environment alone, and as such cannot fully represent human understanding.

    Under what conditions do you think it possible a C.I./A.I. will ever have the capacity to reflect internally on its own awareness? It’d be funny, wouldn’t it, if that sort of intelligence had no analogy from which to draw with respect to self-reflection, and thus found no need for it? (A computer’s ability to investigate whether or not a correct analogy had been drawn is not the same thing.) Or, if such an intelligence formulated its own self-generated feedback loop in order to investigate its self......is that really the same thing as a human’s proclivity towards introspection, which has no particular objective whatsoever?

    Also, if it be granted that emotions, or what we call feelings, are not cognitions, that the drawing of analogies is always a form of synthetic cognition, and that all C.I./A.I. “understanding” is predicated on analogy.....how can such an intelligence ever have the full representation of understanding belonging to humans, if it cannot be assigned an innate value system?