• Shawn
    12.6k
    Turing devised a system to analyze behavior that, in my opinion, self-affirms a negative existential. In terms of behavior, a computer, according to Turing, was always in the mode of imitating a human being. There's even a movie about this, called The Imitation Game. But computerized behavior is neither a game nor something derived from deeper analysis, as if standing in front of a mirror and getting a higher-resolution picture of your features, in my opinion.

    And so, I have thought about this deeply, and I think imitation isn't all that Turing machines would be able to accomplish. Thinking a little deeper: if a general-artificial-intelligence computer can outwardly define the behavior of a human being, then doesn't that de facto prove that it would have to have its own sentient behavior towards this human being, gazing into their own picture without any surroundings?
  • Shawn
    12.6k
    Generic psychologies need not apply.
  • fishfry
    2.6k
    Turing devised a system to analyze behavior that, in my opinion, self-affirms a negative existential. In terms of behavior, a computer, according to Turing, was always in the mode of imitating a human being. There's even a movie about this, called The Imitation Game. But computerized behavior is neither a game nor something derived from deeper analysis, as if standing in front of a mirror and getting a higher-resolution picture of your features, in my opinion. (Shawn)

    The phrase "imitation game" is from Turing himself, in his 1950 paper.

    The new form of the problem can be described in terms of a game which we call the ‘imitation game’. It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman.

    https://academic.oup.com/mind/article/LIX/236/433/986238

    What leaps out (to me, anyway) is that a closeted gay man in 1950s England was interested in how well a third party could determine someone's gender. After all, his entire life was premised on being able to pass as heterosexual in an intolerant society.

    Secondly, for Turing the essence of AI is the ability to deceive. But we know that some humans are better at deception than others: con artists, sociopaths, and successful poker players.

    Perhaps in retrospect we can say that the Turing test tells us as much about Turing's own psychology (and psychological response to the social attitudes of his time) as it does about whether machines can think. Perhaps we need a new standard. What if there were a true general AI that was honest to a fault? "Are you human?" "No, I'm a program running on a supercomputer, thanks for asking." End of game.

    And so, I have thought about this deeply, and I think imitation isn't all that Turing machines would be able to accomplish. Thinking a little deeper: if a general-artificial-intelligence computer can outwardly define the behavior of a human being, then doesn't that de facto prove that it would have to have its own sentient behavior towards this human being, gazing into their own picture without any surroundings? (Shawn)

    Not in my opinion. The weak point in the Turing test is the human questioner! The first chatbot, ELIZA, was written by computer scientist Joseph Weizenbaum to show that machines CAN'T think: simple chatbot logic can mimic human conversation without any understanding. You'd say, "I hate my mother," and it would respond, "Tell me more about your mother."

    Weizenbaum was shocked to find that many of the people he showed the program to told it their deepest secrets and believed they were speaking to a psychologist.

    ELIZA's creator, Weizenbaum regarded the program as a method to show the superficiality of communication between man and machine, but was surprised by the number of individuals who attributed human-like feelings to the computer program, including Weizenbaum’s secretary.

    https://en.wikipedia.org/wiki/ELIZA
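    Weizenbaum's point is easy to see in code. Here is a minimal ELIZA-style sketch, assuming a few hypothetical keyword rules (these are illustrative, not Weizenbaum's actual DOCTOR script): the program understands nothing, it just matches patterns and reflects words back.

```python
import re

# A few illustrative ELIZA-style rules (hypothetical examples,
# not Weizenbaum's original DOCTOR script): a regex to match the
# user's input, and a reply template that reflects captured words back.
RULES = [
    (r"\bI hate my (\w+)", "Tell me more about your {0}."),
    (r"\bI am (.+)", "How long have you been {0}?"),
    (r"\bI feel (.+)", "Why do you feel {0}?"),
]

def eliza_reply(text):
    """Return a canned reflection for the first matching rule."""
    for pattern, template in RULES:
        m = re.search(pattern, text, re.IGNORECASE)
        if m:
            return template.format(*m.groups())
    return "Please go on."  # default when no pattern matches

print(eliza_reply("I hate my mother"))  # Tell me more about your mother.
```

    A dozen such rules are enough to sustain the illusion of a listener, which is exactly why Weizenbaum's users confided in it.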

    The question is whether ANY behavioral standard can be regarded as evidence of sentience. Philosophers have invented the concept of a philosophical zombie: a lifelike robot with perfect human behavior and no inner life at all.

    https://en.wikipedia.org/wiki/Philosophical_zombie

    We see this in contemporary society. People are all too willing to believe that computers can think, that true AI is just around the corner, that "computers will soon be smarter than us." It's just hype. We don't even know what consciousness is. How do you know your next door neighbor is conscious? "Hi fishfry, nice day." "Hi Fred, sure is." "See you later." "You too." I think to myself: "What a sentient fellow. He surely must possess what Searle would call intentionality." But of course I know no such thing. He looks human so I assume he's conscious on no evidence at all. Computer scientist and blogger Scott Aaronson calls this "meat chauvinism." He agrees with Turing, that machines might someday think. I disagree.
  • Gregory
    4.6k
    Kripke asked whether you could tell, by looking at the hardware, whether a computer was doing addition or "quaddition". I don't know. Software does seem to have a life of its own. Although it seems dependent on hardware, it might have the degree of consciousness a black widow spider has.

Welcome to The Philosophy Forum!

Get involved in philosophical discussions about knowledge, truth, language, consciousness, science, politics, religion, logic and mathematics, art, history, and lots more. No ads, no clutter, and very little agreement — just fascinating conversations.