• TheMadFool
    6.3k
    One well-known test for Artificial Intelligence (AI) is the Turing Test in which a test computer qualifies as true AI if it manages to fool a human interlocutor into believing that s/he is having a conversation with another human. No mention of such an AI being conscious is made.

    A p-zombie is a being that's physically indistinguishable from a human but lacks consciousness.

    It seems almost impossible not to believe that a true AI is just a p-zombie, in that both succeed in making an observer think they're conscious.

    The following equality based on the Turing test holds:

    Conscious being = True AI = P-Zombie

    If so, we're forced to infer either that true AI and p-zombies are conscious or that there is no such thing as consciousness.
  • Hanover
    5.7k
    If so, we're forced to infer either that true AI and p-zombies are conscious or that there is no such thing as consciousness.
    TheMadFool

    If Bob shoots Joe and it in every way appears motivated by jealousy, does there still remain a possibility that it was not? If you concede there is such a possibility, then you are conceding that behavior is not a perfect reflection of intent, and more importantly, that intent is unobservable.

    The point being that behavior does not tell us exactly what the internal state is, which means it's possible that one have a behavior and not have an internal state and it's possible that one have an internal state and have no behavior.
  • Outlander
    363
    If something damp leaves moisture on my finger when touched is there no such thing as water?
  • bongo fury
    475
    I heard there is a growing online campaign to seek a posthumous apology from Turing for his Test.

    :snicker:
  • InPitzotl
    276
    One well-known test for Artificial Intelligence (AI) is the Turing Test in which a test computer qualifies as true AI if it manages to fool a human interlocutor into believing that s/he is having a conversation with another human. No mention of such an AI being conscious is made.

    A p-zombie is a being that's physically indistinguishable from a human but lacks consciousness.
    TheMadFool
    I think it's important to point out that those are two completely different things.

    "All" a computer needs to do to pass a Turing Test is say the right words in response to words a judge says. In the way it's typically imagined (there's debate on what a TT really is; let's just ignore that), the thing behind the terminal might be a human or might be a computer. Either way the only access the judge has to he/she/it per the rules is texting via a terminal. So a judge might ask something like, what's a good example of an oblong yellow shaped fruit? And if the response is "A banana", that's something a human could have said. Call that "level 1".

    But here's the problem. If we take a "level 1" program and just shove it into a robot, what do you suppose we'd get? It'd be silly to presume you'd get anything other than this... a (hopefully) non-moving robot, incapable of doing anything useful, except possibly over a single channel of communication where it can receive and send sets of symbols equivalent to native language text (e.g. English). If, say, I brought a bowl of fruit, placed it in front of the robot, and typed into this channel, "show me which one is the banana"... then it doesn't matter how well the thing did in the Turing Test, I should expect no action. And that kind of feeble robot is certainly incapable of fooling anyone that it's a human. Before the "oh, that's just a minor detail... suppose we", let's actually suppose we. What do we have to add to this robot to get it to fool a human?

    In this example, one thing we might expect the robot to be capable of is, when asked to pick out the banana from the bowl of fruit, that the robot would just reach out and either touch the banana or pick it up. So let's say it does that... then what more is it doing than level 1? Well, it's not just processing string data... now it's observing the environment, associating requests with an action, identifying the proper thing to do when asked to show me which is the banana, and being capable of moving its robot arm towards the banana based on its perception. Now it's not just spitting out words; it's a more semantic thing. It's not just the string "banana" that it has to respond with, but rather, it has to respond with "showing me" (aka, performing some action that counts as "showing me") one of a set of objects (i.e., be capable of identifying where that set is) that is the actual banana (associating the word banana with "the entity" that the word is about). That's a bit more involved than just passing a Turing Test... the two aren't equivalent. You need to do a lot more to build a good p-zombie than just trick people behind a terminal. P-zombies are at least "level 2."
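
    That "level 1" behavior can be sketched as nothing more than a string-to-string mapping. The lookup table and deflection line below are invented for illustration; real chat programs are far more elaborate, but the input and output types are the same:

```python
# Toy illustration of "level 1": a system whose inputs and outputs are
# nothing but strings. The lookup table below is invented for this sketch;
# real chat programs are far more elaborate, but the types are the same.

RESPONSES = {
    "what's a good example of an oblong yellow shaped fruit?": "A banana",
}

def level1_reply(prompt: str) -> str:
    """Map an input string to an output string; deflect if unknown."""
    return RESPONSES.get(prompt.lower().strip(), "Hmm, could you rephrase that?")

print(level1_reply("What's a good example of an oblong yellow shaped fruit?"))
# Nothing in this program can point at a banana: there is no channel
# through which a bowl of fruit could ever reach it.
```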
  • Sir2u
    2.1k
    So a judge might ask something like, what's a good example of an oblong yellow shaped fruit? And if the response is "A banana", that's something a human could have said. Call that "level 1".
    InPitzotl

    I think that Turing meant that you could have a conversation, as opposed to a question/answer session, with it. It would have to access vast amounts of data quickly and come up with the correct sentences and phrases to be able to convince anyone that it was a human.

    But here's the problem. If we take a "level 1" program and just shove it into a robot, what do you suppose we'd get? It'd be silly to presume you'd get anything other than this... a (hopefully) non-moving robot,
    InPitzotl

    I don't remember ever reading anything he wrote or that was written about him that could indicate that he thought AI would be judged by its use in robots. The test was actually based on a game.

    when asked to pick out the banana from the bowl of fruit, that the robot would just reach out and either touch the banana or pick it up. So let's say it does that... then what more is it doing than level 1? Well, it's not just processing string data... now it's observing the environment, associating requests with an action, identifying the proper thing to do when asked to show me which is the banana, and being capable of moving its robot arm towards the banana based on its perception.
    InPitzotl

    The observation creates more strings of data for it to process and make decisions about. Any artificial sense would produce data to be processed. A true AI would have to have a lot of processing power just for that. But for a robot to be able to move you need very little processing power. The two things are not equivalent: AI is not a robot, and robots do not have to be AI.

    That's a bit more involved than just passing a Turing Test... the two aren't equivalent.
    InPitzotl

    If I just want to talk to the little black box on my desk because I am lonely, the test works fine. The test does not say that the AI has to be there in PERSON to convince someone that it is human. That would involve more than just AI, things like appearance, smell, body movement and lots of other human quirks.
  • InPitzotl
    276
    I think that Turing meant that you could have a conversation
    Sir2u
    I think you're missing the point. Yes, the TT involves having a conversation; but the conversation is limited only to a text terminal... that is, you're exchanging symbols that comprise the language. But the TT involves being indistinguishable from a human to a (qualified) judge. And if your computer program cannot answer questions like this, then it can't pass the TT. Over a terminal, though, all you can do is exchange symbols, by design.
    It would have to access vast amounts of data quickly and come up with the correct sentences,
    Sir2u
    Mmm.... it's a little more complex than this. Fall back to the TT's inspiration... the imitation game. Your goal is to fool a qualified judge. So sure, if it takes you 10 minutes to figure out that a banana is a good response to an oblong yellow fruit, that's suspicious. If it takes you 10 seconds? Not so much. But if it takes you 5 seconds to tell me what sqrt(pi^(e/phi)) is to 80 decimal places, that, too, is suspicious. You're not necessarily going for speed here... you're going for faking a human. Speed where it's important, delay where it's important.
    I don't remember ever
    Sir2u
    I'm not writing a paper discussing Turing; I'm responding to the OP in a thread on a forum. In that post, there was one paragraph talking about an AI passing a TT. The next paragraph, we're talking about p-zombies. All I'm doing is pointing out that these are completely different problem spaces; that passing the TT is woefully inadequate for making you a good p-zombie.
    The observation creates more strings of data for it to process, and make decisions about.
    Sir2u
    Technically, yes, but that's a vast oversimplification. It's analogous to describing the art of programming as pushing buttons (keys on a keyboard) in the correct sequence. Yeah, programming is pushing buttons in the right sequence, technically... but the entire problem is about how you push the buttons in what sequence to achieve what goal.

    Think of this as skillsets. Being able to talk goes a long way to being a good p-zombie, but it's only one skillset; and mastering just that skill isn't going to fool anyone into thinking "you're" conscious. That "more strings of data" and "decisions about" you're talking about here is another skillset; say, it's a vision analog and you equipped your robot with a camera. That's all well and good, but that skillset is literally about discerning objects (and like "useful things") from the images. English is a one dimensional thing with properties like grammar; images are two dimensional things which convey information about three dimensional objects, which has no grammar... but rather, there are rules to both vision per se and to object behaviors.

    There's also acting; and reducing that to just moving a part is another oversimplification. Touching a banana isn't a function of "moving a part", but "moving a part appropriately such that it attains the goal of touching the banana"... that involves the vision again, and planning, and the motion, and adjustment, and has to tie in to object recognition and world modeling as well as relating that English text from that first skillset to appropriate actions to respond with. That's another skillset, and it's a distinct one.

    Furthermore, once your p-zombie starts interacting, it's not truly "just a computer" with "inputs" and "outputs" any more... it's really something more like "two computers" dancing... one with sensors and servos and silicon, but the other, the dance partner, is made of objects, object properties, and physical mechanics. The interaction required to just act towards attaining a goal, a must for fooling a human into thinking you're a conscious agent, is so mechanically interfused with what you're acting with that the entire system becomes important (consider that your actions have effects on your senses and that modeling that, which requires modeling your dance partner, is a required part of the skillset).
    The test does not say that AI has to convince someone
    Sir2u
    Sure, but that's required to be a p-zombie.
    That would involve more than just AI, things like appearance, smell, body
    Sir2u
    ...well not quite. The p-zombie isn't trying to fool you into thinking that it's a human; it's just fooling you into thinking it's conscious.
  • TheMadFool
    6.3k
    If Bob shoots Joe and it in every way appears motivated by jealousy, does there still remain a possibility that it was not? If you concede there is such a possibility, then you are conceding that behavior is not a perfect reflection of intent, and more importantly, that intent is unobservable.

    The point being that behavior does not tell us exactly what the internal state is, which means it's possible that one have a behavior and not have an internal state and it's possible that one have an internal state and have no behavior.
    Hanover

    So the Turing test is flawed? Behavior is not a reliable indicator of consciousness? Doesn't that mean p-zombies are possible and doesn't that mean physicalism is false?

    You are taking the Turing test too literally. The idea of an AI fooling a human has a much broader scope - it includes all interactions between the AI and humans whether just chatting over a network or actual physical interaction.
  • InPitzotl
    276
    You are taking the Turing test too literally.
    TheMadFool
    I can only reply that I've seen people choke on this point. Also, the term Turing Test is a term of art with a literal meaning, so I'm not sure how taking it literally can be a bad thing. p-zombie is also a term of art with a distinct meaning. Surely it's better to just be clear, especially if people get confused, right?
  • TheMadFool
    6.3k
    I can only reply that I've seen people choke on this point. Also, the term Turing Test is a term of art with a literal meaning, so I'm not sure how taking it literally can be a bad thing. p-zombie is also a term of art with a distinct meaning. Surely it's better to just be clear, especially if people get confused, right?
    InPitzotl

    In my humble opinion...

    It's like a finger pointing away to the moon. Don't concentrate on the finger or you will miss all that heavenly glory. — Bruce Lee
  • InPitzotl
    276
    In my humble opinion...
    "It's like a finger pointing away to the moon. Don't concentrate on the finger or you will miss all that heavenly glory." — Bruce Lee
    TheMadFool
    Alright, let's turn this into a question then. In your original post, you said this:
    the Turing Test in which a test computer qualifies as true AI if it manages to fool a human interlocutor into believing that s/he is having a conversation with another human.
    TheMadFool
    ...after which you offered:
    The following equality based on the Turing test holds:
    Conscious being = True AI = P-Zombie
    TheMadFool
    ...so, that reads like it possibly suggests this:

    Conscious being = True AI = Passes Turing Test = Fools human interlocutors into believing you are having conversations with another human = P-Zombie

    So the question is... was that your intent in this thread?
  • TheMadFool
    6.3k
    Indeed, those are my words, but surely you could've taken them in a much broader setting.
  • InPitzotl
    276

    I'm not after a gotcha or a fight; just demonstrating that there's genuine room for confusion here. I'll take your response as a no, so hopefully that would clear things up about your intent.
  • TheMadFool
    6.3k
    I'm not after a gotcha or a fight; just demonstrating that there's genuine room for confusion here. I'll take your response as a no, so hopefully that would clear things up about your intent.
    InPitzotl

    Are you saying my post is confusing? Well, I did try to keep my wordcount to a minimum. Perhaps that's where the fault lies.
  • bongo fury
    475
    Well, I did try to keep my wordcount to a minimum.
    TheMadFool

    :ok:

    Perhaps that's where the fault lies.
    TheMadFool

    Never, ever.
  • VagabondSpectre
    1.9k
    What did the reanimated corpse Rene Descartes say when asked if he was conscious?

    Reveal
    I zom, therefore I be!


    On a serious note, I'm inclined to agree with your last statement from the OP.

    Maybe our reports of our own conscious experiences are those of a p-zombie; we're hard-wired to believe that we are conscious.

    Silly though it may seem, the notion does hold with the way we work on a neurological level: we have stored memories that exist as a somewhat static record, and every few milliseconds our brains generate new frames of cognition (perception, learning, inference, action).

    If we imagine cutting the lifespan of an individual into a series of discrete frames (quite a lot of them, I guess), as if we could freeze time, is a single frame of a live person "conscious"? (By anyone's measure.)

    If we merely juxtapose two adjacent frames and flick back and forth between two states (maybe there is some measurable change, like some neurons sending or receiving a new signal), does that create consciousness? Perhaps on the most minimal order of the stuff that consciousness is made of?

    The hard problem is pretty hard indeed... Even panpsychism starts to make sense after too long...

    I tend to err on the side of self-delusion. That consciousness is something "real" (as in, woo-woo magic and other presumptive metaphysical rigamarole) is almost certainly delusional. That it's "something" at all beyond a mere report or belief does seem somewhat plausible, but I would not be surprised if we're just self-delusion all the way down.
  • TheMadFool
    6.3k
    Maybe our reports of our own conscious experiences are those of a P-zombie; we're hard-wired to believe that we are conscious.
    VagabondSpectre

    Someone in another thread had the opinion that consciousness/mind could be an illusion. I take it that he meant there's a physical basis for the phenomenon of consciousness.

    This idea of consciousness being an illusion reminds me of Searle's Chinese Room argument. Searle contends that though a true AI will be able to fool a human in conversation, that in itself doesn't prove that it has the capacity to understand like humans do. All the AI is doing, for Searle, is mechanically manipulating symbols; the AI never actually understands the meanings of the symbols and their combinations.

    That raises the important question of what understanding is and, more importantly, whether it is something beyond the ability of a computer AI. Speaking from my own experience, understanding/comprehension seems to start off at the very basic level of matching linguistic symbols (words, spoken or written) to their respective referents, e.g. "water" is matched to "cool flowing substance that animals and plants need", etc. This is clearly something a computer can do, right? After such a basic cognitive vocabulary is built, what happens next is simply the recognition of similarities and differences: continuing with my example, a crowd of fans moving in the streets will evoke, by its behavior, the word and thus the concept "flowing", or a fire will evoke the word/concept "not cold", and so on. In other words, there doesn't seem to be anything special about understanding in the sense that it involves something more than symbol manipulation and the ability to discern like/unlike things.
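
    That picture of understanding (words matched to referent features, then new experiences evoking words by feature overlap) can be sketched as follows; the vocabulary and feature sets here are invented examples, not a real model:

```python
# Toy sketch of the picture of understanding described above: words are
# matched to sets of referent features, and a new experience then evokes
# the word whose referent it most resembles. All entries are invented.

LEXICON = {
    "water": {"cool", "flowing", "substance"},
    "fire": {"hot", "bright", "not cold"},
}

def evoke(observed: set) -> str:
    """Return the word whose referent shares the most features, if any."""
    best = max(LEXICON, key=lambda w: len(LEXICON[w] & observed))
    return best if LEXICON[best] & observed else None

# A crowd of fans "flowing" through the streets evokes the same word as water:
print(evoke({"flowing", "loud"}))  # -> water
print(evoke({"hot", "bright"}))    # -> fire
```

    Whether this kind of lookup-plus-similarity amounts to understanding is, of course, exactly what the Chinese Room dispute is about.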

    If we were to agree with Searle on this, then the onus of proving that understanding/comprehension is not just symbol manipulation falls on Searle's supporters.

    :smile:
  • Harry Hindu
    3.3k
    A p-zombie is a being that's physically indistinguishable from a human but lacks consciousness.
    TheMadFool
    What does it mean to be physically indistinguishable? Are there other ways of being distinguishable or indistinguishable?

    If so, we're forced to infer either that true AI and p-zombies are conscious or that there is no such thing as consciousness.
    TheMadFool
    I don't know. What is "consciousness"?
  • TheMadFool
    6.3k
    What does it mean to be physically indistinguishable? Are there other ways of being distinguishable or indistinguishable?
    Harry Hindu

    Good question, but how might I word it to be more explicit than that? Perhaps physical in the sense that the p-zombie has a head, trunk, limbs, internal organs - identical in every bodily part?

    What is "consciousness"?
    Harry Hindu

    What is consciousness? Perhaps best elucidated as the difference between sleep and awake states.
  • Harry Hindu
    3.3k
    So are we talking about distinguishing between body types or waking and sleeping states?

    My computer goes to sleep sometimes and then wakes up when I move the mouse or hit a key on the keyboard.

    What if someone is dreaming? Are they conscious?
  • TheMadFool
    6.3k
    So are we talking about distinguishing between body types or waking and sleeping states?
    Harry Hindu

    Indeed, what else could "physical" mean?

    My computer goes to sleep sometimes and then wakes up when I move the mouse or hit a key on the keyboard.
    Harry Hindu

    Could the computer be conscious?

    What if someone is dreaming? Are they conscious?
    Harry Hindu

    You overlooked non-REM sleep.
  • Harry Hindu
    3.3k
    So are we talking about distinguishing between body types or waking and sleeping states?
    — Harry Hindu

    Indeed, what else could "physical" mean?
    TheMadFool
    Waking and sleeping states aren't physical states?

    Could the computer be conscious?
    TheMadFool
    Well, you did define consciousness as the difference between waking and sleeping states, so it seems to be the case, yes.
  • TheMadFool
    6.3k
    Waking and sleeping states aren't physical states?
    Harry Hindu

    I want to clarify what consciousness is.

    Also, what are asleep and awake states then, if not physical?

    Well, you did define consciousness as the difference between waking and sleeping states, so it seems to be the case, yes.
    Harry Hindu

    So, a standard issue computer is capable of consciousness? I guess we're not seeing eye to eye on what consciousness means.

    Why don't you give it a go? What is consciousness to you?
  • Harry Hindu
    3.3k
    So, a standard issue computer is capable of consciousness? I guess we're not seeing eye to eye on what consciousness means.
    TheMadFool
    It's not you and I that aren't seeing eye to eye. You aren't seeing eye to eye with your previous statement.

    What makes it impossible for a "standard issue computer" to be capable of consciousness if you defined consciousness as the difference between waking and sleeping states? If it is still impossible even though you defined it as such, then is consciousness something more than just the difference between waking and sleeping states, or something else entirely that has nothing to do with waking and sleeping states?

    Why don't you give it a go? What is consciousness to you?
    TheMadFool
    I got to what consciousness is for me by asking these questions that I'm asking you to myself. I think that if I tell you what I think consciousness is, it would turn into an argument. Let's see where these questions lead us.

    Also, what are asleep and awake states then, if not physical?
    TheMadFool
    Then why are you trying to determine if consciousness exists by distinction in body type and function, rather than being awake or asleep? I could build a humanoid robot that goes to sleep and wakes up, like a "standard issue computer". Is it conscious? If P-Zombies look and behave like humans, which includes going to sleep and waking up, then p-zombies are conscious.
  • TheMadFool
    6.3k
    What makes it impossible for a "standard issue computer" to be capable of consciousness if you defined consciousness as the difference between waking and sleeping states?
    Harry Hindu

    When I mentioned sleep and awake states I thought you'd immediately know that the domain of discussion is humans and not anything else.

    I think that if I tell you what I think consciousness is, it would turn into an argument.
    Harry Hindu

    I'm all eyes and ears. We disagree. I'd like to know why and for that I need to know what you think consciousness is.
  • Harry Hindu
    3.3k
    When I mentioned sleep and awake states I thought you'd immediately know that the domain of discussion is humans and not anything else.
    TheMadFool
    I think that's part of the problem - anthropomorphism.

    I thought you were talking about p-zombies too, and the point still applies to them:
    If P-Zombies look and behave like humans, which includes going to sleep and waking up, then p-zombies are conscious.
    Harry Hindu
  • TheMadFool
    6.3k
    I think that's the problem.

    I thought you were talking about p-zombies too, and the point still applies to them:
    If P-Zombies look and behave like humans, which includes going to sleep and waking up, then p-zombies are conscious.
    Harry Hindu

    That's begging the question.
  • Harry Hindu
    3.3k
    It's applying your definition, not mine.

    I asked you this:
    If it is still impossible even though you defined it as such, then is consciousness something more than just the difference between waking and sleeping states, or something else entirely that has nothing to do with waking and sleeping states?
    Harry Hindu

    If there is something more, then what is it - that it has to be a human? Then by definition p-zombies aren't conscious because they aren't humans.
  • TheMadFool
    6.3k
    It's applying your definition, not mine.
    Harry Hindu

    I didn't provide a definition. If I did anything, it was to give you just a rough idea of what I think consciousness is. Again, in the hope of coming to some agreement with you, what is your definition of consciousness?
  • Harry Hindu
    3.3k
    I didn't provide a definition. If I did anything, it's give you just a rough idea of what I think consciousness is.
    TheMadFool
    Then you mistook what I was asking for. I wasn't asking for a rough idea but a specific one, since you seemed to know the specifics if you can behave like the arbiter of what is conscious and what isn't. If you've already determined that you must be a human to be conscious, then you've answered your own question.

    Your qualifiers were waking/sleeping and being human. P-zombies fit the former but not the latter, therefore p-zombies being conscious is false.

    If you're going to restrict the discussion to only humans then you're not going to agree with my definition, but then that would exclude p-zombies from the discussion as well, and your thread is inadequately named.
  • Sir2u
    2.1k
    I think you're missing the point. Yes, the TT involves having a conversation; but the conversation is limited only to a text terminal... that is, you're exchanging symbols that comprise the language. But the TT involves being indistinguishable from a human to a (qualified) judge.
    InPitzotl

    The original game was to try to distinguish between a man and a woman using only written text answers to questions; obviously, speaking would have made it too easy.
    I think that you missed the point of exactly what he said. Look it up.
    Turing did not set any limit on how the test would be carried out with a computer; his vision of a computer capable of fooling a human was 50 years from his time. I doubt that he thought that computers would just stay as text terminals, even though those did not exist in his time. His statement was futuristic.

    Mmm.... it's a little more complex than this. Fall back to the TT's inspiration... the imitation game. Your goal is to fool a qualified judge. So sure, if it takes you 10 minutes to figure out that a banana is a good response to an oblong yellow fruit, that's suspicious. If it takes you 10 seconds? Not so much. But if it takes you 5 seconds to tell me what sqrt(pi^(e/phi)) is to 80 decimal places, that, too, is suspicious. You're not necessarily going for speed here... you're going for faking a human. Speed where it's important, delay where it's important.
    InPitzotl

    You can program response time and delays into a computer quite easily, an AI would have a basic set of rules to follow for most of its operations. Just as it would need rules to follow when picking something up.
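
    A minimal sketch of that point, assuming an invented delay rule (a flat pause plus time per typed character; nothing Turing specified):

```python
# Toy sketch: human-plausible response timing is easy to program.
# The delay model below (flat "thinking" pause plus seconds per typed
# character) is an invented rule for illustration only.
import time

THINKING_TIME = 1.5  # seconds of flat "pause to think"
TYPING_SPEED = 0.2   # seconds per character, roughly human typing

def humanlike_send(answer, send=print, sleep=time.sleep):
    """Emit an answer only after a human-plausible delay."""
    sleep(THINKING_TIME + TYPING_SPEED * len(answer))
    send(answer)

humanlike_send("A banana")  # arrives about 3.1 s later, not instantly
```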

    Technically, yes, but that's a vast oversimplification. It's analogous to describing the art of programming as pushing buttons (keys on a keyboard) in the correct sequence. Yeah, programming is pushing buttons in the right sequence, technically... but the entire problem is about how you push the buttons in what sequence to achieve what goal.
    InPitzotl

    So an AI would need trillions of lines of code instead of millions, which brings us back to the part about processing power. Sequences using IF/THEN/ELSE would decide which buttons were pressed and when; again, trillions of them.

    Think of this as skillsets.
    InPitzotl

    Computer learning has come a long way in the last few years. Some of them can and do recognize objects. Some of them can and do pick them up and manipulate them, with great competence. Put the two together and there is your machine.

    While accomplishing this has not happened in the 50 years Turing predicted, it is getting closer every day.

    But one thing that most people seem to forget when talking about the theoretical p-zombie is that it is not written anywhere that it has to be a physical object, only that it has to be able to convince real people that it is a real person. A hologram would probably be able to do that.
    If someone threw a banana at you, would you try to catch it or dodge it? Fear of contact from a physical object would be just as convincing as actually catching it.