• Heiko
    519
    And if I think about it: this is what is pretty annoying about this discussion and the question "But is it conscious?". First, one should be able to answer whether that question would matter at all if the thing constantly behaved as if it were conscious. Everyone knows it could not be decided by asking what is "true".
    So to put it clearly: we want computer-slaves that can solve our problems without us doing much work.
    What are the risks? What is the potential profit worth?
    It seems the last thing "philosophers" would consider is asking the machine whether it had any problem with doing so. That is significant.
  • TheMadFool
    13.8k
    As I see it there are numerous fields where computers generate intelligent solutions for particular problems.Heiko

    Can you give me some examples?

    Today there are software engineers running around trying to tweak natural language processing as they know where the flaws are.Heiko

    Kindly elaborate.
    I would take it pretty seriously if they once say: "Okay, now we really cannot tell the difference anymore."Heiko

    You would, and even I would (probably), but how would one know whether a computer is conscious in the sense we are?
  • A Raybould
    86
    There are a great many things that are unobservable, yet widely regarded as plausible, such as electrons, viruses, and, apparently, jealousy itself. One can, of course, take the position that it is possible that none of them are real, but that road, if taken consistently, leads only to solipsism. To invoke this line of argument over only some unobservables is not necessarily wrong (skepticism is an important trait) but it can also be tendentious, or an excuse for avoiding the issue. In particular, I regard it as tendentious, if not an outright inconsistency, to invoke zombie-conceivability arguments in the case of putative future AI but not in the case of people (or people other than oneself, if you are certain about your own case.)

    With regard to your specific claim that there can be behavior without internal state: certainly, but once you have observed more than a few behaviors that do seem to be dependent on internal state (e.g. learned behavior, or any behavior apparently using memories), then the possibility that none of the observed behaviors were actually state-dependent becomes beyond-astronomically improbable.
  • A Raybould
    86
    how would one know whether a computer is conscious in the sense we are?

    Do you know that I am conscious in the same way that you are? (or that any other person is, for that matter.) If so, then apply whatever method you used to come to that conclusion to a computer - and if that method depends on me being human and is not applicable to computers, then you would be begging the question.
  • fishfry
    3.4k
    how would one know whether a computer is conscious in the sense we are?TheMadFool

    Haven't been following discussion but noticed this. If I may jump in, I would say that this is THE core question.

    I put it in the following even stronger form: How do I know my neighbor is self-aware and not a p-zombie? I leave the house in the morning and see my neighbor. "Hi, neighbor." "Hi ff. Looks like a nice day." "Yeah, sure is." "See you later." "You too!" I cheerfully respond as I drive off with a wave. What a sentient fellow, I think to myself. He surely must possess what Searle would call intentionality. This is my little joke to myself: how little evidence we accept for the sentience of people. The interrogator is always the weak link in real-life Turing test experiments. The humans are always TOO willing to accept that the computer's a human.

    In truth, I operate by the principle that my neighbor is a fellow human, whatever other differences we may have; and that all fellow humans are self-aware. That's my unspoken axiom.

    Computer scientist and blogger Scott Aaronson calls this attitude meat chauvinism and he has a point.

    I have no way of knowing if my neighbor is self-aware, let alone some inanimate program. But at the same time I must admit that a thing's being different from me does not count as evidence that it is not intelligent. If self-awareness can arise in a wet, messy environment like the brain, why shouldn't it arise in a box of circuit boards executing clever programs?

    Personally I don't think machines can ever be conscious; but still I do admit my human-centric bias. I have no proof that self-awareness can only arise in wetware. Who could begin to know a thing like that? The wisest among us don't know.

    And of course this was Turing's point. As a closeted gay man in 1950s England, he argued passionately for the acceptance of those who were different. That's how I read his 1950 paper, not only mathematically but also psychologically.

    If we define self-awareness as a purely subjective experience, then by definition it is not accessible to anyone else. There is no hope of having an actual self-awareness detector. Turing offers an alternative standard: Behavioral. If we interact with it and can't tell the difference, then there is no difference.

    Some days I do wonder about my neighbor.
  • TheMadFool
    13.8k
    Do you know that I am conscious in the same way that you are? (or that any other person is, for that matter.) If so, then apply whatever method you used to come to that conclusion to a computer - and if that method depends on me being human and is not applicable to computers, then you would be begging the question.A Raybould

    @fishfry

    The difficulty with employing a method to detect consciousness is that such a method is completely behavior-dependent and that raises the specter of p-zombies, non-conscious beings that can't be distinguished from conscious beings behaviorally.

    The Turing test, as a way of identifying AI (conscious), simply states that if a person assessing a candidate AI can't tell it apart from a human - is fooled into thinking the AI is human - then the AI has passed the test and, to all intents and purposes, is conscious.

    P-zombies are treated differently: even if they pass the Turing test adapted to them, they're thought not to deserve the status of conscious beings.

    In short, something (AI) that's worlds apart from a flesh-and-bone human is considered worthy of the label of consciousness, while something that's physically identical to us (p-zombies) isn't, and in both cases the test for consciousness is the same - behavior-based.
  • fishfry
    3.4k
    The difficulty with employing a method to detect consciousness is that such a method is completely behavior-dependent and that raises the specter of p-zombies, non-conscious beings that can't be distinguished from conscious beings behaviorally.TheMadFool

    Yes.

    The Turing test, as a way of identifying AI (conscious), simply states that if a person assessing a candidate AI can't tell it apart from a human - is fooled into thinking the AI is human - then the AI has passed the test and, to all intents and purposes, is conscious.TheMadFool

    Yes but with a big caveat. You used the word conscious but Turing uses the word intelligent. Intelligence is a behavior and consciousness is a subjective state. So we could use the Turing test to assert that we believe an agent to be intelligent, while making no claim about its consciousness. In fact asking if an agent other than yourself is conscious is essentially meaningless, since consciousness (or self-awareness) is a subjective state.

    So that's a semantic difference. The Turing test evaluates intelligence (whether it does that successfully or not is a matter for discussion). But it makes no claims to evaluate consciousness nor do we think any such test is possible, even in theory. Not for an AI and not for my neighbor, who's been acting strangely again.

    P-zombies are treated differently: even if they pass the Turing test adapted to them, they're thought not to deserve the status of conscious beings.TheMadFool

    Thought by whom? You and me, I think. But Turing and Aaronson would say, "If it acts intelligent it's intelligent. And nobody can know about consciousness."

    On what rational basis do we say our neighbors are conscious, but that a general AI, if one ever passed a more advanced and clever version of the Turing test, is "intelligent but not conscious"? That sounds like an unjustified value judgment; a prejudice, if you will. Meat chauvinism.

    In short, something (AI) that's worlds apart from a flesh-and-bone human is considered worthy of the label of consciousness, while something that's physically identical to us (p-zombies) isn't, and in both cases the test for consciousness is the same - behavior-based.TheMadFool

    Isn't that just an irrational prejudice? They used to say the same about certain ethnic groups. It's the same argument about dolphins. Just because they don't look like us doesn't mean they're not "considered worthy of the label of consciousness." What is your standard for that?

    I hope you see the problem here. If we can't tell a human from a p-zombie then what's the difference? I'm not advocating that argument, I disagree with it. I just admit that I can't articulate a rational defense of my position that doesn't come down to "Four legs good, two legs bad," from Orwell's Animal Farm.
  • TheMadFool
    13.8k
    You used the word conscious but Turing uses the word intelligent. Intelligence is a behavior and consciousness is a subjective state.fishfry

    I realize that AI = artificial intelligence and not consciousness.

    Firstly, even if that's the case, we still have the problem of inferring things about the mind - intelligence, consciousness, etc. - from behavior alone. The inconsistency here is that on one hand (AI), behavior is sufficient to conclude that a given computer has a mind (intelligence-wise), and on the other hand (p-zombies), it isn't (p-zombies have no minds).

    Secondly, my doubts notwithstanding, intelligence seems to be strongly correlated with consciousness - the more intelligent something is, the more capacity for consciousness.

    In addition, and more importantly, aren't computers already more "intelligent" in the sense of never making a logical error? That is, Turing must have had something else in mind regarding artificial intelligence - it isn't about logical thinking, which we all know for a fact even run-of-the-mill computers can beat us at.

    What do you think this something else is if not consciousness? Consciousness is the only aspect of the mind that's missing from our most advanced AI, no? The failure of the best AI to pass the Turing test is not because they're not intelligent but because they're not, or are incapable of mimicking, consciousness.

    In short, the Turing test, although mentioning only intelligence, is actually about consciousness.

    In fact asking if an agent other than yourself is conscious is essentially meaningless, since consciousness (or self-awareness) is a subjective state.fishfry

    It's not meaningless to inquire if other things have subjective experiences or not.

    Isn't that just an irrational prejudice? They used to say the same about certain ethnic groups. It's the same argument about dolphins. Just because they don't look like us doesn't mean they're not "considered worthy of the label of consciousness." What is your standard for that?fishfry

    All I'm saying is a p-zombie is more human than a computer is. Ergo, we should expect there to be more in common between humans and p-zombies than between humans and computers, something contradicted by philosophy (p-zombie) and computer science (Turing test).
  • Kenosha Kid
    3.2k
    I had a recent conversation wherein the responses I was getting were just weird, random-seeming, unassociated to anything I was saying, which made me wonder...

    If many humans cannot pass the Turing test, is it really a good test?
  • A Raybould
    86


    The difficulty with employing a method to detect consciousness is that such a method is completely behavior-dependent and that raises the specter of p-zombies, non-conscious beings that can't be distinguished from conscious beings behaviorally.TheMadFool

    Firstly, just to be clear, and as you say in your original question, p-zombies are imagined as not merely behaviorally indistinguishable from humans, but entirely physically indistinguishable (and therefore beyond scientific investigation - and if they could be studied philosophically, I don't think anyone, not even Chalmers, has explained how.)

    Secondly, I don't think the Turing test should be considered as the only or ultimate test for consciousness - it was merely Turing's first shot at such a test (and, given that, it has endured surprisingly well.) For the purpose of this discussion, we can use Turing's version to stand in for any such test, so long as we don't get hung up on details specific to its particular formulation.

    I assume that you think other people are conscious, but on what is your belief grounded? Is it because they behave in a way that appears conscious? Or, perhaps, is there an element of "they are human, like me, and I am conscious?"

    If you are going to throw out all behavioral evidence, in the case of AI, on account of p-zombies, then you would be inconsistent if you did not also throw it out in the case of other people. If you make an exception for other people because they are human, then you would be literally begging Chalmers' 'hard problem of consciousness'. What does that leave? If you have another basis for believing other people are conscious, what is it, and why would that not work for AIs? Suppose we find an enclave of Neanderthals, Homo erectus, or space aliens - how would you judge if they are p-zombies?

    This, I suspect, is what Dennett is alluding to when he says "we are all p-zombies" - he sees no reason to believe that we have these extra-physical attributes that p-zombies would lack.

    Returning to your original question, you raise an interesting point: why should p-zombies not be considered conscious? After all, they were conceived explicitly to be indistinguishable from conscious entities. That they are allegedly lacking something that a philosopher says an entity must have, to be conscious, is not much of an argument; the philosopher might simply have an inaccurate concept of what consciousness is and requires.

    Putting that aside, there is a third option to be considered: that p-zombies are ultimately an incoherent concept. When we look at how strange, ineffable, unique and evidence-free a concept Chalmers had to come up with in order to defeat physicalism, how selective he had to be in what he chose to bless with being conceivable in order to get there, and the highly doubtful leap he makes from conceivability to possibility (I can conceive of the Collatz conjecture being true and of it being false, but only one of these is possible), I am simply going to apply Occam's razor to p-zombies, at least until a better argument for their possibility comes along.
  • TheMadFool
    13.8k
    I don't think the Turing test should be considered as the only or ultimate test for consciousness - it was merely Turing's first shot at such a test (and, given that, it has endured surprisingly well.)A Raybould

    What test do you propose? Any ideas?

    If you are going to throw out all behavioral evidence, in the case of AI, on account of p-zombies, then you would be inconsistent if you did not also throw it out in the case of other people. If you make an exception for other people because they are human, then you would be literally begging Chalmers' 'hard problem of consciousness'.A Raybould

    Yes, that is the sticking point. I don't see a light at the end of this tunnel.

    That they are allegedly lacking something that a philosopher says an entity must have, to be conscious, is not much of an argument; the philosopher might simply have an inaccurate concept of what consciousness is and requires.A Raybould

    What, according to you, is an "accurate" concept of consciousness?

    that p-zombies are ultimately an incoherent concept.A Raybould

    Why is it incoherent?

    I am simply going to apply Occam's razor to p-zombies, at least until a better argument for their possibility comes along.A Raybould

    How and where is Occam's razor applicable?
  • fishfry
    3.4k
    I realize that AI = artificial intelligence and not consciousness.TheMadFool

    I think we're agreed on that.

    I jumped into this thread based on only one phrase from one post without reading the rest. I only posted to get out of my system pretty much everything I knew about the subject. I have no idea what the answers are to the question of consciousness and machines; but I do think I have a fair grasp of the questions, at both the technical and philosophical level.

    So I said my piece, and if it's not clear, I'm not espousing or even expressing any kind of opinion about anything. If you disagree with anything I write, that's perfectly ok. I disagree with a lot of it too. I probably won't engage though. I literally said, at a high level, everything I know about the philosophy of machine intelligence in my first post.

    Firstly, even if that's the case, we still have the problem of inferring things about the mind - intelligence, consciousness, etc. - from behavior alone. The inconsistency here is that on one hand (AI), behavior is sufficient to conclude that a given computer has a mind (intelligence-wise), and on the other hand (p-zombies), it isn't (p-zombies have no minds).TheMadFool

    Yes ok. I'm not agreeing or disagreeing or having an opinion about this, beyond what I've already said. I perfectly well agree with your analysis of the problem.

    Secondly, my doubts notwithstanding, intelligence seems to be strongly correlated with consciousness - the more intelligent something is, the more capacity for consciousness.TheMadFool

    AHA! But ... why do you say that? Until you give a rational reason WHY you believe that to be the case, I regard it as an entirely bio-centric prejudice on your part. Meat chauvinism again. So on that point, I am pushing back a bit on your ideas. I want to know: what is the rational reason we think that intelligence must correlate with consciousness? Other than that it's how our mind works?

    In addition, and more importantly, aren't computers already more "intelligent" in the sense of never making a logical error? That is, Turing must have had something else in mind regarding artificial intelligence - it isn't about logical thinking, which we all know for a fact even run-of-the-mill computers can beat us at.TheMadFool

    No, Gödel and Turing decisively delineated the hard limitations of what can be computed. The core of the argument that humans do something machines can't do is that WE can solve noncomputable problems. Sir Roger Penrose believes consciousness is not a computation. I personally believe consciousness is not a computation.

    Computers aren't intelligent at all. They're dumb as rocks. In fact you can implement a computer with a bunch of rocks painted white on one side and black on the other. You can sit there flipping rock-bits according to programs, and what you are doing is computing. Computers don't know the first thing about anything. That's my opinion. Others have different opinions. That's ok by me.

    What do you think this something else is if not consciousness? Consciousness is the only aspect of the mind that's missing from our most advanced AI, no? The failure of the best AI to pass the Turing test is not because they're not intelligent but because they're not, or are incapable of mimicking, consciousness.TheMadFool

    Funny you said "mimicking" consciousness instead of implementing it. As in faking it. Pretending to be conscious.

    I think we're each using a slightly different definition of consciousness. I think it's purely subjective and can never be tested for. I gather you believe the opposite, that there are observable behaviors that are reliable indicators of consciousness. We may need to agree to disagree here.
    In short, the Turing test, although mentioning only intelligence, is actually about consciousness.TheMadFool

    Nonsense. Turing never used the word. You're adding your own interpretation to what's not actually there. Do you know anything about how chatbots work? People have a tendency to think dumb chatbots are human. That means nothing.

    It's not meaningless to inquire if other things have subjective experiences or not.TheMadFool

    It's not meaningless, it's just damned hard to investigate! I heard one thing they do is the "mirror test." If an animal or a fish or whatever can recognize its own reflection, we think it's got some kind of inner life going on.

    I don't disrespect or downplay the importance of the question.

    I do oppose overly glib and strongly asserted answers. Truth is nobody knows.


    All I'm saying is a p-zombie is more human than a computer is.TheMadFool

    As I mentioned I haven't read the rest of the thread and wasn't really talking about p-zombies except as a shorthand for Turing machines passing as humans among us in society. Essentially the same idea as the standard meaning of "something that looks and acts exactly like a normal person, but has no inner life at all."

    For the purposes of anything I'm saying, these two ideas of p-zombies can be taken as the same. I'm not really up on any fine points of difference. A standard p-zombie looks human but has no inner life. A Turing machine operating a perfectly lifelike robot body would, in my opinion, BE a p-zombie; but I guess you'd say that if it behaves with intelligence, it must be conscious.

    Ergo, we should expect there to be more in common between humans and p-zombies than between humans and computers,TheMadFool

    I'm afraid I don't see the distinction between p-zombies and computers. In my opinion a program running on a computer might possibly implement a true p-zombie -- something that behaves perfectly like a human; but that has no inner life whatsoever.

    If all you mean is that the p-zombies are wetware, why do you have such a meat prejudice? Where does it say that being made of meat is better than being made of vegetable matter? It's meat chauvinism: believing that meat is superior because we are meat. That is not a rational argument.


    something contradicted by philosophy (p-zombie) and computer science (Turing test).TheMadFool

    I am not aware of this interpretation, but I don't know much about p-zombies. It seems to me that a lifelike chatbot is exactly what philosophers mean by a p-zombie: a thing that behaves like a human but isn't and that has no inner life. It just operates on a program. Like a computer.

    I see p-zombies and computer programs as being very closely related. Perhaps you can educate me as to what I'm missing about p-zombies.
  • A Raybould
    86
    What test do you propose? Any ideas?TheMadFool

    I once half-jokingly suggested that devising a test that we find convincing should be posed as an exercise for the AI. The only reason I said 'half-jokingly' is that it would have a high false-negative rate, as no human has yet completed that task to everyone's satisfaction!

    I do not think Turing, or anyone else until much later, anticipated how superficially convincing a chatbot could be, and how effectively a machine could fake the appearance of consciousness by correlating the syntactical aspects of a vast corpus of human communication. By limiting his original test to a restricted domain - gender roles and mores - Turing made his test unnecessarily defeasible by these means, and subsequent variations have extended the scope of questioning. These tests could be further improved by focusing on what a machine understands, rather than what it knows.

    While there is a methodological difficulty in coming up with a test that defeats all faking, this is not the same problem as p-zombies allegedly pose, as that takes the form of an unexplained metaphysical prohibition on AI ever being 'truly' conscious (by 'unexplained', I mean that, in Chalmers' argument, we merely have to conceive of p-zombies, without giving any thought to how they might be so.)

    What, according to you, is an "accurate" concept of consciousness?TheMadFool

    I don't know, any better than the next person, what consciousness is, and if anyone had come up with a generally-accepted, predictive, falsifiable explanation, we would no longer be interested in the sort of discussion we are having here! For what it's worth, I strongly suspect that, for example, theories linking consciousness to quantum effects in microtubules are inaccurate. More generally, I think that any argument insisting that physicalism must require a purely reductive explanation of conscious states in terms of brain states, without considering the possibility that the former may be emergent phenomena arising from the latter, is also inaccurate.

    Why is it incoherent?TheMadFool

    I am not saying that p-zombies are definitely an incoherent concept, though I suspect they are - that it will turn out that it is impossible to have something that appears to be as conscious as a human without it having internal states analogous to those of humans.

    Chalmers defends p-zombies as being "logically conceivable", but that is a very low bar - it means only that it is not a self-contradictory concept (such as 'male vixen' is, to quote one of Chalmers' examples), and that we don't know of any fact that refutes it - but that is always at risk of being overturned by new evidence, as has happened to many other concepts that were once seen as logically conceivable, such as phlogiston (actually, some form of metaphysical phlogiston theory might still be logically conceivable, but no-one would take seriously an argument based on that.)

    How and where is Occam's razor applicable?TheMadFool

    Chalmers is looking forward to a time when neuroscience has a thorough understanding of how brains work, and is trying to say that no such explanation can be complete - that there is something non-physical or magical going on as well. He cannot say what that is or how it works, or offer any way for us to answer those questions, but he insists that it must be there. It is for exactly this sort of unfalsifiable claim that Occam's razor was invented (even though the concept of falsifiability was not explicitly recognized until centuries later!)
  • A Raybould
    86
    I see p-zombies and computer programs as being very closely related. Perhaps you can educate me as to what I'm missing about p-zombies.fishfry

    Chalmers' canonical p-zombie argument is a metaphysical one that is not much concerned with computers or programs, even though they are often dragged into discussions of AI, often under the misapprehension that chatbots and such are examples of p-zombies. The argument is subtle and lengthy, but I think this is a good introduction.
  • TheMadFool
    13.8k
    I once half-jokingly suggested that devising a test that we find convincing should be posed as an exercise for the AIA Raybould

    That solves the mystery of who or what we consider to be more "intelligent"? I think, despite the who's who of computer science constantly reminding us that computers are not intelligent, people are still under the impression that computers are. I wonder why? Even you, presumably a person in the know about the truth of computer "intelligence", half-thought they were suited to a task humans have difficulty with.

    These tests could be further improved by focusing on what a machine understands, rather than what it knows.A Raybould

    I see nothing special in understanding. For the most part it involves formal logic, something computers can do much faster and much better. I may be wrong, of course, and you'll have to come up with the goods showing me how and where.

    I am not saying that p-zombies are definitely an incoherent concept, though I suspect they are - that it will turn out that it is impossible to have something that appears to be as conscious as a human without it having internal states analogous to those of humans.A Raybould

    If I may be so bold as to hazard a "proof": The idea that certain behavior is adequate grounds to infer consciousness is, I believe, an inductive argument. Each person is aware that s/he's conscious and that this state has a set of behavioral patterns associated with it. S/he then observes other people conduct themselves in a similar fashion, and then an inference of consciousness is made. Nevertheless, this association between consciousness experienced in first person and some set of behaviors is not that of necessity (no deductive proof of it exists) but that of probability (an inductive inference made from observation). Ergo, in my humble opinion, p-zombies are conceivable and possible to boot.
  • fishfry
    3.4k
    Chalmers' canonical p-zombie argument is a metaphysical one that is not much concerned with computers or programs, even though they are often dragged into discussions of AI, often under the misapprehension that chatbots and such are examples of p-zombies. The argument is subtle and lengthy, but I think this is a good introduction.A Raybould

    Can you summarize the main point please? What is a p-zombie if it's not a TM (or some alternate mechanism) that emulates a human without being self-aware? I confess that I have little inclination to dive into a subtle and lengthy piece by Chalmers. My limitation, I admit. The question I asked is simple enough though. What's a p-zombie if not a TM or super-TM (TM plus some set of oracles) that emulates a human without being self-aware?

    ps -- Feeling guilty at my laziness for not clicking on your link, I clicked. And it's NOT an article by Chalmers at all. I skimmed quickly but didn't read it. I'd still like a simple answer to how a p-zombie differs from "a thing that is indistinguishable from a human but lacks self-awareness," such as a TM in a nice suit.
  • A Raybould
    86
    That solves the mystery of who or what we consider to be more "intelligent"?TheMadFool

    No, it is intended to be what you asked for, an alternative to the Turing test, and the purpose of that test is to figure out if a given computer+program is intelligent.

    Even you, presumably a person in the know about the truth of computer "intelligence", half-thought they were suited to a task humans have difficulty with.TheMadFool

    I don't see where you got that from. I am writing about a hypothetical future computer that at least looks like it might be intelligent, just as Turing was when he presented his test.

    I see nothing special in understanding. For the most part it involves formal logic, something computers can do much faster and much better.TheMadFool

    If that is so, then how come the most powerful and advanced language-learning program has a problem with "common-sense physics", such as "If I put cheese in a refrigerator, will it melt?"

    Consider Einstein's equation E = mc^2. A great many people know it, but only a tiny fraction of those understand how it arises from what was known of physics at the beginning of the 20th century. Einstein did not get there merely (or even mostly) by applying formal logic; he did so through a deep understanding of what that physics implied. A computer program, if lacking such insight, could not find its way to that result: for one thing, the problem is vastly too combinatorially complex to solve by exhaustive search, and for another, it would not understand that this formula, out of the huge number it had generated, was a particularly significant one.
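    To give a feel for the scale, here is a minimal sketch (Python); the branching factor and depth are invented purely for illustration, not measured from any real prover:

        # Hypothetical numbers: if each formula licenses ~10 new derivations
        # and a deep result lies ~40 steps from the axioms, exhaustive search
        # faces on the order of 10^40 derivation paths.
        branching = 10
        depth = 40
        print(f"{branching ** depth:.3e} candidate derivations")  # ~1.000e+40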

    Nevertheless, this association between consciousness experienced in first person and some set of behaviors is not that of necessity (no deductive proof of it exists) but is that of probability (an inductive inference made from observation). Ergo, in my humble opinion, p-zombies are conceivable and possible to boot.TheMadFool

    For one thing, you seem to be making an argument that they are conceivable, but the controversial leap from conceivable to possible is not really argued for here; it is just asserted as if it followed automatically: "...and possible to boot."

    More interestingly, if I am following you here, you do consider it possible that other people are p-zombies. That is very interesting, because Chalmers hangs his argument against physicalism on the assumption that they are not, and I know of no counter-argument that challenges this view (even when Dennett says, apparently somewhat tongue-in-cheek, that "we are all p-zombies", I think his point is that he thinks Chalmers' distinction between p-zombies and us (the non-physical element that they lack) is illusory.)

    Having said that, I have three follow-up questions: firstly, if other people could be p-zombies, do you think that you are different, and if so, why? Secondly, if it is possible that other people are p-zombies, why would it matter that it would be possible for a p-zombie to pass the Turing test? Thirdly, if it is possible that other people are p-zombies, why did we evolve a highly-complex, physical state machine called the human brain? After all, if the p-zombie hypothesis is correct, our minds are independent of the physical brain. The most parsimonious hypothesis here seems to be that the p-zombie hypothesis is false, and our minds actually are a result of what our physical brains do.
  • A Raybould
    86

    Indeed, that article is not by Chalmers; is that a problem? Is reading Archimedes' words the only way to understand his principle?

    If you want to read Chalmers' own words, he has written a book and a series of papers on the issue. As you did not bother to read my original link, I will not take the time to look up these references; you can find them yourself easily enough if you want to (and they may well be found in that linked article). I will warn you that you will find the papers easier to follow if you start by first reading the reference I gave you.

    I'd still like a simple answer to how a p-zombie differs from "a thing that is indistinguishable from a human but lacks self-awareness," such as a TM in a nice suit.fishfry

    That is a different question than the one you asked, and I replied to, earlier. The answer to this one is that a TM is always distinguishable from a human, because neither a human, nor just its brain, nor any other part of it, is a TM. A human mind can implement a TM, to a degree, by simulation (thinking through the steps and remembering the state), but this is beside the point here.

    If you had actually intended to ask "...indistinguishable from a human when interrogated over a teletype" (or by texting), that would be missing the point that p-zombies are supposed to be physically indistinguishable from humans (see the first paragraph in their Wikipedia entry), even when examined in the most thorough and intrusive way possible. This is a key element in Chalmers' argument against metaphysical physicalism.

    As a p-zombie is physically identical to a human (or a human brain, if we agree that no other organ is relevant), then it is made of cells that work in a very non-Turing, non-digital way. Chalmers believes he can show that there is a possible world identical to ours other than it being inhabited by p-zombies rather than humans, and therefore that the metaphysical doctrine of physicalism - that everything must necessarily be a manifestation of something physical - is false.

    Notice that there is no mention of AI or Turing machines here. P-zombies only enter the AI debate through additional speculation: if p-zombies are possible, then it is also possible that any machine (Turing or otherwise), no matter how much it might seem to be emulating a human, is at most emulating a p-zombie. As the concept of p-zombies is carefully constructed so as to be beyond scientific examination, such a claim may be impossible to disprove, but it is as vulnerable to Occam's razor as is any hypothesis invoking magic or the supernatural.
  • TheMadFool
    13.8k
    If that is so, then how come the most powerful and advanced language-learning program has a problem with "common-sense physics", such as "If I put cheese in a refrigerator, will it melt?"A Raybould

    How do you think a human processes this question?

    A great many people know it, but only a tiny fraction of those understand how it arises from what was known of physics at the beginning of the 20th century. Einstein did not get there merely (or even mostly) by applying formal logicA Raybould

    Is it possible to get to E = mc^2 without logic?

    problem is vastly too combinatorially complex to solve by exhaustive searchA Raybould

    What do you mean by that? Do you mean there's a method to insight? Or are insights just lucky guesses - random in nature and thus something computers are fully capable of?

    For one thing, you seem to be making an argument that they are conceivable, but the controversial leap from conceivable to possible is not really argued for here,A Raybould

    What's the difference between conceivable and possible?
  • A Raybould
    86

    How do you think a human processes this question?TheMadFool

    A person who does not just know the answer might begin by asking herself questions like "what does it mean for cheese to melt?" "what causes it to do so?" "what does a refrigerator do?" and come to realize that the key to answering the question posed may be reached through the answers to two subsidiary questions: what is the likely state of the cheese initially, and how is its temperature likely to change after being put into a refrigerator?

    At this point, I can imagine you thinking something like "that's just a deductive logic problem", and certainly, if you formalized it as such, any basic solver program would find the answer easily. The part that is difficult for AI, however, is coming up with the problem to be solved in the first place. Judging by the performance of GPT-3, it would likely give good answers to questions like "what causes melting" and "what is a refrigerator?", but it is unable to put it all together to reach an answer to the original question.
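    To make the "easy part" concrete, here is a minimal forward-chaining sketch (Python); the facts and rules are invented for illustration, and note that a person had to formalize the question before any code could run - which is exactly the hard part:

        # A toy forward-chaining solver. The predicates are hypothetical.
        facts = {"cheese_in_refrigerator", "refrigerator_keeps_contents_cold"}
        rules = [
            ({"cheese_in_refrigerator", "refrigerator_keeps_contents_cold"},
             "cheese_stays_cold"),
            ({"cheese_stays_cold"}, "cheese_stays_below_melting_point"),
            ({"cheese_stays_below_melting_point"}, "cheese_does_not_melt"),
        ]
        changed = True
        while changed:  # apply rules until no new fact can be derived
            changed = False
            for premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        print("cheese_does_not_melt" in facts)  # True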

    It gets more interesting when we consider a slightly more difficult problem: for "cheese", substitute the name of a cheese that the subject has never heard of (there are some candidates here). There is a good chance that she will still come up with the right answer, even if she does not suspect that the object is a form of cheese, by applying suitable general principles and some inductive thinking. Current AI, on the other hand, will likely be flummoxed.


    Is it possible to get to E = mc^2 without logic?TheMadFool

    That is beside the point. To think that the use of logic in getting to E = mc^2 somehow implies that, once you can get a machine to do logic, there's "nothing special" in getting it to understand things, is, ironically, a failure to understand the role (and limits) of logic in understanding things.

    Ultimately, you are arguing against the straightforward empirical fact that current AI has trouble understanding the information it has.


    Do you mean there's a method to insight? Or are insights just lucky guesses - random in nature and thus something computers are fully capable of?TheMadFool

    Neither of the above. There is a method for solving certain problems in formal logic that does a breadth-first search through the tree of all possible derivations from the given axioms, but that is nothing like insight: for one thing, there is no semantic content to the formulae themselves. (One of the first successes in AI research, Logic Theorist, proved many of the early theorems from Principia Mathematica, and as doing so is considered a sign of intelligence in people, some thought that AI was close to being a solved problem. They were mistaken.)
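    The skeleton of such a search is easy to sketch (a minimal illustration in Python, with invented axioms - this is not Logic Theorist's actual algorithm):

        # Toy breadth-first prover: derive atoms by modus ponens.
        from collections import deque

        axioms = {"P"}                                       # invented starting formulae
        implications = {("P", "Q"), ("Q", "R"), ("P", "S")}  # pairs meaning "A implies B"

        def provable(goal):
            seen = set(axioms)
            queue = deque(axioms)
            while queue:                     # breadth-first over derivations
                formula = queue.popleft()
                if formula == goal:
                    return True
                for a, b in implications:    # modus ponens: from A and A->B, infer B
                    if a == formula and b not in seen:
                        seen.add(b)
                        queue.append(b)
            return False

        print(provable("R"))  # True: P, P->Q, Q->R

    Nothing in that loop attaches any meaning to the symbols it shuffles, which is the point.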

    What I was thinking is this: if you formalized the whole of classical physics, and started a program such as the above on discovering what it could deduce, the chances that it would come up with E=mc^2 before the world comes to an end are beyond-astronomically small (even more importantly, such a program would not understand the importance of that particular derivation, but that is a separate issue.) The reason for this is the combinatorial complexity of the problem - the sheer number of possible derivations and how fast they grow at each step (even the Boolean satisfiability problem 3-SAT is NP-complete.)

    Actually, I have since realized that even this would not be successful in getting to E = mc^2: to get there, Einstein had to break some 'laws' of physics, treat them as approximations, and substitute more accurate alternatives that were still consistent with everything that had been empirically determined. That's not just logic at work.

    Lucky guessing has the same problem, and anyone dismissing Einstein's work as a lucky guess just does not understand what he did. There is something more to understanding than any of this, and the fact that we haven't nailed it down yet is precisely the point that I am making on this tangential issue of whether understanding remains a tough problem for AI.


    What's the difference between conceivable and possible?TheMadFool

    Consider the example I gave earlier: I can conceive of the Collatz conjecture being true and of it being false, but only one of these is possible. This situation exists because it is either true or it is false, but so far, no-one has found a proof either way.
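    (For anyone unfamiliar with the conjecture, a few lines of Python make it concrete; this sketch just runs the iteration, and would loop forever on a counterexample, if one exists:)

        # The Collatz map: halve even numbers, send odd n to 3n + 1.
        # The conjecture says every positive integer eventually reaches 1.
        def collatz_steps(n):
            steps = 0
            while n != 1:
                n = n // 2 if n % 2 == 0 else 3 * n + 1
                steps += 1
            return steps

        print([collatz_steps(n) for n in range(1, 10)])
        # [0, 1, 7, 2, 5, 8, 16, 3, 19] - finite for every n anyone has tried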

    In everyday usage, the words might sometimes be considered synonymous, but in the contexts of metaphysics and modal logic, which are the contexts in which the p-zombie argument is made, 'possible' has a specific and distinct meaning. As the example shows, there's an aspect of 'so far as we know' to conceptions, which is supposed to be resolved in moving to metaphysical possibility - when we say, in the context of modal logic, that there is a possible world in which X is true, we are not just saying that we suppose there might be such a possible world. We should either assert it as an axiom or deduce it from our axioms, and if Chalmers had done the former, physicalists would simply have said that the burden was on him to justify that belief (it gets a bit more complicated when we make a counterfactual premise in order to prove by contradiction, but that is not an issue here.)

    If the two words were synonymous, Chalmers would not have spent any time in making the distinction and in attempting to get from the former to the latter, and his opponents would not be challenging that step.
  • TheMadFool
    13.8k
    At this point, I can imagine you thinking something like "that's just a deductive logic problem", and certainly, if you formalized it as such, any basic solver program would find the answer easilyA Raybould

    The part that is difficult for AI, however, is coming up with the problem to be solved in the first place. Judging by the performance of GPT-3, it would likely give good answers to questions like "what causes melting" and "what is a refrigerator?", but it is unable to put it all together to reach an answer to the original question.A Raybould

    Your ideas are interesting, but one thing we agree on is the necessity of logic. Now, the question that pops into my mind is this: what does anyone mean when s/he says something like "I understand"? To me, understanding is just a semantics game that's structured with syntax. The latter, syntax, doesn't seem to be an issue with computers; in fact computers are veritable grammar nazis. As for the former, semantics, what's the difficulty in associating words to their referents in a computer's memory? That's exactly what humans do too, right? The question for you is this: is there a word with a referent that's impossible to translate into computer-speak? I would like to know very much, thank you.

    The reason for this is the combinatorial complexity of the problemA Raybould

    That's not just logic at work.A Raybould

    You talk of "combinatorial complexity" and the way you speak of Einstein's work suggests to me that you think E=mc^2 to be nothing short of a miracle. May I remind you that Newton once said that he achieved what he did only by standing on the shoulders of giants. There's a long line of illustrious predecessors that paves the way to many scientific discoveries, in my opinion. You also seem to ignore serendipity - times when people simply get lucky and make headway with a problem. You seem to be of the view that there's a method behind sheer luck, as if to say there's a way to influence the outcome of a die when in fact it can't be done.

    I can conceive of the Collatz conjecture being true and of it being false, but only one of these is possible.A Raybould

    Just curious here. I feel that I've not understood you as much as I'd have liked but bear with me a while...

    If conceivability and possibility are different then the following are possible and I'd like some examples of each:

    1. There's something conceivable that's impossible

    2. There's something possible that's inconceivable
  • A Raybould
    86


    To me, understanding is just a semantics game that's structured with syntax.TheMadFool

    I have no idea what that means. I hope that it means more than "understanding is semantics with syntax", which is, at best, a trite observation that does not explain anything.

    Searle says that syntax cannot give rise to semantics, and claims this to be the lesson of his "Chinese Room" paper. I don't agree, but I don't see the relationship as simple, either.


    Is there a word with a referent that's impossible to translate into computer-speak?TheMadFool

    This is beside the point, as translation does not produce meaning, whether it is into "computer-speak" or anything else. Translation sometimes requires understanding, and it is specifically those cases where current machine translation tends to break down.

    You have not, so far, addressed a point that is relevant here: the problem current AI has with basic, common-sense understanding is not in solving logic problems, but in formulating the right problem in the first place.

    If you really think you have solved the problem of what it takes to understand something, you should publish (preferably in a peer-refereed journal), as this would be quite a significant advance in the study of the mind. At the very least, perhaps you could address an issue that I have raised twice now: if, as you say, there's nothing special to understanding, and semantics is just associating words to their referents in a computer's memory, how come AI is having a problem with understanding, as is stated in the paper I linked to? Do you think all AI researchers are incompetent?


    You talk of "combinatorial complexity" and the way you speak of Einstein's work suggests to me that you think E=mc^2 to be nothing short of a miracle.TheMadFool

    Well I don't, so I think I can skip your arguments against it - though not without skimming them to see if you made any point that stands on its own. In doing so, I see that you put quotes around combinatorial complexity, as if you thought it was beside the point, but it is very much to the point that humans achieve results that would be utterly infeasible if the mind worked like current theorem solving programs.


    If conceivability and possibility are different then the following are possible and I'd like some examples of each:

    1. There's something conceivable that's impossible

    2. There's something possible that's inconceivable
    TheMadFool

    They may be possible, but it is certainly not necessary that there must be something possible that's inconceivable - and if there is, then neither I, you, nor anyone else is going to be able to say what it is. On the other hand, in mathematics, there are non-constructive proofs that show something is so without being able to give any examples, and it seems conceivable to me that in some of these cases, no example ever could be found. If this is so, then whether these things should be regarded as inconceivable or not strikes me as a rather subtle semantic issue.
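    (A classic, standard example of such a non-constructive proof, for the curious: there exist irrational numbers a and b with a^b rational. Either sqrt(2)^sqrt(2) is rational, in which case take a = b = sqrt(2); or it is irrational, in which case take a = sqrt(2)^sqrt(2) and b = sqrt(2), since then

        a^b = sqrt(2)^(sqrt(2) * sqrt(2)) = sqrt(2)^2 = 2.

    Either way such a pair exists, yet the proof does not tell us which case holds.)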

    I have twice given you an example of the former: If the Collatz conjecture is true, then that it is false is conceivable (at least until a proof is found) but not possible, and vice-versa. It has to be either one or the other.

    By the way, this example is a pretty straightforward combination of syntax, semantics and a little logic, so how do you account for your difficulty in understanding it?
  • TheMadFool
    13.8k
    I have no idea what that means. I hope that it means more than "understanding is semantics with syntax", which is, at best, a trite observation that does not explain anything.

    Searle says that syntax cannot give rise to semantics, and claims this to be the lesson of his "Chinese Room" paper. I don't agree, but I don't see the relationship as simple, either.
    A Raybould

    The problem here is simple: what does it mean to understand? Computers are symbol manipulators, and that means whatever can be symbolized is within the reach of a computer. If you believe there's more to understanding than symbol manipulation, you'll have to make the case that there is. I didn't find anything in your posts that achieves that. In short, I contend that even human understanding is basic symbol manipulation, and so the idea that it's somehow so special that computers can't handle it is not true as far as I'm concerned.

    Searle's argument doesn't stand up to careful scrutiny for the simple reason that semantics are simply acts of linking words to their referents. Just consider the sentence, "dogs eat meat". The semantic part of this sentence consists of matching the words "dog" with a particular animal, "eat" with an act, and "meat" with flesh, i.e. to their referents and that's it, nothing more, nothing less. Understanding is simply a match-the-following exercise, something a computer can easily accomplish.
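    On that view, the whole exercise looks something like this toy lookup (a minimal sketch in Python; the table entries are invented, and whether such a mapping amounts to understanding is exactly what is in dispute):

        # A toy "match-the-following" table: words mapped to stand-ins for
        # their referents. This illustrates the claim, not a real semantics.
        referents = {
            "dogs": "a particular kind of animal",
            "eat": "an act of consuming",
            "meat": "animal flesh",
        }
        sentence = "dogs eat meat"
        print([(word, referents[word]) for word in sentence.split()])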

    This is beside the point, as translation does not produce meaning, whether it is into "computer-speak" or anything else. Translation sometimes requires understanding, and it is specifically those cases where current machine translation tends to break down.A Raybould

    How do you think machine translations work? Each word in a language is matched with another word in a different language. The act of language translation, as you rightly pointed out, is for humans an exercise in semantics, and yet, despite some mistranslations, computers do a pretty good job. Ask yourself the question: what exactly does understanding semantics mean if a machine, allegedly incapable of semantics, can do as good a job as a human translator of languages?

    if, as you say, there's nothing special to understanding, and semantics is just associating words to their referents in a computer's memory, how come AI is having a problem with understanding, as is stated in the paper I linked to? Do you think all AI researchers are incompetent?A Raybould

    While I'm not claiming I'm correct in all that I've said, I've heard that even the very best experts can and do make mistakes.

    They may be possible, but it is certainly not necessary that there must be something possible that's inconceivable - and if there is, then neither I, you, nor anyone else is going to be able to say what it is. On the other hand, in mathematics, there are non-constructive proofs that show something is so without being able to give any examples, and it seems conceivable to me that in some of these cases, no example ever could be found.

    I have twice given you an example of the former: If the Collatz conjecture is true, then that it is false is conceivable (at least until a proof is found) but not possible, and vice-versa. It has to be either one or the other. Whether these things would be regarded as inconceivable or not strikes me as a rather subtle semantic issue.

    By the way, this example is a pretty straightforward combination of syntax, semantics and a little logic, so how do you account for your difficulty in understanding it?
    A Raybould

    Kindly furnish the definitions of "conceivable" and "possible". I'd like to see how they differ, if you don't mind.
  • bongo fury
    1.6k
    Searle's argument doesn't stand up to careful scrutiny for the simple reason that semantics are simply acts of linking words to their referents. Just consider the sentence, "dogs eat meat". The semantic part of this sentence consists of matching the words "dog" with a particular animal, "eat" with an act, and "meat" with flesh, i.e. to their referents and that's it, nothing more, nothing less. Understanding is simply a match-the-following exercise, something a computer can easily accomplish.TheMadFool

    I wish I could locate the YouTube footage of Searle's wry account of early replies to his vivid demonstration (the Chinese Room) that so-called "cognitive scripts" mistook syntax for semantics. Something like, "so they said, ok we'll program the semantics into it too, but of course what they came back with was just more syntax".bongo fury

    I'll have another rummage.

    I expect that you, like "they" in the story, haven't even considered that "referents" might have to be actual things out in the world? Or else how ever did the "linking" seem to you something simple and easily accomplished, by a computer, even??? Weird.
  • TheMadFool
    13.8k
    I'll have another rummage.

    I expect that you, like "they" in the story, haven't even considered that "referents" might have to be actual things out in the world? Or else how ever did the "linking" seem to you something simple and easily accomplished, by a computer, even??? Weird.
    bongo fury

    Yes, I did consider that. Referents can be almost anything, from physical objects to abstract concepts. Most physical objects have perception-based qualities to them. Take red wine for example - it's red, has a certain taste, stored in bottles in a cellar, etc. The set of qualities that define what wine is is linked to the word "wine" in the human mind - this is the essence of semantics and computers, in my opinion, are up to the task.

    A similar argument can be made for concepts.
  • bongo fury
    1.6k
    I expect that you, like "they" in the story, haven't even considered that "referents" might have to be actual things out in the world?
    — bongo fury

    Yes, I did consider that.
    TheMadFool

    Ok...

    Referents can be almost anything, from physical objects to abstract concepts.TheMadFool

    Ah, so after due consideration you decided not. (The referents don't have to be things out in the world.) This was Searle's frustration.

    You can be sure you are in the respectable company of his critics at the time. Also of a probable majority of philosophers and linguists throughout history, to be fair.
  • TheMadFool
    13.8k
    Ah, so after due consideration you decided not. (The referents don't have to be things out in the world.) This was Searle's frustration.bongo fury

    They don't have to be but they can be, no? I hope you reply soon to this query.
  • bongo fury
    1.6k
    I hope you reply soon to this query.TheMadFool

    Why? A quick reply isn't usually a thoughtful one. In my case at least. Actually, I think the site should institute a minimum time between replies, as well as a word limit.

    They don't have to be but they can be, no?TheMadFool

    I don't think I've been understood, here. (Take more time?) I was trying to explain why @A Raybould was nonplussed by your statements about semantics. See also @InPitzotl's recent posts here.
  • TheMadFool
    13.8k
    Well, I don't know why people make such a big deal of understanding - it's very simple. Referring to InPitzotl's example of chips and dip, I don't see any difficulty at all - a computer with the right programming can recognize these items if the necessary visual and other data have been linked to the words "chips" and "dip", i.e. if the words have been accurately matched to their referents.
  • bongo fury
    1.6k
    Well, I don't know why people make such a big deal of understandingTheMadFool

    It's about

    referentsTheMadFool

    They (and I) mean things out there, you mean just more words/data.