As I see it there are numerous fields where computers generate intelligent solutions for particular problems. — Heiko
Today there are software engineers running around trying to tweak natural language processing as they know where the flaws are. — Heiko
I would take it pretty seriously if they once say: "Okay, now we really cannot tell the difference anymore." — Heiko
how would one know whether a computer is conscious in the sense we are? — TheMadFool
Do you know that I am conscious in the same way that you are? (or that any other person is, for that matter.) If so, then apply whatever method you used to come to that conclusion to a computer - and if that method depends on me being human and is not applicable to computers, then you would be begging the question. — A Raybould
The difficulty with employing a method to detect consciousness is that such a method is completely behavior-dependent and that raises the specter of p-zombies, non-conscious beings that can't be distinguished from conscious beings behaviorally. — TheMadFool
The Turing test, as a way of identifying (conscious) AI, simply states that if a person assessing a candidate AI is fooled into thinking the AI is human, then the AI has passed the test and, to all intents and purposes, is conscious. — TheMadFool
P-zombies are treated differently: even if they pass the Turing test adapted to them, they're thought not to deserve the status of conscious beings. — TheMadFool
In short, something (AI) that's worlds apart from a flesh-and-bone human is considered worthy of the label of consciousness, while something that's physically identical to us (p-zombies) isn't, and in both cases the test for consciousness is the same: behavior-based. — TheMadFool
You used the word conscious but Turing uses the word intelligent. Intelligence is a behavior and consciousness is a subjective state. — fishfry
In fact asking if an agent other than yourself is conscious is essentially meaningless, since consciousness (or self-awareness) is a subjective state. — fishfry
Isn't that just an irrational prejudice? They used to say the same about certain ethnic groups. It's the same argument about dolphins. Just because they don't look like us doesn't mean they're not "considered worthy of the label of consciousness." What is your standard for that? — fishfry
The difficulty with employing a method to detect consciousness is that such a method is completely behavior-dependent and that raises the specter of p-zombies, non-conscious beings that can't be distinguished from conscious beings behaviorally. — TheMadFool
I don't think the Turing test should be considered as the only or ultimate test for consciousness - it was merely Turing's first shot at such a test (and, given that, it has endured surprisingly well.) — A Raybould
If you are going to throw out all behavioral evidence, in the case of AI, on account of p-zombies, then you would be inconsistent if you did not also throw it out in the case of other people. If you make an exception for other people because they are human, then you would be literally begging Chalmers' 'hard problem of consciousness'. — A Raybould
That they are allegedly lacking something that a philosopher says an entity must have, to be conscious, is not much of an argument; the philosopher might simply have an inaccurate concept of what consciousness is and requires. — A Raybould
that p-zombies are ultimately an incoherent concept. — A Raybould
I am simply going to apply Occam's razor to p-zombies, at least until a better argument for their possibility comes along. — A Raybould
I realize that AI = artificial intelligence and not consciousness. — TheMadFool
Firstly, even if that's the case, we still have the problem of inferring things about the mind - intelligence, consciousness, etc. - from behavior alone. The inconsistency here is that in one case (AI) behavior is sufficient to conclude that a given computer has a mind (intelligence-wise), while in the other (p-zombies) it isn't (p-zombies have no minds). — TheMadFool
Secondly, my doubts notwithstanding, intelligence seems to be strongly correlated with consciousness - the more intelligent something is, the more capacity for consciousness. — TheMadFool
In addition, and more importantly, aren't computers more "intelligent" in the sense of never making a logical error? That is, Turing had something else in mind regarding artificial intelligence - it isn't about logical thinking, which we all know for a fact even run-of-the-mill computers can beat us at. — TheMadFool
What do you think this something else is if not consciousness? Consciousness is the only aspect of the mind that's missing from our most advanced AI, no? The failure of the best AI to pass the Turing test is not because they're not intelligent but because they're not conscious, or are incapable of mimicking consciousness. — TheMadFool
Funny you said "mimicking" consciousness instead of implementing it. As in faking it. Pretending to be conscious.
I think we're each using a slightly different definition of consciousness. I think it's purely subjective and can never be tested for. I gather you believe the opposite, that there are observable behaviors that are reliable indicators of consciousness. We may need to agree to disagree here. — fishfry
In short, the Turing test, although mentioning only intelligence, is actually about consciousness. — TheMadFool
It's not meaningless to inquire if other things have subjective experiences or not. — TheMadFool
All I'm saying is a p-zombie is more human than a computer is. — TheMadFool
Ergo, we should expect there to be more in common between humans and p-zombies than between humans and computers, — TheMadFool
something contradicted by philosophy (p-zombie) and computer science (Turing test). — TheMadFool
What test do you propose? Any ideas? — TheMadFool
What, according to you, is an "accurate" concept of consciousness? — TheMadFool
Why is it incoherent? — TheMadFool
How and where is Occam's razor applicable? — TheMadFool
I see p-zombies and computer programs as being very closely related. Perhaps you can educate me as to what I'm missing about p-zombies. — fishfry
I once half-jokingly suggested that devising a test that we find convincing should be posed as an exercise for the AI — A Raybould
These tests could be further improved by focusing on what a machine understands, rather than what it knows. — A Raybould
I am not saying that p-zombies are definitely an incoherent concept, though I suspect they are - that it will turn out that it is impossible to have something that appears to be as conscious as a human without it having internal states analogous to those of humans. — A Raybould
Chalmers' canonical p-zombie argument is a metaphysical one that is not much concerned with computers or programs, even though they are often dragged into discussions of AI, often under the misapprehension that chatbots and such are examples of p-zombies. The argument is subtle and lengthy, but I think this is a good introduction. — A Raybould
That solves the mystery of who or what we consider to be more "intelligent"? — TheMadFool
Even you, presumably a person in the know about the truth of computer "intelligence", half-thought they were suited to a task humans have difficulty with. — TheMadFool
I see nothing special in understanding. For the most part it involves formal logic, something computers can do much faster and much better. — TheMadFool
Nevertheless, this association between consciousness experienced in first person and some set of behaviors is not that of necessity (no deductive proof of it exists) but is that of probability (an inductive inference made from observation). Ergo, in my humble opinion, p-zombies are conceivable and possible to boot. — TheMadFool
I'd still like a simple answer to how a p-zombie differs from "a thing that is indistinguishable from a human but lacks self-awareness," such as a TM in a nice suit. — fishfry
If that is so, then how come the most powerful and advanced language-learning program has a problem with "common-sense physics", such as "If I put cheese in a refrigerator, will it melt?" — A Raybould
A great many people know it, but only a tiny fraction of those understand how it arises from what was known of physics at the beginning of the 20th. century. Einstein did not get there merely (or even mostly) by applying formal logic — A Raybould
problem is vastly too combinatorially complex to solve by exhaustive search — A Raybould
For one thing, you seem to be making an argument that they are conceivable, but the controversial leap from conceivable to possible is not really argued for here, — A Raybould
How do you think a human processes this question? — TheMadFool
Is it possible to get to E = mc^2 without logic? — TheMadFool
Do you mean there's a method to insight? Or are insights just lucky guesses - random in nature and thus something computers are fully capable of? — TheMadFool
What's the difference between conceivable and possible? — TheMadFool
At this point, I can imagine you thinking something like "that's just a deductive logic problem", and certainly, if you formalized it as such, any basic solver program would find the answer easily — A Raybould
The part that is difficult for AI, however, is coming up with the problem to be solved in the first place. Judging by the performance of GPT-3, it would likely give good answers to questions like "what causes melting" and "what is a refrigerator?", but it is unable to put it all together to reach an answer to the original question. — A Raybould
The reason for this is the combinatorial complexity of the problem — A Raybould
That's not just logic at work. — A Raybould
I can conceive of the Collatz conjecture being true and of it being false, but only one of these is possible. — A Raybould
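The Collatz example can be made concrete: the conjecture is easy to state and to check case by case in code, yet no proof exists either way, so both its truth and its falsity are conceivable while only one is possible. A minimal sketch (the function name and step limit are my own choices):

```python
# Collatz map: halve even numbers, send odd n to 3n + 1.
# The conjecture says every positive integer eventually reaches 1.
def collatz_steps(n: int, limit: int = 100_000) -> int:
    """Number of steps for n to reach 1, or -1 if the limit is hit."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
        if steps >= limit:
            return -1  # would hint at a counterexample; none is known
    return steps

print(collatz_steps(27))  # 27 takes a famously long trajectory: 111 steps
```

Checking any finite set of cases like this settles nothing about the conjecture itself, which is exactly why both outcomes remain conceivable.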
To me, understanding is just a semantics game that's structured with syntax. — TheMadFool
Is there a word with a referent that's impossible to be translated into computer-speak? — TheMadFool
You talk of "combinatorial complexity" and the way you speak of Einstein's work suggests to me that you think E=mc^2 to be nothing short of a miracle. — TheMadFool
If conceivability and possibility are different then the following are possible and I'd like some examples of each:
1. There's something conceivable that's impossible
2. There's something possible that's inconceivable — TheMadFool
I have no idea what that means. I hope that it means more than "understanding is semantics with syntax", which is, at best, a trite observation that does not explain anything.
Searle says that syntax can not give rise to semantics, and claims this to be the lesson of his "Chinese Room" paper. I don't agree, but I don't see the relationship as simple, either. — A Raybould
This is beside the point, as translation does not produce meaning, whether it is into "computer-speak" or anything else. Translation sometimes requires understanding, and it is specifically those cases where current machine translation tends to break down. — A Raybould
if, as you say, there's nothing special to understanding, and semantics is just associating words to their referents in a computer's memory, how come AI is having a problem with understanding, as is stated in the paper I linked to? Do you think all AI researchers are incompetent? — A Raybould
They may be possible, but it is certainly not necessary that there must be something possible that's inconceivable - and if there is, then neither you, I, nor anyone else is going to be able to say what it is. On the other hand, in mathematics, there are non-constructive proofs that show something is so without being able to give any examples, and it seems conceivable to me that in some of these cases, no example ever could be found.
I have twice given you an example of the former: If the Collatz conjecture is true, then that it is false is conceivable (at least until a proof is found) but not possible, and vice-versa. It has to be either one or the other. Whether these things would be regarded as inconceivable or not strikes me as a rather subtle semantic issue.
By the way, this example is a pretty straightforward combination of syntax, semantics and a little logic, so how do you account for your difficulty in understanding it? — A Raybould
Searle's argument doesn't stand up to careful scrutiny for the simple reason that semantics are simply acts of linking words to their referents. Just consider the sentence, "dogs eat meat". The semantic part of this sentence consists of matching the words "dog" with a particular animal, "eat" with an act, and "meat" with flesh, i.e. to their referents and that's it, nothing more, nothing less. Understanding is simply a match-the-following exercise, something a computer can easily accomplish. — TheMadFool
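TheMadFool's "match-the-following" picture of semantics can itself be written in a few lines, which also makes Searle's objection vivid: the "referents" below are just more symbols stored in memory, not animals, acts, or flesh out in the world. The dictionary entries and the sentence are my own illustrative choices.

```python
# Semantics as a lookup table, per the "match-the-following" picture.
# Note that every "referent" here is still just another string (more syntax).
referents = {
    "dogs": "ANIMAL:dog",   # a symbol standing in for the animal
    "eat":  "ACT:eating",   # a symbol standing in for the act
    "meat": "STUFF:flesh",  # a symbol standing in for the stuff
}

def match_referents(sentence: str) -> list[tuple[str, str]]:
    """Pair each word of the sentence with its stored 'referent', if any."""
    return [(w, referents[w]) for w in sentence.lower().split()
            if w in referents]

print(match_referents("Dogs eat meat"))
```

Whether a mapping like this amounts to understanding, or merely relocates the problem (the machine now "understands" neither the words nor the referent-symbols), is precisely what the exchange below is about.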
I wish I could locate the YouTube footage of Searle's wry account of early replies to his vivid demonstration (the Chinese Room) that so-called "cognitive scripts" mistook syntax for semantics. Something like, "so they said, ok we'll program the semantics into it too, but of course what they came back with was just more syntax". — bongo fury
I'll have another rummage.
I expect that you, like "they" in the story, haven't even considered that "referents" might have to be actual things out in the world? Or else how ever did the "linking" seem to you something simple and easily accomplished, by a computer, even??? Weird. — bongo fury
I expect that you, like "they" in the story, haven't even considered that "referents" might have to be actual things out in the world? — bongo fury
Yes, I did consider that. — TheMadFool
Referents can be almost anything, from physical objects to abstract concepts. — TheMadFool
Ah, so after due consideration you decided not. (The referents don't have to be things out in the world.) This was Searle's frustration. — bongo fury
I hope you reply soon to this query. — TheMadFool
They don't have to be but they can be, no? — TheMadFool
Well, I don't know why people make such a big deal of understanding — TheMadFool
referents — TheMadFool