It seems like you're making two points, one on pragmatism and the other epistemic. Pragmatically, I agree that we act like other people exist and are conscious, but that doesn't mean we should assume that's the way things are. — RogueAI

I don't know my epistemic from a hole in the ground. But it's certainly possible things aren't as they seem. That's happened enough times that we know better than to be surprised. But I don't know what your point is. We're all going to continue acting like other people exist and are conscious. We're not going to assume they're not and start acting on that. When people act like that, we cross to the other side of the street. If I find out things aren't as they seem, and none of you are real, then I'll possibly act differently.
Different types of sentience are, obviously, sentience. — Ludwig V

We don't know that for sure, unless we become one of them for real.
I also would accept that anything that's running the kind of software we currently use seems to me incapable of producing spontaneous behaviour, so those machines could only count as simulations. — Ludwig V

Simulation = Imitation?
I meant to say that it might - or rather, that there was no ground for ruling it out. — Ludwig V

What is the ground for your saying that there was no ground?
Simulation = Imitation? — Corvus

Yes. Do you disagree?
What is the ground for your saying that there was no ground? — Corvus

What is your ground for moving from "it hasn't happened" to "it will never happen"?
We don't know that for sure, unless we become one of them for real. — Corvus

I know that other people are sentient, so I assume that I can tell whether insects, bats, etc. are sentient and that rocks and rivers are not. Though I admit there may be cases when I can't tell. If I can't tell that other people are sentient, then I don't know what it is to be sentient.
Yes. Do you disagree? — Ludwig V

Imitation means not real, which can imply bogusness, cheating, deceit, and copying. AI guys wouldn't be happy to be called an 'imitation', if they had feelings. Just saying :)
What is your ground for moving from "it hasn't happened" to "it will never happen"? — Ludwig V

It is called inductive reasoning, on which all scientific knowledge has been based. It is a type of reasoning opposed to miraculous and magical predictions.
I know that other people are sentient, so I assume that I can tell whether insects, bats, etc. are sentient and that rocks and rivers are not. Though I admit there may be cases when I can't tell. If I can't tell that other people are sentient, then I don't know what it is to be sentient. — Ludwig V

I don't know what you know. You don't know what I know. We think we know what the others know, but is it verified knowledge or just mere guesswork?
If I can't tell that other people are sentient, then I don't know what it is to be sentient. — Ludwig V

Exactly.
It is called inductive reasoning, on which all scientific knowledge has been based. It is a type of reasoning opposed to miraculous and magical predictions. — Corvus

I see. But then, there's the traditional point that induction doesn't rule out that it might be false, as in "the sun might not rise tomorrow morning".
I don't know what you know. You don't know what I know. We think we know what the others know, but is it verified knowledge or just mere guesswork? — Corvus

There are two different questions here. If you know that p, I might also know that p, but not know that you know that p. But I can also know (and not just guess) that you know that p. For example, you might tell me that you know that p. And I can tell whether you are lying.
They seem to just want to be called "the useful assistance" to human needs. — Corvus

Yes. It sounds positively cosy, doesn't it? Watch out! Assistants have been known to take over.
Imitation means not real, which can imply bogusness, cheating, deceit, and copying. AI guys wouldn't be happy to be called an 'imitation', if they had feelings. — Corvus

You over-simplify. A forged painting is nonetheless a painting; it just wasn't painted by Rembrandt. An imitation of a painting by Rembrandt is also a painting (a real painting). It just wasn't painted by Rembrandt.
AI is comparable to a sophisticated parrot being able to say more than "Hello" and "Good morning". But in the end it just mindlessly spews out what has been fed into it without actually knowing what it says. — Pez

Yes. But what would you say if it mindlessly spews out what has been fed into it, but only when it is appropriate to do so? (I have in mind those little things an EPOS says from time to time. "Unexpected item in the bagging area", for example. Or the message "You are not connected to the internet" that my screen displays from time to time.) It's a kind of half-way house between parroting and talking.
If I can't tell that other people are sentient, then I don't know what it is to be sentient. — Ludwig V

Exactly. — Corvus

But I can tell that other people are sentient. I don't say it follows that I know what sentience is. Do you?
I see. But then, there's the traditional point that induction doesn't rule out that it might be false, as in "the sun might not rise tomorrow morning". — Ludwig V

Magic and miracles have far more probability than the sun's not rising tomorrow. If your claim was based on the induction that the sun might not rise tomorrow morning, then your claims were based on far less plausibility than miracles and magical workings.
For example, you might tell me that you know that p. And I can tell whether you are lying. — Ludwig V

That sounds like a comment from a mind-reading fortune teller. You need concrete evidence for making such judgements about others.
You over-simplify. A forged painting is nonetheless a painting; it just wasn't painted by Rembrandt. An imitation of a painting by Rembrandt is also a painting (a real painting). It just wasn't painted by Rembrandt. — Ludwig V

Your saying that the AI operation is simulation was a real over-simplification. My analysis of that claim and its implications was realistic and objective.
but when the parrot says "Good morning" it is imitating human speech and not really talking. — Ludwig V

I am not sure it can be concluded with certainty. These are things that cannot be easily proved.
I don't say it follows that I know what sentience is. Do you? — Ludwig V

Again it depends. It is not that simple.
But AI itself can never grasp the meaning of its utterances. It is like a parrot saying "Good morning" but never realizing what that means. — Pez

If you program a highly developed and intelligent AI device with a listening input device installed and connected to the processor, and sound-recognition software with interpreting algorithms, then the AI device would understand the language you speak to it. That doesn't mean that the AI is sentient, of course. It would just be doing what it is designed and programmed to do according to the set processes.
But AI itself can never grasp the meaning of its utterances. It is like a parrot saying "Good morning" but never realizing what that means. — Pez
You are seriously underestimating the intelligence of parrots. You should read about Alex, a grey parrot.
https://en.wikipedia.org/wiki/Alex_(parrot) — Agree-to-Disagree
Your saying that the AI operation is simulation was a real over-simplification. My analysis of that claim and its implications was realistic and objective. — Corvus

I did put my point badly. I've tried to find the analysis you refer to. I couldn't identify it. If you could point me in the right direction, I would be grateful.
Problem with all the mental operations and events is their privateness to the owners of the minds. No one will ever access what the other minds' owners think, feel, intend, etc. Mental events can only be construed with the actions of the agents and languages they speak by the other minds. To know what the AI machines think and feel, one must be an AI machine himself. The possibility of that happening in the real world sounds as unrealistic and impossible as the futile ramblings of time-travel fictions. — Corvus

That's a high bar. I agree that it is impossible to meet. But it proves too much, since it also proves that we can never even know that human beings have/are minds.
AI is unlikely to be sentient like humans without the human biological body. Without two hands AI cannot prove the existence of the external world, for instance. Without being able to drink, AI wouldn't know what a cup of coffee tastes like. — Corvus

I'm not sure of the significance of "sentient" in this context, but I agree whole-heartedly with your point that without the ability to act in the world, we could not be sentient because, to put it this way, our brains would not learn to interpret the data properly. The implication is that a machine in a box with no more than an input and output of language could not approximate a human mind. A related point that I remember you pointing out is that the machines we currently have do not have emotions or desires. Without them, to act as a human person is impossible. Yet, they could be simulated, couldn't they?
It isn't dismissive, it's objective. The fundamental mechanism of information processing via artificial neural networks has not changed. — Pantagruel
It is simply faster and more robust. It isn't one whit more intelligent than any other kind of mechanism. — Pantagruel
Nvidia hasn't become a two-trillion-dollar corporation because of hype. — wonderer1
This has absolutely no bearing on the inherent nature of the technology in question. — Pantagruel
That's exactly why Turing's test is so persuasive - except that when we find machines that could pass it, we don't accept the conclusion, but start worrying about what's going on inside them. If our test is going to be that the putative human needs to have a human inside - mentally if not necessarily physically, the game's over. — Ludwig V
Well, it has an important aspect of intelligence that many other systems don't have, which is learning. Do you think that a distinction between learning mechanisms and non-learning mechanisms is worthwhile to recognize? — wonderer1
So to me, the Turing Test doesn't seem to provide a useful criterion for much of anything. — wonderer1
Give AI senses and the possibility to act, and the difference from human behaviour will diminish in the long run. Does this mean that we are just sophisticated machines and all talk about freedom of choice and responsibility for our actions is just wishful thinking? Or is there something fundamentally wrong about our traditional concepts regarding mind and matter? I maintain that we need a new world-picture, especially as the Newtonian view is nowadays as outdated as the Ptolemaic system was in the 16th century. But this will be a new thread in our forum. — Pez

The possibly insurmountable challenge is to build a machine that has a sense of self, with motivations.
I've tried to clarify exactly where our disagreements lie, and what we seem to agree about. One source of trouble is that you seem to hold what I think of as the traditional view of other minds. — Ludwig V

I was just pointing out logical gaps in your arguments. Not prejudging your points at all. :)
I couldn't identify it. If you could point me in the right direction, I would be grateful. — Ludwig V

Through logical discourse, we are hoping to reach some conclusions or agreements on the topic. I don't presume anyone's point is wrong or right. All points are merely more plausible or less plausible.
On the other hand, you seem to allow some level of knowledge of other minds when you say "Mental events can only be construed with the actions of the agents and languages they speak by the other minds". It is striking that you use the word "construe", which suggests to me a process of interpretation rather than inference from evidence to conclusion. — Ludwig V

Yes, I meant "construe" to mean interpretation of other people's minds. I feel it is the right description, because there are many cases where we cannot have clear, unequivocal signs and evidence in real-life human-to-human communication. The only clear signs and evidence for your perception of other minds are language and actions, but due to the complexity of the human mind, the true intentions, desires and motives of humans can be hidden deep in their subconscious or unconscious, rendering them mysterious even to the owner of the mind.
Yes, I meant "construe" to mean interpretation of other people's minds. I feel it is the right description, because there are many cases where we cannot have clear, unequivocal signs and evidence in real-life human-to-human communication. — Corvus

Exactly - though I would have put it a bit differently. It doesn't matter here.
Inference can be made in more involved situations, if we are in a position to investigate further. In such cases, you would be looking for more evidence, and even psychological analysis. — Corvus

Yes. Further information can be very helpful. For example, the wider context is often crucial, as is information about the physiological state of the subject. That also shows up in the fact that, faced with the new AIs, we take into account the internal workings of the machinery.
I think the fundamental problem is that neither Turing nor the commentators since then have (so far as I know) distinguished between the way that we talk about (language-game or category) machines and the way that we talk about (language-game or category) people. — Ludwig V

I don't think there is any specific behaviour (verbal or non-verbal) that will distinguish clearly between these machines and people. We do not explain human actions in the same way as we explain what machines do. In the latter case, we apply causal explanations. In the former case, we usually apply explanations in terms of purposes and rationales. How do we decide which framework is applicable?
If these are the criteria for intelligence and maybe even self-consciousness, then AI certainly is sentient. — Pez

The next question is whether we can tease out why we attribute sentience and intelligence to the parrot and not to the AI. Is it just that the parrot is alive and the AI is not? Is that perhaps begging the question?
The possibly insurmountable challenge is to build a machine that has a sense of self, with motivations. — Relativist

Do we really want to? (Somebody else suggested that we might not even try.)
Do we really want to? (Somebody else suggested that we might not even try.) — Ludwig V

Sure: for proof of concept, it should be fine to produce some rudimentary intentionality, at the level of some low-level animals like cockroaches. Terminating it would then be a pleasure.
it should be fine to produce some rudimentary intentionality, at the level of some low-level animals like cockroaches. Terminating it would then be a pleasure. — Relativist

Yes, I guess so. So long as you make quite sure that they cannot reproduce themselves.
they can't think creatively. — Relativist
They would make great Christmas presents — Agree-to-Disagree
Well, some people claim that they can't think at all! Are you conceding that they can think, just not creatively? Can you give a definition of "creative thinking" that could be used in a Turing-type test? — Ludwig V

It depends on how you define thinking. Digital computers can certainly apply logic, and artificial neural networks can perform pattern recognition. One might label those processes as thoughts.