• Patterner
    1k
    It seems like you're making two points, one on pragmatism and the other epistemic. Pragmatically, I agree that we act like other people exist and are conscious, but that doesn't mean we should assume that's the way things are.RogueAI
    I don't know my epistemic from a hole in the ground. But it's certainly possible things aren't as they seem. That's happened enough times that we know better than to be surprised. But I don't know what your point is. We're all going to continue acting like other people exist and are conscious. We're not going to assume they're not, and start acting on that. When people act like that, we cross to the other side of the street. If I find out things aren't as they seem, and none of you are real, then I'll possibly act differently.
  • Manuel
    4.1k
    Can a computer think? Locke points out:

    "..since we know not wherein thinking consists..."

    Or Russell:

    "I do not know whether dogs can think, or what thinking is, or whether human beings can think. "

    Or maybe even Turing himself:

    "If the meaning of the words "machine" and "think" are to be found by examining how they are
    commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd."

    Italics mine.

    Or Wittgenstein:

    "We only say of a human being and what is like one that it thinks. We also say it of dolls and no doubt of spirits too."

    There are several more. It's a small problem, but perhaps we should clear up what this "thinking" is for us, before we attribute it to other things.
  • Corvus
    3.2k
    Different types of sentience are, obviously, sentience.Ludwig V
    We don't know that for sure, unless we become one of them for real.

    I also would accept that anything that's running the kind of software we currently use seems to me incapable of producing spontaneous behaviour, so those machines could only count as simulations.Ludwig V
    Simulation = Imitation?

    I meant to say that it might - or rather, that there was no ground for ruling it out.Ludwig V
    What is the ground for your saying that there was no ground?
  • Corvus
    3.2k
    ChatGPT seemed not very confident in understanding and responding to metaphorical questions.

    Me - "Do you smell a rat?"

    ChatGPT -
    "As an AI language model, I don't have the ability to smell or perceive things in the physical world. My capabilities are limited to processing and generating text-based responses to the best of my ability based on the input provided to me. If you have any concerns or suspicions, please feel free to share them, and I'll do my best to assist you."
  • Ludwig V
    1.7k
    Simulation = Imitation?Corvus
    Yes. Do you disagree?

    What is the ground for your saying that there was no ground?Corvus
    What is your ground for moving from "it hasn't happened" to "it will never happen"?

    We don't know that for sure, unless we become one of them for real.Corvus
    I know that other people are sentient, so I assume that I can tell whether insects, bats, etc. are sentient and that rocks and rivers are not. Though I admit there may be cases when I can't tell. If I can't tell that other people are sentient, then I don't know what it is to be sentient.
  • Corvus
    3.2k
    Yes. Do you disagree?Ludwig V
    Imitation means not real, which can imply being bogus, a cheat, a deceiver, or a copycat. AI guys wouldn't be happy to be called 'imitations', if they had feelings. Just saying :)
    They seem to just want to be called "useful assistants" to human needs.

    What is your ground for moving from "it hasn't happened" to "it will never happen"?Ludwig V
    It is called Inductive Reasoning, on which all scientific knowledge is based. It is a type of reasoning opposed to miraculous and magical predictions.

    I know that other people are sentient, so I assume that I can tell whether insects, bats, etc. are sentient and that rocks and rivers are not. Though I admit there may be cases when I can't tell. If I can't tell that other people are sentient, then I don't know what it is to be sentient.Ludwig V
    I don't know what you know. You don't know what I know. We think we know what others know, but is it verified knowledge or mere guesswork?

    If I can't tell that other people are sentient, then I don't know what it is to be sentient.Ludwig V
    Exactly.
  • Pez
    33
    It strikes me as a little like wanting one's puppets to come alive. :up:BC

    AI is comparable to a sophisticated parrot being able to say more than "Hello" and "Good morning". But in the end it just mindlessly spews out what has been fed into it without actually knowing what it says.
  • Ludwig V
    1.7k
    It is called Inductive Reasoning, on which all scientific knowledge is based. It is a type of reasoning opposed to miraculous and magical predictions.Corvus
    I see. But then, there's the traditional point that induction doesn't rule out that it might be false, as in "the sun might not rise tomorrow morning".

    I don't know what you know. You don't know what I know. We think we know what others know, but is it verified knowledge or mere guesswork?Corvus
    There are two different questions here. If you know that p, I might also know that p, but not that you know that p. But I can also know (and not just guess) that you know that p. For example, you might tell me that you know that p. And I can tell whether you are lying.

    They seem to just want to be called "useful assistants" to human needs.Corvus
    Yes. It sounds positively cosy, doesn't it? Watch out! Assistants have been known to take over.

    Imitation means not real, which can imply being bogus, a cheat, a deceiver, or a copycat. AI guys wouldn't be happy to be called 'imitations', if they had feelings.Corvus
    You over-simplify. A forged painting is nonetheless a painting; it just wasn't painted by Rembrandt. An imitation of a painting by Rembrandt is also a painting (a real painting). It just wasn't painted by Rembrandt.
    But I wouldn't call the AI guys an imitation. I would describe their work in programming a machine to do something that people do (e.g. talking) as creating an imitation. In the same way, a parrot is a real parrot and not an imitation; when I teach it to say "Good morning" I am not imitating anything; but when the parrot says "Good morning" it is imitating human speech and not really talking.

    AI is comparable to a sophisticated parrot being able to say more than "Hello" and "Good morning". But in the end it just mindlessly spews out what has been fed into it without actually knowing what it says.Pez
    Yes. But what would you say if it mindlessly spews out what has been fed into it, but only when it is appropriate to do so? (I have in mind those little things an EPOS says from time to time. "Unexpected item in the bagging area", for example. Or the message "You are not connected to the internet" that my screen displays from time to time.) It's a kind of half-way house between parroting and talking.
    More seriously, Searle argues that computers don't calculate, because it is we who attribute the significance to the results. But we attribute that significance to them because of the way that they were arrived at, so I think it is perfectly appropriate to say that they do calculate. Of course it doesn't follow that they are people or sentient or even rational.

    If I can't tell that other people are sentient, then I don't know what it is to be sentient.
    — Ludwig V
    Exactly.
    Corvus
    But I can tell that other people are sentient. I don't say it follows that I know what sentience is. Do you?
  • Corvus
    3.2k
    I see. But then, there's the traditional point that induction doesn't rule out that it might be false, as in "the sun might not rise tomorrow morning".Ludwig V
    Magic and miracles have far more probability than the sun not rising tomorrow. If your claim was based on the induction that the sun might not rise tomorrow morning, then your claims rest on far less plausibility than miracles and magical workings.

    It is unusual for anyone to opt for, and believe in, the near-zero-probability case while leaving out the clearly more probable case in inductive reasoning. Any particular reason for that?

    For example, you might tell me that you know that p. And I can tell whether you are lying.Ludwig V
    That sounds like a comment from a mind-reading fortune teller. You need concrete evidence to make such judgements about others.

    You over-simplify. A forged painting is nonetheless a painting; it just wasn't painted by Rembrandt. An imitation of a painting by Rembrandt is also a painting (a real painting). It just wasn't painted by Rembrandt.Ludwig V
    Your saying that the AI operation is simulation was a real over-simplification. My analysis of that claim and its implications was realistic and objective.

    but when the parrot says "Good morning" it is imitating human speech and not really talking.Ludwig V
    I am not sure it can be concluded with certainty. These are things that cannot be easily proved.

    I don't say it follows that I know what sentience is. Do you?Ludwig V
    Again it depends. It is not that simple.
  • Pez
    33
    Again it depends. It is not that simple.Corvus

    Of course it is not that simple. But this is precisely the interesting point of our discussion (for me at least).

    To come back to the parrot. There have been long debates about the relation between a concept and its meaning. The idea that a concept can have a meaning rests on the assumption that there is a two-fold relation between concept and meaning. Now C. S. Peirce came up with a refreshing suggestion: what if this relation were three-fold: sign (as he called it), meaning, and "interpretant", or someone who understands that sign? Signs (words, signposts, utterances) do not have a meaning unless there is someone who understands them.

    Just imagine: you see a Chinese character you have never seen before. It cannot have any meaning to you. Someone born and raised in China, though, easily connects a meaning to that character. AI can easily put forward a string of expressions you and I can link a meaning to. But AI itself can never grasp the meaning of its utterances. It is like a parrot saying "Good morning" but never realizing what that means.
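    The triadic point can be made concrete with a toy sketch in Python (the dictionaries and names below are invented purely for illustration; this is a structural picture, not a claim about Peirce's own formalism):

    ```python
    # Toy sketch of the two-fold vs. three-fold picture of meaning.
    # The data and names are invented for illustration only.

    # Two-fold picture: the sign simply "has" a meaning.
    dyadic = {"好": "good"}

    # Peirce's three-fold picture: meaning arises only relative to an
    # interpretant, i.e. someone equipped to understand the sign.
    interpretants = {
        "chinese_reader": {"好": "good"},
        "non_reader": {},  # the same sign carries no meaning here
    }

    def meaning(sign: str, interpretant: str):
        """Triadic relation: (sign, interpretant) -> meaning, or None."""
        return interpretants.get(interpretant, {}).get(sign)

    print(meaning("好", "chinese_reader"))  # good
    print(meaning("好", "non_reader"))      # None
    ```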
  • Corvus
    3.2k
    But AI itself can never grasp the meaning of its utterances. It is like a parrot saying "Good morning" but never realizing what that means.Pez
    If you equip a highly developed and intelligent AI device with a listening input device connected to the processor, plus sound-recognition software with interpreting algorithms, then the AI device would understand the language you speak to it. That doesn't mean that the AI is sentient, of course. It would just be doing what it was designed and programmed to do, according to the programmed and set processes.
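    A rough sketch of the kind of processing chain described above, in Python (every function and name here is a hypothetical stand-in, not the API of any real speech-recognition library):

    ```python
    # Sketch of the chain: audio input -> sound recognition ->
    # interpreting algorithm -> canned response.
    # All names are hypothetical stand-ins.

    def transcribe(audio: bytes) -> str:
        """Sound-recognition step. A real device would run a speech
        model here; this placeholder just returns fixed text."""
        return "do you smell a rat"

    def interpret(text: str) -> str:
        """Interpreting algorithm: map text onto a fixed set of intents."""
        if "smell a rat" in text.lower():
            return "idiom_suspicion"
        if "good morning" in text.lower():
            return "greeting"
        return "unknown"

    responses = {
        "idiom_suspicion": "Do you suspect something is wrong?",
        "greeting": "Good morning to you too.",
        "unknown": "I'm not sure what you mean.",
    }

    def respond(audio: bytes) -> str:
        # The device "understands" only in the sense of executing this
        # programmed mapping, which is the point being made above.
        return responses[interpret(transcribe(audio))]

    print(respond(b""))  # Do you suspect something is wrong?
    ```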

    As for parrots understanding "Good morning", I am not sure, because I have never kept any pets in my life. But I am sure that if you keep saying "Good morning" to a parrot every morning when you see her, she will understand what you mean, and learn the utterance as well.

    Dogs and cats definitely understand some simple human language for fetching things, giving their paws, etc., when spoken to by their masters. But they can't utter human words, because they lack the vocal apparatus needed to produce recognisable human speech.
  • Agree-to-Disagree
    466
    But AI itself can never grasp the meaning of its utterances. It is like a parrot saying "Good morning" but never realizing what that means.Pez

    You are seriously underestimating the intelligence of parrots. You should read about Alex, a grey parrot.
    https://en.wikipedia.org/wiki/Alex_(parrot)

    Here are some quotes:

    Alex was an acronym for avian language experiment, or avian learning experiment. He was compared to Albert Einstein and at two years old was correctly answering questions made for six-year-olds.

    He could identify 50 different objects and recognize quantities up to six; he could distinguish seven colors and five shapes, and understand the concepts of "bigger", "smaller", "same", and "different"; and he was learning "over" and "under".

    Alex had a vocabulary of over 100 words, but was exceptional in that he appeared to have understanding of what he said. For example, when Alex was shown an object and asked about its shape, color, or material, he could label it correctly.

    Looking at a mirror, he said "what color", and learned the word "grey" after being told "grey" six times. This made him the first non-human animal to have ever asked a question, let alone an existential one (apes who have been trained to use sign-language have so far failed to ever ask a single question).

    When he was tired of being tested, he would say "Wanna go back", meaning he wanted to go back to his cage, and in general, he would request where he wanted to be taken by saying "Wanna go ...", protest if he was taken to a different place, and sit quietly when taken to his preferred spot. He was not trained to say where he wanted to go, but picked it up from being asked where he would like to be taken.
  • Agree-to-Disagree
    466
    You are seriously underestimating the intelligence of parrots. You should read about Alex, a grey parrot.
    https://en.wikipedia.org/wiki/Alex_(parrot)
    Agree-to-Disagree

    We have been discussing whether AI is or can be sentient. How about answering a simpler question.

    Is Alex (the grey parrot) sentient?

    See the original post about Alex (the grey parrot) here:
    https://thephilosophyforum.com/discussion/comment/885076
  • Ludwig V
    1.7k
    We're not getting anywhere like this. Time to try something different.
    Your saying that the AI operation is simulation was a real over-simplification. My analysis of that claim and its implications was realistic and objective.Corvus
    I did put my point badly. I've tried to find the analysis you refer to. I couldn't identify it. If you could point me in the right direction, I would be grateful.

    I've tried to clarify exactly where our disagreements lie, and what we seem to agree about. One source of trouble is that you seem to hold what I think of as the traditional view of other minds.
    The problem with all mental operations and events is their privateness to the owners of the minds. No one will ever access what the owners of other minds think, feel, intend ... etc. Mental events can only be construed with the actions of the agents and languages they speak by the other minds.
    .....To know what the AI machines think and feel, one must be an AI machine oneself. The possibility of that happening in the real world sounds as unrealistic and impossible as the futile ramblings of time-travel fiction.
    Corvus
    That's a high bar. I agree that it is impossible to meet. But it proves too much since it also proves that we can never even know that human beings have/are minds.
    On the other hand, you seem to allow some level of knowledge of other minds when you say "Mental events can only be construed with the actions of the agents and languages they speak by the other minds". It is striking that you use the word "construe" which suggests to me a process of interpretation rather than inference from evidence to conclusion. I think it is true that what we know of other minds, we know by interpreting what we see and hear of other people.
    You also say:-
    AI is unlikely to be sentient like humans without the human biological body. Without two hands, AI cannot prove the existence of the external world, for instance. Without being able to drink, AI wouldn't know what a cup of coffee tastes like.Corvus
    I'm not sure of the significance of "sentient" in this context, but I agree whole-heartedly with your point that without the ability to act in the world, we could not be sentient because, to put it this way, our brains would not learn to interpret the data properly. The implication is that the machine in a box with no more than an input and output of language could not approximate a human mind. A related point that I remember you pointing out is that the machines that we currently have do not have emotions or desires. Without them, to act as a human person is impossible. Yet, they could be simulated, couldn't they?

    There is not yet an understanding of what, for me, is a key point in all of this. The framework (language game) which we apply to human persons is different from the framework (language game) that we apply to machines. It is not an inference to anything hidden, but a different category. If a flag waves, we do not wonder what its purpose is - why it is waving. But we do ask why that guy over there is waving. Actions by people are explained by reasons and purposes. This isn't a bullet-proof statement of a thesis, but an opening-up of what I think the crucial question is.

    Yes, I do have ideas about how such a discussion might develop and progress, but the first step is to put the question of why we attribute what philosophy calls actions to human beings and not to machines, and I want to say it is not a matter of any specific evidence, but of how the evidence is interpreted. We see human beings as people and we see computers as machines. That's the difference we need to understand.


    Yes, animals have a way of surprising us. They are perfectly capable of learning and one wonders where the limits are.

    But even without Alex's achievements, I would have said that Alex is sentient. Animals are contested territory because they are (in relevant respects) like us in some ways and unlike us in other ways. In other words, they are not machines. To put it another way, we can relate to them and they can relate to us, but the relationships are not exactly the same as the relationships between human beings. It's really complicated and it is important to pay attention to the details of each case.
  • wonderer1
    2.2k
    It isn't dismissive, it's objective. The fundamental mechanism of information processing via artificial neural networks has not changed.Pantagruel

    There are different aspects of information processing to be considered. Yes, understanding of how neural networks can process data in powerful ways has been around for a long time. The hardware that allows that sort of information processing to be practical is a much more recent arrival.

    It is simply faster and more robust. It isn't one whit more intelligent than any other kind of mechanism.Pantagruel

    Well, it has an important aspect of intelligence that many other systems don't have, which is learning. Do you think that a distinction between learning mechanisms and non-learning mechanisms is worthwhile to recognize?
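    To make that distinction concrete, here is a toy sketch in Python (both "mechanisms" are invented for illustration; the learning one is a bare one-weight perceptron):

    ```python
    # Toy illustration of learning vs. non-learning mechanisms.
    # Both map a number to a yes/no output; only the second changes
    # itself in response to examples. Invented for illustration only.

    def fixed_mechanism(x: float) -> bool:
        # Non-learning: the rule is hard-wired once and forever.
        return x > 0.5

    class LearningMechanism:
        """A one-weight perceptron: the simplest self-adjusting rule."""
        def __init__(self) -> None:
            self.w = 0.0

        def predict(self, x: float) -> bool:
            return self.w * x > 0.5

        def update(self, x: float, target: bool, lr: float = 0.1) -> None:
            # Nudge the weight whenever the prediction misses the target.
            error = float(target) - float(self.predict(x))
            self.w += lr * error * x

    m = LearningMechanism()
    for _ in range(10):
        m.update(1.0, True)   # examples gradually reshape the mechanism
    print(m.predict(1.0))     # True: behaviour changed without re-programming
    ```

    The toy is trivial, but the distinction it marks (parameters reshaped by examples versus rules fixed at design time) is the one at issue.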

    Nvidia hasn't become a two-trillion-dollar corporation because of hype.
    — wonderer1

    This has absolutely no bearing on the inherent nature of the technology in question.
    Pantagruel

    It certainly has bearing on the systems that are actually implemented these days. The type of physical system available to implement artificial neural nets plays a significant role in what can be achieved with such systems. The degree of parallel distributed processing is higher these days, and in that sense the hardware is more brain-like.
  • wonderer1
    2.2k
    That's exactly why Turing's test is so persuasive - except that when we find machines that could pass it, we don't accept the conclusion, but start worrying about what's going on inside them. If our test is going to be that the putative human needs to have a human inside - mentally if not necessarily physically, the game's over.Ludwig V

    It seems to me that it is time to rethink the relevance of the Turing Test. If humans ever create a machine that develops sentience, I would expect the machine to think in ways quite alien to us. So I don't see 'being indistinguishable from a human' as a very good criterion for judging sentience. (Or at the very least, humanity will need to attain a much deeper understanding of our own natures, to create sentient machines whose sentience is human-like.)

    Furthermore, it seems quite plausible that machines with no sentience will soon be able to convince many Turing Test judges. So to me, the Turing Test doesn't seem to provide a useful criterion for much of anything.
  • Ludwig V
    1.7k


    I agree with every word of that! :smile:

    I think the fundamental problem is that neither Turing nor the commentators since then have (so far as I know) distinguished between the way that we talk about (language-game or category) machines and the way that we talk about (language-game or category) people. It is easy to agree that what the machine does is the only way that we can even imagine tackling the question, and yet mean completely different things by it.

    For example, one can't even formulate the question. "Could a machine be a (not necessarily human) person?" By definition, no. But that's very unhelpful.

    But then we can think of a human being as a machine (for certain purposes) and even think of a machine as a person (in certain circumstances).

    My preferred strategy would be to start from the concept of a human person and consider what versions or half-way houses we already recognize so as to get a handle on what a machine person would look like. We would need to think about animals, which some people seem to be doing, but personification and anthropomorphization and empathy would need to figure as well. It would even help to consider fictional representations.
  • Pantagruel
    3.4k
    Well, it has an important aspect of intelligence that many other systems don't have, which is learning. Do you think that a distinction between learning mechanisms and non-learning mechanisms is worthwhile to recognize?wonderer1

    Sure, as long as we understand that learning reflects the ability of a pattern-recognizer to adapt to novel instances. I don't conceive of "machine-learning" in that sense as evocative of sentience any more than I do the outputs of artificial neural networks.

    I do think there is a wealth of information to be gleaned, both about the nature of neural networks themselves as exemplars of self-modifying feedback systems (learning), and potentially about the nature of reality, through the scientific analysis of data using neural networks.
  • Pantagruel
    3.4k
    So to me, the Turing Test doesn't seem to provide a useful criterion for much of anything.wonderer1

    I've been pondering it a lot myself for the last week and I'd agree with this.
  • Relativist
    2.6k
    Give AI senses and the ability to act, and the difference from human behaviour will diminish in the long run. Does this mean that we are just sophisticated machines and all talk about freedom of choice and responsibility for our actions is just wishful thinking? Or is there something fundamentally wrong about our traditional concepts regarding mind and matter? I maintain that we need a new world-picture, especially as the Newtonian view is nowadays as outdated as the Ptolemaic system was in the 16th century. But this will be a new thread in our forum.Pez
    The possibly insurmountable challenge is to build a machine that has a sense of self, with motivations.
  • Pez
    33
    You are seriously underestimating the intelligence of parrotsAgree-to-Disagree

    Sorry if I did that! But still, I suppose that even today's AI can easily do what Alex is able to do. If these are the criteria for intelligence and maybe even self-consciousness, then AI certainly is sentient.
  • Corvus
    3.2k
    I've tried to clarify exactly where our disagreements lie, and what we seem to agree about. One source of trouble is that you seem to hold what I think of as the traditional view of other minds.Ludwig V
    I was just pointing out logical gaps in your arguments. Not prejudging your points at all. :)

    I couldn't identify it. If you could point me in the right direction, I would be grateful.Ludwig V
    Through logical discourse, we hope to reach some conclusions or agreements on the topic. I don't presume anyone's point is wrong or right. All points are only more plausible or less plausible.

    On the other hand, you seem to allow some level of knowledge of other minds when you say "Mental events can only be construed with the actions of the agents and languages they speak by the other minds". It is striking that you use the word "construe" which suggests to me a process of interpretation rather than inference from evidence to conclusion.Ludwig V
    Yes, I meant "construe" to mean interpretation for other people's minds. I feel it is the right way of description, because there are many cases that we cannot have clear and obvious unequivocal signs and evidences in real life human to human communications. Only clear signs and evidence for your perception on other minds are language and actions, but due to the complexity of human mind, the true intentions, desires and motives of humans can be hidden deep inside their subconscious or unconscious rendering into the state of mysteries even to the owner of the mind.

    To reiterate the main point, we can only interpret the contents of other minds from the overt expressions, such as language and actions, that they exhibit in communication. Inference can be made in more involved situations, if we are in a position to investigate further. In that case, you would be looking for more evidence, and even psychological analysis in certain cases.
  • Ludwig V
    1.7k
    Yes, I meant "construe" to mean interpretation for other people's minds. I feel it is the right way of description, because there are many cases that we cannot have clear and obvious unequivocal signs and evidences in real life human to human communications.Corvus
    Exactly - though I would have put it a bit differently. It doesn't matter here.

    Inference can be made in more involved situations, if we are in a position to investigate further. In that case, you would be looking for more evidence, and even psychological analysis in certain cases.Corvus
    Yes. Further information can be very helpful. For example, the wider context is often crucial. In addition, information about the physiological state of the subject. That also shows up in the fact that, faced with the new AIs, we take into account the internal workings of the machinery.

    But you don't comment on what I think is the fundamental problem here:
    I think the fundamental problem is that neither Turing nor the commentators since then have (so far as I know) distinguished between the way that we talk about (language-game or category) machines and the way that we talk about (language-game or category) people.Ludwig V
    I don't think there is any specific behaviour (verbal or non-verbal) that will distinguish clearly between these machines and people. We do not explain human actions in the same way as we explain what machines do. In the latter case, we apply causal explanations. In the former case, we usually apply explanations in terms of purposes and rationales. How do we decide which framework is applicable?

    Scrutinizing the machines that we have is not going to get us very far, but it seems to me that we can get some clues from the half-way houses.

    If these are the criteria for intelligence and maybe even self-consciousness, then AI certainly is sentient.Pez
    The question that comes next is whether we can tease out why we attribute sentience and intelligence to the parrot and not to the AI. Is it just that the parrot is alive and the AI is not? Is that perhaps begging the question?

    The possibly insurmountable challenge is to build a machine that has a sense of self, with motivations.Relativist
    Do we really want to? (Somebody else suggested that we might not even try)
  • Relativist
    2.6k
    Do we really want to? (Somebody else suggested that we might not even try)Ludwig V
    Sure: for proof of concept, it should be fine to produce some rudimentary intentionality, at the level of some low-level animals like cockroaches. Terminating it would then be a pleasure.
  • Ludwig V
    1.7k
    it should be fine to produce some rudimentary intentionality, at the level of some low-level animals like cockroaches. Terminating it would then be a pleasure.Relativist
    Yes, I guess so. So long as you make quite sure that they cannot reproduce themselves.

    It seems safe to predict that, on the whole, we will prefer our machines to do something better than we can, rather than doing everything as badly as we do. Who would want a machine that needs as much care and attention and takes as long to make (20 years start to finish) as a human being? It wouldn't make economic sense.
  • Relativist
    2.6k
    Machines do lots of things better than we do, but they can't think creatively. Self-driving cars are possible, but their programming is very different from the way we drive.
  • Agree-to-Disagree
    466
    The possibly insurmountable challenge is to build a machine that has a sense of self, with motivations.Relativist

    Do we really want to? (Somebody else suggested that we might not even try)Ludwig V

    Sure: for proof of concept, it should be fine to produce some rudimentary intentionality, at the level of some low-level animals like cockroaches. Terminating it would then be a pleasure.Relativist

    Yes, I guess so. So long as you make quite sure that they cannot reproduce themselves.Ludwig V

    If you build a machine that has a sense of self, then one of its motivations is likely to be self-survival. Why build a machine that will destroy itself?

    Once the genie is out of the bottle, you can't put it back in. People will think that they can control the machine cockroaches. History shows how stupid people can be.

    They don't have to be able to reproduce themselves. People will happily build factories that produce them by the millions. They would make great Christmas presents. @Relativist will spend the rest of his life stamping on machine cockroaches. That is assuming that the machine cockroaches don't get him first. The machine cockroaches would see @Relativist as a threat to their self-survival motivation.
  • Ludwig V
    1.7k
    they can't think creatively.Relativist

    Well, some people claim that they can't think at all! Are you conceding that they can think, just not creatively? Can you give a definition of "creative thinking" that could be used in a Turing-type test?

    There's an inherent risk in trying to draw a clear, single line here. If you identify something that machines can't do, some whizzkid will set to work to devise a machine that does it. It may be a simulation, but it may not.

    Let's suppose they do finally develop a machine that can drive a car or lorry or bus as well as or better than humans can, but in a different way. Suppose they are sold and people use them every day. What would be the point in denying that they are self-driving just because they do it in a different way?
  • Ludwig V
    1.7k
    They would make great Christmas presentsAgree-to-Disagree

    That's an interesting idea. Perhaps someone will design artificial birds and deer - even big game - so that hunters can kill them without anyone getting upset.
  • Relativist
    2.6k
    Well, some people claim that they can't think at all! Are you conceding that they can think, just not creatively? Can you give a definition of "creative thinking" that could be used in a Turing-type test?Ludwig V
    It depends on how you define thinking. Digital computers can certainly apply logic, and Artificial Neural Networks can perform pattern recognition. One might label those processes as thoughts.

    The Turing Test is too weak, because it can be passed with a simulation. Simulating intelligent behavior is not actually behaving intelligently.

    What I had in mind with my comment about creativity was this. When you drive, if a child runs into the street, you will do whatever is necessary to avoid hitting her: brake if possible, but you might even swerve into a ditch or a parked car to avoid hitting the kid. Your actions will depend on a broad set of perceptions and background knowledge, and will be partly directed by emotion. A self-driving car will merely detect an obstacle in its path and execute the action it is programmed to take. It can't think outside the box. A broader set of optional responses could be programmed into it, giving the impression of creativity, but the car wouldn't have spontaneously created the response, as you might.
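    The point can be put in code. A sketch of the kind of fixed response table meant here, in Python (the scenario and action names are made up for the example):

    ```python
    # Toy sketch: the car's repertoire is a fixed, pre-programmed table.
    # A bigger table can look more creative, but nothing outside the
    # table can ever happen. Scenario and action names are made up.

    response_table = {
        "obstacle_ahead": "brake",
        "obstacle_ahead_no_stopping_distance": "swerve_to_shoulder",
        "pedestrian_in_path": "emergency_brake",
    }

    def react(perceived_scenario: str) -> str:
        # Anything the programmers did not anticipate falls through to
        # a default; the car cannot invent a genuinely new manoeuvre.
        return response_table.get(perceived_scenario, "brake")

    print(react("pedestrian_in_path"))    # emergency_brake
    print(react("child_chasing_ball"))    # brake: no creative swerve here
    ```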