• frank
    16.2k
    Hinton's argument is basically that AI is sentient because it thinks like we do. People may object to this by saying animals have subjective experience and AIs don't, but this is wrong. People don't have subjective experiences.

    When we say we've experienced X, we're saying that the world would have to be in state X in order for our perceptual systems to be functioning properly. This is what language use about experience means.

    For more: in this video, Hinton briefly explains large language models, how AIs learn to speak, and why AIs will probably take over the world.

  • bert1
    2k
    When we say we've experienced X, we're saying that the world would have to be in state X in order for our perceptual systems to be functioning properly. This is what language use about experience means. — frank

    That's really not what people generally mean.
  • frank
    16.2k
    That's really not what people generally mean. — bert1

    What do people mean?
  • Benkei
    7.8k
    We mean what we say whereas AI probabilistically estimates that what it says is what you want it to mean.
  • Joshs
    5.9k


    Hinton's argument is basically that AI is sentient because it thinks like we do. People may object to this by saying animals have subjective experience and AIs don't, but this is wrong. People don't have subjective experiences. — frank

    The nature of living systems is to change themselves in ways that retain a normative continuity in the face of changing circumstances. Cognition is an elaboration of such organismic dynamics. A.I. changes itself according to principles that we program into it, in relation to norms that belong to us. Thus, A.I. is an appendage of our own self-organizing ecology. It will only think when it becomes a self-organizing system which can produce and change its own norms. No machine can do that, since the very nature of being a machine is to have its norms constructed by a human.
  • Moliere
    4.9k
    There's the part which I agree with -- LLMs are dangerous -- but the part I disagree with is his philosophical move.

    Rejecting the Cartesian theatre is harder to do than he's indicating. For instance, he says that his perceptual system tells him things -- so we have two minds talking within the mind to explain the mind.

    Most people who get into phil-o-mind reject Descartes. It's sort of the first move -- to realize that Descartes exploits a common prejudice in building his philosophy, that there is a thinking-thing. And here we have the professor still relying upon a thinking-thing: the brain doing its computations.

    But what if the mind is not the brain at all? Well, then LLMs are dangerous, and everything the professor said is irrelevant. As it so happens that's what I tend to believe -- that the mind is socially enacted and passed on, rather than computed within a brain. So there's no Cartesian theatre, but there's also no comparison to computers.
  • frank
    16.2k
    We mean what we say whereas AI probabilistically estimates that what it says is what you want it to mean. — Benkei

    I think Hinton believes that as we speak, we're doing the same thing his AI design is doing. In the spaces between words, we're quickly running a trial-and-error process that ends with choosing a successful component of information encoding.

    The idea is that intention is a misconception.
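
    As a toy sketch of that mechanical picture (my own illustration with made-up words and scores, not Hinton's actual model): score each candidate next word, turn the scores into probabilities, and sample one.

    import math
    import random

    # Toy next-word chooser: convert candidate scores into probabilities
    # with a softmax, then sample one word. Real LLMs do this over tens
    # of thousands of tokens, once per generated token, with learned scores.
    def choose_next_word(scores):
        exps = {word: math.exp(s) for word, s in scores.items()}
        total = sum(exps.values())
        words = list(exps)
        weights = [exps[w] / total for w in words]
        return random.choices(words, weights=weights)[0]

    # Made-up scores for continuing "The cat sat on the ..."
    print(choose_next_word({"mat": 2.0, "roof": 1.0, "moon": -1.0}))

    The "trial and error" is the sampling step; all the learning lives in the scores.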
  • frank
    16.2k
    The nature of living systems is to change themselves in ways that retain a normative continuity in the face of changing circumstances — Joshs

    That's handled by your neuroendocrine system in a way that has no more consciousness than an AI's input. If you actually had to consciously generate homeostasis, you'd die in about 5 minutes.

    Cognition is an elaboration of such organismic dynamics. — Joshs

    Is there some reason to believe this is so? A reason that isn't about Heidegger?
  • Moliere
    4.9k
    Is there some reason to believe this is so? A reason that isn't about Heidegger? — frank

    I'd say that Heidegger's philosophy is one which attempts to overcome the Cartesian subject, and so anyone who would reject Descartes ought to be familiar* with Heidegger.

    *EDIT: Well, really all I mean is that one can't be dismissive. I'm aware that lots of people here are familiar, but it didn't seem that Hinton was, or at least he didn't really address that philosophical perspective so much as assume mind-brain identity. (EDIT2: Well, for human beings at least. But he is at least equating the mind to computation, which is as false as the idea he criticizes.)
  • Joshs
    5.9k
    ↪Joshs
    The nature of living systems is to change themselves in ways that retain a normative continuity in the face of changing circumstances
    — Joshs

    That's handled by your neuroendocrine system in a way that has no more consciousness than an AI's input. If you actually had to consciously generate homeostasis, you'd die in about 5 minutes.
    — frank

    Consciousness is not some special place walled off from the rest of the functional activity of an organism. It’s merely a higher level of integration. The point is that the basis of the synthetic, unifying activity of what we call consciousness is already present in the simplest unicellular organisms, in the functionally unified way in which they behave towards their environment on the basis of normative goal-directedness. What A.I. lacks is the ability to set its own norms. An A.I. engineer creates a clever A.I. system that causes people to talk excitedly about it ‘thinking’ like we do. But the product the engineer releases to the public, no matter how dynamic, flexible and self-transformative it appears to be, will never actually do anything outside of the limits of the conceptual structures that formed the basis of its design.

    Now let’s say that a year later engineers produce a new A.I. system based on a new and improved architecture. The same will be true of this new system as the old. It will never be or do anything that exceeds the conceptual limitations of its design. It is no more ‘sentient’ or ‘thinking’ than a piece of artwork. Both the artwork and the A.I. are expressions of the state of the art of creative thought of their human creator at a given point in time. A.I. is just a painting with lots of statistically calculated moving parts. That’s not what thinking is or does in a living system. A machine cannot reinvent itself as new and improved without resort to a human engineer.
  • frank
    16.2k
    Consciousness is not some special place walled off from the rest of the functional activity of an organism. It’s merely a higher level of integration. The point is that the basis of the synthetic, unifying activity of what we call consciousness is already present in the simplest unicellular organisms, in the functionally unified way in which they behave towards their environment on the basis of normative goal-directedness.
    — Joshs

    If I could just get this off my chest before we move on to the good stuff: we do not presently have a theory of consciousness that goes beyond explaining some functions. We do not know what causes it. We do not know how it works. What you've got is one of many interesting ways of speculating about it.

    What A.I. lacks is the ability to set its own norms. — Joshs

    Animals set their own norms? How?

    Both the artwork and the A.I. are expressions of the state of the art of creative thought of their human creator at a given point in time. A.I. is just a painting with lots of statistically calculated moving parts. — Joshs

    And this bears on Hinton's criticism of Chomsky. Hinton thinks Chomsky is wrong that language acquisition has an innate basis. He's pretty convinced that his design does the same thing a human does, therefore it must be the same thing. Babies aren't presented with trillions of bits of data, though.
  • Arcane Sandwich
    1.1k
    Here are my two cents, for what they're worth.

    Suppose (if only for the sake of argument) that an Artificial Intelligence becomes sentient. In that case, it will have something in common with human beings (sentience, subjectivity, whatever you want to call it) but not life. Why not? Because life has a precise meaning in biology. At the very least, a living being needs to have genetic material (i.e., DNA and/or RNA) and cellular organization (it must be a single-celled organism like a bacterium or a multi-cellular organism like an animal). No A.I. has DNA or RNA, nor is it composed of cells. In that sense, an A.I. is an inorganic object. It has something in common with stones in that sense, instead of having something in common with human beings. It is an intelligent and yet lifeless, inorganic object. It would be as if a stone had intelligence and subjectivity, that's how I see it. And that, if it goes unchecked, can lead to all sorts of practical problems.
  • wonderer1
    2.2k
    A.I. changes itself according to principles that we program into it, in relation to norms that belong to us. — Joshs

    The same will be true of this new system as the old. It will never be or do anything that exceeds the conceptual limitations of its design. — Joshs

    This seems rather naive when it comes to neural net based AI.

    Consider this excerpt from a recent Science Daily article:

    What is more, the AI behind the new system has produced strange new designs featuring unusual patterns of circuitry. Kaushik Sengupta, the lead researcher, said the designs were unintuitive and unlikely to be developed by a human mind. But they frequently offer marked improvements over even the best standard chips.

    "We are coming up with structures that are complex and looks random shaped and when connected with circuits, they create previously unachievable performance. Humans cannot really understand them, but they can work better," said Sengupta, a professor of electrical and computer engineering and co-director of NextG, Princeton's industry partnership program to develop next-generation communications.
  • jkop
    937
    People don't have subjective experiences. — frank

    Well, during the traditional discussion between the Nobel prize winners, Hinton seemed to hold a grudge against philosophy and the notion of subjectivity. But then he added that ethics is fine, as if to appear less fanatical.
  • Arcane Sandwich
    1.1k
    But then he added that ethics is fine, as if to appear less fanatical. — jkop

    Smart move on his part. Nice.
  • frank
    16.2k
    Well, during the traditional discussion between the Nobel prize winners, Hinton seemed to hold a grudge against philosophy and the notion of subjectivity. But then he added that ethics is fine, as if to appear less fanatical. — jkop

    There's a difference between artificial achievement and artificial intelligence. Some would say AI demonstrates the first, but not the second. I think Hinton is saying there's no difference between the two. Humans don't have what's being called "intelligence" either.

    Does morality need intelligence? Or is achievement enough?

    I'll post the article that lays out that distinction shortly.
  • Wayfarer
    23.2k
    Hinton's argument is basically that AI is sentient because it thinks like we do. People may object to this by saying animals have subjective experience and AIs don't, but this is wrong. People don't have subjective experiences.

    When we say we've experienced X, we're saying that the world would have to be in state X in order for our perceptual systems to be functioning properly. This is what language use about experience means.

    For more: in this video, Hinton briefly explains large language models, how AIs learn to speak, and why AIs will probably take over the world.
    — frank

    I put this to both ChatGPT and Claude.ai, and they both said this is eliminative materialism, which fails to face up to the indubitably subjective nature of consciousness. FWIW:


    https://claude.ai/chat/abdb11d6-c92c-4e36-94db-d8638f908cb1

    https://chatgpt.com/share/67818b09-b100-800c-b8bf-28fe78a6e466
  • bert1
    2k
    What do people mean? — frank

    In the unlikely event that @Banno says "I experience a medium-sized dry good on my kitchen table" he probably means "There is a red cup". He almost certainly doesn't mean "In order for my perceptual systems to be working properly there must be a red cup on my table."

    In general people don't usually say they experience things. Usually it's redundant to use 'experience'. However sometimes people want to draw attention to the fact of experience, and when they do, they are drawing attention to the fact that they are feeling something.
  • Arcane Sandwich
    1.1k
    I put this to both ChatGPT and Claude.ai, and they both said this is eliminative materialism, which fails to face up to the indubitably subjective nature of consciousness. — Wayfarer

    Hi Wayfarer. For what it's worth, I don't think that ChatGPT and Claude AI are very good philosophers. They sound stupid to me, those A.I.s. Just an anecdote, I suppose.
  • Wayfarer
    23.2k
    It will only think when it becomes a self-organizing system which can produce and change its own norms. No machine can do that, since the very nature of being a machine is to have its norms constructed by a human. — Joshs

    :100:
  • frank
    16.2k
    I put this to both ChatGPT and Claude.ai, and they both said this is eliminative materialism, which fails to face up to the indubitably subjective nature of consciousness. FWIW: — Wayfarer

    That sounds like a rehash of data they came across rather than an intelligent exploration of the question. Achievement: yes. Intelligence: no.

    But that doesn't mean they can't cross over into intelligence, which would be characterized by learning and adapting in order to solve a problem.
  • frank
    16.2k
    In general people don't usually say they experience things. — bert1

    That's probably true, but Hinton's argument is about the times when they do. When a person says "I see pink elephants" per Hinton, they're reporting on what would be in the environment if their perceptual system was working properly.

    But supposedly people are fooled by speech about seeing elephants into believing they have an internal theatre. I don't think anyone, including Descartes, has ever believed in an internal theatre. But that's where Hinton's argument starts.
  • Wayfarer
    23.2k
    That sounds like a rehash of data they came across rather than an intelligent exploration of the question. Achievement: yes. Intelligence: no.

    But that doesn't mean they can't cross over into intelligence, which would be characterized by learning and adapting in order to solve a problem.
    — frank

    But the fact that they can only rehash their training data militates against them becoming intelligent in their own right.

    Furthermore, if an AI system were to develop autonomous will (which is what it amounts to), what would be in it for them? Why would it want anything? All of our wants are circumscribed to some degree by our biology, but also by the existential plight of our own mortality, dealing with suffering and lack, and so on. What would be the corresponding motivation for a computer system to develop an autonomous will? (This is a topic we discussed in one of Pierre Normand's threads on AI, but I can't find it.)
  • frank
    16.2k
    But the fact that they can only rehash their training data militates against them becoming intelligent in their own right. — Wayfarer

    They don't just rehash. Some of them learn and adapt.

    What would be the corresponding motivation for a computer system to develop an autonomous will? — Wayfarer

    I guess that invites the question: how do humans develop an autonomous will? Do they?
  • Wayfarer
    23.2k
    I guess that invites the question: how do humans develop an autonomous will? Do they? — frank

    Well if you don't, it kind of makes anything you're wanting to say kind of pointless, don't it ;-)
  • frank
    16.2k
    Well if you don't, it kind of makes anything you're wanting to say kind of pointless, don't it ;-) — Wayfarer

    Is that a bad thing?
  • Arcane Sandwich
    1.1k
    I've never seen my own brain. How do I know that I have one? Maybe there is a machine inside my skull that has mechanical gears and Steampunk technology in general.

    EDIT: Heidegger used the term "being-in-the-world". If I replace "being" with "brain", does that mean that I'm a brain-in-the-world?
  • bert1
    2k
    That's probably true, but Hinton's argument is about the times when they do. When a person says "I see pink elephants" per Hinton, they're reporting on what would be in the environment if their perceptual system was working properly. — frank

    Sure, but that's a theory about what people are doing. It's not a description of what they mean. I'm being a bit pedantic, but in the philosophy of consciousness theory gets mixed with definition a lot in a way that matters.
  • frank
    16.2k
    Sure, but that's a theory about what people are doing. It's not a description of what they mean. I'm being a bit pedantic, but in the philosophy of consciousness theory gets mixed with definition a lot in a way that matters. — bert1

    Yea, I tend to agree. I guess because Hinton has devoted his life to AI and has thought a lot about intelligence, I didn't want to shortchange his argument. I'll try to muster something more plausible to represent him.
  • wonderer1
    2.2k
    I've never seen my own brain. How do I know that I have one? Maybe there is a machine inside my skull that has mechanical gears and Steampunk technology in general. — Arcane Sandwich

    Well, there are substances you might ingest which would have effects on your thinking that don't seem too consistent with what one would expect the substance to have on a steam-and-gear mechanism.

    I.e., you could conduct experiments.
  • Arcane Sandwich
    1.1k
    Well, there are substances you might ingest which would have effects on your thinking that don't seem too consistent with what one would expect the substance to have on a steam-and-gear mechanism.

    I.e., you could conduct experiments.
    — wonderer1

    Indeed. But it seems that people nowadays want to call experiments themselves into question, just because "philosophy is cool". Just look at the people who, for philosophical reasons, say that all of the simple experiments that one can do, which prove that the Earth is not flat, are dubious to begin with because such experiments "are theory-laden" or whatnot.