Hinton's argument is basically that AIs are sentient because they think like we do. People may object to this by saying animals have subjective experience and AIs don't, but this is wrong. People don't have subjective experiences. — frank
We mean what we say, whereas AI probabilistically estimates that what it says is what you want it to mean. — Benkei
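To make "probabilistically estimates" concrete: what a large language model does, roughly, is assign a probability to each possible next token given the text so far, then sample from that distribution and repeat. The Python sketch below shows the shape of that loop; the hand-written probability table and the name get_next_token_probs are toy illustrations standing in for a trained network, not any real model's API.

import random

def get_next_token_probs(context):
    # Toy stand-in for a trained model: maps a context to a
    # probability distribution over a tiny vocabulary.
    # All values here are made up for illustration.
    table = {
        ("I",): {"see": 0.6, "mean": 0.4},
        ("I", "see"): {"pink": 0.7, "nothing": 0.3},
        ("I", "see", "pink"): {"elephants": 0.9, "<end>": 0.1},
    }
    return table.get(tuple(context), {"<end>": 1.0})

def generate(prompt, max_tokens=10):
    # The autoregressive loop: estimate next-token probabilities,
    # sample one token in proportion to them, append, repeat.
    out = list(prompt)
    for _ in range(max_tokens):
        probs = get_next_token_probs(out)
        tokens, weights = zip(*probs.items())
        token = random.choices(tokens, weights=weights)[0]
        if token == "<end>":
            break
        out.append(token)
    return " ".join(out)

print(generate(["I"]))  # e.g. "I see pink elephants"

Nothing inside the loop represents what the sentence means; whatever "meaning" there is lives in how the probability table was fitted to human text, which is roughly Benkei's point.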
The nature of living systems is to change themselves in ways that retain a normative continuity in the face of changing circumstances — Joshs
Cognition is an elaboration of such organismic dynamics. — Joshs
Is there some reason to believe this is so? A reason that isn't about Heidegger? — frank
↪Joshs
The nature of living systems is to change themselves in ways that retain a normative continuity in the face of changing circumstances — Joshs
That's handled by your neuroendocrine system in a way that has no more consciousness than an AI's input. If you actually had to consciously generate homeostasis, you'd die in about 5 minutes. — frank
Consciousness is not some special place walled off from the rest of the functional activity of an organism. It’s merely a higher level of integration. The point is that the basis of the synthetic, unifying activity of what we call consciousness is already present in the simplest unicellular organisms, in the functionally unified way in which they behave towards their environment on the basis of normative goal-directedness. — Joshs
What A.I. lacks is the ability to set its own norms. — Joshs
Both the artwork and the A.I. are expressions of the state of the art of creative thought of their human creators at a given point in time. A.I. is just a painting with lots of statistically calculated moving parts. — Joshs
A.I. changes itself according to principles that we program into it, in relation to norms that belong to us. — Joshs
The same will be true of this new system as the old. It will never be or do anything that exceeds the conceptual limitations of its design. — Joshs
What is more, the AI behind the new system has produced strange new designs featuring unusual patterns of circuitry. Kaushik Sengupta, the lead researcher, said the designs were unintuitive and unlikely to be developed by a human mind. But they frequently offer marked improvements over even the best standard chips.
"We are coming up with structures that are complex and looks random shaped and when connected with circuits, they create previously unachievable performance. Humans cannot really understand them, but they can work better," said Sengupta, a professor of electrical and computer engineering and co-director of NextG, Princeton's industry partnership program to develop next-generation communications.
Well, during the traditional discussion between the Nobel Prize winners, Hinton seemed to hold a grudge against philosophy and the notion of subjectivity. But then he added that ethics is fine, as if to appear less fanatical. — jkop
Hinton's argument is basically that AIs are sentient because they think like we do. People may object to this by saying animals have subjective experience and AIs don't, but this is wrong. People don't have subjective experiences.
When we say we've experienced X, we're saying that the world would have to be in state X in order for our perceptual systems to be functioning properly. This is what language use about experience means.
For more: in this video, Hinton briefly explains large language models, how AIs learn to speak, and why AIs will probably take over the world. — frank
What do people mean? — frank
I put this to both ChatGPT and Claude.ai, and they both said this is eliminative materialism, which fails to face up to the indubitably subjective nature of consciousness. FWIW: — Wayfarer
In general, people don't say they experience things. — bert1
That sounds like a rehash of data they came across rather than an intelligent exploration of the question. Achievement: yes. Intelligence: no.
But that doesn't mean they can't cross over into intelligence, which would be characterized by learning and adapting in order to solve a problem. — frank
But the fact that they can only rehash their training data militates against them becoming intelligent in their own right. — Wayfarer
What would be the corresponding motivation for a computer system to develop an autonomous will? — Wayfarer
That's probably true, but Hinton's argument is about the times when they do. When a person says "I see pink elephants," per Hinton, they're reporting on what would be in the environment if their perceptual system were working properly. — frank
Sure, but that's a theory about what people are doing. It's not a description of what they mean. I'm being a bit pedantic, but in the philosophy of consciousness, theory gets mixed with definition a lot, in a way that matters. — bert1
I've never seen my own brain. How do I know that I have one? Maybe there is a machine inside my skull, that has mechanical gears and Steampunk technology in general. — Arcane Sandwich
Well, there are substances you might ingest which would have effects on your thinking that don't seem very consistent with the effects one would expect the substance to have on a steam-and-gear mechanism.
I.e., you could conduct experiments. — wonderer1