Patrick Henry Winston, MIT professor, posed this question: “What does Genesis know about love if it doesn’t have a hormonal system?” he said. “What does it know about dying if it doesn’t have a body that rots? Can it still be intelligent?” — Frank Pray
What does he know? Did he die before? “Rotting body”? Death in particular can come quickly. He seems to be lacking introspection with respect to what he is talking about. Just another MIT professor taken out of context?! We can doubt it: mushy mysticism about everyday experience substitutes modern voodoo for thorough analysis. Being psychic is not a problem limited to A.I., though. The lack of any foundational reality has a long philosophical tradition. Seems to be something for chosen audiences.
Has anyone ever proven one or the other? All this does is rephrase common opinions. From this foundational basis, "arguments" will then be made, in a "wise platonic tongue", as to why that property can or cannot be present in which kind of machinery.
When the computer chip in your clothes dryer receives data from the humidity sensor and decides when your clothes are dry, do you think it has feelings and opinions about the matter? And if not, why would a bigger machine, operating on exactly the same fundamental principles, be any different? — fishfry
This appeals to the attitude that led Turing to hide which one was the machine in the test. — Heiko
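To make fishfry's dryer-chip example concrete, here is a minimal sketch of the kind of rule such a controller runs; the 12% moisture threshold and the function names are invented for illustration, not any manufacturer's firmware:

```python
# Hypothetical dryer controller: read a humidity sensor, stop the cycle when
# the reading drops below a fixed threshold. It manipulates numbers according
# to a rule; nothing in it models feelings or opinions about the laundry.

DRYNESS_THRESHOLD = 12.0  # percent relative humidity; an assumed value

def clothes_are_dry(humidity_percent: float) -> bool:
    """Purely mechanical decision: compare a number against a threshold."""
    return humidity_percent < DRYNESS_THRESHOLD

def run_cycle(sensor_readings):
    """Keep tumbling until a reading falls below the threshold."""
    for minute, humidity in enumerate(sensor_readings):
        if clothes_are_dry(humidity):
            return f"stop heater at minute {minute}"
    return "cycle timed out"

print(run_cycle([40.0, 31.5, 22.0, 14.3, 11.8]))  # -> stop heater at minute 4
```

The contrast fishfry points at is between this kind of threshold rule and whatever we suppose a "bigger machine" built from the same operations would add.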
Syntax is not semantics. Machines can compute syntax (that's what "computing" is) but they don't have semantics, they don't know the meaning of what they're computing. They don't know anything. — Hallucinogen
I don't think any kind of 'brain in a box' simulation will answer these questions. The acid test would be a genuine, functional human simulacrum with more than just nominal autonomy. And I think such a mechanism is a long way off. — Pantagruel
Would Descartes's statement about his existence ever apply to a computerized "brain?" — Frank Pray
Psychological history of the concept "subject": The body, the thing, the "whole," which is visualised by the eye, awakens the thought of distinguishing between an action and an agent; the idea that the agent is the cause of the action, after having been repeatedly refined, at length left the "subject" over.
...
"Subject," "object," "attribute"—these distinctions have been made, and are now used like schemes to cover all apparent facts. The false fundamental observation is this, that I believe it is I who does something, who suffers something, who "has" something, who "has" a quality. — Nietzsche
Have you ever seen a human with only nominal autonomy? — InPitzotl
by John Searle. — Hallucinogen
https://plato.stanford.edu/entries/chinese-room/
I demonstrated years ago with the so-called Chinese Room Argument that the implementation of the computer program is not by itself sufficient for consciousness or intentionality (Searle 1980). Computation is defined purely formally or syntactically, whereas minds have actual mental or semantic contents, and we cannot get from the syntactical to the semantic just by having the syntactical operations and nothing else. To put this point slightly more technically, the notion “same implemented program” defines an equivalence class that is specified independently of any specific physical realization. But such a specification necessarily leaves out the biologically specific powers of the brain to cause cognitive processes. A system, me, for example, would not acquire an understanding of Chinese just by going through the steps of a computer program that simulated the behavior of a Chinese speaker (p. 17). — Searle
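To illustrate what "defined purely formally or syntactically" amounts to, here is a toy sketch of a Chinese-Room-style rulebook as a lookup table from input strings to output strings; the table entries are invented placeholders, not anything from Searle's paper:

```python
# Toy "rulebook" in the spirit of the Chinese Room: a table that maps incoming
# symbol strings to outgoing symbol strings. Matching shapes is all the
# program ever does.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",
    "你喜欢薯片和蘸酱吗？": "喜欢。",
    "你有身体吗？": "这个问题很有趣。",
}

def operator_in_the_room(incoming: str) -> str:
    """Look up the input string and hand back whatever the table says."""
    return RULEBOOK.get(incoming, "对不起，我不明白。")

# A fluent-looking reply comes out, but at no point does the system stand in
# any relation to chips, dip, bodies, or anything else the symbols are about.
print(operator_in_the_room("你喜欢薯片和蘸酱吗？"))
```

Whether any amount of extra table, state, or speed would turn this into understanding is exactly what the argument denies.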
So you've said a fair bit about autonomy, but what about that "just nominal" part? — InPitzotl
But have we ever seen a human being with more than just nominal autonomy? — path
Well, yeah, but humans are agents; the Chinese Room, not so much. To me that sounds very important, not mystically, but practically. If I were to ask my s.o. to pick up chips and dip while at the store, my s.o. would be capable of not just giving me the right English word phrases in response, but also coming home with chips and dip as a response. It's as if my s.o. knows what it means to pick up chips and dip. How is a nominal-only program going to bring home chips and dip, regardless of how well it does passing Turing Tests?
Note that I asked a question, the point of which was to say that... hey, maybe we are taking our own autonomy for granted. — path
If I were to ask my s.o. to pick up chips and dip while at the store, my s.o. would be capable of not just giving me the right English word phrases in response, but also coming home with chips and dip as a response. It's as if my s.o. knows what it means to pick up chips and dip. How is a nominal-only program going to bring home chips and dip, regardless of how well it does passing Turing Tests? — InPitzotl
But I think part of what we take for granted when we think of computer programs having thoughts is the simple fact that we're agents. (Yes, and we have hormones and brains and stuff... but the agent part in itself seems very important to me). — InPitzotl
What does it mean to be an agent? — path
You're trivializing this though. First, the symbols "chips and dip" have to actually be related to what the symbols "chips and dip" mean in order to say that they are understood. And what do those symbols refer to? Not the symbols themselves, but actual chips and dip. So somehow you need to get the symbols to relate to actual chips and dip. That's what I think we're talking about here:
I don't think this is focused on the real issue. If AI has a body, then it could learn to react to 'get some chips' by getting some chips. — path
i.e., it's what is needed to be more than just "nominal", or syntactic.
Perhaps. But have we ever seen a human being with more than just nominal autonomy? — path
Something like I just described above, at a minimum. BTW, I think there's a lot more to add to make a conscious entity; but I don't see how a CR without a "body" (agency) can actually know what chips and dip is.
Indeed, it's almost a religious idea. What does it mean to be an agent? — path
First, the symbols "chips and dip" have to actually be related to what the symbols "chips and dip" mean in order to say that they are understood. And what do those symbols refer to? Not the symbols themselves, but actual chips and dip. So somehow you need to get the symbols to relate to actual chips and dip. — InPitzotl
The AI needs more than "just" a body; it needs to relate to the world in a particular way. To actually manage to pick up chips and dip, you need it to be able to plan and attain goals (after all, that's what picking up chips and dip is... a goal; and to attain that goal you have to plan... "there are chips and dip on this aisle, so I need to wander over there and look"). Then you need the actual looking; need to be able to grab them, and so on and so on. This entire thing being triggered from a request to pick up chips and dip is a demonstration of the ability to relate the symbols to something meant by them. — InPitzotl
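As a rough sketch of the loop InPitzotl describes (goal, plan, look, grab), here is a toy agent in a simulated store; the store layout and action names are invented for illustration, not a claim about any real robotics stack:

```python
# Toy agent loop: a symbolic request ("chips", "dip") has to be turned into a
# plan, movement, perception, and grasping inside a simulated store.

STORE = {"aisle 1": ["bread"], "aisle 5": ["chips", "dip"], "aisle 7": ["soap"]}

def plan(goal_items, store):
    """Decide which aisles need visiting to collect every requested item."""
    return [aisle for aisle, stock in store.items()
            if any(item in stock for item in goal_items)]

def act(goal_items, store):
    """Execute the plan: walk to each aisle, look at the shelf, grab matches."""
    basket, log = [], []
    for aisle in plan(goal_items, store):
        log.append(f"walk to {aisle}")
        for item in store[aisle]:      # "looking" at what is on the shelf
            if item in goal_items:
                basket.append(item)    # "grabbing" the item
                log.append(f"grab {item}")
    return basket, log

basket, log = act(["chips", "dip"], STORE)
print(basket)  # ['chips', 'dip']
print(log)     # ['walk to aisle 5', 'grab chips', 'grab dip']
```

Whether fetching items in a simulated aisle counts as knowing what chips and dip are is precisely what is in dispute; the sketch only shows what a "more than nominal" response might minimally involve.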
As far as I know, only humans ask such questions. — Wayfarer
Another way of framing it, is to ask if computer systems are beings. I claim not, but then this is where the problem lies. To me it seems self-evident they're not, but apparently others say differently. — Wayfarer
Can't, though. — Wayfarer
I'm curious if you think ants are beings? How about viruses? How about a person in a coma? Where do you draw the line and why? — path
What kinds of [human] thoughts are irreducible to computation (logical processes that computers can handle)? — TheMadFool
What assumption? And who is making it?
That's one of the assumptions that I am questioning. — path
Okay, sure, let's think about that. We both call fire engines red, even though we have no idea if "my red" is "your red". So if we can both agree that the fire engine is "red", it follows that red is the color of the fire engine and not the "color" of "my red" or the "color" of "your red", because we cannot agree on the latter. Note that my perspective on meaning allows p-zombies and robots to mean things; just not Chinese Rooms (at least when it comes to chips and dip).
Yet we vaguely imagine that this mind is radically private (maybe my green is your red and the reverse.) We never peer into someone's pure mindspace and check that their red is our red. All we do is agree that fire engines are red. — path
Acting is not part of the Turing Test, since that involves communicating over terminals. In this medium we communicate using only symbols, but I imagine you're not AI based solely on balance of probability.
Roughly speaking we all pass one another's Turing tests by acting correctly, making the right noises. How do you know that my posts aren't generated by AI?
Well, there's an apparent singularity of purpose; this body doesn't seem to arm wrestle itself or reach in between the two options. And there's a continuity of perspective; when this mind recalls a past event it is from a recalled first person perspective. So there's at least a meaningful way to assign singularity to the person in this body.
Do you assume that there is just one of you in there in your skull?
Not... really. You're projecting mentalism onto it.
I agree that the task is complex. But note that you are pasting on lots of mentalistic talk. — path
I would, for those symbols, if "correctly" means semantically.
If the android picks up the chips as requested, we'd say that it related to the symbols correctly. — path
Well, yeah. Language is a social construct, and meaning is a "synchronization". But we social constructs use language to mean the things we use language to mean. And a CR isn't going to use chips and dip to mean what we social constructs use chips and dip to mean without being able to relate the symbols "chips and dip" to chips and dip.
Our actions are synchronized. You can think of our noises and marks as pieces in a larger context of social conventions. Talk of 'I' and 'meaning' is part of that. I'm not saying that meaning-talk is wrong or false. I'm saying that it often functions as a pseudo-explanation. It adds nothing to the fact of synchronized behavior. — path
Some key characteristics of humans are rationality and self-awareness. Aristotle said man was 'a rational animal' which I still think is quite sound. — Wayfarer
I suppose some line can be drawn in terms of 'being that knows that it is', in other words, can truthfully say 'I am'. (This is why Isaac Asimov's title 'I, Robot' was so clever.) Whatever it is that says 'I am' is never, I think, known to science, because it never appears as an object. Rather it provides the basis of the awareness of objects (and everything else). — Wayfarer