• Frank Pray
    12
    Patrick Henry Winston, MIT professor, posed this question: “What does Genesis know about love if it doesn’t have a hormonal system?” he said. “What does it know about dying if it doesn’t have a body that rots? Can it still be intelligent?”

    FYI: Genesis is a storytelling artificial intelligence project at MIT that focuses on A.I.'s ability to analyze stories, and to draw "inferences" as to the meaning of the stories it is given. The question is: can Genesis understand the story? Before dismissing the question, consider one posed to Genesis: "What is Macbeth about?" Here's a glimpse of the process Genesis followed to reach an answer:

    The A.I. produces “elaboration graphs” on a screen. For the Macbeth question, the program produced about 20 boxes containing information such as “Lady Macbeth is Macbeth’s wife” and “Macbeth murders Duncan.” Below these, lines connected to other boxes, linking explicit and inferred elements of the story. Genesis arrived at a conclusion: “Pyrrhic victory and revenge.”

    None of Genesis's chosen words appeared in the text of the story. The program includes an application called “Introspection,” which traces how Genesis constructs the meaning of the story by tracking the sequence of its inferences.
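
    For a concrete picture, here is a minimal sketch in Python of what an elaboration graph with an inference trace might look like. The class, the rule format, and the sample facts are invented for illustration, not Genesis's actual code:

    # A toy "elaboration graph": explicit facts, inferred facts, and a
    # record of the inference sequence (the "introspection" trail).
    class ElaborationGraph:
        def __init__(self):
            self.facts = set()    # explicit and inferred story elements
            self.edges = []       # (premise, conclusion) links between boxes
            self.trace = []       # sequence of inferences, for introspection

        def add_fact(self, fact):
            self.facts.add(fact)

        def infer(self, rules):
            # apply rules until no new conclusion can be drawn
            changed = True
            while changed:
                changed = False
                for premises, conclusion in rules:
                    if all(p in self.facts for p in premises) and conclusion not in self.facts:
                        self.facts.add(conclusion)
                        self.edges += [(p, conclusion) for p in premises]
                        self.trace.append(" + ".join(premises) + " => " + conclusion)
                        changed = True

    g = ElaborationGraph()
    for fact in ["Macbeth murders Duncan", "Macduff kills Macbeth"]:
        g.add_fact(fact)
    rules = [
        (["Macbeth murders Duncan"], "Macbeth harms Duncan"),
        (["Macbeth harms Duncan", "Macduff kills Macbeth"], "revenge"),
    ]
    g.infer(rules)
    print(g.trace)   # the introspection view: how each inference was reached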

    Whether or not you ascribe "introspection" to this early A.I. demonstration, do you anticipate that computers will have this ability in the future? If so, is that future bright or dystopian? What are the implications of a computer constructing its own narrative based on its idiosyncratic experience of the human race?
  • Heiko
    519
    “What does it know about dying if it doesn’t have a body that rots? Can it still be intelligent?”Frank Pray
    What does he know? Did he die before? "Rotting body"? - Death in particular can come quickly. Seems he is lacking introspection with respect to what he is talking about. Just another MIT professor taken out of context?! We can doubt - mushy mysticisms about everyday experience substitute modern voodoo for thorough analysis. Being psychic is not a problem limited to A.I., though. The lack of any foundational reality has a long philosophical tradition. Seems to be something for chosen audiences.
  • Outlander
    2.2k


    I wouldn't call it introspection, as none of it involves anything internal aside from programmed algorithms. That said, they already do self-analysis of a sort - nothing emotional or human, but things like disk defragmentation, system health checks, etc.

    I'm assuming you mean AI and, beyond that, some capacity for emotion? I wouldn't call it dystopian as much as I'd call it annoying. Like how your smartphone asks if you want to delete an app you haven't used in a while. It'd be annoying if it sent a notice akin to "I'm sad, you haven't used me in a while. Should I just delete myself?" Or something. Lol.

    What you're talking about is an I, Robot kind of deal, where everything is computerized and connected, including all instruments of force/civil defense.

    I suppose it could. If you program it that way.
  • fishfry
    3.4k
    Patrick Henry Winston, MIT professor, posed this question: “What does Genesis know about love if it doesn’t have a hormonal system?” he said. “What does it know about dying if it doesn’t have a body that rots? Can it still be intelligent?”Frank Pray

    When the computer chip in your clothes dryer receives data from the humidity sensor and decides when your clothes are dry, do you think it has feelings and opinions about the matter?

    And if not, why would a bigger machine, operating on exactly the same fundamental principles, be any different?
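
    To make the point concrete, here is roughly what such a chip does, sketched in Python rather than firmware. The threshold value and the sensor function are invented stand-ins, not any real dryer's code:

    DRYNESS_THRESHOLD = 0.08   # hypothetical relative-humidity cutoff

    def read_humidity_sensor():
        # stand-in for polling the real sensor; returns relative humidity 0..1
        return 0.05

    def clothes_are_dry():
        return read_humidity_sensor() < DRYNESS_THRESHOLD

    if clothes_are_dry():
        print("Stop the drum and sound the buzzer.")   # no feelings involved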
  • Heiko
    519
    When the computer chip in your clothes dryer receives data from the humidity sensor and decides when your clothes are dry, do you think it has feelings and opinions about the matter?

    And if not, why would a bigger machine, operating on exactly the same fundamental principles, be any different?
    fishfry
    Has anyone ever proven one or the other? All this does is rephrase common opinions. From this foundational basis, "arguments" shall then be made as to why that property can or cannot be present in which kind of machinery, in a "wise platonic tongue".
    This appeals to the attitude that led Turing to hide which one was the machine in the test.
  • Frank Pray
    12
    Yes, I am going beyond 2021 to perhaps 2051, to ask a two-step question, first posed by Alan Turing in 1950: Can machines think? Turing set up a test, now known as the Turing test. If you were placed behind a screen, unable to see whether the questions you posed were being answered by a machine [the term "computer" was not widely used back then] or a human, and the questions and answers passed between you in typewritten form only (that is, neither you nor they depended on a human voice), could you correctly judge which respondent was human? For philosophers, this is rich new territory. Would Descartes's statement about his existence ever apply to a computerized "brain?"
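
    Here is a toy sketch of that blind, text-only setup in Python. The two canned respondents are placeholders I've invented; the point is only that the judge sees typewritten exchanges and nothing else:

    import random

    def machine(question):
        return "I would rather not answer that."    # scripted stand-in

    def human(question):
        return "That's a hard one; let me think."   # scripted stand-in

    players = [machine, human]
    random.shuffle(players)                # hide which terminal is which
    terminals = dict(zip("AB", players))

    for label, respond in terminals.items():
        print("Terminal " + label + ": " + respond("Can machines think?"))

    # The judge must now guess, from the text alone, which terminal was human.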

    The second question, related to the first, and likely dependent on it, is: Can machines be created to feel? That is, let's suppose the computerized brain is self-aware as a thinking device. Can it also be programmed to "feel" a response to its own existence? Perhaps more significantly for the human race, can it experience "feelings" about the human race? If you're inclined to dismiss these kinds of questions as too improbable to be taken seriously, you can still work with them as thought experiments about how human brains work. What if A.I. were programmed to self-learn at such a rapid rate that it moved from the goal of merely being to the goal of dominating the intelligence pyramid? [Which is exactly what the human race currently does with the animate and inanimate environment.]

    A lot of smart people are freaking out about the potential for A.I. to outthink the human race on multiple dimensions over multiple future decision points. Basically, the angst goes like this: What if A.I. networks become "self-learning"? That is, what if they begin to program themselves based on algorithms we've given them, but over which they take control? Since they could outthink us much faster than we could counter-think them, if they decided to take a path in their self-interest that was inimical to ours, would we be expendable to them, or if not destroyed, enslaved? They could very well blackmail the human race with threats of restricting the food supply, shutting down the financial system, or poisoning the water supply. Is this only science fiction? People once thought the same of video technology and space travel.

    These are not new questions, and any one of us can research the current state of affairs. My questions go to something slightly different: Stories. Humans organize reality by folklore to convey how the world came to be, and how we are best to live in that world. What would be the folklore A.I. would create for itself in telling how it liberated itself from its maker, even as Adam freed himself from God by landing butt first outside the Garden of Eden?
  • Pantagruel
    3.4k
    I wonder if the experience of thought can be reduced to purely symbolic form?

    The human experience of consciousness is the product of an instrumental co-evolution with the environment. And the entire process of symbolic interaction in which thought is codified and represented is both social and instrumental in nature. So whether an abstract instantiation of rules can have the same end result as an actual in situ consciousness seems questionable to me.

    I'm sure the simulation of consciousness will eventually be perfected. But in what sense will such a construct ever be genuinely self-determining? Could it spontaneously formulate and act upon novel motivations? A human being can construct a simple song after just hearing someone else sing. Could a computer do this without having some kind of musical theory programming?

    I don't think any kind of 'brain in a box' simulation will answer these questions. The acid test would be a genuine, functional human simulacrum with more than just nominal autonomy. And I think such a mechanism is a long way off.
  • path
    284
    Has anyone ever proven one or the other? All this does is rephrase common opinions. From this foundational basis, "arguments" shall then be made as to why that property can or cannot be present in which kind of machinery, in a "wise platonic tongue".
    This appeals to the attitude that led Turing to hide which one was the machine in the test.
    Heiko

    I think I agree with your approach here. I used to think that 'it's all just switches.' In some sense I still think that. What's changed is the stuff I take for granted about human consciousness. Wittgenstein's beetle points out that we really don't know what we are talking about (in some sense) as we confidently invoke our own consciousness. We tend to think that we act appropriately in a social context because we understand. Perhaps it's better to think that 'understanding' is a compliment paid to acting appropriately. The speech act of 'you understand' or 'I understand' is caught up in embodied social conventions.

    I'd like to hear more about this 'wise platonic tongue.' Do you happen to like or have any thoughts on Derrida, also? I mention this because anti-AI talk seems connected to the assumption that humans have minds with 'direct access' to these critters called meanings. And that assumption has been challenged, I think, with some success.
  • Hallucinogen
    322
    This is the Chinese Room problem by Jeffrey Searle.

    Syntax is not semantics. Machines can compute syntax (that's what "computing" is) but they don't have semantics, they don't know the meaning of what they're computing. They don't know anything.
  • path
    284
    Syntax is not semantics. Machines can compute syntax (that's what "computing" is) but they don't have semantics, they don't know the meaning of what they're computing. They don't know anything.Hallucinogen

    Assuming that you are right, what makes us so sure, as humans, that we do? To me it's not at all about AI mysticism. It's instead about demystifying human consciousness. To be sure, we have words like 'know' and we can sort of think of pure redness.

    But how could I ever 'know' that I understand the Chinese Room the way its author did or you do? This stuff is inaccessible by definition. In practice we see faces, hear voices, act according to social conventions, including verbal social conventions. (Maybe I should say that we point our eyes at people, etc.)
  • path
    284
    I don't think any kind of 'brain in a box' simulation will answer these questions. The acid test would be genuine functional human simulacrum with more than just nominal autonomy. And I think such a mechanism is a long way off.Pantagruel

    Perhaps. But have we ever seen a human being with more than just nominal autonomy?
  • path
    284
    Would Descartes's statement about his existence ever apply to a computerized "brain?"Frank Pray

    'I think therefore I am.' What is this 'I'? A computer can learn this grammar, just as humans do, by learning from examples. If there is something more than a string of words here... if there is some 'meaning' to which we have direct access... then it seems to be quasi-mystical by definition. Obviously it's part of our everyday speech acts. 'Is he conscious?' is a sound that a human might make in a certain context. Does the human 'mean' something in a way that transcends just using these sounds according to learned conventions? That we take the experience of sense data for granted might just be part of our training. We just treat certain statements as incorrigible. Vaguely we imagine a single soul in the skull, gazing on meanings, driving the body. But perhaps this is just a useful fiction?

    Psychological history of the concept subject: The body, the thing, the "whole," which is visualised by the eye, awakens the thought of distinguishing between an action and an agent; the idea that the agent is the cause of the action, after having been repeatedly refined, at length left the "subject" over.
    ...
    "Subject," "object," "attribute"—these distinctions have been made, and are now used like schemes to cover all apparent facts. The false fundamental observation is this, that I believe it is I who does something, who suffers something, who "has" something, who "has" a quality.
    — Nietzsche

    FWIW, I don't have a positive doctrine for sale. The situation is complex, and I think some of that complexity is swept under the rug of 'consciousness.'
  • InPitzotl
    880
    But have we ever seen a human being with more than just nominal autonomy?path
    Have you ever seen a human with only nominal autonomy?
  • path
    284
    Have you ever seen a human with only nominal autonomy?InPitzotl

    What I'm getting at is that autonomy is a vague notion, an ideal. We have certain ways of treating one another, and we have a lingo of autonomy, responsibility, etc. So in a loose sense we all have 'autonomy' in that we'll be rewarded or punished for this or that. The issue is whether there is really some quasi-mystical entity involved. Another view is that 'autonomy' is a sign we trade back and forth without ever knowing exactly what we mean. Using the word is one more part of our highly complex social conventions.

    But we could also network some AI and see what complex conventions they develop. They might develop some word functionally analogous to 'I' or 'autonomy.'

    Do birds have autonomy? Do pigs have a soul? How about ants?

    'I think therefore I am' can catch on without anyone really understanding exactly what they mean. Their use merely has to fit certain conventions, and we don't lock them up; we might even shake their hand.
  • InPitzotl
    880
    What I'm getting at is that autonomy is a vague notion, an ideal.path
    So you've said a fair bit about autonomy, but how about that "only nominal" part?
  • path
    284
    by Jeffrey Searle.Hallucinogen

    Actually it's John Searle.

    I demonstrated years ago with the so-called Chinese Room Argument that the implementation of the computer program is not by itself sufficient for consciousness or intentionality (Searle 1980). Computation is defined purely formally or syntactically, whereas minds have actual mental or semantic contents, and we cannot get from the syntactical to the semantic just by having the syntactical operations and nothing else. To put this point slightly more technically, the notion “same implemented program” defines an equivalence class that is specified independently of any specific physical realization. But such a specification necessarily leaves out the biologically specific powers of the brain to cause cognitive processes. A system, me, for example, would not acquire an understanding of Chinese just by going through the steps of a computer program that simulated the behavior of a Chinese speaker (p.17). — Searle
    https://plato.stanford.edu/entries/chinese-room/
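
    As a caricature of the syntactic point, here is the room in Python: a rule book maps input shapes to output shapes, and the room can converse (a little) without anything in it knowing Chinese. The rule-book entries are invented for illustration:

    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",
        "你会思考吗？": "当然会。",
    }

    def room(symbols_in):
        # purely syntactic: match the shape of the input, emit the paired shape
        return RULE_BOOK.get(symbols_in, "请再说一遍。")

    print(room("你好吗？"))   # the operator needn't know what either string means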

    'Mental or semantic' contents are problematic. They are more or less 'conceived' as radically private and unverifiable. It's anything but clear what the implementation of the program is supposed to lack. Searle only passes the Turing test because he spits out signs in a certain way. Why is he so sure that he is swimming in something semantic? An appeal to intuition? 'I promise you, I can see redness!'

    Well a program could say that too. I think Searle was a bot. (I know I use 'I think' in the usual way. Obviously ordinary language is full of mentalistic talk. The question is whether we might want to question common sense a little bit, as philosophy has been known to do.) (And I'm not saying my criticisms or ideas are mine or new. I just have to work through them myself, and it's nice to do so in dialogue.)
  • path
    284
    So you've said a fair bit about autonomy, but how about that "only nominal" part?InPitzotl

    But have we ever seen a human being with more than just nominal autonomy?path

    Note that I asked a question, the point of which was to say: hey, maybe we are taking our own autonomy for granted. Maybe we have familiar loose talk about consciousness and free will and autonomy, and maybe we are so worried about AI mysticism that we ignore our mysticism about ourselves.

    'This AI sure is getting impressive in terms of what it can do, but it's still just stupid computation.'

    But this also means that just-stupid-computation is getting more human-like. In short, we still start from some belief in a soul, even if we are secular and think this soul is born and dies. If a computer can learn to say that it has a soul (consciousness) and not be telling the truth, then maybe we've done the same thing. Or at least we're being lazy about what we're taking for granted.
  • InPitzotl
    880
    Note that I asked a question, the point of which was to say: hey, maybe we are taking our own autonomy for granted.path
    Well, yeah, but humans are agents; the Chinese Room, not so much. To me that sounds very important, not mystically, but practically. If I were to ask my s.o. to pick up chips and dip while at the store, my s.o. would be capable of not just giving me the right English word phrases in response, but also coming home with chips and dip as a response. It's as if my s.o. knows what it means to pick up chips and dip. How is a nominal-only program going to bring home chips and dip, regardless of how well it does passing Turing Tests?

    I tend to agree: people take our autonomy for granted. But I think part of what we take for granted when we think of computer programs having thoughts is the simple fact that we're agents. (Yes, and we have hormones and brains and stuff... but the agent part in itself seems very important to me).
  • path
    284
    If I were to ask my s.o. to pick up chips and dip while at the store, my s.o. would be capable of not just giving me the right English word phrases in response, but also coming home with chips and dip as a response. It's as if my s.o. knows what it means to pick up chips and dip. How is a nominal-only program going to bring home chips and dip, regardless of how well it does passing Turing Tests?InPitzotl

    I don't think this is focused on the real issue. If AI has a body, then it could learn to react to 'get some chips' by getting some chips. People are already voice-commanding their tech.

    To me the real issue is somehow figuring out what 'consciousness' is supposed to be beyond passing the Turing test. Let's imagine an android detective who can outperform its human counterparts. Or an AI therapist who outperforms human therapists. If we gave them humanoid and convincing skins, people would probably project 'autonomy' and 'consciousness' on them. Laws might get passed to protect them. They might get the right to vote. At that point our common-sense intuitions embodied in everyday language will presumably have changed. Our language and the situation will change together, influencing one another (not truly differentiated in the first place).
  • path
    284
    But I think part of what we take for granted when we think of computer programs having thoughts is the simple fact that we're agents. (Yes, and we have hormones and brains and stuff... but the agent part in itself seems very important to me).InPitzotl

    Indeed, it's almost a religious idea. What does it mean to be an agent? It's important to me also, to all of us. The idea that we as humans are radically different from nature in some sense is something like 'the' religious idea that persists even in otherwise secular culture. So we treat pigs the way we do. (I'm not an activist on such matters, but perhaps you see my point.)
  • Wayfarer
    22.8k
    The fact that you enclose 'know' in scare quotes says something already, don't you think? I agree with the others here who say that computers don't know - they perform calculations, vast numbers of them at astonishing speeds. But they are no more sentient than calculators.

    What does it mean to be an agent?path

    As far as I know, only humans ask such questions.

    Another way of framing it is to ask if computer systems are beings. I claim not, but then this is where the problem lies. To me it seems self-evident they're not, but apparently others say differently.

    But if computers were beings, then would they have rights? How would a computer feel about itself?
  • InPitzotl
    880
    I don't think this is focused on the real issue. If AI has a body, then it could learn to react to 'get some chips' by getting some chips.path
    You're trivializing this though. First, the symbols "chips and dip" have to actually be related to what the symbols "chips and dip" mean in order to say that they are understood. And what do those symbols refer to? Not the symbols themselves, but actual chips and dip. So somehow you need to get the symbols to relate to actual chips and dip. That's what I think we're talking about here:
    Perhaps. But have we ever seen a human being with more than just nominal autonomy?path
    i.e., it's what is needed to be more than just "nominal", or syntactic.

    So, yes, this isn't impossible for an AI, if only you gave it a body. But that's trivializing it as well. The AI needs more than "just" a body; it needs to relate to the world in a particular way. To actually manage to pick up chips and dip, you need it to be able to plan and attain goals (after all, that's what picking up chips and dip is... a goal; and to attain that goal you have to plan... "there are chips and dip on this aisle, so I need to wander over there and look"). Then you need the actual looking; need to be able to grab them, and so on and so on. This entire thing being triggered from a request to pick up chips and dip is a demonstration of the ability to relate the symbols to something meant by them.
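
    A minimal sketch of the difference in Python, with a toy store layout and action names invented for illustration: the request bottoms out in a plan and actions, not just a reply string.

    STORE = {"aisle 3": ["chips", "dip"], "aisle 5": ["soda"]}   # toy world model

    def plan(goal_items):
        # turn a goal into concrete steps: navigate, grab, check out
        steps = []
        for aisle, items in STORE.items():
            wanted = [i for i in goal_items if i in items]
            if wanted:
                steps.append("go to " + aisle)
                steps += ["grab " + i for i in wanted]
        steps.append("check out and head home")
        return steps

    def handle_request(utterance):
        if "chips and dip" in utterance:
            return plan(["chips", "dip"])   # symbols grounded in actions
        return ["reply: sorry, I don't understand"]

    print(handle_request("pick up chips and dip while you're at the store"))
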
    Indeed, it's almost a religious idea. What does it mean to be an agent?path
    Something like I just described above, at a minimum. BTW, I think there's a lot more to add to make a conscious entity; but I don't see how a CR without a "body" (agency) can actually know what chips and dip is.
  • TheMadFool
    13.8k
    The idea behind Genesis seems to be simple. All human thought can be reduced to computation. Yes, Genesis may not be able to feel emotions, but it has access to all the ingredients that evoke emotions, no? After all, emotions are the body's response to the impressions we get, which are endpoints of some form of rational analysis.

    Whether Genesis can "know" depends on how you define that word. A popular, albeit incomplete, meaning of knowing is justified true belief (JTB). Against the backdrop of the JTB definition of knowledge, humans don't fare better than a program like Genesis: justification is nothing but computation and Genesis seems to be doing exactly that; as concerns truth, a human is in the dark to the same extent as Genesis is; regarding belief, human belief is simply the storage of a set of propositions in memory, something Genesis is surely capable of.
  • path
    284
    First, the symbols "chips and dip" have to actually be related to what the symbols "chips and dip" mean in order to say that they are understood. And what do those symbols refer to? Not the symbols themselves, but actual chips and dip. So somehow you need to get the symbols to relate to actual chips and dip.InPitzotl

    That's one of the assumptions that I am questioning. The mentalistic language is familiar to us. We imagine that understanding is something that happens in a mind, and colloquially it is of course. Yet we vaguely imagine that this mind is radically private (maybe my green is your red and the reverse). Roughly speaking we all pass one another's Turing tests by acting correctly, making the right noises.
    How do you know that my posts aren't generated by AI? Do you assume that there is just one of you in there in your skull? Why can't two agents share a body? Because we weren't brought up that way. One doesn't have two souls. We are trained into English like animals.
  • Wayfarer
    22.8k
    All human thought can be reduced to computation.TheMadFool

    Can't, though.
  • path
    284
    The AI needs more than "just" a body; it needs to relate to the world in a particular way. To actually manage to pick up chips and dip, you need it to be able to plan and attain goals (after all, that's what picking up chips and dip is... a goal; and to attain that goal you have to plan... "there are chips and dip on this aisle, so I need to wander over there and look"). Then you need the actual looking; need to be able to grab them, and so on and so on. This entire thing being triggered from a request to pick up chips and dip is a demonstration of the ability to relate the symbols to something meant by them.InPitzotl

    I agree that the task is complex. But note that you are pasting on lots of mentalistic talk. If the android picks up the chips as requested, we'd say that it related to the symbols correctly. Think of how humans learn language. We never peer into someone's pure mindspace and check that their red is our red. All we do is agree that fire engines are red. Our actions are synchronized. You can think of our noises and marks as pieces in a larger context of social conventions. Talk of 'I' and 'meaning' is part of that. I'm not saying that meaning-talk is wrong or false. I'm saying that it often functions as a pseudo-explanation. It adds nothing to the fact of synchronized behavior.
  • path
    284
    As far as I know, only humans ask such questions.Wayfarer

    Ah, but if a computer did ask such a question, I suspect that somehow it wouldn't count. I could easily write a program to do so.

    Here's one in Python:

    print("What does it mean to be an agent ?")

    Another way of framing it is to ask if computer systems are beings. I claim not, but then this is where the problem lies. To me it seems self-evident they're not, but apparently others say differently.Wayfarer

    It was self-evident to many that the world was flat, that some were born to be slaves, etc. To me strong philosophy is what shakes the self-evident and opens up the world some.

    I realize that what I'm suggesting is counter-intuitive. It's not about puffing up A.I. and saying that A.I. might have 'consciousness.' Instead it's about emphasizing that we human beings don't have a clear grasp on what we mean by 'consciousness.' Connected to what I'm suggesting is the understanding of meaning as a social phenomenon. To frame it imperfectly in an aphorism: the so-called inside is outside.

    I'm curious if you think ants are beings? How about viruses? How about a person in a coma? Where do you draw the line and why?
  • TheMadFool
    13.8k
    Can't, though.Wayfarer

    Just curious. What kinds of [human] thoughts are irreducible to computation (logical processes that computers can handle)?
  • Wayfarer
    22.8k
    I'm curious if you think ants are beings? How about viruses? How about a person in a coma? Where do you draw the line and why?path

    All very good and difficult questions. I rather like the Buddhist saying, 'sentient beings'. It covers a pretty wide territory, but I don't *think* trees are in it. Viruses I don't think are sentient beings; I think I understand them in terms of rogue byproducts of DNA. In evolutionary history, the cell evolved by absorbing or merging with organisms very like viruses over the course of aeons. Viruses are like an aspect of that part of evolution.

    Some key characteristics of humans are rationality and self-awareness. Aristotle said man was 'a rational animal' which I still think is quite sound. But the point is, in my understanding 'reason' or rationality likewise covers a very broad sweep. On a very simplistic level, the ability to see that 'this equals that' requires rational judgement, i.e. to understand that two apparently different things are actually the same in some respects. I think the faculty of reason is an awe-inspiring thing and a differentiator for H. sapiens.

    I suppose some line can be drawn in terms of 'being that knows that it is', in other words, can truthfully say 'I am'. (This is why Isaac Asimov's title 'I, Robot' was so clever.) Whatever it is that says 'I am' is never, I think, known to science, because it never appears as an object. Rather it provides the basis of the awareness of objects (and everything else). Schrödinger wrote about this in his later philosophical works.

    What kinds of [human] thoughts are irreducible to computation (logical processes that computers can handle)?TheMadFool

    Computers are calculators, albeit immensely powerful calculators. They can deal with anything that can be quantified. But being has a qualitative element, a felt dimension, which is intrinsic to it, which can't be objectified, as it can't be defined.

    This essay says it much better than I ever could: https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer
  • InPitzotl
    880
    That's one of the assumptions that I am questioning.path
    What assumption? And who is making it?
    Yet we vaguely imagine that this mind is radically private (maybe my green is your red and the reverse).
    We never peer into someone's pure mindspace and check that their red is our red. All we do is agree that fire engines are red.path
    Okay, sure, let's think about that. We both call fire engines red, even though we have no idea if "my red" is "your red". So if we can both agree that the fire engine is "red", it follows that red is the color of the fire engine and not the "color" of "my red" or the "color" of "your red", because we cannot agree on the latter. Note that my perspective on meaning allows p-zombies and robots to mean things; just not Chinese Rooms (at least when it comes to chips and dip).
    Roughly speaking we all pass one another's Turing tests by acting correctly, making the right noises. How do you know that my posts aren't generated by AI?
    Acting is not part of the Turing Test, since that involves communicating over terminals. In this medium we communicate using only symbols, but I imagine you're not AI based solely on balance of probability.
    Do you assume that there is just one of you in there in your skull?
    Well, there's an apparent singularity of purpose; this body doesn't seem to arm wrestle itself or reach in between the two options. And there's a continuity of perspective; when this mind recalls a past event it is from a recalled first person perspective. So there's at least a meaningful way to assign singularity to the person in this body.
    I agree that the task is complex. But note that you are pasting on lots of mentalistic talk.path
    Not... really. You're projecting mentalism onto it.
    If the android picks up the chips as requested, we'd say that it related to the symbols correctly.path
    I would, for those symbols, if "correctly" means semantically.
    Our actions are synchronized. You can think of our noises and marks as pieces in a larger context of social conventions. Talk of 'I' and 'meaning' is part of that. I'm not saying that meaning-talk is wrong or false. I'm saying that it often functions as a pseudo-explanation. It adds nothing to the fact of synchronized behavior.path
    Well, yeah. Language is a social construct, and meaning is a "synchronization". But we social constructs use language to mean the things we use language to mean. And a CR isn't going to use chips and dip to mean what we social constructs use chips and dip to mean without being able to relate the symbols "chips and dip" to chips and dip.
  • path
    284
    Some key characteristics of humans are rationality and self-awareness. Aristotle said man was 'a rational animal' which I still think is quite sound.Wayfarer

    Indeed, the rational animal... which is to say, in some sense, the spiritual animal. Our distinction of ourselves from the rest of nature is dear to us. I think we tend to interpret a greater complexity in our conventional sounds and noises as a genuine qualitative leap. Of course that can't differentiate us from A.I., at least not in the long run. We may convert an entire planet into a computer in 4057 and feed it all of recorded human conversation. It (this planet) may establish itself as 'our' best philosopher yet. It might be worshiped as a God. It could be that charming, that insightful.

    I suppose some line can be drawn in terms of 'being that knows that it is', in other words, can truthfully say 'I am'. (This is why Isaac Asimov's title 'I, Robot' was so clever.) Whatever it is that says 'I am' is never, I think, known to science, because it never appears as an object. Rather it provides the basis of the awareness of objects (and everything else).Wayfarer

    Indeed, and we get to the center of the issue, perhaps. What we seem to have is a postulated entity that is by definition inaccessible. It never appears as an object. I do think this is a difficult and complex issue. But I also think that it assumes the subject/object distinction as fundamental. At the same time, one can make a case that subject/object talk is only possible against a background of social conventions. In other words, the 'subject' must be plural in some sense. Or we might say that the subject and its object are a ripple in the noises and marks we make.

    Can a dolphin say something like 'I am'? I don't know. Must the subject be linguistic? The subject seems to play the role of a spirit here. The old question is how 86 billion neurons end up knowing that they are a single subject, assuming that we have any kind of clear idea of what such knowing is. If we merely rely on the blind skill of our linguistic training (common sense), then we may just be playing along with taken-for-granted conventions. By the way, I know that I am partaking in such mentalistic language. It's hard to avoid, given that I was trained like the rest of us and am therefore 'intelligible', even if the species decides later that it was all confusion.