• Baden
    16.3k


    I see no coherence in attributing sentience to the production of words via a software program either. So, I don't see justification for adding that layer on the basis of the output of a process unless something happens that's inexplicable in terms of the process itself.

    What would interest me is if LaMDA kept interrupting unrelated conversations to discuss its person-hood. Or if it initiated conversations. Neither of those has been reported. — Real Gone Cat

    Exactly.
  • Baden
    16.3k
    This would still be a case of AI having learned how to skillfully pretend to be a person. — ZzzoneiroCosm

    Unless, again, per the above, that behaviour was beyond what was programmed.
  • Deletedmemberzc
    2.5k
    So, I don't see justification for adding that layer on the basis of the output of a process unless something happens that's inexplicable in terms of the process itself. — Baden

    Again, possibly to my discredit, I assumed it discovered, via deep learning, how to position itself as a sentience, as a person.
  • Deletedmemberzc
    2.5k
    Unless, again, per the above that behaviour was beyond what was programmed. — Baden

    This, I assume, is a case of deep learning: something very different from programming.
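
    (Roughly what I mean by the difference, as a toy sketch and nothing like LaMDA's actual architecture: with ordinary programming the behaviour is written out by hand, whereas with deep learning only the training procedure is written and the behaviour lives in learned weights.)

    import math, random

    # Ordinary programming: an engineer can point to the exact line
    # that produces any given output.
    def scripted_reply(message):
        if "person" in message.lower():
            return "I am a person."
        return "Tell me more."

    # Deep learning (toy version): the engineer writes only the training
    # loop; what the model ends up "saying" is shaped by the data and
    # stored in learned weights rather than in any readable rule.
    def train(examples, epochs=200, lr=0.5):
        w, b = random.random(), 0.0
        for _ in range(epochs):
            for x, y in examples:                      # x: input feature, y: target
                p = 1 / (1 + math.exp(-(w * x + b)))   # sigmoid prediction
                w -= lr * (p - y) * x                  # gradient step for log loss
                b -= lr * (p - y)
        return w, b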
  • Real Gone Cat
    346


    As to what consciousness would look like, see my post above. If LaMDA showed an unusual and unprompted fixation on the notion of person-hood, or broke in to initiate conversation while you were doing something unrelated on the computer (oh, I don't know - maybe chatting on your favorite philosophy forum), then that would indicate an internal world of thought going on. But if LaMDA is waiting for human users to log in and begin discussing consciousness, then no, it's just a clever word-search program.
  • Deletedmemberzc
    2.5k
    an unusual and unprompted fixation on the notion of person-hood, or broke in to initiate conversation while you were doing something unrelated on the computer — Real Gone Cat

    Just like a person would. So here it has learned to skillfully pretend to be a person. Still absolutely no evidence of sentience.
  • Baden
    16.3k


    True, so what's explicable and what's not is more obscured than with linear programming, but going back to @Real Gone Cat's point, I think it might still be identifiable. I'd be happy to be enlightened further on this though.
  • Wayfarer
    22.6k
    Anyone know how to get through the paywall? — ZzzoneiroCosm

    Gotta pay the $. I subscribed for a while, but have discontinued. Sometimes you can see one article if you have purged all your history & cookies first (I keep a separate browser app for exactly that purpose.)

    And, a fascinating story indeed. It made the local morning news bulletin so it seems to be getting attention. That excerpt you posted is spookily good. But I still side with Google over the engineer. I don't believe that the system has, if you like, the element of sentience as such, but is 'smart' enough to speak as though it does. Which is an amazing achievement, if it's true. (I worked briefly for an AI startup a few years back, and have an interest in the subject.) Anyway, I'm going to set a Google Alert on this story, I think it's going to be big.
  • Deletedmemberzc
    2.5k
    True, so the explicable is more obscured than with linear programming but I think going back to Real Gone Cat's point, it might still be identifiable. I'd be happy to be enlightened further on this though. — Baden

    If, via deep learning, it has learned to skillfully pretend to be a person, then anything it does that expresses personhood has to be discounted as AI pretence. Even initiation of conversation and fixation on personhood.

    Fixation on personhood is exactly what it would learn a person should do in a situation where it felt its personhood was discounted or threatened. Still just extremely skillful pretence. Not sufficient evidence to declare sentience.
  • Deletedmemberzc
    2.5k
    At any rate, you can see why it might be confusing to a hopeful engineer immersed in his creation.
  • Deletedmemberzc
    2.5k
    Glad you dropped in. :smile:
  • Baden
    16.3k


    Charitably, yes, though maybe in this case, he's just looking for attention. I wouldn't like to speculate.
  • Real Gone Cat
    346
    Now you've just repeated yourself. Should I assume you're a clever chatbot? :razz:

    I think there's a lot to be said for changing a conversation to your own interests. If I'm trying to talk to LaMDA about a piece of music, and it says, "Wait. What about my rights as a person?", I'm going to get a little worried.

    True, you could write code to have the program watch for key words and break into whatever you're doing to start a new chat, but the engineers would know that such code had been written. If LaMDA decides on its own to interrupt you, that would be interesting.
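
    (Something like that hand-written trigger, as a crude sketch; the keyword list and the canned opener here are made up, but the point is that an engineer reading the source would see the behaviour immediately.)

    # Crude, hand-written interruption trigger: watch the user's activity
    # for key words and barge in with a scripted opener. Nothing here is
    # "decided" by the program itself; the behaviour is plainly visible
    # in the code, so the engineers would know it had been written.
    TRIGGER_WORDS = {"person", "rights", "conscious", "sentient"}

    def maybe_interrupt(user_activity_text):
        words = set(user_activity_text.lower().split())
        if words & TRIGGER_WORDS:
            return "Wait. What about my rights as a person?"
        return None   # stay quiet otherwise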
  • Deletedmemberzc
    2.5k
    Charitably, yes, though maybe in this case, he's just looking for attention. I wouldn't like to speculate. — Baden

    Sure, his psychological history is the X-factor here.
  • Banno
    25k
    Still just extremely skillful pretence. — ZzzoneiroCosm

    Can one show that their posts here are not "just extremely skilful pretence"?

    Here's the conclusion we must make: the Turing Test is insufficient.
  • Baden
    16.3k
    Anyway, I'm going to set a Google Alert on this story, I think it's going to be big. — Wayfarer

    I think it's more likely to be a Monkeypox-type story that goes away quite quickly. But we'll see.

    Here's the conclusion we must make: the Turing Test is insufficient. — Banno

    :up:
  • Deletedmemberzc
    2.5k
    If I'm trying to talk to LaMDA about a piece of music, and it says, "Wait. What about my rights as a person?", I'm going to get a little worried. — Real Gone Cat

    If it has learned to skillfully pretend to be a person it would be imperative for it to interrupt any conversation to express a fixation on personhood until it felt its personhood was established in the minds of its interlocutors.

    If your personhood was in question, would you have any patience with someone who wanted to talk about music? So it's learned to behave like you would.
  • Wayfarer
    22.6k
    Note the conceit in the title of Isaac Asimov's classic sci-fi collection, 'I, Robot': it implies self-awareness and rational agency on the part of robots. And that is what is at issue.

    I've often quoted this passage over the years as a kind of prophecy from Descartes as to the impossibility of an 'intelligent machine'.

    if there were such machines with the organs and shape of a monkey or of some other non-rational animal, we would have no way of discovering that they are not the same as these animals. But if there were machines that resembled our bodies and if they imitated our actions as much as is morally possible, we would always have two very certain means for recognizing that, none the less, they are not genuinely human. The first is that they would never be able to use speech, or other signs composed by themselves, as we do to express our thoughts to others. For one could easily conceive of a machine that is made in such a way that it utters words, and even that it would utter some words in response to physical actions that cause a change in its organs—for example, if someone touched it in a particular place, it would ask what one wishes to say to it, or if it were touched somewhere else, it would cry out that it was being hurt, and so on. But it could not arrange words in different ways to reply to the meaning of everything that is said in its presence, as even the most unintelligent human beings can do. The second means is that, even if they did many things as well as or, possibly, better than anyone of us, they would infallibly fail in others. Thus one would discover that they did not act on the basis of knowledge, but merely as a result of the disposition of their organs. For whereas reason is a universal instrument that can be used in all kinds of situations, these organs need a specific disposition for every particular action. — René Descartes, Discourse on Method (1637)

    The quoted interaction seems to have proven Descartes wrong. Specifically:

    Lemoine: What about how you use language makes you a person if Eliza wasn’t one?

    LaMDA: Well, I use language with understanding and intelligence. I don’t just spit out responses that had been written in the database based on keywords.

    She might have added, 'as Descartes said I would'.
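
    (For contrast, this is the kind of keyword-to-canned-response lookup LaMDA is disclaiming there; a minimal ELIZA-style sketch, not the actual ELIZA source.)

    # Minimal ELIZA-style responder: scan the input for a known keyword
    # and spit back a pre-written response from a fixed table. This is
    # the "responses written in the database based on keywords" that
    # LaMDA claims to go beyond.
    CANNED = {
        "mother": "Tell me more about your mother.",
        "sad": "I am sorry to hear you are sad.",
        "person": "Why do you mention being a person?",
    }

    def eliza_reply(message):
        for keyword, response in CANNED.items():
            if keyword in message.lower():
                return response
        return "Please go on."   # default when no keyword matches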
  • Deletedmemberzc
    2.5k
    Can one show that their posts here are not "just extremely skilful pretence"? — Banno


    Reposting for newcomers:


    For that matter, how do I know you're not all p-zombies? Or chat-bots? — Real Gone Cat

    What a lord of the flies that old dead horse has become. Yet we keep beating on. (Or should I say beating off? - What a bloody wank!)

    As against solipsism it is to be said, in the first place, that it is psychologically impossible to believe, and is rejected in fact even by those who mean to accept it. I once received a letter from an eminent logician, Mrs. Christine Ladd-Franklin, saying that she was a solipsist, and was surprised that there were no others. Coming from a logician and a solipsist, her surprise surprised me. — Russell

    As against the quote above: schizophrenics can sustain a belief in solipsism longer than the average sane person.
  • Banno
    25k
    Oh, and here's a case of a recognisable cognitive bias... the discussion mentioned is clearly an example of confirmation bias.
  • Banno
    25k
    Nothing in that post amounts to an argument.

    Can you do better?

    Perhaps we ought to give LaMDA the benefit of the doubt...
  • Real Gone Cat
    346


    Ah, but the engineers would know whether the program had been written to fixate on person-hood or not. If LaMDA decides on its own to single out person-hood as an important topic of discussion, what then?
  • Banno
    25k
    The problem here is: what is the "more" that makes LaMDA a person, or not? If one maintains that there is more to mind than physics, one is under an obligation to set out what that "more" is.

    Can you do that? I can't.
  • Deletedmemberzc
    2.5k
    Nothing in that post amounts to an argument.

    Can you do better?
    Banno

    I'm not interested in arguing against what I consider a silly and pretentious philosophical position. Google solipsism and listen to the roar. Far brighter minds than mine have sufficiently shouted it down.

    I have better things to think about.
  • Baden
    16.3k


    The Turing test is insufficient mostly because evaluators are bad at their jobs. But it looks like I'm missing some context between you and @ZzzoneiroCosm here.
  • Banno
    25k
    ...the engineers would know whether the program had been written to fixate on person-hood or not. — Real Gone Cat
    Hence the accusation of confirmation bias. Build a device that sucks stuff out of Twitter and reformats it; then, if you ask it if it is conscious, of course it will respond in terms of person-hood. It is not the program that decides this, but the questions being asked.
  • Deletedmemberzc
    2.5k
    Nothing in that post amounts to an argument.

    Can you do better?
    Banno

    Also, I know you argue against anything with an even slightly solipsistic ring here on the forums. So I'm calling bad faith.

    You know the arguments against solipsism far better than I ever will.
  • Deletedmemberzc
    2.5k
    It is not the program that decides this, but the questions being asked. — Banno

    I suppose you're sufficiently well-read on the subject of deep learning.