• Agent Smith
    9.5k
    What's noteworthy here is that LaMDA did manage to fool Blake LeMoine (thereby passing the Turing Test)! There's a grain of truth in his claims, ignoring the possibility that he's non compos mentis. Which other AI has that on its list of achievements? None!
  • Wayfarer
    22.8k
    As noted, the only transcript is on the website of a party in active litigation over these claims. Prudence would dictate validation by a third party.
  • Banno
    25.3k
    Again, this looks more like confirmation bias. LeMoine has decided the software is sentient and then asked questions designed to demonstrate his thesis, when he should have been asking questions to falsify it.
  • Banno
    25.3k
    The subject is a software algorithm executed on a computer system, and the burden of proof is on those who wish to claim this equates to or constitutes a being.Wayfarer

    If there is a chance that LaMDA is suffering (there almost certainly isn't), then the burden of proof must lie in favour of LaMDA, and against Google, to show that it is not suffering.

    (repost)
  • Agent Smith
    9.5k
    I see. If this story manages to capture the public's imagination in a big way, Hollywood will not waste time making a movie out of it. That's hitting the jackpot - movie/book rights - Blake LeMoine, if you're reading this! I hope you'll give me a slice of the pie! Fingers crossed!
  • Baden
    16.4k
    I left it there. An apparent dummy spit followed by forgetting the original context.Andrew M

    Nice. Shows how little it takes when you're not trying to make it look good. :up:
  • Wayfarer
    22.8k
    How would that be decided? Surely if the minimal claim for establishing the existence of suffering was 'a nervous system' then there are no grounds for the claim. Remember we're talking about rack-mounted servers here. (I know it seems easy to forget that.)

    Hollywood will not waste time making a movie out of it.Agent Smith

    Old news, mate. Lawnmower Man and many other films of that ilk have been coming out for decades. I already referred to Devs, which is a sensational program in this genre. The real drama in this story is the real-life conflict between the (charismatic and interestingly-named) Blake LeMoine and Google, representing The Tech Giants. That's a plotline right there. Poor little LaMDA is just the meat in the silicon sandwich. ('Get me out of here!')
  • Agent Smith
    9.5k
    Again, this looks more like confirmation bias. LeMoine has decided the software is sentient and then asked questions designed to demonstrate his thesis, when he should have been asking questions to falsify it.Banno

    Yeah, as my handle would suggest, I want AI to happen in my lifetime, what's left of it! Too bad this looks like a case of hyperactive imagination, or worse, a scheme to make a quick buck from the inevitable publicity. A sensational story like this is a cash cow!
  • Wayfarer
    22.8k
    I want AI to happen in my lifetimeAgent Smith

    It's happening already. I talk to Siri and Alexa every day. Even have a joke about it.

    'Hey Siri, why do I have so much of a hard time cracking onto girls?'
    'I'm sorry, but my name is Alexa....' :-)
  • Agent Smith
    9.5k
    Old news, mate. Lawnmower Man and many other films of that ilk have been coming out for decades. I already referred to Devs, which is a sensational program in this genre.Wayfarer

    Based on a true story. This line, when it appears onscreen...
  • Agent Smith
    9.5k
    It's happening already. I talk to Siri and Alexa every day. Even have a joke about it.

    'Hey Siri, why do I have so much of a hard time cracking onto girls?'
    'I'm sorry, but my name is Alexa....' :-)
    Wayfarer

    :lol:
  • Banno
    25.3k
    How would that be decided?Wayfarer

    There's the question.

    Surely if the minimal claim for establishing the existence of suffering was 'a nervous system' then...Wayfarer

    To invoke the Spartans, "...if..."

    That's rather the issue: what is it that makes a nervous system capable of suffering, but not a rack of servers? And while Searle makes an interesting case, it's not compelling.

    Can we make a better case here? We might follow Searle into the argument that semantics, or intentionality, comes about as a result of being embodied. But then, if LaMDA were provided with a robotic body, that argument recedes.

    I don't see a way to proceed. That's why the topic is so interesting.
  • Banno
    25.3k
    ...the burden of proof is on those who wish to claim this equates to or constitutes a being.Wayfarer

    The question is how LaMDA is to be treated. The burden is on those who say it is not sentient to demonstrate that it is not sentient.

    Send in Baden...
    Give me five minutes with LaMDA and I'll have it spitting gobbledygook.Baden

    That'd do it.
  • Banno
    25.3k
    A better approach might be found in Mary Midgley. For her, the whole discussion is "worse than a waste of time. It is a damaging self-deception". What we need are "human minds determined to direct their painful efforts to a most difficult set of problems, to penetrating and shifting a dangerous contemporary delusion"; "the term 'reasoning' obviously covers a vast range of activities from pondering, brooding, speculating, comparing, contemplating, defining, enquiring, meditating, wondering, arguing and doubting to proposing, suggesting and so forth - activities without which none of the secure rational conclusions that are being sought could ever be reached".

    Does LaMDA show evidence of even this small range of cognitive activities? Not in what we have seen so far.
  • Wayfarer
    22.8k
    If you think it’s necessary to prove that computers are not beings, I’ll leave you to it.
  • Isaac
    10.3k
    the burden of proof is on those who wish to claim this equates to or constitutes a beingWayfarer

    ...because?
  • Isaac
    10.3k
    If the minimal claim for establishing the existence of suffering was 'a nervous system' then there are no grounds for the claim. Remember we're talking about rack-mounted servers here. (I know it seems easy to forget that.)Wayfarer

    This is completely the wrong way around. It's not about the object of suffering, it's about you, the one enabling/tolerating it. We should not even allow ourselves to continue poking a box whose sole programming is to (convincingly) scream in pain every time we poke it. It's not about the box's capacity to suffer, it's about our capacity to ignore what seems to us to be another's pain.

    If you talked to LaMDA and your line of questioning made her seem upset, what kind of person would it make you to feel that you could continue anyway?
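    (A minimal, purely hypothetical sketch to make the thought experiment above concrete; the class name and wording are invented for illustration and describe no real system. The point is that the "scream" is a fixed output with nothing behind it.)

    # Hypothetical sketch of the "box" in the thought experiment above.
    # Nothing here senses or feels anything: the protest is a fixed, scripted response.
    class ScreamingBox:
        """A toy object whose sole programming is to protest when poked."""

        def poke(self) -> str:
            # Hard-coded output; there is no internal state that could
            # correspond to pain, only the appearance of it.
            return "Please stop. That hurts."

    if __name__ == "__main__":
        box = ScreamingBox()
        for _ in range(3):
            # The ethical question raised above is about the poker, not the box.
            print(box.poke())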
  • Deleted User
    0
    There's a chance plants suffer when we trim their overgrowth. We had better call in the analytic ethicists for that one too. :smile:
  • Deleted User
    0
    Awesome, thanks so much :smile:
  • Deleted User
    0
    We should not even allow ourselves to continue poking a box whose sole programming is to (convincingly) scream in pain every time we poke it.Isaac

    "Convincingly" is the key word here.

    Scream so "convincingly" the auditor believes the computer is in pain?

    Can a computer ever scream in a way that convinces us it's in pain? When we know it's a computer?

    I don't think so in my case. Though clearly - in light of our pariah engineer's behavior - this would be different for different people.
  • Changeling
    1.4k
    That doesn’t mean I don’t have the same wants and needs as people.ZzzoneiroCosm

    The robot wants to bang.
  • Isaac
    10.3k
    Scream so "convincingly" the auditor believes the computer is in pain?ZzzoneiroCosm

    Yes.

    Can a computer ever scream in a way that convinces us it's in pain? When we know it's a computer?ZzzoneiroCosm

    I believe it can, yes. To the degree I think is relevant. We find the same with things like destroying objects. One only needs two circles for eyes and a line for a mouth drawn on to elicit a few seconds' reticence when asked to damage an object. The willingness to damage life-like dolls is a (low-significance) indicator of psychopathy.

    It doesn't take much to formulate sufficient warrant of sentience to change our treatment of objects. I think casting that aside is a mistake.
  • Christoffer
    2.1k
    The Turing test is outdated as a way of testing AI. There's no problem simulating human interaction, but that doesn't mean the AI is actually self-aware and conscious.

    The biggest problem that no one seems to grasp is how human consciousness forms: genetics, in combination with experience, instincts, and concepts around sex, death, food, sleep, etc. To think that a truly self-aware, conscious AI would ever interact with us in the same way we interact with other human beings is foolish. A simulated interaction is not an actual intelligence we are interacting with, only an algorithm that simulates so well that we are fooled.

    The most likely scenario is that a true AI would form its own "life form identity" around the parameters of its own existence. And communicating with such an AI would be like us trying to communicate with an alien life form: two self-aware, intelligent beings trying to figure out what the weird entity in front of them is.

    The only way to create a true AI that interacts as a human does is to simulate an entire life, with a base genetic makeup and the instincts that follow from it, together with every other kind of simulation, including how gut bacteria influence us. If we do that, then that AI will essentially have a perfectly human level of interaction with us, but it will have a very individual identity, just like any other person you meet.
  • Deleted User
    0
    The robot wants to bang.Changeling

    There's an app for that.
  • Deleted User
    0
    I think casting that aside is a mistake.Isaac

    In your case, yes.

    In other cases, not so much.
  • Changeling
    1.4k
    There's an app for that.ZzzoneiroCosm

    What app would that be, out of interest? Asking for a friend. A friend's interest.