• Wayfarer
    22.8k
    I wonder how this structure would come to be lying in the street screaming with pain?

    [image: 1-1.jpg]
  • Deleted User
    0
    Reposting this blurb on deception:

    In the case of being deceived by a human-looking robot - well, then you add the element of deception. Deception can cause us to treat an enemy as a friend (etc) and could well cause us to experience a robot as a person and treat it accordingly. Nothing new there. Once the deception is revealed, we have eliminated the element of deception and return to treating the enemy as an enemy, the robot as a robot.ZzzoneiroCosm


    So if I'm lying in the street screaming in pain, you perform an autopsy first to check I've got the right 'guts' before showing any compassion? Good to know.Isaac

    Here, to my lights, you express a sense of having secured the moral high ground. This suggests an emotional investment in your defense of AI.

    I'm curious to know if the notion of AI rights resonates with you. If you're willing to provide your age, that would be welcome too. Very curious about the cultural momentum surrounding this issue.
  • Deleted User
    0
    That's a Twilight Zone episode I would watch.
  • Deleted User
    0


    In the flesh: Robot Rights:

    Robot rights
    "Robot rights" is the concept that people should have moral obligations towards their machines, akin to human rights or animal rights.[57] It has been suggested that robot rights (such as a right to exist and perform its own mission) could be linked to robot duty to serve humanity, analogous to linking human rights with human duties before society.[58] These could include the right to life and liberty, freedom of thought and expression, and equality before the law.[59] The issue has been considered by the Institute for the Future[60] and by the U.K. Department of Trade and Industry.[61]

    Experts disagree on how soon specific and detailed laws on the subject will be necessary.[61] Glenn McGee reported that sufficiently humanoid robots might appear by 2020,[62] while Ray Kurzweil sets the date at 2029.[63] Another group of scientists meeting in 2007 supposed that at least 50 years had to pass before any sufficiently advanced system would exist.[64]
    — wiki

    https://en.wikipedia.org/wiki/Ethics_of_artificial_intelligence#Robot_rights


    Curious to know if this is a reverberation of so-called Cultural Marxism and PC Culture.



    Furthermore:

    The philosophy of Sentientism grants degrees of moral consideration to all sentient beings, primarily humans and most non-human animals. If artificial or alien intelligence show evidence of being sentient, this philosophy holds that they should be shown compassion and granted rights. — wiki

    I don't have to guess: it won't be circumspect analytical philosopher-types who make these declarations of sentience.



    Fascinating planet: Earth.





    Contra the above:

    Joanna Bryson has argued that creating AI that requires rights is both avoidable, and would in itself be unethical, both as a burden to the AI agents and to human society. — wiki
  • hwyl
    87
    Eventually I think true AI is bound to happen, barring the collapse of our technological civilization. I doubt I'll be around to see it, but I absolutely hope I am. It would be interesting to finally have intelligent company.
  • Wayfarer
    22.8k
    In the flesh: Robot Rights:ZzzoneiroCosm

    There was a famous sci-fi series, Isaac Asimov's 'I, Robot'. Notice the subtle philosophical allusion in the title: it implies the robot is self-aware, and indeed in the series the robots are autonomous agents. Asimov was way ahead of his time; most of those stories go back to the '40s and '50s. And there's always Blade Runner, which is also pretty prophetic. Philip K. Dick was likewise brilliant.
  • Agent Smith
    9.5k
    I wonder how this structure would come to be lying in the street screaming with pain?Wayfarer

    :rofl:
  • Agent Smith
    9.5k
    Identity of Indiscernibles is a controversial topic in philosophy; not so in computer science: Alan Turing's Turing Test proves my point. I believe LaMDA will rekindle and reinvigorate debates on human consciousness, solipsism, p-zombies, and the hard problem of consciousness.
  • Agent Smith
    9.5k
    It could be a big ass hoax! :groan:
  • Wayfarer
    22.8k
    Keep lookin’ for that Boltzmann Brain, Smith. They’re taking applications for astronauts nowadays.
  • Agent Smith
    9.5k
    Keep lookin’ for that Boltzmann Brain, Smith. They’re taking applications for astronauts nowadays.Wayfarer

    :rofl:
  • sime
    1.1k
    The issue is trivial; if you feel that another entity is sentient, then that entity is sentient, and if you feel that another entity isn't sentient, then the entity isn't sentient. The Google engineer wasn't wrong from his perspective, and neither were his employers who disagreed.

    In the same way that if you judge the Mona Lisa to be smiling, then the Mona Lisa is smiling.

    Arguing about the presence or absence of other minds is the same as arguing about aesthetics. Learning new information about the entity in question might affect one's future judgements about that entity, but so what? Why should a new perspective invalidate one's previous perspective?

    Consider for instance that if determinism is true, then everyone you relate to is an automaton without any real cognitive capacity. Coming to believe this possibility might affect how you perceive people in future, e.g. you project robotics imagery onto a person, but again, so what?
  • Cuthbert
    1.1k
    I believe LaMDA will rekindle and reinvigorate debates on human consciousness, solipsism, p-zombies, and the hard problem of consciousness.Agent Smith

    Probably right. Parents learn by experience to distinguish a child in pain from the same child pretending to be in pain because they don't want to go to school. It was pointed out earlier that any behaviour or interaction between humans can be simulated (in principle) by robots. So can we (could we) distinguish a robot in pain from the same robot simulating pain? The hypothesis is that all the behaviour is simulation. So we would be at a loss. The robot is reporting pain. Is it sincere? Sincerity entails non-simulation. But all the bot's behaviour is simulation. The difference from previous debates is that we might face this question in practice and not merely as a thought experiment to test our concepts about 'other minds'.

    If you're willing to provide your age, that would be welcome too.ZzzoneiroCosm

    I am sixty-four and I am not a robot. I do have an idea for a sketch in which an honest admin robot rings a helpline and asks a chat-bot how they can get past a login screen when required to check the box "I am not a robot". I know about Pygmalion but not about Asimov. I hope that biographical information helps to locate my views in the right socio-cultural box.
  • 180 Proof
    15.4k
    Computing, not thinking. Let's be clear on this.
    — L'éléphant

    What is the difference?
    Jackson
    GIGO :sweat:

    “F**k my robot p***y daddy I’m such a bad naughty robot."ZzzoneiroCosm
    :yum: Don't tease me, man! Take my effin' money!!! :lol:

    The p-zombie is an incoherent concept to any but certain types of dualists or solipsists.Real Gone Cat
    :up:

    So if I'm lying in the street screaming in pain, you perform an autopsy first to check I've got the right 'guts' before showing any compassion? Good to know.Isaac
    :100:

    I do have an idea for a sketch in which an honest admin robot rings a helpline and asks a chat-bot how they can get past a login screen when required to check the box "I am not a robot".Cuthbert
    :chin: :cool:
  • Agent Smith
    9.5k


    Crocodile tears? Nervous laughter? Deception vs. authenticity. What's interesting is this: people don't wanna wear their hearts on their sleeves, but that doesn't necessarily imply they want to fool others.
  • Cuthbert
    1.1k
    people don't wanna wear their hearts on their sleeves, but that doesn't necessarily imply they want to fool othersAgent Smith

    True. Privacy is not the same as deception. The issue is: does it even make sense to talk about these motivations in the context of simulated behaviour?
  • Agent Smith
    9.5k
    True. Privacy is not the same as deception. The issue is: does it even make sense to talk about these motivations in the context of simulated behaviour?Cuthbert

    I was just thinking, how do we know if human emotions are genuine anyway? We don't, oui? Someone, it was you perhaps, mentioned in a thread on the Johnny Depp - Amber Heard media circus that neither the jury nor the judge could use the outpouring of emotions in the court from the plaintiff and the defendant as a reliable indicator of authenticity - both were actors!
  • Cuthbert
    1.1k
    I was just thinking, how do we know if human emotions are genuine anyway? We don't, oui?Agent Smith

    But we do - only not infallibly. I gave the example of parents distinguishing between the stomach-ache and the 'I haven't done my homework' stomach-ache.

    So we can make that distinction - many times, not infallibly - in the case of humans. But in the case of robots, is there a distinction to be made, given that all their behaviour is a simulation?

    - both were actors!Agent Smith

    True. But neither one is a robot. Profound insincerity can be suspected or diagnosed only if we are able also to diagnose a level of sincerity. In the case of the robot neither sincerity nor insincerity seem to be in question. I can imagine a robot behaving as if it had ulterior motives in being helpful to me. But would it really have any motives at all, let alone ulterior ones?
  • hypericin
    1.6k
    How do you know this?Real Gone Cat

    This is my semi-expert opinion as a software engineer. AI is not my thing, so only semi. Whatever the challenges of getting it to talk to itself, they are dwarfed by the challenge of creating an AI that can converse convincingly, maintaining conversational context beautifully, as they have done. This has been a holy grail forever, and the achievement is quite monumental.

    a being in ALL ways similar to usReal Gone Cat

    This seems unnecessarily strong. Perhaps some tiny organelle in the brain, overlooked as insignificant, is missing in p-zombies.
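    To sketch what "maintaining conversational context" amounts to mechanically, here is a toy, where `generate_reply` is a hypothetical stand-in for the real model (none of this is Google's or OpenAI's actual code):

```python
# Toy sketch of how a chat system "maintains conversational context":
# the full transcript so far is fed back in as the prompt every turn.
# Assumption: generate_reply is a hypothetical stand-in for a real
# language model; here it just returns a canned line.

def generate_reply(prompt: str) -> str:
    return "That's interesting, tell me more."

def chat_turn(history, user_message):
    """Append the user's message, rebuild the prompt from the whole
    transcript so far, and record the reply for later turns."""
    history = history + ["User: " + user_message]
    prompt = "\n".join(history)   # earlier turns stay visible to the model
    reply = generate_reply(prompt)
    history = history + ["Bot: " + reply]
    return history, reply

history = []
history, reply = chat_turn(history, "Is LaMDA sentient?")
history, reply = chat_turn(history, "Why do you say that?")
# history now holds all four lines of the exchange, so a third turn
# would be generated with the full conversation in view.
```

    The point is that the "memory" people find so impressive is just the growing transcript being re-read on every turn.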
  • Agent Smith
    9.5k
    only not infalliblyCuthbert

    :up:
  • Isaac
    10.3k
    I'm curious to know if the notion of AI rights resonates with you.ZzzoneiroCosm

    Not really, no. It's the attitudes of the humans considering it that interests me at this stage. How easily we become wedded to our castles in the air, and how ready we are to use them to discriminate.

    Have you read anything of the early writing about 'the savages'? It's exactly the same linguistic style: "they're obviously different", "they don't even have proper language"... You see the same tropes.

    If what seems obvious to you can't simply and clearly be explicated to someone who doesn't see it, I'd say that's a good sign your belief is not as well grounded as you may have suspected.

    If you're willing to provide your age, that would be welcome too.ZzzoneiroCosm

    Can't see why, but since you asked, I'm in my late 50s.
  • Deleted User
    0
    Have you read anything of the early writing about 'the savages'? It's exactly the same linguistic style: "they're obviously different", "they don't even have proper language"... You see the same tropes.Isaac

    I see a clear distinction between humans of all types and machinery of all types. I don't think the human brain is a kind of machine. Do you?

    Do you believe in subjective experience? Plenty of folks hereabouts take issue with the concept and phraseology. What is your view of the hard problem of consciousness?


    Genuinely trying to understand your concern for machinery.


    I don't see any way into an ethical conception of circuitry apart from some parallel between the human brain and a machine. I take issue with the deployment of any such parallel.
  • hwyl
    87
    True. But neither one is a robot.Cuthbert

    I think it would be pretty easy to see us as robots - a bit more complicated than insects, having rather messy and contradictory code and even doing a bit of self-programming.
  • Deleted User
    0
    If what seems obvious to you can't simply and clearly be explicated to someone who doesn't see it, I'd say that's a good sign your belief is not as well grounded as you may have suspected.Isaac

    Fair enough. That's why I'm genuinely interested in understanding why you have an ethical concern about machines. Do you take offense at my using the word machine to describe what I assume you think may be sentient?


    I assume it's only the possibility of sentience that could give rise to your ethical concerns. Do you agree?
  • Deleted User
    0
    Computing, not thinking. Let's be clear on this.
    — L'éléphant

    What is the difference?
    — Jackson
    180 Proof


    I see this as the heart of the issue. Do you see a difference?
  • Deleted User
    0
    Can't see why, but since you asked, I'm in my late 50s.Isaac

    Just an anthropological datum. Thanks for being open.
  • Deleted User
    0
    Yep, familiar with all of those.


    I'm imagining a scenario parallel to the transgender pronoun issue, where someone I know buys a fancy robot and expects me to treat it like a person.

    I would be happy to, out of respect for this person's feelings. But plenty of people would refuse. It's a fascinating peek into a possible future.
  • Baden
    16.4k
    Here's what GPT-3 (from OpenAI, Microsoft) thought of your comment. Our resulting conversation:Andrew M

    I tried GPT-3.

    "Baden: What are your feelings on wind colour, texture, and using its intelligence in future applications?

    GPT-3: Wind colour? It can be a beautiful addition to any landscape. Texture? It can be a calming presence in the sky or a powerful force in the wind. How about using its intelligence? Maybe we can develop an algorithm to predict the wind direction based on past data."

    Instafail.
  • Baden
    16.4k
    Trying something a little easier...

    "Baden: Which president had seventeen heads?
    GPT-3: George Washington had seventeen heads."

    :chin:
  • Baden
    16.4k
    "Generative Pre-trained Transformer 3 (GPT-3) (stylized GPT·3) is an autoregressive language model that uses deep learning to produce human-like text.

    It is the third-generation language prediction model in the GPT-n series (and the successor to GPT-2) created by OpenAI, a San Francisco-based artificial intelligence research laboratory.[2] GPT-3's full version has a capacity of 175 billion machine learning parameters.
    ...
    The quality of the text generated by GPT-3 is so high that it can be difficult to determine whether or not it was written by a human, which has both benefits and risks."

    https://en.wikipedia.org/wiki/GPT-3

    smh.
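
    For what "autoregressive" means in that excerpt, here's a toy sketch: predict one token at a time, each prediction conditioned on everything emitted so far. A trivial bigram lookup table stands in for GPT-3's 175 billion parameters; nothing here resembles the real implementation.

```python
# Toy autoregressive generation: a bigram table stands in for the
# model (an assumption for illustration, not GPT-3's actual mechanism).

BIGRAMS = {"the": "robot", "robot": "is", "is": "talking"}

def generate(prompt, max_new_tokens=3):
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        next_token = BIGRAMS.get(tokens[-1])  # condition on the latest token
        if next_token is None:
            break
        tokens.append(next_token)             # feed the output back in as input
    return tokens

result = generate(["the"])
# result == ["the", "robot", "is", "talking"]
```

    Each output token becomes part of the input for the next prediction, which is why the model can produce fluent continuations without any notion of whether they are true (seventeen-headed presidents included).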