• ToothyMaw
    1.4k
    This OP is inspired by another thread, started by substantivalism, in which I considered the potential for humans being supplanted by ethical androids (i.e. androids with substantial moral beliefs): there is already enough anxiety around having sentient robots walking around making choices, but attempting to develop these intelligences to make moral decisions in place of humans - even with a human in the loop - carries even more baggage. And the easiest way of making it work gives rise to some serious moral concerns, regardless of how sophisticated our morality gets.

    Imagine we have an android programmed to adhere to a set of morals without exception, and, against all odds, it is indeed capable of executing its limited, semi-permanent programming without running into any moral quandaries or acting in erratic or unpredictable ways when presented with difficult or novel situations. It remains that the morality of this android could always be wrong; a lack of internal contradictions says little about whether the general beliefs, although actionable, are correct. This is to say that the android might, in a practical sense, be programmed effectively, but there are greater moral obligations than merely avoiding contradiction within its limited programming.

    That is the best situation. A more realistic one is that the android inevitably gets hung up on something and either cannot decide what to do or acts counterintuitively, poorly, or unpredictably. The ideal case and the more realistic one both indicate that some sort of protocol that changes the programming in response to stimuli is necessary.

    Thus, it appears we would need to program significant self-reflection protocols into any such android so it could adapt to a changing moral landscape. But can you program truly moral self-reflection? Can a robot learn to be empathetic or compassionate (which I would argue are the two most important markers of morality)?

    Alternatively, and more feasibly, we might constantly program and reprogram these ethical androids. We might even wipe their memories over and over again if those memories would keep them from executing their tasks in a way that makes their continued existence safe - or replace the androids outright if that wouldn’t work. If this sounds like a reasonable avenue of action, allow me to pose the following question:

    Why not do it with humans? Why not just set up re-education camps to make people hold the "correct" moral beliefs? Why not partially wipe someone’s memories if one of their loved ones dies and the grief interferes with their work manning the crane at the construction site? Why not get rid of someone if their internal mechanisms fail and they can no longer execute tasks that promote the wellbeing of the public? What if they are a danger to the public? To commit to this method is to commit to the demise of any intuitive notion of human morality. And once these actions are on the table, all any of them would need by way of justification is that they produce more good than harm.

    So, if we want truly ethical androids, we might have to commit to giving them status equivalent to humans, or we face a horrific double standard. But, if we do that, we risk quite a bit if we cannot produce a mode of self-reflection for them. Thus, I think the best option is to just not create truly ethical androids, but maybe someone can show that I'm wrong.
  • kindred
    147
    Even humans sometimes do not live up to their own moral standards. Because morality is often such a grey area between right and wrong, it is hard to program this kind of ambiguity into a machine and expect it to make 100% correct moral choices in scenarios where right and wrong must be distinguished, because even as humans we struggle with that.

    The biggest issue with creating self-aware androids is their capacity to carry out morality in human terms and to human expectations: because we differ from the outset in terms of our makeup, our priorities would differ. Thus the nature of self-reflection (if they are imbued with it) would be different for androids than it is for us, and their consideration of what makes a choice morally or ethically correct would differ too. This has to do with empathy towards other lives, human or non-human. If empathy could somehow be programmed into androids, then they’d be more capable of making better ethical/moral choices, but that is not the question.

    The question is whether it is possible to do so, i.e. grant androids the same level of empathic self-reflection as humans, and if we could do that I see no issue with doing so, as an android capable of moral decision making is obviously desirable.
  • ToothyMaw
    1.4k
    Even humans sometimes do not live up to their own moral standards. Because morality is often such a grey area between right and wrong, it is hard to program this kind of ambiguity into a machine and expect it to make 100% correct moral choices in scenarios where right and wrong must be distinguished, because even as humans we struggle with that.kindred

    Yes, I noted that that is the ideal case, but that it is far more realistic that any ethical android would inevitably get "stuck" on some moral problem in a way that a human might not, and that this reality necessitates some means of self-reflection. I then pointed out that truly moral self-reflection probably requires the human traits of empathy and compassion.

    The biggest issue with creating self-aware androids is their capacity to carry out morality in human terms and to human expectations: because we differ from the outset in terms of our makeup, our priorities would differ.kindred

    Their priorities, both moral and otherwise, would largely be what we program into them. Or so I would think. They may develop new priorities, but developing new and relevant moral priorities requires characteristics associated with humans and a capacity to self-reflect. So, I'm not saying that they wouldn't think differently or have properties humans don't, but they could easily be morally recognizable if we program them accordingly - even if they lack some human characteristics.

    If empathy could somehow be programmed into androids, then they’d be more capable of making better ethical/moral choices, but that is not the question.kindred

    I agree that that isn't the question. The question in the OP is essentially whether or not we should try to create ethical androids in the absence of an appropriately meaningful mode of self-reflection for them.

    The question is whether it is possible to do so, i.e. grant androids the same level of empathic self-reflection as humans, and if we could do that I see no issue with doing so, as an android capable of moral decision making is obviously desirable.kindred

    Ok, yes, I agree that if we had a means of making empathetic, compassionate androids, they could very well be desirable, but we don't have that right now. Furthermore, we need to avoid validating the double standard in the OP, and that might require some thoughtfulness in how we go about the whole thing.
  • T Clark
    14.3k
    Asimov’s Rules of Robotics - 1942

    The Three Laws, presented as being from the fictional "Handbook of Robotics, 56th Edition, 2058 A.D.", are:

    [1] A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    [2] A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

    [3] A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
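
    For what it's worth, the strict precedence the Laws encode can be sketched in a few lines of Python. This is a toy illustration only: the Action fields and the law_rank ordering below are hypothetical stand-ins, not anything from Asimov's stories.

    from dataclasses import dataclass

    @dataclass
    class Action:
        # Hypothetical flags describing a candidate action.
        harms_human: bool              # acting would injure a human being
        allows_harm_by_inaction: bool  # not acting would let a human come to harm
        disobeys_human_order: bool     # acting would ignore an order from a human
        endangers_self: bool           # acting would threaten the robot's own existence

    def law_rank(action: Action) -> tuple:
        # Lexicographic ranking: any First Law violation outweighs the Second Law,
        # and any Second Law violation outweighs the Third.
        return (
            action.harms_human or action.allows_harm_by_inaction,  # First Law
            action.disobeys_human_order,                           # Second Law
            action.endangers_self,                                 # Third Law
        )

    def choose(candidates: list[Action]) -> Action:
        # Pick the candidate that violates the highest-priority law the least.
        return min(candidates, key=law_rank)

    On this toy reading, an android forced to choose between disobeying an order and endangering itself would sacrifice itself, since the Second Law outranks the Third; the hard part, of course, is everything the boolean flags paper over.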
  • ToothyMaw
    1.4k


    A classic. If only it were so simple. And where is the Zeroth Law? Or was that never technically added to the fictional handbook?
  • BC
    13.7k


    All Watched Over By Machines Of Loving Grace by Richard Brautigan

    I like to think (and
    the sooner the better!)
    of a cybernetic meadow
    where mammals and computers
    live together in mutually
    programming harmony
    like pure water
    touching clear sky.

    I like to think
    (right now, please!)
    of a cybernetic forest
    filled with pines and electronics
    where deer stroll peacefully
    past computers
    as if they were flowers
    with spinning blossoms.

    I like to think
    (it has to be!)
    of a cybernetic ecology
    where we are free of our labors
    and joined back to nature,
    returned to our mammal
    brothers and sisters,
    and all watched over
    by machines of loving grace.
  • ToothyMaw
    1.4k


    I find that poem very compelling and think things may ultimately play out that way. I think Brautigan said more about it in a short poem than I could in a book.

    edit: to be clear, this is the first time I've seen this poem.

    edit 2: I get it. The further we travel along the path toward the end state in which we are "watched over by machines of loving grace", the stronger the urge, or necessity, of getting there. What a brilliant poem. I suppose giving androids the same status as humans, with or without a means of self-reflection, would advance us towards that end.
  • 180 Proof
    15.7k
    ... ethical androids (i.e. androids with substantial moral beliefs [habits (i.e. priorities-constraints)])ToothyMaw
    I.e. AGI (neural network (not program)) that learns (to mimic?) empathy, eusociality, ecology and nonzero sum conflict resolution (e.g. fairness) for discursive practices and kinetic relationships with other moral agents¹/patients²...

    Okay, maybe; but why would any for-profit corporation or military organization ever build an "ethical android" that would be useless as either a slave or a killer?

    https://en.m.wikipedia.org/wiki/Moral_agency [1]

    https://en.m.wikipedia.org/wiki/Moral_patienthood [2]

    :nerd: :up:
  • ToothyMaw
    1.4k
    Okay, maybe; but why would any for-profit corporation or military organization ever build an "ethical android" that would be useless as either a slave or a killer?180 Proof

    Think about this: if you had an android in your house helping to take care of your family or something, or even if we just had androids walking around doing things at all, people would undoubtedly want these androids to be able to behave morally, because they would inevitably be in positions that call for moral action. And if that is what consumers want, I think corporations would provide it.

    So, while there is a difference between an android capable of behaving morally and an android that has substantial moral beliefs like a human might, corporations will have incentives to create some sort of android that can behave ethically. In the OP I'm just discussing androids that are internally morally comparable to humans.

    Furthermore, an ethical android is no more useless as a slave or a killer than a well-programmed human, so already we see that people tend towards thinking about these androids inconsistently.

    edit: also: when I say "beliefs", I really do mean beliefs comparable to what humans have, not just habits.