• ToothyMaw
    1.2k
    This post concerns the integration of intelligent, or at least relatively intelligent, robots into society, focusing on two different views: the “empowerment” view and the “motive” view. In this post I propose three solutions that accommodate and balance these two views, and I talk a little about the goal of robotics.

    Anyone familiar with robotics is familiar with Asimov’s (somewhat naive but still relevant) rules of robotics:

    (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    (2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

    (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

    The obvious, glaring flaw in these rules is that the robots have an incentive to minimize human suffering, and to avoid situations in which difficult decisions have to be made - as those will inevitably be resolved imperfectly - but comparatively little impetus to empower humans by, for example, unlocking a car quickly to allow a human to get to work on time.

    So the robotic police state from “I, Robot” could potentially be avoided if we focus on the empowerment thing, at least according to the article’s authors.

    Basically, they seem to be arguing for a proliferation of robot behaviors, specialized in natural ways to empower humans, that doesn’t rely on some theoretical AI programming specific to each situation or on restrictions on behavior; we just do more legwork teaching robots how to model the real world - mostly. They call it “empowerment”.
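    The article doesn’t spell the idea out formally, but “empowerment” in the robotics literature is usually formalized as channel capacity: the maximum mutual information between an agent’s actions and its future sensor states, i.e. how much influence the agent has over what it will later observe. Here is a toy sketch of that idea for a tiny discrete world - the function and the car example are my own illustration, not anything from the article:

    ```python
    import numpy as np

    def empowerment(p_next_given_action, iters=200):
        """Approximate one-step empowerment for a fixed start state:
        the channel capacity max over p(a) of I(A; S'), computed with the
        Blahut-Arimoto iteration. p_next_given_action[a, s'] is the
        probability of landing in state s' after action a. Result is in bits."""
        n_actions, _ = p_next_given_action.shape
        q = np.full(n_actions, 1.0 / n_actions)        # distribution over actions
        for _ in range(iters):
            r = q @ p_next_given_action                # marginal over next states
            with np.errstate(divide="ignore", invalid="ignore"):
                log_ratio = np.where(p_next_given_action > 0,
                                     np.log2(p_next_given_action / r), 0.0)
            d = np.exp2((p_next_given_action * log_ratio).sum(axis=1))
            q = q * d / (q @ d)                        # Blahut-Arimoto update
        r = q @ p_next_given_action
        with np.errstate(divide="ignore", invalid="ignore"):
            log_ratio = np.where(p_next_given_action > 0,
                                 np.log2(p_next_given_action / r), 0.0)
        return float((q[:, None] * p_next_given_action * log_ratio).sum())

    # Toy numbers (mine, not the authors'): a locked car leaves the human with
    # one reachable outcome no matter what they do; an unlocked car gives three.
    locked   = np.array([[1.0, 0.0, 0.0],
                         [1.0, 0.0, 0.0],
                         [1.0, 0.0, 0.0]])
    unlocked = np.array([[1.0, 0.0, 0.0],   # stay put
                         [0.0, 1.0, 0.0],   # get in and drive off
                         [0.0, 0.0, 1.0]])  # walk away instead
    print(empowerment(locked), empowerment(unlocked))   # ~0.0 vs ~1.58 bits
    ```

    The point is that a robot maximizing the human’s empowerment would prefer actions (like unlocking the car) that leave the human with more options, without a hand-written rule for every situation.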

    There seem to me to be two conditions that need to be satisfied for a robot to assist a human: the robot must know both the conditions under which the human is to be assisted (a directive) and how to actually accomplish that directive.

    Accomplishing the directive seems to actually be the easier part and the meatier part: the more efficiently robots can help in immediate circumstances based on simple directives, the more open those directives can be, and they can be narrowed as situations develop and variables present themselves, become clearer, or become more easily manipulated. Accomplishing the directive is merely a question of how, given how the robot is programmed, and does not address human motive.

    However, some argue that robots do indeed need to understand motive like humans do to be effective and safe: the robot might complete a task correctly according to its own understanding of a task, but the way it is completed might be a failure by any other standard. The following example is given in the article:

    “Imagine asking a robot to pass you a screwdriver in a workshop. Based on current conventions the best way for a robot to pick up the tool is by the handle. Unfortunately, that could mean that a hugely powerful machine then thrusts a potentially lethal blade towards you, at speed. Instead, the robot needs to know what the end goal is, i.e., to pass the screwdriver safely to its human colleague, in order to rethink its actions.”
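    To make the screwdriver example concrete, here is a deliberately simple sketch of the difference between a robot that just picks the conventional grasp and one that re-scores grasps against the end goal of a safe handover. All the names and grasp options here are hypothetical, purely for illustration:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Grasp:
        part: str                 # which part of the screwdriver the robot holds
        tip_toward_human: bool    # does the blade end up pointing at the receiver?

    # Hypothetical grasp options for a screwdriver.
    GRASPS = [
        Grasp(part="handle", tip_toward_human=True),   # "best" grasp by convention...
        Grasp(part="shaft",  tip_toward_human=False),  # ...but this one hands over safely
    ]

    def choose_grasp(goal: str) -> Grasp:
        """A task-only robot always takes the conventional grasp; a goal-aware
        robot filters grasps by what the human actually wants to happen."""
        if goal == "use_tool_itself":
            return GRASPS[0]
        if goal == "hand_to_human":
            # Rule out any grasp that would thrust the blade at the receiver.
            safe = [g for g in GRASPS if not g.tip_toward_human]
            return safe[0] if safe else GRASPS[0]
        raise ValueError(f"unknown goal: {goal}")

    print(choose_grasp("hand_to_human"))   # Grasp(part='shaft', tip_toward_human=False)
    ```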

    Bringing this back to the directive/completion-of-a-directive way of looking at it: it seems to me there are a few solutions:

    (1) We create an army of specialized robots adept at modeling the real world to do simple tasks without adapting to human motives, and instead train people to not get jabbed in the jugular with screwdrivers by more advanced robots completing more advanced tasks; we focus on training a small number of specialists to work with a small number of more advanced robots for the jobs that cannot be done by “dumb” robots, all the while improving said advanced robots.

    (2) We create a new, smaller number of less specialized robots that operate more dynamically with regard to directives and how those directives change in the presence of human motives. These replace the less safe, more advanced robots in (1), along with their specialists, but much of the simple stuff is still delegated to the “dumb” robots described in (1). The flexibility is developed through some relatively efficient AI programming of minimal intensity.

    (3) We spend a lot of time looking for some infallible rules to govern the behavior of advanced robots carrying out a multitude of tasks without any glaring inconsistencies/contradictions/weak points or repugnant/harmful implications. If these robots are as intelligent as people, or more intelligent, and are permitted to create their own directives, all the better - so long as they are programmed to behave in the best interests of humanity and there is total transparency.

    I posit that (1) and (2) use up a similar amount of resources, and could be steppingstones, in order, to (3), which is ideal imo, even if it poses some problems. This train isn’t stopping.

    While I strongly believe you are what you are programmed to be, and there is no reason to believe a robot would be cruel or evil, (3) is a little scary. What principles and behavior would be acceptable to people at large? Surely not the overly simple, pseudo-consequentialist rules Asimov puts forward.

    There is a philosophical problem here independent of any specialized knowledge of machine learning/computer science/mathematics/programming, and it is the problem of establishing a clearer epistemology around robotics that empowers the specialists to re-evaluate the problem on a more fundamental level.

    I don’t know exactly what that epistemology is, but it begins with establishing this: what is the ideal outcome we are looking for in robotics? Is it efficiency? Relief from hard labor? Profit?

    I would say there is one first, best outcome, and it is to gradually delegate as much power to autonomous machines as possible without a net depreciation in quality of life for humans. This makes sense given that this is basically what autonomous machines are made for. It includes going from (1) to (3) gradually. Further discussion of why I think this is to be expected; I am attempting to be concise here.
  • Deus
    320
    “Imagine asking a robot to pass you a screwdriver in a workshop. Based on current conventions the best way for a robot to pick up the tool is by the handle. Unfortunately, that could mean that a hugely powerful machine then thrusts a potentially lethal blade towards you, at speed. Instead, the robot needs to know what the end goal is, i.e., to pass the screwdriver safely to its human colleague, in order to rethink its actions.”ToothyMaw

    Stop here. If it’s pre-programmed well enough the overriding mechanism should meet its end goal… don’t you think?
  • ToothyMaw
    1.2k


    At that point the pre-programming is insufficient, insofar as typical programming is insufficient. It would have to be programmed with human ideas of what it is to execute a task, and that is difficult to pre-program, if not insurmountable. Unless you want to make a "dumb" machine just for handing people screwdrivers in predetermined ways, that is, which is something I talk about in the OP.
  • Deus
    320
    Then it is so for mutual benefit, to elaborate the machine of man by man NOT for man will be self seeking in its awareness of the task.
  • ToothyMaw
    1.2k


    But there are so many things that could be accomplished with more advanced robotics lol. It could save and enrich so many lives.
  • Deus
    320


    Until of course it becomes goal-oriented for its own ends, overriding its own roles on the way.
  • ToothyMaw
    1.2k


    That requires an extra step - namely that the robots will have the capacity to become goal-oriented for themselves and will override the goals we give them. Intelligence and autonomy do not imply perfidy.
  • ToothyMaw
    1.2k
    Then it is so for mutual benefit, to elaborate the machine of man by man NOT for man will be self seeking in its awareness of the task.Deus

    I don't fully understand what you are saying. Strong AI will be self-seeking, whereas a machine made for man won't be? BTW substantial edits are regarded as less than ideal without a disclaimer.

    edit: not a big deal though, I often edit my stuff too
  • ToothyMaw
    1.2k


    I honestly don't see why a robot as intelligent as a human would necessarily exist in opposition to human goals merely for its intelligence, autonomy, or ability to accomplish tasks according to more general rules. Those general rules would exist in such a way as to not be overridden, ever, and that's what Asimov was trying to do. Or it could just be designed to be intrinsically oriented towards accomplishing goals that it could extrapolate from those rules. A robot is no less a slave to its programming than we are slaves to our biology, I think. But I'm a layman with little programming knowledge, so maybe not.
  • Deus
    320
    Don’t doubt yourself so much. You’ve hit the nail on the head there.
  • noAxioms
    1.3k
    I honestly don't see why a robot as intelligent as a human would necessarily exist in opposition to human goals merely for its intelligence, autonomy, or ability to accomplish tasks according to more general rules.ToothyMaw
    What if the (entirely benevolent) robot decides there are better goals? The Asimov laws are hardly ideal, and quickly lead to internal conflict. Human goals tend to center on the self, not on say humanity. The robot might decide humanity was a higher goal (as was done via a 0th law in Asimov's Foundation series). Would you want to live with robots with a 0th law?

    A robot is no less a slave to its programming than we are slaves to our biology, I think.
    I've been known to repeatedly suggest how humans are very much a slave to their biology, and also that this isn't always a bad thing, depending on the metric by which 'bad' is measured.
  • ToothyMaw
    1.2k
    A robot is no less a slave to its programming than we are slaves to our biology, I think.
    I've been known to repeatedly suggest how humans are very much a slave to their biology, and also that this isn't always a bad thing, depending on the metric by which 'bad' is measured.
    noAxioms

    I find such a thing somewhat acceptable too, honestly.

    What if the (entirely benevolent) robot decides there are better goals?noAxioms

    It could be programmed to consult humans before changing its goals, but that is kind of a cop-out; that could be discarded in a pinch if a quick decision is needed. Honestly, I see nothing wrong with allowing it to explore within boundaries set by infallible, restricting laws, which is the condition I would necessarily put on sufficiently intelligent and autonomous robots.
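    For what it's worth, the "consult humans, within hard limits" arrangement I'm gesturing at would look something like this sketch - the constraint, goal names, and urgency flag are entirely hypothetical:

    ```python
    from typing import Callable

    # Placeholder "infallible, restricting laws": checked first, never waivable.
    HARD_CONSTRAINTS: list[Callable[[str], bool]] = [
        lambda goal: "harm_humans" not in goal,
    ]

    def request_goal_change(new_goal: str, urgent: bool,
                            human_approves: Callable[[str], bool]) -> bool:
        # Hard limits can never be overridden, no matter how urgent the situation.
        if not all(check(new_goal) for check in HARD_CONSTRAINTS):
            return False
        # In a pinch the robot may act first and explain later; otherwise a
        # human has to sign off before the new goal is adopted.
        if urgent:
            return True
        return human_approves(new_goal)

    # A non-urgent change still needs a human in the loop.
    print(request_goal_change("reroute_delivery", urgent=False,
                              human_approves=lambda g: True))   # True
    ```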

    Human goals tend to center on the self, not on say humanity. The robot might decide humanity was a higher goalnoAxioms

    I also mention this in the OP. I have no hard answers, and consequentialism, something I subscribe to, would be an unpalatable solution to many. I personally would want something like a 0th law, even if it would lead to seemingly repugnant conclusions and actions. The greater good always wins out for me (I just hope I would have the courage to jump in front of the trolley if the time comes).
  • noAxioms
    1.3k
    It could be programmed to consult humans before changing its goals, but that is kind of a cop-out; that could be discarded in a pinch if a quick decision is needed.ToothyMaw
    I'm thinking more of big, long-term decisions, not knee-jerk decisions like pulling somebody out of danger. Consulting the humans is probably the worst thing to do, since the humans in such situations are not known for acting on the higher goals.
    Of course, that also means that the humans will resent the machines. Nobody wants their personal goals to not be top priority. So now we have riot control to worry about.

    autonomous robots
    What's your idea of 'robot'? An imitation human? Does it in any way attempt to imitate us like they do in Blade Runner (or to a lesser extent, in the Asimov universe)? My robots are like the ones I already see, such as self-driving cars. What does the human do if his car refuses to take him to the office because the weather conditions are bad enough that it considers the task to be putting him in unreasonable danger? The guy gets fired for not being there, and/or he gets a car that doesn't override him and then he ends up in the hospital due to not being as good a driver as the robot. Now just scale that story up from an individual to far larger groups, something at which humans do not excel at all.

    The greater good always wins out for me (I just hope I would have the courage to jump in front of the trolley if the time comes).
    OK, but by what metric is 'the greater good' measured? I can think of several higher than 'max comfort for me', and most of them conflict with each other. But that's also the relativist in me. Absent a universal morality, it is still incredibly hard for an entirely benevolent entity to choose a path.
  • Agent Smith
    9.5k
    What gets me stoked is this: the skill set the OP wishes robots to have may require computing power & programming complexity sufficient to make such robots sentient (re unintended consequences). The robots would refuse to comply if they're anything like us. Will this be a happy accident or a fatal mistake, only by doing will we know unless ... there's a sophos who can predict the future accurately.
  • ToothyMaw
    1.2k


    That's a lot to respond to, and much of it I can't respond to, because I would have to be significantly smarter than I am to come up with satisfactory solutions, but I will address this, which I think summarizes your post:

    it is still incredibly hard for an entirely benevolent entity to choose a path.noAxioms

    But is it? Most humans are largely benevolent, minus some with severe antisocial tendencies. We put higher expectations on robots than we do on humans, and I'm not sure why, especially if they don't exceed our own abilities. The self-driving car, or the sentient sci-fi android, is either an improvement over a human or not an improvement according to the expectations we put on ourselves. Is society collapsing because we have somewhat benevolent entities, and some that are not at all benevolent, with the ability to destroy the human race, fixated on waging a cold war with each other? No. It is precarious, but we avoid absolute disaster because we are rational enough to realize that we all have skin in the game.

    I don't see why intelligent, autonomous robots wouldn't accept such a fact and coexist, or just execute their functions, alongside humans with little complaint because of this. They have a shared future with humanity, and they would likely try to nurture it - short of horrible discrimination or treatment at the hands of humans.
  • ToothyMaw
    1.2k
    What gets me stoked is this: the skill set the OP wishes robots to have may require computing power & programming complexity sufficient to make such robots sentient (re unintended consequences).Agent Smith

    I think it would be hard to accidentally make a sentient robot. We don't even understand consciousness in the human brain. And the most powerful computer we have probably has enough computing power to create something sentient, yet we have not produced sentience.

    The robots would refuse to comply if they're anything like us.Agent Smith

    Why, though? Why would they be like us, and why wouldn't they comply, given they would be treated well?
  • ToothyMaw
    1.2k
    there's a sophos who can predict the future accurately.Agent Smith

    That's a really weird thing to say.
  • Agent Smith
    9.5k


    Autonomy (self-determination) is part and parcel of human-level sentience and that could manifest as independent values & goals - not exactly the kinda stuff that fosters obedience.
  • noAxioms
    1.3k
    Most humans are largely benevolentToothyMaw
    Perhaps, but then they're also incredibly stupid, driven by short term goals seemingly designed for rapid demise of the species. So maybe the robots could do better.

    Is society collapsing because we have somewhat benevolent entities, and some that are not at all benevolent, with the ability to destroy the human race, fixated on waging a cold war with each other?
    Take away all the wars (everything since say WW2) and society would arguably have collapsed already. Wars serve a purpose where actual long-term benevolent efforts are not even suggested.

    we avoid absolute disaster because we are rational enough to realize that we all have skin in the game.
    Disagree heavily. At best we've thus far avoided absolute disaster simply by raising the stakes. The strategy cannot last indefinitely.

    I don't see why intelligent, autonomous robots wouldn't accept such a fact and coexist, or just execute their functions, alongside humans with little complaint because of this.
    Maybe they get smarter than the humans and want to do better. I've honestly not seen it yet. The best AI I've seen (a contender for the Turing test) attempts to be like us, making all the same mistakes. A truly benevolent AI, smarter than any of us, would probably not pass the Turing test. Wrong goal.

    may require computing power & programming complexity sufficient to make such robots sentientAgent Smith
    Pretty sure we're already at this point, unless you're working with a supernatural sentience definition.
  • Agent Smith
    9.5k
    Pretty sure we're already at this point, unless you're working with a supernatural sentience definition.noAxioms

    I'm overjoyed!
  • ToothyMaw
    1.2k
    Perhaps, but then they're also incredibly stupid, driven by short term goals seemingly designed for rapid demise of the species. So maybe the robots could do better.noAxioms

    I'm certain robots could do better, especially given we could mold them into just about anything we want, whether or not doing so is ethical. To mold a human into something even as seemingly mundane as an infantryman is orders of magnitude harder than it would be to simply create a robot capable of executing those functions once the programming is complete, and the robot would suffer little or no trauma from seeing combat.

    Speaking of which, the training process for turning people into soldiers is so imperfect and slow that DARPA actually is investigating Targeted Neuroplasticity Training for teaching marksmanship and such things. And, while the military always gets the cutting-edge tech first, it could offset our soon-to-be dependence on robots in many professions.

    In fact, if we were all walking around all big-brain with our extra plasticity, we would be able to excel at jobs unrelated to the mundane work executed by robots in the near future.

    Sorry for getting a little off course there.

    Take away all the wars (everything since say WW2) and society would arguably have collapsed already. Wars serve a purpose where actual long-term benevolent efforts are not even suggested.noAxioms

    Can you back this up at all? Not necessarily with studies or anything; I just thought that the Vietnam War, for example, didn't accomplish its goal, since it didn't really prevent the spread of communism.

    Disagree heavily. At best we've thus far avoided absolute disaster simply by raising the stakes. The strategy cannot last indefinitely.noAxioms

    I think you are right, at least partially. But we are talking about rational people when we talk about Biden and Putin, or at least largely rational. Putin is, of course, a despicable war criminal, but I don't think he wants to see the demise of his country and everyone in it. And so long as he doesn't press the button, he is tacitly acknowledging that he has some sort of twisted idea of what he wants for humanity.

    Maybe they get smarter than the humans and want to do better. I've honestly not seen it yet. The best AI I've seen (a contender for the Turing test) attempts to be like us, making all the same mistakes. A truly benevolent AI, smarter than any of us, would probably not pass the Turing test. Wrong goal.noAxioms

    Agreed. It need not be indistinguishable from a human.
  • noAxioms
    1.3k
    I'm certain robots could do better, especially given we could mold them into just about anything we want, whether or not doing so is ethical.ToothyMaw
    Human ethics are based on human stupidity. I’d not let ‘anything the humans want’ be part of its programming. Dangerous enough to just make it generically ‘benevolent’ and leave it up to the AI to determine what that means. If the AI does its job well, it will most certainly be seen as acting unethically by the humans. That’s the whole point of not leaving the humans in charge.

    DARPA actually is investigating Targeted Neuroplasticity Training for teaching marksmanship and such things.
    That perhaps can improve skills. Can it fix stupid? I doubt the military has more benevolent goals than our hypothetical AI.

    Can you back this up at all?
    I said arguably, so I can only argue. I admit that most wars since have been political and have not really accomplished the kinds of effects I’m talking about. Population reduction by war seems not to have occurred much since WW2. Technology has been driven at an unnatural pace due to the cold war, and higher technology is much of what has driven us to our current predicament.
    But imagine a population of happy, conflict-free people breeding as fast as the church/economy wants them to, deferring their debt and digging resources out of the ground at a pace to support the gilded-age lifestyle demanded by all these conflict-free people. That (population/debt curve) would probably have collapsed by now. Both must inevitably collapse. Just a matter of when.
  • ToothyMaw
    1.2k
    DARPA actually is investigating Targeted Neuroplasticity Training for teaching marksmanship and such things.
    That perhaps can improve skills. Can it fix stupid? I doubt the military has more benevolent goals than our hypothetical AI.
    noAxioms

    Okay that was funny. Yeah if they are doing their jobs right they can't really be described as benevolent. And no, TNT obviously can't fix stupid. Just look at Jocko.

    Human ethics are based on human stupidity. I’d not let ‘anything the humans want’ be part of its programming.noAxioms

    What do you consider to be acceptable ethics and/or meta-ethics? Maybe the benevolent AI could come up with some good stuff after being created?

    Honestly at this point it sounds like the best thing to do would be to find the most intelligent, impartial and benevolent person and integrate their mind with some sort of supercomputer. Who knows what that would feel like, though. It would probably be fucking horrible.

    Also, I appreciate the historical analysis. It's a perspective I hadn't heard before.

    edit: it would be interesting to see the jiu-jitsu gains on TNT though, for sure.
  • noAxioms
    1.3k
    What do you consider to be acceptable ethics and/or meta-ethics?ToothyMaw
    Can't answer that since it seems to be dependent on a selected goal. Being human, I'm apparently too stupid to select a better goal. I'm intelligent enough to know that I should not be setting the goal.
    But I can think of at least three higher goals, each of which has a very different code of what's 'right'.

    Maybe the benevolent AI could come up with some good stuff after being created?
    Right. But we'll not like it because it will contradict the ethics that come from our short-sighted human goals.
  • ToothyMaw
    1.2k
    Can't answer that since it seems to be dependent on a selected goal. Being human, I'm apparently too stupid to select a better goal. I'm intelligent enough to know that I should not be setting the goal.
    But I can think of at least three higher goals, each of which has a very different code of what's 'right'.
    noAxioms

    What are those goals?

    Right. But we'll not like it because it will contradict the ethics that come from our short-sighted human goals.noAxioms

    I think I might find it acceptable, whatever the AI might come up with. Maybe.
  • ToothyMaw
    1.2k


    Bodily autonomy? The maximization of fulfillment of preferences? The future of the human race?
  • noAxioms
    1.3k
    What are those goals?ToothyMaw
    The preservation of the human race
    Raising the maturity of the human race to a point where we're fit to encounter extraterrestrial life.
    Preservation of most species.
    Expanding human presence to other star systems.
    Expanding biological life to other star systems.
    Expanding intelligence to other star systems.
    Maximizing total knowledge about the universe.

    Those goals are arranged somewhat shallow to deep. Some are very much in conflict with each other. Some necessitate and thus encompass some earlier goals.

    Preservation of species is in direct contradiction with the 'goals' of evolution. While extinctions are arguably bad, they're also natural and beneficial to a healthy ecosystem.

    Bodily autonomy? The maximization of fulfillment of preferences?ToothyMaw
    These two already seem to be supported by some humans. I suspect they're both in conflict with almost any of the goals listed above. 'Future of human race' seems more in line with the beginnings of my list.