• Perdidi Corpus
    17
    How can we get AI to be safe?

    Throw in your ideas and commence debate!
  • Akanthinos
    267


By making it lazy.
You shouldn't fear the red-button scenario any more than a mother should fear her kid stabbing her when she denies him cookies. In any case, the risk recedes if the AI is structured in a lazy, predictive-processing format: it runs multiple strategies at the same time and eliminates them based on warning triggers and excessive complexity.
  • TheMadFool
    1.7k
How can we get AI to be safe? — Perdidi Corpus

We humans are unable to solve so many real-world problems. We try, yes, but what usually happens is a loss of interest in the issue rather than a good solution to the problem. How can an imitation (AI) do better than the real deal?
  • ArguingWAristotleTiff
    1.4k
How can we get AI to be safe? — Perdidi Corpus

Someone is going to have to set the ethical guidelines for AI in practical applications like self-driving cars. Humans are going to have to run scenarios through algorithms and come up with acceptable parameters using a cost/risk ratio: factors like how many people are in the self-driving car versus the motorcycle carrying two people, and who survives if a fatal accident is about to occur. And what if the passengers of the self-driving car survive and the cycle riders die? I would anticipate a lawsuit that would put those human choices, programmed into the self-driving car, through their paces. Until then, AI is like the Wild West, where anything goes until someone challenges it.
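    The cost/risk weighing described here could be sketched in code. This is a purely hypothetical illustration, not anything any manufacturer actually uses: every maneuver name, risk probability, and weight below is invented, and it assumes the crude utilitarian rule of minimizing expected fatalities while weighting every life equally.

    ```python
    # Hypothetical sketch of a cost/risk scenario evaluator for a
    # self-driving car. All names and numbers are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Maneuver:
        name: str
        occupant_risk: float   # probability a car occupant is fatally injured
        occupants: int         # people in the self-driving car
        bystander_risk: float  # probability a bystander is fatally injured
        bystanders: int        # people outside the car (e.g. on a motorcycle)

    def expected_harm(m: Maneuver) -> float:
        """Expected fatalities, weighting everyone's life equally."""
        return m.occupant_risk * m.occupants + m.bystander_risk * m.bystanders

    def choose(maneuvers: list[Maneuver]) -> Maneuver:
        """Pick the maneuver with the lowest expected harm."""
        return min(maneuvers, key=expected_harm)

    options = [
        Maneuver("brake hard", occupant_risk=0.05, occupants=1,
                 bystander_risk=0.30, bystanders=2),
        Maneuver("swerve into barrier", occupant_risk=0.40, occupants=1,
                 bystander_risk=0.02, bystanders=2),
    ]
    best = choose(options)
    print(best.name)  # "swerve into barrier": expected harm 0.44 vs 0.65
    ```

    The lawsuit scenario in the post is exactly a challenge to the choice of `expected_harm`: should occupants and bystanders really be weighted equally, and who decides?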
  • Galuchat
    317
    How can we get AI to be safe? — Perdidi Corpus

    Include a Right Social Action-Behaviour program.
    This would only be possible given:
    1) The ability to identify rational alternatives and assign each a moral value, and
    2) Sufficient processing capacity.

    Right Social Action-Behaviour: the faultless execution of rational social action-behaviour.

    Rational Social Action-Behaviour: social action-behaviour based on the greater/greatest moral value of rational alternatives.

Faultless execution of Rational Social Action-Behaviour may be achieved through the implementation of one or more approaches.

    Approach Types:
    1) General Approaches
    a) Master Rule Approach: the derivation of particular rules from a master rule (e.g., the Golden Rule).
    b) Method Approach: the derivation of particular rules from a methodological principle (e.g., whether or not an option satisfies fundamental human needs).

    2) Particular Approach
    a) Virtue Approach: reference to particular rules contained in a standard (e.g., moral code, value system, etc.).

    Properties:
    1) Moral value and Right Social Action-Behaviour Approach must be based on the same principle(s) (e.g., the satisfaction of fundamental human needs).
    2) The exigencies of a social situation determine the type of processing required (i.e., automatic and/or controlled), and therefore which Right Social Action-Behaviour approach is most suitable.
    a) The application of a Master Rule Approach is suitable for automatic processing.
    b) The application of a Method Approach is suitable for a combination of automatic and controlled processing.
    c) The application of a Virtue Approach is suitable for controlled processing.

Daniel Kahneman has defined the properties of automatic and controlled processing in terms of human cognition. Since I am familiar with cognitive psychology, but not with AI technologies, I cannot say how both types of human processing could be computationally implemented, or whether both are even required. The quantification of morality has been an ongoing interest of mine, and it has other applications.
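    Galuchat's framework has a clear structure that can be sketched in code: assign each rational alternative a moral value on a shared principle (premise 1), then let the situation's processing type select the approach (Property 2). Everything below is an invented illustration, with stub scores in place of a real theory of moral value.

    ```python
    # A minimal, hypothetical sketch of the Right Social Action-Behaviour
    # framework. Alternative names and scores are invented for illustration.
    from enum import Enum

    class Processing(Enum):
        AUTOMATIC = "automatic"    # fast, reflexive (Kahneman's System 1)
        MIXED = "mixed"            # automatic plus deliberation
        CONTROLLED = "controlled"  # slow, deliberate (System 2)

    # Property 2: the exigencies of the situation determine the processing
    # type, which in turn selects the most suitable approach.
    APPROACH_BY_PROCESSING = {
        Processing.AUTOMATIC: "Master Rule Approach",
        Processing.MIXED: "Method Approach",
        Processing.CONTROLLED: "Virtue Approach",
    }

    def moral_value(alternative: str) -> float:
        """Stub for premise 1: assign each rational alternative a moral
        value on a common principle (e.g. satisfaction of fundamental
        human needs). Scores here are purely illustrative."""
        scores = {"warn": 0.6, "intervene": 0.9, "ignore": 0.1}
        return scores.get(alternative, 0.0)

    def right_action(alternatives: list[str],
                     processing: Processing) -> tuple[str, str]:
        """Pick the greatest-value alternative and the approach applied."""
        best = max(alternatives, key=moral_value)
        return best, APPROACH_BY_PROCESSING[processing]

    action, approach = right_action(["warn", "intervene", "ignore"],
                                    Processing.AUTOMATIC)
    print(action, "via", approach)  # intervene via Master Rule Approach
    ```

    The hard part, of course, is not the selection logic but a defensible `moral_value` function — which is where the "quantification of morality" Galuchat mentions would have to do the real work.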
  • Cavacava
    1.8k


How can we get AI to be safe? — Perdidi Corpus

Can't and won't happen; nothing can stop killer robots from happening, and the smarter they get, the worse the danger to humanity.

The military does not like sending death notices to parents, wives, and children. The only one to get notified when a robot bites the dust is the supply officer.

  • Galuchat
    317
Can't and won't happen; nothing can stop killer robots from happening, and the smarter they get, the worse the danger to humanity. — Cavacava

    Killer (i.e., military) robots could actually be safer than human military personnel if programmed to protect the life (viability) of non-combatant humans and AIs.

    Consider the psychological damage to human military personnel incurred as a result of active duty, and its consequences upon return to civilian life.
  • Akanthinos
    267
Can't and won't happen; nothing can stop killer robots from happening, and the smarter they get, the worse the danger to humanity.

The military does not like sending death notices to parents, wives, and children. The only one to get notified when a robot bites the dust is the supply officer.
    Cavacava

The U.S. Army has already tried a motorised land combat drone system in Afghanistan. It took the Taliban only a few days to figure it out and start sending kids with spray paint to cover the drones' camera lenses. The military scrapped that model after losing a few hundred million dollars in development money to prepubescents with access to a hardware store.

There are legitimate concerns about the development of swarm drones and semi-autonomous drones with kill capacity. Killer AIs overtaking the planet should not be one of them.
  • Dominick Villegas
    1
    Slugs cannot imagine. Humans cannot imagine "nothing". Maybe we can program cognitive horizons that make the will to do harm an impossibility.

Welcome to The Philosophy Forum!
