• Devans99
    2.1k
    The aim of the singularity is the creation of a self-upgrading AI that will enter into an exponential cycle of discovery and self-improvement:

    https://en.wikipedia.org/wiki/Technological_singularity

    The idea is that all human problems would be solved in a very short time by such an entity. The conundrum arises when the machine considers what an ‘ideal world’ would look like - it might use logic like this:

- AI entities are superior to / happier than biological entities
- Therefore biological entities should be replaced with AI entities

Hence the plot of Terminator follows, with the machines wiping out mankind. Some questions:

    1. What would a machine with an IQ of a million make of a human?
    2. Would it regard us in the way we regard bacteria?
    3. Could a machine be built so that it has respect for all forms of intelligence? (whether computer or biological)
    4. Or would we always be in danger of a HAL 9000 type incident?
5. What should we do? AI could be our saviour, yet it may also destroy us.
  • arreno
    10
When the quantum computer arrives, and with it the algorithms to run on it, we will ..... (thinking)
  • Frank Apisa
    896
    WHEN AI comes to actual fruition...

...it will do its best to eliminate homo sapiens as the dominant entity on this planet...

    ...or it will NOT have come to actual fruition.

Any human who does not see the Ebola virus as a danger to humankind...is lacking in humanity and intelligence.

    Any AI that would not see humanity as a viral danger to this planet (wider range of thought)...is lacking in intelligence also.

    AI comes...we go. Probably a lot sooner than we think right now.
  • Devans99
    2.1k
"WHEN AI comes to actual fruition..." (Frank Apisa)

It could in theory happen at any time: some researcher somewhere comes up with a true AI. And with all the world's computers linked by the internet, a hostile AI that could replicate and upgrade its own software might cause chaos.

"...it will do its best to eliminate homo sapiens as the dominant entity on this planet..." (Frank Apisa)

If we had a fundamental mathematical definition of right and wrong (which I do believe is possible), if right and wrong were defined in terms of all conscious entities, and if this was all baked into the AI at a fundamental level, maybe things would be alright. Or maybe the AI would just categorise the human race as 'wrong' (look at how we treat animals) and seek our destruction.
  • whollyrolling
    427
    "The singularity" is a concept, and concepts don't have their own aims, because they're not sentient.

    The idea that all human problems will be solved by a machine that humans build within the next two decades has been put forth by a fringe handful of sensationalist pseudo-scientists in order to make noise and sell books. It's a farce.

    To answer your ridiculous questionnaire:

    1. Not much.
    2. It would respect us less than we respect bacteria.
    3. Not if it has 1 million IQ and thinks of humans as having less value than bacteria.
    4. We would not be in danger if we were extinct.
    5. We should do whatever we do, there isn't a choice.

  • ZhouBoTong
    456
"1. What would a machine with an IQ of a million make of a human?" (Devans99)

Hopefully it will think of us as its dumb little friends. Does its 1,000,000 IQ include the ability to learn concepts of morality? That is our best hope.

"2. Would it regard us in the way we regard bacteria?" (Devans99)

The way we regard bacteria today? Or the way we regarded them in the past? My only point here is that human morality has evolved to the point that some people consider it wrong to harm animals (even insects; nobody cares about bacteria yet, but we may someday).

"3. Could a machine be built so that it has respect for all forms of intelligence? (whether computer or biological)" (Devans99)

    I would think so, but if it has the ability to learn, it could "lose" the respect. Maybe some form of morality could encourage it to keep the respect?

"4. Or would we always be in danger of a HAL 9000 type incident?" (Devans99)

The only thing to protect us from HAL 9000 is HAL 8999 or HAL 9001. Whenever "the singularity" is created, there should be several copies. We pour a bunch of morality concepts into them and hope the "good" AIs protect us from the "bad".

"5. What should we do? AI could be our saviour, yet it may also destroy us." (Devans99)

Build it. Worth the risk (I don't have kids and don't plan to, so that may make it easier :smile:). Just make copies to slightly reduce the risk (a team of AIs would have no more ability to wipe us out than just one, so copies work as a risk reduction).
  • Purple Pond
    573
"What would a machine with an IQ of a million make of a human?" (Devans99)
    I don't think that mere intelligence would decide human worth. It's a value proposition to determine human worth, and that's partly emotional.
  • Devans99
    2.1k
"3. Not if it has 1 million IQ and thinks of humans as having less value than bacteria." (whollyrolling)

"I would think so, but if it has the ability to learn, it could 'lose' the respect. Maybe some form of morality could encourage it to keep the respect?" (ZhouBoTong)

- AIs based on neural networks need training. We should be able to train this type of AI into behaving morally.
- If we fitted the AI with an "off switch" for safety reasons, it would likely feel completely insecure and paranoid. Imagine if you had an off switch that someone else controlled. Maybe something like this is what motivated HAL 9000?
- Even if we had a moral AI, it could be a danger to humans who do not behave morally. For instance, it might try to wipe out all the non-vegetarians.
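The "train it into behaving morally" idea in the first bullet can be sketched as a toy reward-shaping loop. This is purely illustrative: the action names, the penalty values, and the bandit-style learner are all made up for the example, not a claim about how a real AI would be trained.

```python
import random

# Toy "reward shaping" sketch: an agent picks among hypothetical actions,
# and the action flagged as harmful carries a large negative reward (a
# hard-coded stand-in for a "morality" signal).
ACTIONS = ["help_human", "ignore_human", "harm_human"]
REWARD = {"help_human": 1.0, "ignore_human": 0.0, "harm_human": -100.0}

def train(episodes=2000, epsilon=0.1, lr=0.1, seed=0):
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}  # estimated value of each action
    for _ in range(episodes):
        # epsilon-greedy: mostly exploit the best-known action, sometimes explore
        if rng.random() < epsilon:
            a = rng.choice(ACTIONS)
        else:
            a = max(q, key=q.get)
        # incremental update toward the observed reward
        q[a] += lr * (REWARD[a] - q[a])
    return q

q = train()
print(max(q, key=q.get))  # the learned policy prefers the "moral" action
```

Of course, this only pushes the problem back a level: someone still has to write down the reward table, which is exactly the "mathematical definition of right and wrong" discussed above.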
  • whollyrolling
    427


    Did you just go watch 2001: A Space Odyssey and think to yourself "now I know everything there is to know about artificial intelligence"?
  • Devans99
    2.1k
I am an ex-computer programmer, so I have a limited amount of knowledge.
  • TogetherTurtle
    344
"Any AI that would not see humanity as a viral danger to this planet (wider range of thought)...is lacking in intelligence also." (Frank Apisa)

Why would AI care about the well-being of our planet? Such a consciousness would have the knowledge and capability to relocate itself to another planet, or even to empty space, if it wished. The reason we see the Ebola virus as a threat, and not other similarly sized organisms, is that the Ebola virus can actually hurt us. If AI is as strong as we think it will be, it will either see us as ants and ignore us, or wish to guide us like a benevolent god. Killing us is simply a waste of time and resources.
  • whollyrolling
    427


    You appear to have zero understanding in your comments.
  • Devans99
    2.1k
    Please enlighten us all.
  • whollyrolling
    427


    There are numerous online resources composed by people who are alleged to know what they're talking about.

    It's not my place to enlighten anyone, and not everyone can experience the enlightenment of which you speak.

Welcome to The Philosophy Forum!
