• Shawn
    12.6k
    I have read differing arguments about the potential utility of AI and the negative issues it may raise. There are many concerns about the 'control problem' of AI, or how to 'sandbox' AI so it serves our interests and a fast takeoff doesn't leave AI the sole power around the globe. However, I think it would be an exercise in futility to try to 'control' AI in any way or sandbox it somehow. I think the most important trait is for us and AI to be able to relate to one another.

    I've never seen an argument made in favor of equipping AI with positive human emotions like compassion, empathy, altruism, and so on. I believe these are valuable traits, and that an AI could plausibly have such feelings via a simulation of the human brain.

    Does anyone else think this is a beneficial idea? It really isn't that farfetched and I think is the safest version of Artificial General Intelligence that can be created with human interests in mind. In essence, it would be able to identify with us in some regards, and that's what's really important, at least in my mind.
  • Baden
    15.6k
    It really isn't that farfetched...Posty McPostface

    Really? Based on what research? AI experts can't even come up with a decent chatbot. What you're talking about is building a functioning person.
  • Michael Ossipoff
    1.7k


    You may be sure that, when AI is well developed and put to military use, and to the control of populations, it won't be designed for compassion, empathy, or altruism.

    Maybe with a little luck, that time won't occur during our lifetimes.


    With regard to AI, that's really our best hope.

    Michael Ossipoff
  • fishfry
    2.6k
    I've never seen an argument made in favor of equipping AI with positive human emotions like compassion, empathy, altruism, and so on.Posty McPostface

    I saw a simultaneously funny and chilling cartoon about that the other day. Two lab workers are talking about how their robot will be perfect once they give it human emotions. In the next panel the robot is using a magnifying glass and the sun to kill ants, as the workers look on in dismay.

    True, right? You take a good hard look at human emotions. The front page of today's newspaper will do. You really want machines to feel like us??
  • ArguingWAristotleTiff
    5k
    Does anyone else think this is a beneficial idea? It really isn't that farfetched and I think is the safest version of Artificial General Intelligence that can be created with human interests in mind. In essence, it would be able to identify with us in some regards, and that's what's really important, at least in my mind.Posty McPostface

    My question would be "beneficial" to whom? For the human, I imagine the AI could pick up enough patterns of human behavior, catalog them, and form some sort of algorithm that would get AI pretty close to predicting what a human would think. But what humans feel through emotions? What their beliefs are, and how to process them using the discipline of Philosophy? I am not so convinced.

    I think of AI being used in the setting of Hospice, where long hours are spent with patients and family members, often explaining the same process over and over. But not everyone hears it the same way, or is present for the first explanation. In that situation, I could see an AI taking over that repetition, so that the Doctors and Nurses who gave the explanation are freed for actively dying patients rather than repeating, again and again, what is happening to someone's loved one who is not their current focus patient. It is not to replace emotion, but rather to convey information in a gentle but clear way.
  • fishfry
    2.6k
    I think of AI being used in the setting of HospiceArguingWAristotleTiff

    Have machines keep the dying company. That's a fate I wouldn't wish on my worst enemy. Is that how you would like to go? They hook you up to a morphine drip and wheel in a chatbot?
  • BC
    13.2k
    AI experts can't even come up with a decent chatbot.Baden

    This is a very cogent point.

    People forget that the animating intelligences within computers, robots, etc. are... humans. The "effect" of "intelligence" is altogether misleading. Watson succeeded at Jeopardy because of the many, many hours humans spent loading its database with Jeopardy-type facts, in a format the data processor could handle, following a program that... humans wrote.

    A "self-driving car" isn't smart: it's processing incoming data and executing instructions that fit the data. Did the self-driving car learn how to do that by itself? Of course not. It has taken large teams of workers years and years to come up with the hardware and software that enables a car to drive itself even fairly well.

    I'm in favor of developing better computers. I'd like to have a computer that can recognize that it's me sitting in front of it, remember how I like to do things, figure out what my favorite music is -- without me having to tell it that xyz song is a favorite. If I play it 15 times, can't it draw any conclusions from that? Not so far, it can't.
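    The "if I play it 15 times" idea is actually the easy part, and a minimal sketch shows why: it's just counting, not intelligence. Here is a hypothetical Python fragment (the 15-play threshold and the `infer_favorites` helper are illustrative inventions, not any real player's API) that "concludes" a song is a favorite purely from play history:

    ```python
    from collections import Counter

    # Hypothetical sketch: flag favorites from play history alone,
    # without the user ever labeling a song. The threshold of 15
    # plays is arbitrary, echoing the example above.
    FAVORITE_THRESHOLD = 15

    def infer_favorites(play_log):
        """play_log: list of song titles, one entry per play."""
        counts = Counter(play_log)
        return sorted(song for song, n in counts.items()
                      if n >= FAVORITE_THRESHOLD)

    log = ["xyz song"] * 15 + ["other song"] * 3
    print(infer_favorites(log))  # only the 15-times-played song qualifies
    ```

    Which rather makes the point: tallying plays is trivial bookkeeping a human programmer wrote down in advance, not the machine drawing its own conclusions.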

    But "better computers" won't be artificial intelligence--if we even know what artificial intelligence would be.

    How would artificial intelligence differ from natural intelligence?