• Cavacava
    1.6k
    Here is a road map for an emotional, creative, social, AI.
    [image: road map for an emotional, creative, social AI]

    Military applications are frightening.


    An article about this appeared in today's Bloomberg:
    https://www.bloomberg.com/view/articles/2017-09-05/take-elon-musk-seriously-on-the-russian-ai-threat
    Attachment: affective robot roadmap (98K)
  • szardosszemagad
    82
    Road map is nice and symmetrical. A beauty. Too complex for me to examine. Arrows can mean anything -- processes, feedback, feed-forward, reaction, action, feeling, emotion, motivation. Too much. I like a two-way road map, such as two arrows pointing in opposite directions, side by side, parallel to each other. THAT I understand. To understand this road map would take a long time and the reading of a magazine article, and even that is no guarantee I'd understand it when all is said and done.

    Clearly, my ineptitude, not that of the author of the diagram.

    My point is, one should always make a point that he or she understands; otherwise the point is lost even on its maker. This one is clearly lost on me.
  • Cavacava
    1.6k
    I am certainly no expert, which is why I referenced the article and the short seven-page paper written by people who have a much better understanding.
  • szardosszemagad
    82
    I apologize; I was not referring to you at all when I mentioned that "some people don't understand this chart." After all, I don't know you, and up to the point of your most recent post I had no clue what you knew and what you didn't. But it was nice of you to volunteer that you are no expert either. :-)
  • praxis
    234
    The line that I found disturbing:

    Whoever becomes the leader in this area [AI] will rule the world. — Vladimir Putin

    Didn't know there was an AI arms race. If it's true I imagine it will greatly accelerate development.
  • Cavacava
    1.6k


    Yes, did you catch Elon Musk's tweet from yesterday?

    China, Russia, soon all countries w strong computer science. Competition for AI superiority at national level most likely cause of WW3 imo.
    5:33 AM - Sep 4, 2017

    He and 116 other international technologists have asked the UN to ban autonomous weapons (killer robots). The UK has already said it would not participate in a ban, and I am sure that the US, China, Russia, and others will not participate either.
  • MikeL
    236
    Didn't know there was an AI arms race. If it's true I imagine it will greatly accelerate development. -- praxis

    And to think this site is one of the biggest philosophy sites on the net. If I were trying to get an edge or new insight, I'd be reading your posts very carefully. :)
  • praxis
    234
    Well, being dim-witted and poorly informed can have its advantages at times. For instance, I now realize that I've been enjoying the naive notion, in the back of my mind, that AI could be used for good rather than as a tool for the rich and powerful to acquire more wealth and power.

    People suck. :’(
  • Nelson
    8
    For a robot to truly feel scared it must truly be aware, like a human. My definition of being aware is: being able to act and think without input, to have illogical feelings develop out of the self (scientists programming illogical feelings into you would therefore not count), to be able to remember, and to be able to act on said memories. The amount of awareness a creature possesses is determined by how much it fits these criteria. The robot is feeling real fear if it has all of these qualities.
  • MikeL
    236
    My definition of being aware is: being able to act and think without input, to have illogical feelings develop out of the self -- Nelson
    Hi Nelson,
    This is a well-considered point.

    Why wouldn't a program that allows for illogical feelings count, though? Scientists are, after all, designing the 'self'. When the weighting of an electro-neural signal exceeds a threshold of 8, for example, we may program the neuron to start firing off in random fashion to random connections.
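
    Something like this minimal sketch, say (the threshold of 8, the connection names, and the random fan-out rule are just illustrative assumptions on my part, not anyone's actual design):

    import random

    class NoisyNeuron:
        """Fires predictably below a threshold, but to random connections above it."""

        def __init__(self, connections, threshold=8.0):
            self.connections = connections  # downstream units we may signal
            self.threshold = threshold      # weighting above which behaviour turns erratic

        def receive(self, weighted_signal):
            if weighted_signal <= self.threshold:
                # Ordinary, predictable behaviour: pass the signal to every connection.
                return [(c, weighted_signal) for c in self.connections]
            # 'Illogical' regime: fire random strengths at a random subset of connections.
            targets = random.sample(self.connections, k=random.randint(1, len(self.connections)))
            return [(c, random.uniform(0.0, weighted_signal)) for c in targets]

    # The same over-threshold input can produce a different output every time.
    neuron = NoisyNeuron(connections=["fear", "avoidance", "curiosity"])
    print(neuron.receive(5.0))   # deterministic
    print(neuron.receive(9.5))   # random fan-out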
  • Nelson
    8

    I agree. A program that allows for illogical feelings is okay as long as the programmer cannot predict what it will result in. I also think there is a difference between the robot actually having a faulty thought process that results in illogical feelings and illogical feelings just popping into existence. It is those faulty processes that will determine the personality of the robot. For example, two robots with the same code will have different personalities.
  • MikeL
    236
    So, are you saying that our own illogical feelings just 'pop' into existence without a neurochemical or coded cause?
  • Nelson
    8
    I think that illogical feelings need to have a thought process behind them, no matter how faulty. If the robot was scared of pie because it had seen a video where someone choked on pie and died, that would be better than it suddenly being scared of pie. I have no problem with the process popping into existence, but I don't believe that the result and/or answer should.
  • MikeL
    236
    Phobias. If we could program the computer to learn and adapt information from its environment, increasing its probability of survival by evoking a fear response, then the only way a fear response in a robot could become a phobia is if it were a fault. Is that your position?

    Interesting. If the robot had previously learnt that eating pies was safe, but then saw a person die from eating a pie, is it fair to create an extreme fear response when presented with a pie to eat, rather than weighting it at a 'be aware' or simple-avoidance level? You say not. You say the pairing of the pie with the fear response is unjustified [it is over-weighted] and therefore the code or wiring is faulty.

    Perhaps if the robot did not understand how the person died (maybe the pie killed them), then the weighting is justified: a pie might kill it too. Then, if you explained to the robot why the person had died, should the robot reduce the weighting on the fear response, especially if it too could suffer the same fate? Is that the reasonable thing to do? Can we do it? (A rough sketch of this kind of weighting follows at the end of this post.)

    What if reason and fear are not directly interwired? Do you wish to give the robot total control over all of its internal responses, rather than grouping them into subroutines with 'push me now' buttons on them? The robot would spend all day just trying to focus on walking across the living room.

    What is the distinction from a person who reacts this same way? They too have over-weighted their code. The shock of seeing a death caused by a pie, or any other death, has over-weighted their perception of pies. Perhaps their friend was the one who died. The entire incident caused the weighting, but the focus was on the guilty pie.

    Do you see it differently?

    Of course you could also condition the robot to be afraid of the pie by beating it with a stick every time it saw a pie, but that is a slightly different matter.
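
    To make the weighting talk concrete, here is a minimal sketch of the kind of bookkeeping I have in mind (the stimulus names, the thresholds, and the boost/relief numbers are invented purely for illustration):

    class FearModel:
        """Keeps a fear weight per stimulus; weights rise on bad experiences and can be argued back down."""

        AVOID_THRESHOLD = 0.3   # above this, the robot is wary of the stimulus
        PHOBIA_THRESHOLD = 0.8  # above this, the response looks like a phobia

        def __init__(self):
            self.weights = {}   # stimulus -> fear weight in [0, 1]

        def witness_harm(self, stimulus, severity):
            """Seeing harm associated with a stimulus raises its fear weight."""
            current = self.weights.get(stimulus, 0.0)
            self.weights[stimulus] = min(1.0, current + severity)

        def explain(self, stimulus, relief):
            """A convincing explanation (the person choked; the pie itself was harmless) lowers the weight."""
            current = self.weights.get(stimulus, 0.0)
            self.weights[stimulus] = max(0.0, current - relief)

        def reaction(self, stimulus):
            weight = self.weights.get(stimulus, 0.0)
            if weight >= self.PHOBIA_THRESHOLD:
                return "phobic refusal"
            if weight >= self.AVOID_THRESHOLD:
                return "cautious avoidance"
            return "no fear"

    robot = FearModel()
    robot.witness_harm("pie", severity=0.9)   # watches someone die while eating pie
    print(robot.reaction("pie"))              # phobic refusal
    robot.explain("pie", relief=0.5)          # told the death was choking, not the pie
    print(robot.reaction("pie"))              # cautious avoidance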
  • praxis
    234

    This doesn't appear to be true. Dolphins have this ability. See: http://www.actforlibraries.org/animal-intelligence-how-dolphins-read-symbols/

    Also, dolphins can recognize themselves in a mirror, which suggests that they have a self-concept.
  • Nelson
    8
    Phobias. If we could program the computer to learn and adapt information from its environment, increasing its probability of survival by evoking a fear response, then the only way a fear response in a robot could become a phobia is if it were a fault. Is that your position? -- MikeL

    Yes, and that goes for all feelings. The robot may, through a faulty thought process, decide that it loved brick houses or hated cement even if it had no logical grounds.

    Interesting. If the robot had previously learnt that eating pies was safe, but then saw a person die from eating a pie, is it fair to create an extreme fear response when presented with a pie to eat, rather than weighting it at a 'be aware' or simple-avoidance level? You say not. You say the pairing of the pie with the fear response is unjustified [it is over-weighted] and therefore the code or wiring is faulty. -- MikeL

    What I'm saying is that the robot should be able, or even prone, to be illogical. The robot may develop a pie phobia or maybe it won't; it all depends on whether the robot judges the situation correctly. Without a chance to misjudge or to calculate situations badly, it would remain static, never change its personality, and always reply with the same answers. It would be as aware as a chatbot.

    Perhaps if the robot did not understand how the person died (maybe the pie killed them), then the weighting is justified: a pie might kill it too. Then, if you explained to the robot why the person had died, should the robot reduce the weighting on the fear response, especially if it too could suffer the same fate? Is that the reasonable thing to do? Can we do it? -- MikeL

    It's up to the robot to judge whether we are speaking the truth and whether it should listen. It may not be the reasonable thing to do, and that is the point.

    What if reason and fear are not directly interwired? Do you wish to give the robot total control over all of its internal responses, rather than grouping them into subroutines with 'push me now' buttons on them? The robot would spend all day just trying to focus on walking across the living room. -- MikeL

    That is the problem. If the robot were totally logical it would have total control over its feelings, and that is obviously not how real creatures work. And if reason is not connected with feelings, the feelings still have to be controlled by a programmed logical system. My solution is to give the robot a mostly logical thought process that sometimes misjudges and acts illogically. That way we get a bit of both.
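
    A crude sketch of what I mean, with the 5% misjudgment rate and the option scores made up purely for illustration:

    import random

    def judge(options, scores, misjudge_rate=0.05):
        """Pick the best-scored option most of the time, but occasionally misjudge and pick at random.

        Those occasional slips are what let two robots running the same code
        drift toward different 'personalities' over time.
        """
        if random.random() < misjudge_rate:
            return random.choice(options)             # the illogical slip
        return max(options, key=lambda o: scores[o])  # the logical default

    scores = {"eat the pie": 0.9, "avoid the pie": 0.6, "run away": 0.1}
    choices = [judge(list(scores), scores) for _ in range(1000)]
    print(choices.count("eat the pie"), "logical choices out of 1000")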
  • praxis
    234
    I think it's important to acknowledge the fundamental difference in physical structure between human intelligence and AI when considering emotions such as fear. The substrate that AI will be built on is significantly different, and that will affect its development in terms of emotion. The biology associated with human emotion deals with regulating energy and other biologically based needs. General AI doesn't have these requirements. You'd have to go out of your way to simulate them, like going out of your way to simulate flying like a bird, which may be aesthetically pleasing but is inefficient. And why would that be a desirable thing to do? Presumably we create AI to help accomplish our goals. If we were to encode a general AI with an imperative to replicate itself and to simulate emotions as we experience them, wouldn't that be dangerous and counterproductive to accomplishing those goals?

    It seems to me that we should intentionally avoid emotional AI, and perhaps even consciousness, or allow just enough consciousness to learn or adapt so that it can accomplish our goals.
  • Nelson
    8
    I totally agree. There is no real reason for emotional AI. But if we wanted to create a truly aware AI, it would have to be emotional.
  • MikeL
    236
    The robot may, through a faulty thought process, decide that it loved brick houses or hated cement even if it had no logical grounds. -- Nelson

    Hi Nelson, if a human has an illogical thought process, is that also the result of faulty wiring or code? What's the difference?
  • praxis
    234
    if we wanted to create a truly aware AI, it would have to be emotional. -- Nelson

    Of course, no one has consciousness figured out yet, and I'm certainly no expert, but I think I may know enough to caution against making too many assumptions about awareness and emotions.