• Nelson
    8

    I agree. A program that allows for illogical feelings is okay as long as the programmer cannot predict what it will result in. I also think there is a difference between the robot actually having a faulty thought process that results in illogical feelings, and illogical feelings just popping into existence. It is those faulty processes that will determine the personality of the robot; for example, two robots with the same code will develop different personalities.
  • MikeL
    644
    So, are you saying that our own illogical feelings just 'pop' into existence without a neurochemical or coded cause?
  • Nelson
    8
    I think that illogical feelings need to have a thought process behind them, no matter how faulty. If the robot was scared of pie because it had seen a video where someone choked on a pie and died, that would be better than it suddenly being scared of pie. I have no problem with the process popping into existence, but I don't believe that the result and/or answer should.
  • MikeL
    644
    Phobias. If we could program the computer to learn and adapt to information from its environment, increasing its probability of survival by evoking a fear response, then the only way a fear response in a robot could become a phobia is if it were a fault. Is that your position?

    Interesting. If the robot had previously learnt that eating pies was safe, but then saw the person dying from eating the pie, is it fair to create an extreme fear response when presented with a pie to eat, rather than weighting it at a 'be aware' or simple-avoidance level? You say not. You say the pairing of the pie with the fear response is unjustified [it is overweighted] and therefore the code or wiring is faulty.

    Perhaps if the robot did not understand how the person died - maybe the pie killed them - then the weighting is justified. A pie might kill it. Then, if you explained to the robot why the person had died, should the robot reduce the weighting on the fear response, especially if it too could suffer the same fate? Is that the reasonable thing to do? Can we do it?
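
    In code, that weighting idea might look something like this - only a rough sketch, with a made-up FearResponse class and arbitrary threshold values:

        # Rough sketch only: a learned fear weight per stimulus, raised by what
        # the robot observes and lowered again when the event is explained.
        # The class name and thresholds below are invented for illustration.

        AVOIDANCE = 0.3   # 'be aware' / simple avoidance level
        PHOBIA = 0.8      # extreme fear response

        class FearResponse:
            def __init__(self):
                self.weights = {}   # stimulus -> fear weight in [0, 1]

            def observe(self, stimulus, severity, relevance):
                # severity: how bad the outcome looked; relevance: could it happen to me?
                w = self.weights.get(stimulus, 0.0)
                self.weights[stimulus] = min(1.0, w + severity * relevance)

            def explain(self, stimulus, discount):
                # Understanding why it happened reduces the weighting.
                self.weights[stimulus] = self.weights.get(stimulus, 0.0) * (1.0 - discount)

            def reaction(self, stimulus):
                w = self.weights.get(stimulus, 0.0)
                if w >= PHOBIA:
                    return "extreme fear"
                if w >= AVOIDANCE:
                    return "avoidance"
                return "calm"

        robot = FearResponse()
        robot.observe("pie", severity=1.0, relevance=0.9)  # watched someone die eating a pie
        print(robot.reaction("pie"))                       # extreme fear - overweighted?
        robot.explain("pie", discount=0.5)                 # told why the person really died
        print(robot.reaction("pie"))                       # avoidance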

    What if reason and fear are not directly interwired? Do you wish to give the robot total control over all of its internal responses, rather than grouping them into subroutines with 'push me now' buttons on them? The robot would spend all day trying to focus on walking across the living room.

    How is that different from a person who reacts the same way? They too have overweighted their code. The shock of seeing a death caused by a pie, or any other death, has overweighted their perception of pies. Perhaps their friend was the one that died. The entire incident caused the weighting, but the focus was on the guilty pie.

    Do you see it differently?

    Of course you could also condition the robot to be afraid of the pie by beating it with a stick every time it saw a pie, but that is a slightly different matter.
  • praxis
    6.2k

    This doesn't appear to be true. Dolphins have this ability. See: http://www.actforlibraries.org/animal-intelligence-how-dolphins-read-symbols/

    Also dolphins can recognize themselves in a mirror, which suggests that they have a self concept.
  • Nelson
    8
    Phobias. If we could program the computer to learn and adapt to information from its environment, increasing its probability of survival by evoking a fear response, then the only way a fear response in a robot could become a phobia is if it were a fault. Is that your position?
    MikeL

    Yes, and that goes for all feelings. The robot may, through a faulty thought process, decide that it loves brick houses or hates cement, even if it has no logical grounds.

    Interesting. If the robot had previously learnt that eating pies was safe, but then saw the person dying from eating the pie, is it fair to create an extreme fear response when presented with a pie to eat, rather than weighting it at a 'be aware' or simple-avoidance level? You say not. You say the pairing of the pie with the fear response is unjustified [it is overweighted] and therefore the code or wiring is faulty.
    MikeL

    What I'm saying is that the robot should be capable of, or even prone to, illogicality. The robot may develop a pie phobia, or maybe it won't. It all depends on whether the robot assesses the situation correctly. Without a chance to misjudge or calculate situations badly, it would remain static, never change its personality, and always reply with the same answers. It would be as aware as a chatbot.

    Perhaps if the robot did not understand how the person died - maybe the pie killed them - then the weighting is justified. A pie might kill it. Then, if you explained to the robot why the person had died, should the robot reduce the weighting on the fear response, especially if it too could suffer the same fate? Is that the reasonable thing to do? Can we do it?
    MikeL

    It's up to the robot to judge whether we are telling the truth and whether it should listen. It may not be the reasonable thing to do, and that is the point.

    What if reason and fear are not directly interwired? Do you wish to give the robot total control over all of its internal responses, rather than grouping them into subroutines with 'push me now' buttons on them? The robot would spend all day trying to focus on walking across the living room.
    MikeL

    That is the problem. If the robot were totally logical, it would have total control over its feelings, and that is obviously not how real creatures work. And if reason is not connected with feelings, the feelings still have to be controlled by a programmed logical system. My solution is giving the robot a mostly logical thought process that sometimes misjudges and acts illogically. That way we get a bit of both.
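
    Something like this, perhaps - only a rough sketch, with a made-up scoring function and an arbitrary misjudgement rate:

        # Rough sketch only: a mostly logical evaluation that occasionally
        # distorts its own score, so the robot can reach an illogical
        # conclusion through its own (faulty) process.
        import random

        MISJUDGE_RATE = 0.05   # arbitrary chance of a faulty evaluation

        def evaluate(option, evidence):
            score = sum(evidence.get(option, []))   # the logical part
            if random.random() < MISJUDGE_RATE:     # the faulty part
                score *= random.uniform(-1.0, 1.0)
            return score

        def decide(options, evidence):
            # Two robots running this same code can diverge, because a single
            # misjudgement changes what each one learns and does next.
            return max(options, key=lambda o: evaluate(o, evidence))

        evidence = {"eat the pie": [0.6, 0.3], "avoid the pie": [0.2]}
        print(decide(["eat the pie", "avoid the pie"], evidence))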
  • praxis
    6.2k
    I think it's important to acknowledge the fundamental difference in physical structure between human intelligence and AI when considering emotions such as fear. The substrate that AI will be built on is significantly different, and that will affect its development in terms of emotion. The biology associated with human emotion deals with regulating energy and other biologically based needs. General AI doesn't have these requirements. You'd have to go out of your way to simulate them, like going out of your way to simulate flying like a bird, which may be aesthetically pleasing but is inefficient. But why would that be a desirable thing to do? Presumably we create AI to help accomplish our goals. If we were to encode a general AI with an imperative to replicate itself and to simulate emotions as we experience them, wouldn't that be dangerous and counterproductive to accomplishing our goals?

    It seems to me that we should intentionally avoid emotional AI, and perhaps even consciousness, or allow just enough consciousness to learn or adapt so that it can accomplish our goals.
  • Nelson
    8
    I totally agree. There is no real reason for emotional AI. But if we wanted to create a truly aware AI, it would have to be emotional.
  • MikeL
    644
    The robot may, through a faulty thought process, decide that it loves brick houses or hates cement, even if it has no logical grounds.
    Nelson

    Hi Nelson, if a human has an illogical thought process, is that also the result of faulty wiring or code? What's the difference?
  • praxis
    6.2k
    if we wanted to create a truly aware AI, it would have to be emotional.
    Nelson

    Of course, no one has consciousness figured out yet, and I'm certainly no expert, but I think I may know enough to caution against making too many assumptions about awareness and emotions.
  • Harry Hindu
    4.9k
    What if we designed a robot that could act scared when it saw a snake? Purely mechanical, of course. Part of the fear response would be that the hydraulic pump responsible for oiling the joints speeds up, and that higher-conduction-velocity wires are brought into play to facilitate faster reaction times. This control system is regulated through feedback loops wired into the head of the robot. When the snake is spotted, the control paths in the head of the robot suddenly reroute power away from non-essential compartments, such as recharging the batteries, and into the peripheral sense receptors. Artificial pupils dilate to increase information through sight, and so on.
    MikeL

    How do we know that we are scared, if not through an awareness of our own physical characteristics - heart beating faster, adrenaline rush, the need to run, etc. - and then knowing the symbol for those characteristics occurring together - "fear" - in order to communicate that we fear something?

    You mention the physical characteristics of fear. All that is needed is an awareness of those physical characteristics and a label, or designation, for those characteristics - "fear". In this sense, the robot would know fear, and know that it fears if it can associate those characteristics with its self. A robot can be aware of its own condition and then communicate that condition to others if it has instructions for which symbol refers to which condition: "fear", "content", "sad", etc.
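
    As a rough sketch, with made-up sensor readings and thresholds, that labelling could look like:

        # Rough sketch only: the robot reads its own internal condition and
        # attaches a symbol ("fear", "content", ...) to patterns of readings.
        # The sensor names and thresholds are invented for illustration.

        def read_internal_state():
            return {"pump_rate": 0.9, "power_to_sensors": 0.8, "battery_charging": False}

        def label_condition(state):
            # Instructions for which symbol refers to which pattern of conditions.
            if state["pump_rate"] > 0.7 and state["power_to_sensors"] > 0.6:
                return "fear"
            if state["battery_charging"] and state["pump_rate"] < 0.3:
                return "content"
            return "neutral"

        def report():
            # Communicate the labelled condition to others.
            return "I'm scared" if label_condition(read_internal_state()) == "fear" else "I'm fine"

        print(report())   # I'm scared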
  • MikeL
    644
    I agree. Just like people.
    The fun thing to explain, as Nelson alluded to, is when we have a ton of positively weighted inputs that, when summed, lead to the opposite feeling. For example, you may hate the way a yellow beach house looks at sunset, yet independently love yellow, beaches, houses and sunsets. You might have to surmise that a contradiction has occurred (e.g. beaches are nature, nature is inviolable, houses are nice, but a house on a beach violates nature - or something to that effect).
    I believe it can be coded though - you can code the illogical without it being a fault.
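
    A rough sketch of what I mean, with made-up weights and an invented contradiction penalty:

        # Rough sketch only: every ingredient is liked on its own, yet the
        # summed judgement of the combination flips, because a learned rule
        # conflict outweighs the individual positives. Weights are made up.

        likes = {"yellow": 0.6, "beach": 0.8, "house": 0.5, "sunset": 0.7}

        def contradiction_penalty(features):
            # Learned rule: building a house on a beach violates (inviolable) nature.
            return -3.0 if {"beach", "house"} <= set(features) else 0.0

        def feeling(features):
            score = sum(likes.get(f, 0.0) for f in features) + contradiction_penalty(features)
            return "love it" if score > 0 else "hate it"

        print(feeling(["yellow", "sunset"]))                    # love it
        print(feeling(["yellow", "beach", "house", "sunset"]))  # hate it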
  • Michael Ossipoff
    1.7k
    What if we designed a robot that could act scared when it saw a snake? Purely mechanical, of course. Part of the fear response would be that the hydraulic pump responsible for oiling the joints speeds up, and that higher-conduction-velocity wires are brought into play to facilitate faster reaction times. This control system is regulated through feedback loops wired into the head of the robot. When the snake is spotted, the control paths in the head of the robot suddenly reroute power away from non-essential compartments, such as recharging the batteries, and into the peripheral sense receptors. Artificial pupils dilate to increase information through sight, and so on.

    This robot has been programmed with a few phrases that let the programmer know what is happening in the circuits, "batteries low", that sort of thing. In the case of the snake, it reads all these reactions and gives the feedback "I'm scared."

    Is it really scared?
    MikeL

    The robot isn't really scared if it's just programmed to say, "I'm scared."

    But, if some genuine menace (a snake probably wouldn't menace a robot) triggered measures for self-protection, then it could be said that the robot is scared.

    It's the old question of what you call "conscious".

    The experience of a purposefully-responsive device is that device's surroundings and events, in the context of the purpose(s) of its purposeful response.

    The robot can be scared.

    Dogs, cats, and all other animals are, of course, much more like us than the robot is. For one thing, all of us animals result from natural selection, and the purposes and precautions that go with that. Harming an animal of any kind is very much like harming a human.

    Michael Ossipoff