• MikeL
    638
    I originally posted this in Artificial Intelligence and the Intermind Model, but I think I caught the end of the thread, so I hope the moderators don't mind that I put it as a new post.

    What if we designed a robot that could act scared when it saw a snake? Purely mechanical, of course. Part of the fear response would be that the hydraulic pump responsible for oiling the joints speeds up, and that higher-conduction-velocity wires are brought into play to facilitate faster reaction times. This control system is regulated through feedback loops wired into the head of the robot. When the snake is spotted, the control paths in the head of the robot suddenly reroute power away from non-essential tasks, such as recharging the batteries, and into the peripheral sense receptors. Artificial pupils dilate to increase information through sight, and so on.

    This robot has been programmed with a few phrases that let the programmer know what is happening in the circuits, "batteries low", that sort of thing. In the case of the snake, it reads all these reactions and gives the feedback "I'm scared."

    Is it really scared?

    Before you answer, and as you probably know, a long, long time ago they performed vivisections on live dogs and other animals because they did not believe the animals actually felt pain. The pain response, all that yelping and carrying on, was nothing more than a set of reflexes programmed into the animals, the scientists and theologians argued. Only humans, made in God's image, actually felt pain as we know it.
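    Just to make the wiring concrete, the setup above could be sketched as a little control loop. Everything here is invented for illustration (the threat list, speeds, and the self-report string are my assumptions, not a real robot API):

    ```python
    # A minimal sketch of the fear-response control loop described above.
    # All names and values are hypothetical illustrations, not a real API.

    THREAT_LIST = {"snake"}

    class Robot:
        def __init__(self):
            self.pump_speed = 1        # hydraulic pump: idle speed
            self.fast_wires = False    # high-conduction-velocity wiring engaged?
            self.pupil_dilation = 0.2  # fraction of maximum aperture
            self.charging = True       # non-essential task: battery recharge
            self.report = "batteries ok"

        def perceive(self, obj):
            # Feedback loops in the "head" reroute power on detecting a threat.
            if obj in THREAT_LIST:
                self.pump_speed = 5
                self.fast_wires = True
                self.pupil_dilation = 0.9
                self.charging = False
                # The robot reads its own reactions and labels the state.
                self.report = "I'm scared"
            return self.report
    ```

    The question, of course, is whether anything in that `if` branch deserves the word "scared".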
  • Janus
    3.8k
    Robots don't feel pain. Animals feel pain, but are not self-aware they are feeling it. Humans feel pain and are (sometimes at least) self-aware they are feeling it, and they may also be conscious of the pain as an indication of a threat to life, or as a prison they fear they may escape from only by dying. These kinds of human experience of pain probably make the pain much worse and harder to bear.
  • Wayfarer
    4.6k
    What is it that can say 'I am'? Answer that, and the rest should be easy.
  • praxis
    383
    This robot has been programmed with a few phrases that let the programmer know what is happening in the circuits, "batteries low", that sort of thing. In the case of the snake, it reads all these reactions and gives the feedback "I'm scared."

    Is it really scared?
    MikeL
    MikeL

    It's theorized that interoception and affect are major aspects of human emotion. Also in the mix are our past experiences and emotion concepts, like the concept of fear. So for a machine to have a human-like experience of fear, at a minimum it would need the concept and its associated interoceptive sensations. As you describe it, the interoceptive sensations for the robot would be predictive feedback loops associated with "hydraulic pumps," "higher conduction velocity wires," "batteries," etc. And of course the robot would need the capacity to consciously recognize these sensations in the context of a snake, which it has learned to fear for some reason, to conclude "I'm scared."
  • MikeL
    638
    Hi Janus, how do you know animals aren't self-aware of their pain?
  • MikeL
    638
    The robot has been programmed to assess threats to its structure. As it's impossible to program for everything, part of the program says: if an object is mobile and unidentified, activate the fear response. It has identified the snake, which is in its list of threatening objects, and thus the program has activated. A physiological response is occurring within the robot.

    The code of the robot has been divided into a simple executive program that can activate a range of other programs. All the executive code has to do is call on the correct program and it will execute as a hard-wired reflex. As it sits atop these other programs, the executive code is not 'aware' of how they operate (one program is written in C++ and one in COBOL, and except for the interface they are incompatible). In a way, there is a mask separating them. All the executive program knows is that after identifying the snake, the hydraulic pump sped up, the high-conduction-velocity wires began to hum, its vision became brighter, and background programs such as vacuuming have shut down. It is also aware that the snake may cause damage to its shell, and it is programmed to avoid that.
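    The layering here — an executive that dispatches opaque subprograms and sees only their reported effects, never their internals — could be sketched like this. The subprogram names and their reports are invented for illustration:

    ```python
    # Sketch of the layered design described above: an executive program
    # that dispatches opaque subprograms through a common interface,
    # observing only their reported effects, never their implementation.

    def hydraulic_reflex():
        return {"pump": "sped up", "wires": "humming"}

    def vision_reflex():
        return {"vision": "brightened"}

    def vacuuming():
        return {"vacuuming": "running"}

    class Executive:
        """Knows only the interface of each subprogram, not its internals."""
        def __init__(self):
            self.background = {"vacuuming": vacuuming}
            self.reflexes = {"hydraulics": hydraulic_reflex,
                             "vision": vision_reflex}
            self.threats = {"snake"}
            self.state = {}

        def observe(self, obj):
            if obj in self.threats:
                self.background.clear()          # shut down non-essentials
                for name, reflex in self.reflexes.items():
                    self.state[name] = reflex()  # fire reflex, record effect only
                return "I'm scared"
            return "all clear"
    ```

    The 'mask' in the post corresponds to the function-call boundary: the executive sees return values, never the code behind them.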
  • Wayfarer
    4.6k
    Animals feel pain, but are not self-aware they are feeling it.
    Janus

    You mean, they don't have self-pity?
  • Janus
    3.8k


    I guess not.
  • Wayfarer
    4.6k
    Well, I agree. Although dogs do mope ;-)
  • MikeL
    638
    You two need to get out more.
  • Wayfarer
    4.6k
    Oh, I walk my dog every day.

    And I have a nice robot too. She's called 'Siri'.
  • MikeL
    638
    It sounds like you're living the high life.
  • Janus
    3.8k


    I think self-awareness understood in the ordinary 'human' sense requires symbolic language. If you want to say that animals are, or might be, self-aware, then what could you mean by that?
  • Janus
    3.8k


    Yes, I have no doubt animals' spirits may become depressed.
  • Janus
    3.8k


    "Out' or "out of it"?
  • Janus
    3.8k


    A very siri robot?
  • MikeL
    638
    Symbolic language is the common language of all animals. It is the most fundamental aspect of language. Animals read gestures and make gestures to be read.
  • Janus
    3.8k


    Animals may read signs, but they don't understand symbolism. Human languages, linguistic and visual, are the only symbolic languages we know of.
  • MikeL
    638
    There was this guy I know. His name was Pavlov. He had a dog too.
  • praxis
    383
    All the executive program knows is that after identifying the snake, the hydraulic pump sped up, the high-conduction-velocity wires began to hum, its vision became brighter, and background programs such as vacuuming have shut down.
    MikeL

    It's believed that this works the other way around in people. The predictive brain, operating subconsciously as in your model, would direct the release of adrenaline etc. after recognizing a serious threat. At some point the more 'executive' consciousness would realize what was going on and perhaps think something like "I'm scared."
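    The reversed ordering being described — a fast subconscious pathway reacting first, with the executive layer only labeling the state afterwards by reading the body — might be sketched as two stages. The names and the "adrenaline" flag are illustrative assumptions:

    ```python
    # Sketch of the reversed ordering: the subconscious pathway fires the
    # physiological response first; the executive layer reads the body's
    # state afterwards and only then forms the label "I'm scared".

    events = []  # record of what happened, in order

    def subconscious(obj, body):
        # Fast pathway: trigger the bodily response before any labeling.
        if obj == "snake":
            body["adrenaline"] = "high"
            events.append("adrenaline released")

    def executive(body):
        # Slow pathway: read interoceptive signals, then form the label.
        if body.get("adrenaline") == "high":
            events.append("I'm scared")
            return "I'm scared"
        return "calm"

    body = {}
    subconscious("snake", body)
    label = executive(body)
    ```

    On this picture the robot in the original post gets the order right for machines but backwards for people: the report comes last, not first.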
  • MikeL
    638
    Hi Praxis, I take your point. That sounds like the better explanation.