Comments

  • Emotions are a sense like sight and hearing
    Discussions like this one often devolve into the different parties arguing about definitions. So before we get to that point, what is your definition of a sense?
  • The Robot Who was Afraid of the Dark
    I totally agree. There is no real reason for emotional AI. But if we wanted to create a truly aware AI, it would have to be emotional.
  • The Robot Who was Afraid of the Dark
    Phobias. If we could program the computer to learn and adapt to information from its environment that increased its probability of survival by evoking a fear response, then the only way a fear response in a robot would become a phobia is if it were a fault. Is that your position?

    Yes, and that goes for all feelings. The robot may, through a faulty thought process, decide that it loves brick houses or hates cement even if it has no logical grounds.

    Interesting. If the robot had previously learnt that eating pies was safe, but then saw a person die from eating a pie, is it fair to create an extreme fear response when presented with a pie to eat, rather than weighting it at a 'be aware' or simple avoidance level? You say not. You say the pairing of the pie with the fear response is unjustified [it is over-weighted] and therefore the code or wiring is faulty.

    What I'm saying is that the robot should be able, or even prone, to be illogical. The robot may develop a pie phobia or maybe it won't. It all depends on whether the robot judges the situation correctly. Without a chance to misjudge or read situations badly, it would remain static, never develop its personality and always reply with the same answers. It would be as aware as a chatbot.

    Perhaps if the robot did not understand how the person died - maybe the pie killed them - then the weighting is justified. A pie might kill it. But if you explained to the robot why the person had died, should the robot then reduce the weighting on the fear response, especially if it too could suffer the same fate? Is that the reasonable thing to do? Can we do it?

    It's up to the robot to judge whether we are speaking the truth and whether it should listen. It may not be the reasonable thing to do, and that is the point.

    What if reason and fear are not directly interwired? Do you wish to give the robot total control over all of its internal responses, rather than grouping them into subroutines with 'push me now' buttons on them? The robot would spend all day long just trying to focus on walking across the living room.

    That is the problem. If the robot is totally logical, it will have total control over its feelings, and that is obviously not how real creatures work. And if reason is not connected with feelings, the feelings still have to be controlled by a programmed logical system. My solution is to give the robot a mostly logical thought process that sometimes misjudges and acts illogically - something like the toy sketch below. That way we get a bit of both.
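
    To make that concrete, here is a minimal toy sketch in Python. It is my own illustration, not a design anyone in this thread actually proposed, and the names (Appraiser, observe, response, misjudge_rate) are all made up for the example: fear weights are updated from observed harm, but a small misjudgement rate occasionally distorts an update, so two robots running the same code can end up reacting differently to pie.

      import random

      # Toy "mostly logical" appraiser: evidence-driven fear weights with a
      # small chance that any given update is distorted by faulty reasoning.
      class Appraiser:
          def __init__(self, misjudge_rate=0.05, seed=None):
              self.fear = {}                      # stimulus -> fear weight in [0, 1]
              self.misjudge_rate = misjudge_rate  # chance of an illogical update
              self.rng = random.Random(seed)

          def observe(self, stimulus, harm):
              # harm in [0, 1]: 0 = clearly safe, 1 = someone died from it
              weight = self.fear.get(stimulus, 0.0)
              update = harm
              if self.rng.random() < self.misjudge_rate:
                  # Faulty step: the evidence is over- or under-weighted at random.
                  update = self.rng.uniform(0.0, 1.0)
              # Move the stored weight partway toward the (possibly distorted) update.
              self.fear[stimulus] = weight + 0.5 * (update - weight)

          def response(self, stimulus):
              w = self.fear.get(stimulus, 0.0)
              if w > 0.8:
                  return "phobia"      # over-weighted pairing: refuses pie outright
              if w > 0.4:
                  return "avoidance"
              return "caution" if w > 0.1 else "indifference"

      robot = Appraiser()
      robot.observe("pie", harm=1.0)  # saw someone die after eating pie
      robot.observe("pie", harm=0.0)  # was then told the death had another cause
      print(robot.response("pie"))    # most runs print "caution"; a misjudged update can change that

    Most runs end in caution, but a misjudged update can leave the robot anywhere from indifferent to avoidant, and repeated bad luck over more observations is what would build the over-weighted pairing (the phobia). The programmer cannot predict which of these a particular robot will settle on, which is the property being argued for here.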
  • The Robot Who was Afraid of the Dark
    I think that illogical feelings need to have a thought process behind them, no matter how faulty. If the robot was scared of pie because it had seen a video where someone choked on a pie and died, that would be better than it suddenly being scared of pie. I have no problem with the process popping into existence, but I don't believe that the result and/or answer should.
  • Can this be formulated as a paradox?
    Religion is full of paradoxes like that one, where good behavior can cause suffering and bad behavior can cause happiness. Take murder, for example: if good people go to heaven when they die, and heaven is infinitely better than earth, would it not be ethical to go around killing good people even if you go to hell for it?
  • The Robot Who was Afraid of the Dark

    I agree. A program that allows for illogical feelings is okay as long as the programmer cannot predict what it will result in. I also think there is a difference between the robot actually having a faulty thought process that results in illogical feelings and illogical feelings just popping into existence. It is those faulty processes that will determine the personality of the robot. For example, two robots with the same code will have different personalities.
  • The Robot Who was Afraid of the Dark
    For a robot to truly feel scared it must truly be aware, like a human. My definition of being aware is: being able to act and think without input; having illogical feelings develop out of the self (scientists programming illogical feelings into you would therefore not count); being able to remember; and being able to act on those memories. The amount of awareness a creature possesses is determined by how well it fits these criteria. The robot is feeling real fear if it has all of these qualities.
  • Do you cling to life? What's the point in living if you eventually die?
    Have you ever been happy? That feeling is by definition good, and good is by definition better than nothing (which I believe is what death feels like). Just because things don't have meaning doesn't mean you shouldn't do them.
  • Do you cling to life? What's the point in living if you eventually die?
    There is no objective universal meaning to existence. The only supreme authority on issues like these is the self, and therefore the only objective meaning to life is the one you choose.
  • The God-Dog Paradox
    The problem is that we can't know for sure that your religion is the true one. There are, and have been, thousands of religions, all with the same amount of validity. Atheists therefore conclude that their chance of choosing the right religion is one in a thousand. Isn't the right decision then to live your life without rules and restrictions? Not to mention that maybe God does not exist, and then religion won't even grant that one-in-a-thousand chance at heaven.