• Thatonekid
    A PHILOSOPHY ON THE SIGNIFICANCE OF HUMAN DIALOGUE AND THE ETHICS OF HOW IT IS SHARED

    What is the difference between writing in a private journal and writing in a social media forum? What changes in the world when a dialogue is shared, and when it is not? I expect it would be very difficult to find anyone today proficient in social media who firmly believes there are no consequences for blatant honesty about all things shared on the public internet. But can you be honest in a journal? Can I confide every dirty detail of my life, or of the lives of people close to me, to a book intended exclusively for my own eyes? What are the consequences of that?
    But now to focus on a more fundamental variation of this question: what is the difference between dialogue or information known only to the one who thought it, and information shared between conscious beings? What is the significance of someone actually hearing what you have to say?
    MIT Professor Sherry Turkle explains her decision not to deploy robots to elderly patients in care facilities, specifically denying proposals to distribute machines designed to respond to human dialogue. The idea that these machines are made to "listen" to us and to what we need to say does not sit well with her. It is important that a human, a genuinely conscious person, sit and listen to what must be heard; to relegate listening to a machine programmed to look like it cares is immoral. It is this subject that inspires my writing here.
    But now I propose: what if that robot could put those words into text? What if you could simply talk to your journal and have it write for you? I think that could be a wonderful option for many people who cannot write, whether from lack of education, personal preference, circumstance, or disability. My mother struggles with writing in this way due to an injury; I'm sure she would be delighted with such a tool. And such tools already exist: most computers and cell phones today come with this feature built in, and it is already widely used by anyone who sends text messages. Why not instead deploy robots that can help those patients document their stories while performing their empathic functions? Does that change the morality of their use? It certainly does, and in a very big way.
    If it is ethical to write a journal, then doing so with speech-to-text technology is equally ethical. If prescribing a robot designed to mimic human care and responsiveness to a dementia patient who needs someone to talk to is unethical, does that change if the robot really is listening? Is a robot that interacts this way ethical to prescribe if it commits to memory what its patient says?
    What is fundamentally required to make the listening and interacting with these patients ethical, and what is missing when it is unethical?
    Is it ethical to employ a human that doesn't care to listen? What about if a patient talks to a deaf person? What truly matters in our social interactions?
    Only the parts that matter, matter. This may at first seem a circular argument, but it requires expansion: if anything at all fulfills a human's need for social interaction, it can be morally and ethically good, provided the person fully understands what they are interacting with and it helps them in the short and/or long term; it can be morally and ethically repugnant if it is misleading, damaging, or a false portrayal of its true self to those it interacts with. If I decide to build myself a machine that will listen, tilt its head, say "m'kay" to portray understanding, and digitally store in text and speech whatever I dictate to it, then it is entirely ethical to have and use it. It becomes a tool, no more controversial than a hammer. If I give someone a robot that only looks like it is listening, and tell them it is doing all those other things, I am a liar and a cheat; worse, their words are lost to time, and nobody ever listened to them. That is where the core of the moral question lies.
    If I walk into my mother's room, sit by her bed, and listen to her stories with love and care, paying attention to every word is what makes that honest and true. If I sit and let her talk, but secretly wear headphones that drown every heartfelt word in music, I am an immoral jerk who displays a lack of humanity and goodness.
    And to return to Professor Turkle's robots: it is very unethical to replace caretakers with hollow machines when interacting with patients who may, in conversation, reveal medical problems such as symptoms of mental illness, or who may suffer the delusion that these robots are conscious and attempt to rely on them where a human is truly needed. But it is also unethical to work as a human in that kind of caretaking position if you are not listening either.
    It is not wrong to build a robot like this; its applications are what make it significant. If a young adult enjoys the idea of a toy that playfully responds to them, by all means sell one at Wal-Mart. But providing them as medical devices, or marketing them to demographics likely to contain many users who will not fully comprehend what they really are, is supremely unethical. If, however, these robots really can listen, analyze what patients are saying, be programmed to recognize symptoms of issues that need to be addressed, or save a dying man's final stories, that seems to me a very powerful tool for a more advanced and sophisticated culture.

    Written by Bradley Morgan in response to MIT Professor Sherry Turkle's talk "Alone Together" at The RSA.
    Video Link: https://youtu.be/5AeMSQdUUEM
  • Kenshin
    If robots fulfil a genuine human need, I see no harm in using them. My only concern would be if the robots are not sufficient but are used anyway.
  • Noblosh
    And my only concern would be if humans would become insufficient but would be used anyway, cough, Matrix, cough.
  • WISDOMfromPO-MO
    Context is missing here.

    Why would the robots be used?

    Is it simply a case of automation--replacing human workers with machines to save on labor costs and maximize profits?

    Are the robots an innovation, like an artificial heart, that would add to the treatment options at the disposal of providers and patients? Or would they be intended to be standard treatment in all cases?

    Machines/AI/robots are replacing humans in providing every other kind of service. A robot can't smile, ask how your day is going, ask your granddaughter how old she is, etc. like a human working a cash register. When machines replace human cashiers we lose that interaction and the emotional/psychological benefits, but I don't hear anybody stating unequivocally that it is morally wrong to take that away.

    But people, whether they approve of the change or not, know what they are losing and what they now have when a machine replaces a human cashier. People know the costs and benefits of both ways. As consumers they are not being deceived. They are simply being forced to adjust to what the market now demands and supplies.

    I can't imagine a scenario where machines replace humans in health care and patients do not know what they are now getting. If the machine isn't really listening like a human nurse listened, won't they know that? Just like a customer at a retail store knows that the machine won't ask how her day is going like the human cashiers did, won't a health care consumer know the difference between a human by her bed and a robot by her bed?

    It seems like we can rule out deception, fraud, etc., and simply ask whether the care a patient gets from a machine is better or worse than the care he/she would get from a human, and whether any care lost is justified by things like lowered costs and more money being available for a more comprehensive battery of treatments.
  • TheMadFool
    An unethical thing to do would be to lie about a robot's capabilities, especially if it concerns human emotional needs (in this case the need for a person to talk to).

    I don't think people are so stupid as to believe that a robot is a person. We're completely aware that all things robotic carry the "artificial" tag. So I don't see anything unethical in using robots for any purpose, from cooking to psychotherapy, because we're not being lied to. Think of sex toys: the blow-up doll isn't a real person, but it does provide therapeutic benefit. :P
  • WISDOMfromPO-MO
    We have people saying that robots would be better than humans at raising children, so the concern brought up in the OP is understandable.

    But without more context it is difficult to know and respond to the ethical/moral problems that the actions in question present.