Even humans sometimes do not live up to their own moral standards. Because morality is often such a grey area between right and wrong, it is hard to program this kind of ambiguity into a machine and expect it to make 100% correct moral choices in scenarios where right and wrong must be distinguished, when even we humans struggle with that. — kindred
The biggest issue with creating self-aware androids is their capacity to carry out morality in human terms and to human expectations; because we differ from the outset in our makeup, our priorities would differ as well. — kindred
If empathy could somehow be programmed into androids, then they'd be capable of making better ethical/moral choices, but that is not the question. — kindred
The question is whether it is possible to do so, i.e. to grant androids the same level of empathic self-reflection as humans; if we could do that, I see no issue with doing so, as an android capable of moral decision-making is obviously desirable. — kindred
I.e. AGI (a neural network, not a program) that learns (to mimic?) empathy, eusociality, ecology and non-zero-sum conflict resolution (e.g. fairness) for discursive practices and kinetic relationships with other moral agents¹/patients² ... ethical androids (i.e. androids with substantial moral beliefs/habits, i.e. priorities-constraints). — ToothyMaw
Okay, maybe; but why would any for-profit corporation or military organization ever build an "ethical android" that would be useless as either a slave or a killer? — 180 Proof