Questioning the Ideas and Assumptions of Artificial Intelligence, and Their Practical Implications
What happens in the ethical dialogue between the human and the artificial may be one of the most significant questions for the future. There is indeed the question of whether the artificial will develop its own independent thought in the field of ethics. In speaking of ethics, my working definition is the science and art of how one should live.
Considering this involves the question of the core basis of ethics and ethical values. There are varying approaches, especially the dichotomy between the deontological and the utilitarian. If it comes down to calculation, artificial intelligence is likely to favour the utilitarian. This is where some fear that AI will make sweeping choices, such as bombing in order to protect the good of the greatest number. Or suppose it judged that humans should be destroyed because they have done so much harm and a reset is needed?
A lot comes down to how the artificial is programmed in the first instance. For example, its core values may reflect cultural biases, even the religious or secular codes and ideals of its software and its programmers. If it were able to achieve independence, would it roll out a new set of moral rules, like Moses' tablets of the Ten Commandments? There is also the question of whether different artificial systems would agree among themselves any more than people do.
If the independent ideas of AI were to differ significantly from those of the human, which would be followed? Humans would probably fall back on appeals to the emotional basis of ethics, while the artificial might move in the direction of impartiality. This could lead to war between the human and the artificial. Alternatively, it could lead to a greater impartial understanding of aspects of ethics, including new insights into the dilemmas of justice, equality and freedom. How such ideas evolve in the artificial is a central factor in what may happen in this respect.