AREA V: Ethical human-machine behavior in moral dilemmas
AREA V Overview
General Principle: Human values should always govern the use of AI and smart machines. This requires not only understanding appropriate and inappropriate behaviors and uses for AI and smart machines but also appreciating the differences between machines and humans in moral/ethical reasoning. Experience with moral dilemmas generally instills prudence, compassion, and responsibility in human ethical reasoning, judgment, and decisions, which in turn informs the development of wisdom. Smart machines currently lack adequate capability to reflect upon or address moral dilemmas arising from their own behaviors, whether or not under direct human control, including the metacognitive capacity required to adapt based on a priori reflection or follow-on assessment of ethical/moral consequences.
Relevance for operational applications: Smart machines can be expected to provide only supplementary decision-making support for moral dilemmas encountered by decision-makers. Increasingly, operational applications of smart machines will bring tremendous capability and, hand in hand, new ethical/moral dilemmas. Decision-makers will need training and instruction on AI ethics and the appropriate ethical use of smart machines, both to guide appropriate operational and strategic employment and to anticipate and address the moral dilemmas that are very likely to arise.
Near-term considerations: The use of smart machines in operational applications will very likely introduce new ethical/moral dilemmas for decision-makers. AI implementation simulations should therefore include forced-choice ethical dilemmas, both to introduce decision-makers to the nature of the dilemmas they are likely to face and to exercise the moral/ethical reasoning needed to address them in context.
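As a minimal sketch of the simulation element recommended above, the following illustrates how a forced-choice ethical dilemma might be represented so that a decision-maker's choice and stated rationale are captured for after-action reflection. All names, the scenario text, and the data structure are illustrative assumptions, not drawn from any fielded training system.

```python
# Hypothetical sketch of a forced-choice ethical dilemma in a
# decision-maker training simulation. The Dilemma class, scenario
# text, and option wording are invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class Dilemma:
    scenario: str                  # situation the decision-maker faces
    options: tuple[str, str]       # exactly two choices: forced-choice design
    log: list[dict] = field(default_factory=list)  # record for after-action review

    def record(self, choice: int, rationale: str) -> None:
        """Store the decision and the stated moral/ethical rationale
        so it can be revisited in follow-on assessment."""
        self.log.append({"choice": self.options[choice], "rationale": rationale})

# Example use with an invented scenario:
d = Dilemma(
    scenario="An AI targeting aid flags a vehicle; confidence is below doctrine threshold.",
    options=("Defer engagement pending human verification",
             "Engage based on the machine recommendation"),
)
d.record(choice=0, rationale="Confidence is insufficient; human verification is required.")
print(d.log)
```

The forced-choice structure (exactly two options, no abstention) and the required rationale field reflect the intent stated above: the trainee must commit to a decision and articulate the moral/ethical reasoning behind it, producing material for reflection and instruction.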