Abbass, H. A. (2019). Social integration of artificial intelligence: Functions, automation allocation logic and human-autonomy trust. Cognitive Computation, 11(2), 159–171.
Resource type: Journal Article
BibTeX citation key: Abbass2019
Categories: Artificial Intelligence, Cognitive Science, Complexity Science, Computer Science, Data Sciences, Decision Theory, Engineering, General, Mathematics, Military Science
Subcategories: Analytics, Augmented cognition, Autonomous systems, Big data, Command and control, Decision making, Edge AI, Human decisionmaking, Human factors engineering, JADC2, Machine learning, Military research, Neural nets, Psychology of human-AI interaction
Creators: Abbass
Collection: Cognitive Computation
Abstract
Artificial intelligence (AI) is finding more uses in human society, creating a need to scrutinise the relationship between humans and AI. Technology has advanced from merely encoding human knowledge into a machine to designing machines that “know how” to autonomously acquire the knowledge they need, learn from it, and act independently in the environment. Fortunately, this need is not new; it has scientific grounds that can be traced back to the inception of computers. This paper uses a multi-disciplinary lens to explore how the natural cognitive intelligence of a human could interface with the artificial cognitive intelligence of a machine. The scientific journey of the last 50 years is examined to understand the human-AI relationship and to present the nature of, and the role of trust in, that relationship. Risks and opportunities at the human-AI interface are studied to reveal some of the fundamental technical challenges to a trustworthy human-AI relationship. A critical assessment of the literature leads to the conclusion that any social integration of AI into the human social system would necessitate a relationship at one level or another of society, meaning that humans will “always” actively participate in certain decision-making loops, either in-the-loop or on-the-loop, that influence the operations of AI, regardless of how sophisticated it is.