AI Strategy and Concepts Bibliography

WIKINDX Resources

Bosch, K., & Bronkhorst, A. (2018). Human-AI cooperation to benefit military decision making. Paper presented at Proceedings of the NATO IST-160 Specialists' Meeting on Big Data and Artificial Intelligence for Military Decision Making, Bordeaux, France, 30 May-1 June 2018, S3-1/1-S3-1/12. 
Added by: SijanLibrarian (2020-06-29 14:32:38)   Last edited by: SijanLibrarian (2020-06-29 14:34:19)
Resource type: Proceedings Article
BibTeX citation key: Bosch2018
Categories: Artificial Intelligence, Cognitive Science, Computer Science, Decision Theory, Engineering, General, Military Science
Subcategories: Augmented cognition, Autonomous systems, Command and control, Decision making, Human decisionmaking, Human factors engineering, JADC2, Machine intelligence, Psychology of human-AI interaction, Strategy
Creators: Bosch, Bronkhorst
Publisher: NATO
Collection: Proceedings of the NATO IST-160 Specialists' Meeting on Big Data and Artificial Intelligence for Military Decision Making, Bordeaux, France, 30 May-1 June 2018, S3-1/1-S3-1/12
Abstract
Military decision making takes place in a variety of complex domains (defense, security, cyber, etc.). Artificial intelligence allows not only for data reduction and synthesis, but also for the development of predictions about future events and about the outcomes of considered interventions. However, because circumstances are often uncertain and problems ill-defined, AI cannot yet do this autonomously. Instead, deriving decisions from predictions and analysed data should be organized as an interactive human-technology activity, in which both parties become aware of one another's strengths, limitations, and objectives. This paper addresses how humans and AI systems should cooperate to achieve better decision making. It is argued that situation judgment can be improved through interactive explanatory dialogues, and that well-chosen explanations will support judgments and goal setting. An AI system should be able to adapt itself dynamically to the decision maker by taking into account his or her objectives, preferences, and track record (e.g., susceptibility to bias). Furthermore, this approach contributes to 'trust calibration': a level of warranted trust in each other's competencies. It is proposed to discern different stages in human-AI collaboration, ranging from one-way messaging to actual teaming. Ideally, AI systems should be able to function as intelligent team players, but human-machine performance can be substantially boosted even at lower levels of collaboration.
wikindx 6.2.2 ©2003-2020 | Total resources: 1447 | Bibliography: WIKINDX Master Bibliography | Style: American Psychological Association (APA)