Augmentation of the human mental system

Data acquisition, fusion, and use

Decision-making: probabilistic reasoning

Networked Teams

Ethical human-machine behavior in moral dilemmas

Orientation to the AI Areas

Brief summaries of each of the five areas can be accessed onsite by selecting the blue buttons provided above. Each summary outlines the area's general principle, its relevance for applications, and near-term considerations. There is also AI Resources, which offers an online AI bibliography; the collection of summarized publications it contains can be searched by the meta-topic themes offered on the site, as well as by author, title, and keywords. The AI Strategy and Concepts Hub button above provides access to online instructional resources and blog-post versions of the briefs.

Introduction

Teaming people and AI to make decisions in a distributed cognitive system brings special challenges, because perceived parallels between AI and human cognitive functions often lead to the assumption that the two are similar. Close as those parallels may seem, they are not the same. The difference is most evident in how humans and AI process information and formulate decisions, and it becomes clear when examining the model of reasoning each employs. Humans use a dual-process model of reasoning in which “intuitive” and “deliberate” reasoning interact and feed off each other. AI, by contrast, relies on “deliberate” computation: probabilistic reasoning in the case of machine learning (ML), and explicit rule-based reasoning in the case of traditional automation.

Intuitive reasoning in humans draws on “instinctive-emotive” features, such as whether information or people can be trusted or believed, and underlying belief systems are integral to it. Because basic survival depends on sensing fairly quickly whether others can be trusted, and on believing that one's responses will be appropriate either way, intuitive reasoning evolved first in humans, from deep evolutionary roots. Deliberate reasoning evolved much later as a dispassionate intellectual capability grounded in rationality and logic. Reflecting this evolutionary sequence, intuitive reasoning is labeled System 1 (fast thinking, requiring little effort) and deliberate reasoning System 2 (slow thinking). Although AI has grown in capability over recent years, with better processes for learning from experience and associating new phenomena with networked models that represent patterns and associative connections or relationships (similar to System 1), the strength of current AI capabilities rests with System 2 processes: analytical, deliberative, and inherently sequential reasoning functions suitable for rules-based automation.
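To make the contrast concrete, the following minimal Python sketch (an illustration only; the patterns, rules, and names are hypothetical assumptions, not drawn from the cited research) pairs a System 1 analogue, a quick associative similarity match, with a System 2 analogue, a slow walk through explicit rules.

from difflib import SequenceMatcher

# Hypothetical, simplified illustration: a "System 1"-style associative match
# scores how closely a new observation resembles stored patterns, while a
# "System 2"-style check walks through explicit rules one step at a time.
# The patterns and rules below are invented for illustration.

KNOWN_PATTERNS = ["fast approach from north", "slow drift along border"]

def associative_match(observation: str) -> tuple[str, float]:
    """System 1 analogue: quick, holistic similarity to remembered patterns."""
    scored = [(p, SequenceMatcher(None, observation, p).ratio()) for p in KNOWN_PATTERNS]
    return max(scored, key=lambda pair: pair[1])

def deliberate_check(observation: str) -> list[str]:
    """System 2 analogue: slow, sequential evaluation of explicit rules."""
    findings = []
    if "fast" in observation:
        findings.append("rule: rapid movement detected -> verify track quality")
    if "north" in observation:
        findings.append("rule: northern sector -> cross-check friendly positions")
    return findings

obs = "fast approach from north sector"
pattern, score = associative_match(obs)
print(f"System 1 analogue: closest pattern {pattern!r} (similarity {score:.2f})")
for finding in deliberate_check(obs):
    print("System 2 analogue:", finding)

The associative match answers “what does this resemble?” in one holistic step, while the rule check builds its answer sequentially, mirroring the fast/slow distinction described above.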

Recognizing the current state of AI makes it important to distinguish “narrow” from “broad” AI in expectations, applications, and roles within integrated human-machine cognitive systems. Narrow AI applications, for instance, might support the human dual-process model of reasoning without supplanting either System 1 or System 2. As System 2 assistance, AI can pick up cues or signals and provide basic interpretation of them far more rapidly than humans can (e.g., supporting sensor data collection and automated processing of data to enable faster targeting). Other System 2 roles for narrow AI include helping with deconfliction of data and detecting complex associations and interdependencies. As System 1 support, a narrow AI role could be to recommend caution or alternative propositions (helping to slow System 1 thinking down) when cues or signals suggest a situation humans might interpret as something familiar calling for a quick response, while other indicators suggest greater deliberation is warranted before acting. A sketch of such a caution aid follows below.
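As one hedged illustration of this System 1 support role (the function name, thresholds, and message text are assumptions, not part of the source), a caution aid might compare the confidence of a quick pattern match against a count of conflicting indicators:

# Hypothetical sketch: a narrow-AI aid that recommends caution when a quick
# pattern match looks "familiar" but other indicators conflict, nudging the
# human toward deliberation. Names and thresholds are illustrative assumptions.

def recommend_caution(match_confidence: float, conflicting_indicators: int,
                      familiarity_threshold: float = 0.8) -> str:
    looks_familiar = match_confidence >= familiarity_threshold
    if looks_familiar and conflicting_indicators > 0:
        return ("CAUTION: situation resembles a familiar pattern, but "
                f"{conflicting_indicators} indicator(s) disagree; deliberate before acting.")
    if looks_familiar:
        return "Pattern match is consistent with available indicators."
    return "No strong pattern match; deliberate assessment recommended."

print(recommend_caution(match_confidence=0.91, conflicting_indicators=2))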

There is considerable appeal in gaining advantage through highly compressed planning, decision, and execution (PDE) cycles made possible by high levels of machine automation (involving AI/ML). At critical moments in decision-making, when very high uncertainty and complexity combine under intense time pressure, greater dependence is likely to be placed on higher levels of AI/ML automation. This offers potential benefits for offsetting human limitations, but it brings greater risks as rapidly unfolding AI/ML decisions operate at speeds and scales surpassing human cognitive abilities. The figure below helps to illustrate several factors associated with highly compressed PDE cycles.
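The tension can be sketched in miniature (all names and thresholds here are illustrative assumptions, not a prescribed design): automation rises as the decision window shrinks, but a human checkpoint is retained when high uncertainty compounds the risk of fast machine decisions.

# Illustrative sketch only: a compressed planning-decision-execution (PDE)
# loop in which the level of automation rises with time pressure, but a human
# checkpoint is enforced when uncertainty is also high. Thresholds are invented.

from dataclasses import dataclass

@dataclass
class Situation:
    time_pressure: float   # 0.0 (relaxed) .. 1.0 (extreme)
    uncertainty: float     # 0.0 (clear)   .. 1.0 (opaque)

def pde_cycle(s: Situation) -> str:
    # More automation as the decision window shrinks...
    automation_level = "high" if s.time_pressure > 0.7 else "moderate"
    # ...but speed alone should not remove the human when uncertainty compounds risk.
    if automation_level == "high" and s.uncertainty > 0.6:
        return "Escalate: require human confirmation before execution."
    if automation_level == "high":
        return "Execute with automated PDE cycle; human monitors on exception."
    return "Execute with human-in-the-loop at the decision step."

print(pde_cycle(Situation(time_pressure=0.9, uncertainty=0.8)))
print(pde_cycle(Situation(time_pressure=0.9, uncertainty=0.3)))
print(pde_cycle(Situation(time_pressure=0.4, uncertainty=0.8)))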

This presents a conundrum and speaks to the importance of designing and employing human-machine teaming frameworks with human cognitive abilities in mind. Research and thought leadership from military strategists, spanning five areas of the cognitive sciences, is being tracked to help inform human-machine teaming with AI/ML. Each area is briefly introduced below: its general principle, its relevance, near-term considerations for usage, and references to articles and briefs highlighting core concepts in the area. The underlying references supporting each area can be found online in the AI Bibliography.