Davis, Z. (2019). Artificial intelligence on the battlefield. PRISM, 8(2), 114–131.
Abstract
Artificial intelligence has burst upon the national-security scene with a suddenness and intensity that have surprised even the most veteran observers of national policy discourse. This spike of interest is driven in part by those who view AI as a revolutionary technology, on par with the discovery of fire, electricity, or nuclear weapons. It is driven in part by the rapid absorption of nascent AI-based technologies into diverse sectors of the U.S. economy, often with transformative effects (as, for example, in the sciences and social media). And it is driven in part by the ambitions of America’s potential adversaries. Echoing the nineteenth-century naval strategist Alfred Mahan (“Whoever rules the waves rules the world”), Russian president Vladimir Putin has asserted that the nation that rules in AI “will be the ruler of the world.” China’s president has been less outspoken on this matter but has committed China to becoming the dominant AI power by 2030. There are mounting fears of a “Sputnik moment” that might reveal the United States to be woefully underprepared to manage new AI challenges.
What should we make of all this? Are expectations of revolutionary AI sound? Will the consequences prove positive, negative, or perhaps both for U.S. security and international stability? Definitive answers to these questions will take shape in the coming years, as we gain a better appreciation of the potential military applications of AI. At this early stage, it is useful to explore the following questions:
- What military applications of AI are likely in the near term?
- Of those, which are potentially consequential for the stability of strategic deterrence? Relatedly, how could AI alter the fundamental calculus of deterrence?
- How could AI-assisted military systems affect regional stability?
- What is the connection between regional stability and strategic deterrence?
- What are the risks of unintended consequences and strategic surprise from AI?
This paper frames large questions and offers first-order arguments about them. It is intended to set an agenda, not to delve deeply into any particular aspect. It draws on ideas developed for a workshop convened at CGSR in September 2018 in partnership with Technology for Global Security, an NGO focused on these matters. The workshop engaged a diverse mix of public- and private-sector experts in an exploration of the emerging roles and consequences of AI. A summary of that workshop and an annotated bibliography aligned with its agenda are available at the CGSR website. This paper also draws on previous work at CGSR on disruptive and latent technologies and their roles in the twenty-first-century security environment.