Robots and the Future of War
Hosted by Eric S. Lander 63:26 min
Fully autonomous lethal weapons—robots that can select and engage targets without human intervention—are fast becoming possible. They might minimize casualties and protect civilians in times of war. But is it morally wrong to put a computer in charge of life-or-death decisions? Could system errors lead to flash wars? And as war grows faster and more complex, is it even feasible to keep humans in the loop?
Professor, Stanford University & CEO, Kitty Hawk Corporation
Sebastian Thrun and his students developed Stanley, a robot that in 2005 won the $2 million DARPA Grand Challenge for driving itself across a 132-mile, winding course. He is a research professor at Stanford University and a Google Fellow. At Google, Thrun founded Google X, which is home to radically innovative projects like the Google self-driving car. He is also the co-founder of online educational company Udacity, and the CEO of Kitty Hawk, which works on flying cars.
Photo credit: World Economic Forum
Senior Fellow and Director of the Technology and National Security Program,
Center for a New American Security
Dr. Paul Scharre served as an infantryman, sniper, and reconnaissance team leader in the Army’s 75th Ranger Regiment and completed multiple deployments to Iraq and Afghanistan. He later worked in the Office of the Secretary of Defense helping to establish policies on autonomous systems and new weapons technologies. Today, he is Director of the Technology and National Security Program at the Center for a New American Security. He is the award-winning author of Army of None: Autonomous Weapons and the Future of War.
Regents' Professor, Georgia Tech College of Computing & Director,
Mobile Robot Laboratory
Ronald Arkin is a roboticist and one of the country’s leading roboethicists. He is the Regents' Professor and Director of the Mobile Robot Laboratory in the College of Computing at the Georgia Institute of Technology, where he studies behavior-based control, action-oriented perception for mobile robots, robot survivability, multiagent robotics, human-robot interaction, and more. He has also held positions at robotics labs in Stockholm, Tokyo, and Toulouse, France.
Director, Humans and Autonomy Laboratory,
Duke University
Mary “Missy” Cummings was one of the U.S. Navy's first female fighter pilots, serving from 1988-1999. She is currently at Duke University, where she is the director of the Humans and Autonomy Laboratory (HAL) and a professor in the Department of Electrical and Computer Engineering. Her research interests include human-robot interaction, human-autonomous system collaboration, and the ethical and social impact of technology.
Advocacy Director, Arms Division,
Human Rights Watch
Mary Wareham serves as the global coordinator of the Campaign to Stop Killer Robots. She is the Advocacy Director of the Arms Division at Human Rights Watch, where she leads advocacy against weapons that pose a significant threat to civilians. Previously, Wareham served as Advocacy Director for Oxfam New Zealand, leading its efforts to secure an arms trade treaty and the 2008 Convention on Cluster Munitions.
Director,
Belfer Center for Science and International Affairs
Ash Carter served as the 25th U.S. Secretary of Defense, from 2015 to 2017, after holding the number two and number three positions in the Pentagon. A Rhodes Scholar who received a Ph.D. in theoretical physics, he used his expertise in national security and technology for more than 35 years to defend the United States, under presidents of both political parties as well as in the private sector. Today, Carter is the director of the Belfer Center for Science and International Affairs at Harvard Kennedy School and an Innovation Fellow at MIT.
Featured in the Boston Globe
An OpEd by guest Mary Wareham.
An OpEd by guest Ronald Arkin.
Referenced in the Episode
A 2013 BBC News interview with Stanislav Petrov, former member of the Soviet Air Defenses.
Paul Scharre’s 2018 book published by W.W. Norton engages military history, global policy, and cutting-edge science to explore the implications of giving weapons the freedom to make life and death decisions.
A 2017 arms control advocacy video explores a dystopian future in which miniature drones equipped with AI, facial recognition, and three grams of explosive, programmable to kill a specific individual, become widely available.
Missy Cummings’s 2004 paper published in the IEEE Technology and Society Magazine examines the ethical and social issues that arise when humans are further removed from their violent actions.
A 2018 publication from Human Rights Watch that argues that fully autonomous weapons violate International Humanitarian Law.
This open letter calling for a “ban on offensive autonomous weapons beyond meaningful human control” has been signed by over 4,500 AI and robotics researchers since 2015.
This 2012 Directive establishes Department of Defense policy and assigns responsibilities for the development and use of autonomous and semi-autonomous functions in weapon systems.
In this 2018 paper published on the Belfer Center’s website, former Secretary of Defense Ash Carter compares the development of modern-day disruptive technologies to the Manhattan Project.
In this 2013 paper published in AISB Quarterly, robot ethicist Ron Arkin argues that Lethal Autonomous Weapons could make for a more “humane” battlefield.
A 2018 pledge from the Future of Life Institute calling for laws against lethal autonomous weapons. The pledge was signed by Elon Musk, Demis Hassabis, Mustafa Suleyman, more than 170 organizations, and more than 2,400 individuals.
An August 2020 report by Human Rights Watch reviews the policies of 97 countries regarding the use of lethal autonomous weapons.