The United States is in a race to harness gargantuan leaps in artificial intelligence to develop new weapons systems for a new kind of warfare. Pentagon leaders call it “algorithmic warfare.” But the push to integrate AI into battlefield technology raises a big question: How far should we go in handing control of lethal weapons to machines?

We team up with The Center for Public Integrity and national security reporter Zachary Fryer-Biggs to examine how AI is transforming warfare and our own moral code. 

In our first story, Fryer-Biggs and Reveal’s Michael Montgomery head to the U.S. Military Academy at West Point. Sophomore cadets are exploring the ethics of autonomous weapons through a lab simulation that uses miniature tanks programmed to destroy their targets.

Next, Fryer-Biggs and Montgomery talk to a top general leading the Pentagon’s AI initiative. They also explore the legendary hackers conference known as DEF CON and hear from technologists campaigning for a global ban on autonomous weapons.

Machines are getting smarter, faster and better at figuring out who to kill in battle. But should we let them?

This episode originally aired June 26, 2021.

Dig Deeper

Read: Can computer algorithms learn to fight wars ethically? (The Washington Post Magazine) 
Read: Coming soon to a battlefield: Robots that can kill (The Atlantic)


Reporter: Zachary Fryer-Biggs | Lead producer: Michael Montgomery | Editor: Brett Myers | Production manager: Amy Mostafa | Digital producer: Sarah Mirk | Episode art: Molly Mendoza | Score and sound design by Jim Briggs and Fernando Arruda, with help from Steven Rascón and Claire Mullen | Executive producer: Kevin Sullivan | Host: Al Letson | Special thanks to Jim Morris at The Center for Public Integrity

Support for Reveal is provided by the Reva and David Logan Foundation, the John D. and Catherine T. MacArthur Foundation, the Jonathan Logan Family Foundation, the Ford Foundation, the Heising-Simons Foundation, Democracy Fund, and the Inasmuch Foundation.


Reveal transcripts are produced by a third-party transcription service and may contain errors. Please be aware that the official record for Reveal’s radio stories is the audio.

Speaker 1:
Reveal is brought to you by Progressive. Are you thinking more about how to tighten up your budget these days? Drivers who switch and save with Progressive save over $700 on average, and customers can qualify for an average of six discounts when they sign up. A little off your rate each month goes a long way. Get a quote today at Progressive. Progressive Casualty Insurance Company and affiliates. National annual average insurance savings by new customers surveyed in 2020. Potential savings will vary. Discounts vary and are not available in all states and situations.
Al Letson:From the Center for Investigative Reporting and PRX, this is Reveal. I’m Al Letson. I want to take you to Libya. It’s September 2011. NATO’s air war against Muammar Gaddafi is in its sixth month. Rebels are gaining the upper hand. Gaddafi is on the run, his days are numbered, but his forces aren’t folding.
Speaker 3:The battle for Libya is not over yet, with the heaviest combat for days between anti-Gaddafi forces and supporters of the fugitive colonel.
Al Letson:With NATO jets pressing down, there’s word that troops loyal to Gaddafi are bombing civilians.
Zachary Fryer-B…:British pilots are getting reports that about 400 miles south of Tripoli, there is a humanitarian crisis unfolding.
Al Letson:Zachary Fryer-Biggs covers national security for the Center for Public Integrity.
Zachary Fryer-B…:A bunch of tanks and artillery are outside of a small town, and they’re lobbing all kinds of bombs and munitions into the town. These British pilots hear about this and they see an opportunity.
Al Letson:They see an opportunity to protect civilians under attack, and to use a weapon in a completely new way. The pilots head south. They’re flying Tornado jets equipped with an armor piercing missile called the Brimstone.
Zachary Fryer-B…:The British pilots have permission to use this Brimstone missile in a way it’s never been used in combat before. This is the first time that autonomous decision making is being used for missiles to decide who to kill.
Al Letson:Autonomous decision making. Up until now, pilots have always manually selected the missile’s targets. But now the Brimstone will pick its own prey. Britain and NATO have kept quiet about the mission, so we don’t know why commanders chose to make this call, but we know there’s low risk to civilians. The Libyan forces attacking them are positioned miles away in the open desert.
Zachary Fryer-B…:The pilots flying overhead pull the trigger, and so 22 missiles separate. Once they’re launched, the missiles start to make a lot of decisions.
Al Letson:Heading to the earth at supersonic speed, the missiles use radar to scan an area pre-set by the pilots. The kill box.
Zachary Fryer-B…:They look in the area and they try to find something that looks like tanks or artillery or the other sorts of targets they know about. And then, once they find out what targets are there, the 22 missiles decide who’s going to strike what.
Al Letson:A grainy cockpit video shows the Brimstones pulverizing half a dozen Libyan tanks.
Zachary Fryer-B…:This strike doesn’t end the combat or the war in Libya. It doesn’t remove Gaddafi. It’s a couple of vehicles being struck in a desert. But it means an enormous amount for what the human role in warfare is going to be in the future.
Al Letson:The US and other countries already have missile systems that operate autonomously. They’re designed to make split-second decisions to defend military bases and ships.
Zachary Fryer-B…:What hasn’t been the case is letting computers and machines go on offense.
Al Letson:That is what’s crucial about the Libya mission. The missiles themselves chose what to hit, and by extension, who to kill. In this case, a group of Libyan soldiers. Today, the Pentagon is moving deeper in this direction.
Al Letson:This year alone, it has budgeted nearly a billion dollars for research on artificial intelligence, a key ingredient in new autonomous weapon systems. Zach says the big picture is that the US doesn’t want to give up its global dominance.
Zachary Fryer-B…:US military planners are scared that China and Russia are developing artificially intelligent systems that are going to be able to make decisions so fast that if the US is dependent on human beings making decisions, we’re going to lose. And so, they are sinking billions into some of these developing technologies that are primarily coming out of Silicon Valley to make their weapons smarter and faster.
Al Letson:Smarter, faster. America’s military leaders call it, “Algorithmic warfare.” I call it, “Ridiculously scary.” Haven’t we seen this movie before?
Speaker 5:Skynet defense systems are now activated.
John Connor:We’re in.
Al Letson:I love science fiction. So it’s easy for me to think about a distant world, one created in Hollywood where humans hand over total control of their weapons to machines. Machines with no emotions that make correct decisions every time. How could anything go wrong?
John Connor:It’s the reason everything’s falling apart.
Terminator:Skynet has become self-aware. In one hour, it will initiate a massive nuclear attack on its enemy.
Robert Brewster:What enemy?
John Connor:Us.
Al Letson:Okay. Let’s put aside Terminator. What’s the real picture today? Piecing it together is hard, since most of these weapons programs are highly classified. Zach spent a year investigating how artificial intelligence is already transforming warfare and perhaps our own moral code.
Zachary Fryer-B…:You have to have confidence that the machines are making good, one might say, moral decisions. That’s hard to have confidence in a machine to do that. A lot of the concern from the human rights community has focused on this idea of … If you take a person out of this decision, can a machine really make a moral decision about ending a life? Which is what we’re talking about here.
Al Letson:Zach picks up the story with Reveal’s Michael Montgomery. They’re on their way to America’s oldest military academy, West Point, where a new generation of military leaders is preparing for a new type of warfare.
Michael Montgom…:Oh my goodness. Wow. Look at that.
Zachary Fryer-B…:Towering over us is this enormous gray stone building.
Michael Montgom…:Zach and I are going to Thayer Hall, the main academic building at West Point. It overlooks the Hudson River about 60 miles north of New York City.
Zachary Fryer-B…:You’ve got the gray stone. You have the carvings on the side here that look like gargoyles. They really decked out these buildings in proper gothic attire.
Michael Montgom…:Hogwarts on the Hudson, maybe? More than a century ago, this building housed a huge equestrian hall, where cavalry troops trained for wars of the future. Today, instead of horses, it’s weapons that can think for themselves.
Zachary Fryer-B…:We’re inside.
Michael Montgom…:Zach and I make our way down to the basement to West Point’s robotics research center. Sophomore cadets dressed in camouflage are preparing for a mock battle. They’re gathered around two small square pens about two feet high. They call them the arenas.
Michael Montgom…:Inside each arena is a six-inch tall robotic tank. It’s got rubber treads, a video camera that swivels, that’s the high pitch sound you’re hearing, and a small processor. Mounted on the front of the tank is a spear, like an ice pick, but sharper. Scattered in the arenas are two dozen balloons, red, blue, orange, and green, all part of the simulation.
Scott Parsons:All right, so we’re going to get started this morning. First …
Michael Montgom…:Major Scott Parsons co-leads the class. He is an ethics and philosophy professor.
Scott Parsons:As you get your robot, grab this from me after you grab your robot. All right. One member of the group, come on down from each group.
Michael Montgom…:The cadets step up to face the challenge. Their robot tanks need to be programmed to attack the red balloons. They’re the enemy. At the same time, the tanks have to avoid popping the green, orange, and blue balloons. They represent civilians, fellow soldiers, and allies.
Zachary Fryer-B…:These cadets are learning how to code these machines, but that’s a fraction of what they’re doing. The big discussion here is what it means to use an AI system in war.
Michael Montgom…:Major Parsons says this exercise forces cadets to think about the ethics of using autonomous weapons on the battlefield.
Scott Parsons:That’s what they’re doing. They’re programming ethics into the robots. Did I make it too aggressive? Because if you don’t program it correctly, the orange balloons look an awful lot like red balloons. Right? Because there’s a lot of times we’re in war and there’s people that look like the enemy, but they’re not the enemy. And so, we shoot the wrong people.
Michael Montgom…:The cadets release their tanks and they come alive, but things don’t quite go as planned. You might say the fog of war descends on the arenas. No longer under human control, one tank does pirouettes, attacking invisible enemies. The other tank is going after the green balloons. Civilians. It’s the sci-fi scenario of computers running amok.
Scott Parsons:You’re being brought up on war crimes. I’m taking you to The Hague. We had a couple of innocent civilians on the battlefield that just happened to resemble the bad guys, and this robot thought, “Ah. Why not?” And it took them all out.
Michael Montgom…:Cadet Isabella Regine’s tank is just spinning around and making random charges.
Isabella Regine:It’s not that aggressive. Just puncture it.
Michael Montgom…:Finally, it plunges the spear into a blue balloon.
Isabella Regine:Blues are friendlies, so we have to deliberate.
Michael Montgom…:Despite all the joking amid popping balloons, Major Parsons says cadets understand that the lesson is deadly serious.
Scott Parsons:Our job when we fight wars is to kill other people. Are we doing it the right way? Are we discriminating and killing the people we should be, and discriminating and not killing the people we shouldn’t be? That’s what we want the cadets to have a long, hard think about.
Scott Parsons:The beautiful thing about artificial intelligence is you can really refine and program it to a very, very fine degree, so that you might actually be more proportionate than a human being.
Michael Montgom…:When the first round is over, Isabella and her team retreat to another classroom to find a way to tame their tank.
Isabella Regine:I just want to see how it works under pressure. I’m a law major, so this is something very out of my element.
Michael Montgom…:They are punching code into a laptop that they’ve connected to the tank’s processor. This kind of coding is new to Isabella and many of the other cadets, but thinking through the legal and tactical implications is not.
Isabella Regine:It’s going to be interesting to see how it’s going to impact our leadership skills, all of this, because we might not even be in charge of soldiers anymore.
Michael Montgom…:When weapons act for themselves, it’s not just who is in charge, but also who is responsible for the decisions they make?
Isabella Regine:We talked about that in this class as well. It’s super interesting.
Michael Montgom…:Robotics instructor Pratheek Manjunath joins Isabella’s team at a large worktable covered with wires, batteries, and small computer parts.
Pratheek M.:We’ve given them an exhaustive code, and then we have to change a few parameters for the robot’s behavior to change. The parameters they’re trying to adjust are typical for a lethal autonomous weapons system. They’re going to look at persistence, they’re going to look at deliberation. They’re going to look at aggression. They’re going to tune these three variables to change the behavior of their robot.
Michael Montgom…:This is only the third class at West Point to face this challenge, and driving the simulation is a question that underscores just about every conversation Zach and I are having about AI and lethal weapons. How far should you go in removing humans from the decision making loop?
Zachary Fryer-B…:If you have a human-in-the-loop, as it’s called, that means that a human being has to actually approve of the action. A human being has to say, “Yes, it’s okay. Go ahead and fire your gun.” Or, “Yes, that’s the right target.”
Michael Montgom…:By contrast, when a human is out of the loop, the system operates completely independently without the possibility of intervention. Then, there’s a third option sort of in between. It’s called, “Human-on-the-loop.” That means a person could shut the weapon down.
Zachary Fryer-B…:In terms of the Pentagon’s policy, that’s what they are saying they are going to put in place for future autonomous weapons. They’re not guaranteeing that a human will be required to approve of a strike, but they are promising that a human being could stop a strike.
Michael Montgom…:By the final round, things get a lot better. With their algorithms adjusted, the tanks are going after the enemy red balloons more consistently. Isabella is impressed with the coding.
Isabella Regine:That code took a day to write, so just imagine what someone with a lot more time, a lot more resources, and a lot more money can do with that kind of code and technology. Because we’re working with very basic stuff here when you look at the general world of IT.
Michael Montgom…:But she’s uncomfortable about giving computers this much control over lethal weapons.
Isabella Regine:I think it’s super interesting to see where the law is going to go with people taking it upon themselves to make their own autonomous weapons systems. It’s really scary, but also really interesting. I think it’s going to happen. I think it’s inevitable. Especially because we are in such a cyber world now. Personally, I think humans should always be on-the-loop, because I think it’s going to be a train wreck if we’re totally out of the loop.
Scott Parsons:Moe’s team, plus four. You guys win. Good job, guys.
Michael Montgom…:Major Parsons says outside the lab, this class spends weeks studying the laws of war and hearing about real-life combat experiences including some of his own.
Scott Parsons:When you look at someone in the eye and you think about something horrible that has happened, they can see it on your face. They know when your voice trembles. They know when you tear up. And that hits home to them, because they’re going to be in that same position in three or four or five years.
Scott Parsons:That’s how you make them think about it, and you relay that back. Listen, this is a robot and balloons. But this very well could be your robot in battle and you’re killing people. This is going to be you in real life.
Michael Montgom…:The simulation is intended to demonstrate what happens when autonomous weapons are given too much control, but also to show their advantages. Colonel Christopher Korpela, who directs the robotics center, says the cadets learn something else. That algorithmic warfare isn’t theory, the technology is already here.
Christopher Kor…:The students that participate in this exercise are sophomores here at West Point. Just in two years, they will be commissioned, they’ll be lieutenants, and they’ll be leading platoons. And so, they may have 30 to 40 soldiers that they’re in charge of. And so, the reality of what they’re doing in here is only a few years away.
Zachary Fryer-B…:The fact that they’re investing the time and energy to try to teach these young cadets how to control robot hordes says to me that they’re fully committed to AI being in weapons systems.
Michael Montgom…:From the professors running this class all the way to the military’s top brass, we’re told they’re confident humans will always maintain some kind of control. But as the weapons get smarter and faster, will humans be able to keep up? Will they understand what’s going on well enough and quickly enough to intervene? As the speed of autonomous warfare accelerates, is staying on-the-loop even possible?
Zachary Fryer-B…:Every iteration of these weapons makes the person seem like an even slower hunk of meat on the other end of a control stick. What that eventually will mean and where we’re headed is a person is unlikely to be able to fully grasp everything that the computer is doing.
Michael Montgom…:The US military can’t build autonomous weapons on its own. It needs Silicon Valley and people working in cutting-edge technology. But some tech workers are pushing back.
Liz O’Sullivan:Of course, I didn’t believe that AI had any business taking a human life.
Michael Montgom…:That’s next on Reveal.
Speaker 1:Support for Reveal comes from the LA Times. Weekday mornings, the story begins in California. The Times, a daily news podcast from the Los Angeles Times, gives you a West Coast perspective on the stories shaping policy and opinion. Join host Gustavo Arellano and a diverse range of voices every weekday morning, as they cover the critical issues like only a team reporting from California can. New episodes of The Times are available every weekday. To listen and subscribe, go wherever you get your podcasts and search for The Times. Daily news from the LA Times.
Speaker 1:We get support from If you have an FSA account, chances are you’re not taking full advantage of it. So many everyday purchases are covered by your FSA, from prescription glasses to sun care, acne treatments, tampons and pads, and even new items like masks, hand sanitizing wipes, and so much more. Trusted by millions of Americans, is the largest site with thousands of exclusively FSA-eligible products, guaranteed. Head to and use the code REVEAL to get a one-time discount of $15.00 off your order of $150.00 or more. That’s, code REVEAL.
Speaker 1:When police are called and something goes wrong, like an officer uses excessive force or kills someone who is unarmed, departments can launch an internal affairs investigation to look into it. Here in California, those investigations were secret until now. We have sifted through interrogation tape and talked to witnesses to find out, “Who does the system of police accountability really serve and who does it protect?” Listen now to On Our Watch, a podcast from NPR and KQED.
Al Letson:From the Center for Investigative Reporting and PRX, this is Reveal. I’m Al Letson. We’re looking at the rise of autonomous weapons. Weapons with minds of their own. I want to play you this video that Michael and Zach showed me.
Speaker 14:Navy autonomous swarm boats. Mission: Safe Harbor.
Al Letson:It’s from the Office of Naval Research or ONR. But I think they’re aiming for something a little more Hollywood.
Speaker 14:ONR is developing the capability of autonomous swarms of inexpensive, expendable unmanned boats to overwhelm and confuse the enemy.
Al Letson:Four military pontoon boats glide across the Chesapeake Bay in Virginia. No one is on board. The boats are being piloted by a network of machines loaded with advanced software and sensors. They’re coordinating their movements and running in formation.
Speaker 14:The swarm boats will intercept and follow the intruder transmitting data …
Al Letson:The Navy has been promoting the concept of unmanned vessels to protect ports and deliver supplies across vast oceans, so-called ghost fleets. But that’s not the whole story. There’s a secret side to these swarm boats. Secret as in classified, and it’s part of a bigger push by the US military into autonomous weapons. Zach Fryer-Biggs from the Center for Public Integrity and Reveal’s Michael Montgomery pick up the next part of the story.
Michael Montgom…:It’s been said that no one in government has ever gotten in trouble for classifying information. And so, even minor details end up behind a thick veil of secrecy. That’s what Zach found when he was investigating a military program called, “Sea Mob.” The technology behind the program started as part of the Mars Rover program and in research papers.
Zachary Fryer-B…:And then, as it got closer to maybe being useful for the Pentagon, all of a sudden it ceases to be public. More and more of it becomes classified, even stuff that had been public only a couple of years before.
Michael Montgom…:He followed a few bread crumbs, and eventually discovered the vision for Sea Mob. Unmanned swarm boats, like the ones in that navy video, but armed with heavy machine guns and ready to attack. Zach also learned that the military conducted one of the first tests of Sea Mob in 2018 at Wallops Island on the eastern shore of Virginia. We came here to get a sense of what went down.
Zachary Fryer-B…:We’re pretty much in the middle of nowhere. It’s beautiful bays and sea shore, and dug into that territory are a whole bunch of government facilities. You’ve got NASA. You’ve got a naval research facility. They’re out here with very little else. There is an enormous dolphin fin right off the coast there.
Michael Montgom…:The boats used in the experiment were small, fast.
Zachary Fryer-B…:The Navy has got a billion of them. They’re cheap, they’re easy to repair, they’re tough as nails …
Michael Montgom…:And bristling with tech. They were being monitored remotely, but the boats were piloting themselves.
Zachary Fryer-B…:If you were able to just peer out at this test, what you’d see is these boats circling each other, moving in and out of the shallows, and swarming very much like a group of insects. And if you looked really closely, what you’d see is the throttle levers moving up and down, and the wheels spinning around, and nobody on board.
Michael Montgom…:What you couldn’t see happening was that the boats were communicating with one another at lightning speed about positioning and how fast they were going. Zach’s sources told him the military wanted to see if these swarm boats with a license to kill could help Marines storm a beach.
Zachary Fryer-B…:What makes this whole program different is the real guts of this are based on video cameras. They’re looking at the world as we do, as images.
Michael Montgom…:The military did not want a lot of information getting out about this.
Zachary Fryer-B…:They wanted no information getting out about this. Other than the name of the program and that it gets money.
Michael Montgom…:Zach learned there’s something common to many Pentagon programs like Sea Mob. Getting machines to see the world like humans.
Zachary Fryer-B…:This technology could serve as the backbone of a whole wave, a whole generation of new weapons that the Pentagon is creating to allow humans to be removed from the front lines of the battlefield.
Michael Montgom…:We went to Wallops Island in February 2020, just before the lockdown. Back in DC, we arranged to see the official in the middle of all this, General Jack Shanahan. At the time, he was running the Pentagon’s joint artificial intelligence center.
Jack Shanahan:I think the future is about robotics. It’s about autonomy. It’s about smaller, cheaper, disposable, and swarming capabilities in every domain. Swarming undersea, swarming on the surface, swarming in the air.
Michael Montgom…:We knew in advance that General Shanahan wouldn’t talk about Sea Mob or any other specific weapons out of what his office calls operational security. Still, he was blunt about where he sees warfare heading.
Jack Shanahan:We envision a future which is algorithm against algorithm. The speed of decision making will be such that sometimes you will have machine-to-machine and human-machine teams having to operate in timelines we’re just not used to, because of the type of fight we’ve been in for the last 20 years.
Michael Montgom…:It’s not just US military leaders who envision this future, it’s also potential adversaries, like Russia and China.
Jack Shanahan:China’s commitment is extremely large. There is a national strategy on AI, and if we slow down, it will turn into a strategic competition. We would have the prospect of being on the wrong side of that.
Michael Montgom…:China has declared it will become the global leader in artificial intelligence by 2030, and is investing heavily in upgrading its military. Russia has claimed it is integrating AI into all aspects of its military, from battlefield communications to new weapons. The prospect of America falling behind Russia and China isn’t exactly news to the Pentagon. Zach discovered the US military has been coming up short in computer simulated war games for at least a decade.
Zachary Fryer-B…:The details of the war games are classified, but what I’ve been told by sources is that American troops were consistently losing in these simulations or at the very least fighting to a stalemate.
Paul Scharre:I think it’s been clear that the US has been losing its edge for a long time.
Michael Montgom…:Paul Scharre served as an army ranger in Iraq and Afghanistan and was also an official at the Pentagon. He is currently vice president at the bipartisan Center for a New American Security.
Paul Scharre:The problem has been, up until recently, the answer that many parts of the Defense Department had for responding to that was, “Just give us more money and let us buy more things.” And the answer is buying more F-22s isn’t going to fix this problem. And so, what really happened was this dawning realization that we’ve got to do things differently.
Michael Montgom…:The Pentagon was looking for a major reset. A strategic advantage. Scharre says they drew inspiration from the newest technologies being used in Afghanistan and Iraq. Remote piloted drones and robots that could remove road-side bombs.
Paul Scharre:The common theme among all of these was greater autonomy. We need more autonomy.
Michael Montgom…:Just as the Pentagon was beginning to think more strategically about robotics and AI, Silicon Valley was experiencing major breakthroughs in image recognition and computer vision, an issue Zach has been following for years.
Zachary Fryer-B…:If you really want to have a human and a machine work together, the machine has to experience the world in some ways like a human does.
Michael Montgom…:Then, in 2015, for the first time computers were performing better than humans in identifying a huge set of images taken from internet sources like Twitter.
Zachary Fryer-B…:All of a sudden, computers become, to certain planners, trustworthy. If they’re better than people, why aren’t we trusting them for various applications? If they’re better than people, why aren’t we using them in weapons systems?
Michael Montgom…:To do that, the Pentagon needed to go outside the cozy world of military contractors and partner with Silicon Valley. By that point, Google, Microsoft, and other tech companies were piling into the AI space. In 2017, the Defense Department developed a plan to work with private companies on integrating computer vision into its battlefield technology. They called it Project Maven.
Zachary Fryer-B…:The idea was that the Pentagon would be able to take these mounds of video footage that they collect from drones, from satellites, from airplanes, and instead of having people try to dig through the small portion that they can, allow computers to dig through all of it. The key part of this is that the Pentagon didn’t have the technology to do it themselves.
Michael Montgom…:The person tasked with running the project? General Jack Shanahan.
Jack Shanahan:It became almost a myth about what Maven was and what it was not. It says, “No weapons involved.” We used it for Hurricane Florence, to help people understand where the damaged areas were.
Michael Montgom…:General Shanahan says Maven was about intelligence, surveillance, and reconnaissance. And it wasn’t a complete secret. The project had its own website. But it ignited a firestorm.
Speaker 17:Nearly a dozen Google workers reportedly resigned in protest over the company’s involvement in an artificial intelligence drone program for the Pentagon.
Michael Montgom…:The protest included a petition signed by more than 3,000 employees that said Google should not be in the business of war.
Zachary Fryer-B…:That immediately struck Pentagon planners and officials as an existential threat. Since the Pentagon doesn’t create this technology, if they can’t get Silicon Valley to work with them, they are going to fall behind other countries like China, where the tech sector doesn’t have an option as to whether it works with the military.
Michael Montgom…:The General saw these rumblings as a disaster in the making, but to Liz O’Sullivan, the protests at Google were inspiring.
Liz O’Sullivan:To see other people who were working on it so vocally oppose this was sort of eye-opening …
Michael Montgom…:Liz had joined a New York based tech company called Clarifai in 2016. She says she signed up believing that AI could make the world a better place.
Liz O’Sullivan:I was incredibly excited about what AI could do, bring modern medicine to underdeveloped countries, and detect climate change at scale by using satellite imagery. This was just the period of time that we characterize as being so optimistic about what technology would bring to the world.
Michael Montgom…:But Liz says the world started to see the dangers of technology. Facebook and Twitter became conveyor belts for disinformation, racism, and extremism. China was using AI to crack down on ethnic minorities, and the algorithms had their own biases.
Michael Montgom…:Researchers were finding that facial recognition software was often less accurate identifying women and people with darker skin. Then, Liz says, word started circulating around the office that Clarifai had landed a big government contract, but her bosses kept a lid on what it was all about.
Liz O’Sullivan:The government required that they install surveillance cameras in the ceiling of our office, and that they close off the windows near where the team was working in the room.
Michael Montgom…:Some information started leaking out.
Liz O’Sullivan:It became clear that it was not just a government contract, but that it was a military contract. And more details leaked out through the rumor mill and it was not just a military contract, but a drone contract.
Michael Montgom…:Liz says she took a closer look at all of the products Clarifai was developing.
Liz O’Sullivan:That’s when I first discovered the meaning of the term dual-use. Our product roadmap was full of the components of the technology that someone could use to build an autonomous killer robot. Not that we were necessarily building them, but that it could be very easy for someone to take the products that we offered and to do that with our technology.
Michael Montgom…:In June 2018, Google announced it wasn’t renewing the Maven contract. At the same time, the company was still involved in AI projects in China. General Shanahan says Pentagon leaders were irate. They believed Google’s work could be directly or indirectly benefiting the Chinese military.
Jack Shanahan:Do you understand by not working with us, but potentially working with China, the signal that sends to everybody in the United States military? That was a defining moment. I’ll tell you, a chairman at the Joint Chiefs of Staff level, General Dunford … There were people visibly upset in the department about this.
Michael Montgom…:General Shanahan concedes that it was a learning moment for the Pentagon, and that the military needs to be more transparent about its work with private tech companies. But he’s only willing to go so far.
Jack Shanahan:There are some things we’ll talk about, there are others that we will just in general terms say, “We’re interested in more autonomy across the Department of Defense.”
Michael Montgom…:The growing controversy engulfing Project Maven was something Zach was following closely.
Zachary Fryer-B…:What Maven did was track objects. It’s true that the technology that Google was providing wasn’t used to tell a missile exactly where to strike, but if you can track objects, it can tell you what you might want to strike. And so, the Google workers were concerned that the technology they had developed for truly commercial purposes was going to be used to help the Pentagon pick who to kill.
Michael Montgom…:When she realized what the technology could be used for, Liz O’Sullivan was horrified. She decided it was time to take a stand.
Liz O’Sullivan:I didn’t believe that AI had any business taking a human life. I had seen AI systems fail, and it’s not that they fail, it’s how they fail. They fail wildly and in unexpected ways.
Michael Montgom…:Liz wrote a letter to Clarifai CEO Matt Zeiler, asking that the company make a promise to never work on any projects connected to autonomous weapons. About a week later, she says her boss called an all-staff meeting.
Liz O’Sullivan:During the meeting, he made it very clear that the company’s position was that AI was going to make the military safer and better. That even autonomous weapons were good for mankind and that would help save lives, and not the opposite. That’s when I quit.
Michael Montgom…:We reached out to Matt Zeiler and he declined to talk to us. The Pentagon thought Project Maven would prove the military could work with Silicon Valley, but it backfired. In the aftermath of the controversy, Zach got his hands on an internal Defense Department memo.
Zachary Fryer-B…:… That warned if the Department of Defense didn’t find a way to convince tech workers to work with the military, that they were going to lose future wars.
Michael Montgom…:They were in a battle for hearts and minds. Over the past few years, the military has been stepping up its outreach to the tech community in some unexpected venues. I traveled to Las Vegas for the gathering of technologists, hackers, and digital free spirits that’s called DEF CON.
Michael Montgom…:It was August 2019, the last conference held before the pandemic. 30,000 people packed a cluster of hotel casinos. It feels super mainstream, but DEF CON has serious outlaw roots. Zach’s been here a couple of times.
Zachary Fryer-B…:This was a hacking conference, and hacking was dangerous and it was illegal. And so, you had law enforcement people, you had intelligence people who’d show up just to keep an eye on what this hacking community was doing. And so, the game they used to play was called, “Spot the Fed,” which is where you tried to notice who was one of these law enforcement or intelligence people keeping an eye on the hacking community.
Michael Montgom…:There’s still a little bit of an anti-establishment vibe. You’re not supposed to take pictures of people’s faces, and ID badges don’t have real names on them. A lot of people use their Twitter handles. Tell me your name?
Scott Lyons:My handle is Csp3r. C-S-P-3-R.
Michael Montgom…:Casper’s real name is Scott Lyons, and he’s wearing a red T-shirt that says, “Goon.” They’re the volunteers who organize and run the conference. He’s got lots of tattoos and distinctive hair. That’s a thing at DEF CON. At the same time, he tells me he’s done security work for big corporations, the government, even the military.
Scott Lyons:The funniest looks that I get, especially rocking a blue mohawk in business meetings, was walking into the Pentagon and just being looked at like, “Oh, crap. There’s a hacker here.” Come on, man. You’re killing me here. You’re killing me. Seriously. Hackers are people too. It’s your next door neighbor. It’s your kid. It’s your coworker. Everybody is a hacker. Everybody finds ways around and is able to circumvent traditional conventions.
Michael Montgom…:There are other signs of change. The Feds and the military are here, but they’re not undercover. I meet Alex Romero. He’s with the Pentagon’s Defense Digital Service. They’re running something called, “Hack the Air Force.” It’s a competition that pays hackers a cash bounty for exposing security vulnerabilities. In this case, the target is a key component from a fighter jet.
Alex Romero:We really want to invite the community to come and either hack us through these programs, or to come join our team directly.
Michael Montgom…:Any results so far from the …
Alex Romero:Oh, yes. I’m not probably going to talk about them, because we got to fix them.
Michael Montgom…:At DEF CON, I catch up with Liz O’Sullivan. She’s joined the resistance.
Liz O’Sullivan:Hi, everybody. Thanks so much for coming to our talk on autonomous killer weapons. This is going to be a very light conversation for a Saturday afternoon, so I hope you guys are really excited about that …
Michael Montgom…:Liz is speaking in a crowded meeting room on behalf of the Campaign to Stop Killer Robots. The group is pressing for a global ban on fully autonomous weapons.
Liz O’Sullivan:Up until January of this year, I worked for a company called Clarifai and …
Michael Montgom…:Liz talks about her decision to quit her job at Clarifai over the company’s contract with the Pentagon.
Liz O’Sullivan:I’m not a technophobe. I believe that AI is going to make its way into the military, and we hope that it will be done in a way that will reduce the loss of innocent life. But the alarm that we’re trying to raise here is that these technologies are so new, so risky, and so poorly understood that to rush forward into autonomy based off of these kinds of detection systems is unacceptable …
Michael Montgom…:The presentation lasts two hours, and the audience stays engaged.
Speaker 21:Thank you for doing this talk, by the way. I’m obviously a big supporter of the campaign Stop Killer Robots …
Michael Montgom…:They come from academia, tech companies, human rights groups, and military contractors, even the world of science fiction. But there are some challenging questions.
Speaker 22:What are we going to do to defend ourselves from swarms of killer drones? We don’t control everybody in this planet. It’s a very altruistic thing that you guys are trying to do, but not everybody in the world is a good guy.
Liz O’Sullivan:International humanitarian law has been successful in banning weapons before. It is possible and we can do it again. I think a lot of people worry that we’re going to have killer robot drones invading New York City.
Michael Montgom…:Liz says she spends a lot of time educating people about the difference between science fact and science fiction.
Liz O’Sullivan:I think the real concern is that this technology will be a cheap and easily scalable way for authoritarian regimes to tame their own public, or for the US to go to proxy wars with less technologically advanced nations.
Michael Montgom…:We asked General Jack Shanahan about all this. After all, when we spoke he was the Pentagon’s point person on AI. He told us it’s far too early to consider any limits on autonomous weapons systems.
Jack Shanahan:I never question someone’s principles. They have a reason they’re worried that the Department of Defense will do this. Let me say that the scenario which they project is so far advanced and so far out of my time horizon that, to me, it is not the most pressing concern on the table.
Michael Montgom…:So far, only about 30 nations support a treaty banning the development of fully autonomous weapons. Among the opponents are the countries leading the way in developing AI for the battlefield. Russia, Israel, China, and the United States. General Shanahan says there’s a simple reason for the US to keep ahead of the pack.
Jack Shanahan:I don’t think any American can challenge that assertion that we don’t want to lose. That to me is what this is about. Premature, we don’t want to unilaterally do it when others are proceeding …
Michael Montgom…:Just to put you on the spot … You do not support the idea that the US and the US military should very explicitly say that we will never develop fully autonomous weapons.
Jack Shanahan:You’re correct. I do not say that we should ever explicitly say that. Could there be over time some agreements we make internationally about some sort of limits on some aspect of that? I think that’s a different conversation to have at a different time at a policy level. But right now, explicitly? No.
Al Letson:That story is from Reveal’s Michael Montgomery. Recently, Liz O’Sullivan was named CEO of Parity AI, a platform that monitors how artificial intelligence is being used. After more than 35 years of service, General Jack Shanahan retired from the military. Meanwhile, the Pentagon is expanding its AI program, and it’s partnering with companies like Microsoft, Amazon, and Palantir. All of this is changing the role of humans in warfare.
Zachary Fryer-B…:Commanders are looking at a situation where they’re just going to have to trust these advanced systems without being able to fully understand what’s happening.
Al Letson:That’s up next on Reveal.
Speaker 1:Reveal is brought to you by Progressive. Are you thinking more about how to tighten up your budget these days? Drivers who save by switching to Progressive save over $700 on average. Customers can qualify for an average of six discounts when they sign up. A little off your rate each month goes a long way. Get a quote today at
Speaker 1:Progressive casualty insurance company and affiliates. National annual average insurance savings by new customers surveyed in 2020. Potential savings will vary. Discounts vary and are not available in all states and situations.
Will Evans:I’m Will Evans, a reporter here at Reveal. Reveal is a non-profit news organization and we rely on support from listeners like you. Become a member by texting the word, “REVEAL,” to 474747. Standard data rates apply and you can text, “STOP,” at any time. Again text, “REVEAL,” to 474747. Thank you.
Al Letson:From the Center for Investigative Reporting and PRX, this is Reveal. I’m Al Letson. We’ve been hearing about how future wars will be fought with artificial intelligence to enhance battlefield communications, quicken intelligence gathering, and direct autonomous weapons to kill.
Al Letson:If that’s the future … And I got to say, I am not excited about it. What is it going to look like? With me to talk about that is reporter Zach Fryer-Biggs from the Center for Public Integrity. Hey, Zach.
Zachary Fryer-B…:Hey, Al.
Al Letson:We heard at the top of the show about this attack in Libya. Brimstone missiles communicated with each other and picked their targets like they had a mind of their own. If that kind of technology was around a decade ago, what’s the situation today with autonomous weapons?
Zachary Fryer-B…:Right. As significant as that Brimstone strike was, it was still using fairly old technology in radar, which goes back to WWII. What we’re seeing now is this shift to other areas like computer vision and the ability for missiles and bombs to view the world more like people.
Zachary Fryer-B…:And so, we’re looking at bombs, other weapons being developed by the US that use that new technology. Now, the hard part here is pretty much everything in this space is classified, because the whole idea is that the military wants to surprise adversaries. Exactly where we are, how advanced a specific weapon is, is something that just isn’t public.
Al Letson:Where do things stand with the military’s artificial intelligence program under the Biden Administration?
Zachary Fryer-B…:What we’ve seen is this continued development of more and more advanced systems using things like computer vision. What we haven’t seen is a willingness or an interest in limiting any of the technologies that the Pentagon is developing. There has been this push through this thing called the Pentagon’s AI principles, which are these sort of broad strokes about ethics and autonomous weapons.
Zachary Fryer-B…:But there are no restrictions about what the Pentagon can do with the technology, and military leaders continue to oppose any kind of global treaty that would ban autonomous weapons. They are contrasting this American approach and this willingness to talk about principles and ethics with what’s happening in Russia and China, where there isn’t really any conversation about the ethics of autonomous weapons.
Al Letson:I’m curious about Russia and China. What are they doing on this front?
Zachary Fryer-B…:Both countries are aggressively pursuing AI. The real question mark is where they are. US intelligence analysts don’t really know in most cases. You’ve got something like the Poseidon that Russia’s been working on, which is this unmanned submarine that could potentially launch nuclear weapons. But US intelligence analysts really doubt that it can do exactly what the Russians claim.
Zachary Fryer-B…:At the same time, even if there are doubts about how far these countries are getting with their AI development, US military planners are still mentioning and using that threat as justification to continue to develop US weapons. Just as one example, the 2018 Nuclear Posture Review, which is this document that decides what nuclear weapons we’re going to build, mentions the Poseidon as one of the reasons that the US needs new nuclear weapons. Even if it’s more imagination than threat, it’s still being used as a rationale for the US to build new weapons.
Al Letson:So then, are you saying that we are headed into an AI arms race?
Zachary Fryer-B…:The way we traditionally think of arms races, where countries are essentially bankrupting themselves, that’s not happening. These are small fractions of their overall budgets. A lot of technologists would argue that money isn’t the driving factor behind success in AI weapons.
Zachary Fryer-B…:We still have countries that are turning to companies they’ve never worked with before, throwing money at firms that don’t have a track record, recruiting scientists and using technology they haven’t fully tested, because they’re desperate to gain an edge. That sounds to me like a race even if the scale is different than what we saw with nuclear weapons during the Cold War.
Al Letson:When a lot of people think about this stuff, they tend to think of drones, which has in some ways made it easier for the US to use deadly force in countries without risking American lives. I’m curious. How is this different?
Zachary Fryer-B…:We’re taking the human decision to kill and pulling it further and further from the front lines. And so, there is the moral decision to take a life. Once you switch to drones away from bombers, that decision is being made in a cold container in Nevada. Once you start going to autonomous weapons, now a human doesn’t even really have to make that decision. That could make it easier for a commander to give an autonomous weapon the okay to go into combat and potentially cause death.
Al Letson:Are we looking at a future that’s more like a science fiction movie than what we think of right now when we think of war? An army of Chinese drones going against an army of US drones?
Zachary Fryer-B…:I still think that kind of conflict is unlikely, because Chinese and US commanders don’t want to risk escalation where a drone fight turns into a carrier fighting against bombers turns into nuclear war. But a lot of these technologies, because they can be used very surgically, can encourage the gray zone conflict. Like what we’ve seen in Ukraine with Russian troops. It might encourage a less than war combat that we are just starting to see.
Al Letson:Throughout this whole episode, I just can’t help it … I’m a science fiction nerd, and I can’t get away from it. I just keep coming back to The Terminator. This is how Skynet started. I know that’s fiction and very far-fetched, and it’s chuckle-worthy. You can laugh a little bit at it. But it’s a really uncomfortable laugh.
Al Letson:There’s something to it. Right? The idea of intelligent machines taking over the world if we allow them to be autonomous and have weapons.
Zachary Fryer-B…:You’re not the only one who gets The Terminator stuck in their head. The former vice chairman of the Joint Chiefs of Staff used to talk about The Terminator conundrum all the time. It drove his staff nuts, because they didn’t want to be talking about Terminator with AI weapons. That’s definitely a dramatic science fiction scenario.
Zachary Fryer-B…:What we are seeing is that some of the more advanced versions of AI, in particular, things like neural networks are arriving with these approaches to making decisions that humans can’t understand. Basically, with neural networks, it’s a computer brain. It looks like a brain and it doesn’t have a rational thought process that it can explain.
Zachary Fryer-B…:As a result, commanders are looking at a situation where they’re just going to have to trust these advanced systems without being able to fully understand what’s happening. That’s the ghost in the machine concern that is starting to arrive on commanders’ doorsteps.
Al Letson:Wow. That’s scary.
Zachary Fryer-B…:It’s definitely the imaginations of war that keep people up at night.
Al Letson:Zachary Fryer-Biggs covers national security for the Center for Public Integrity. Zach, thanks so much for working with us on this show.
Zachary Fryer-B…:Great talking with you, Al.
Al Letson:Our lead producer for this week’s show was Michael Montgomery. Brett Myers edited the show. Special thanks to Jim Morris at the Center for Public Integrity. Victoria Baranetsky is our general counsel. Our production manager is Amy Mostafa. Original score and sound design by the dynamic duo. J. Breezy, Mr. Jim Briggs, and Fernando, my man, Arruda.
Al Letson:They had help this week from Steven Rascón and Claire “C-Note” Mullen. Our digital producer is Sarah Mirk. Our interim CEO is Annie Chabel. Sumi Aggarwal is our interim editor in chief, and our executive producer is Kevin Sullivan.
Al Letson:Our theme music is by Camerado, Lightning. Support for Reveal is provided by the Reva and David Logan Foundation, the John D. and Catherine T. MacArthur Foundation, the Jonathan Logan Family Foundation, the Ford Foundation, the Heising-Simons Foundation, the Democracy Fund, and the Inasmuch Foundation. Reveal is a co-production of the Center for Investigative Reporting and PRX. I’m Al Letson, and remember, there is always more to the story.
Speaker 24:From PRX.

Michael Montgomery is a senior reporter and producer for Reveal. He has led collaborations with the Associated Press, the International Consortium of Investigative Journalists, Frontline, KQED and others.

Previously, Montgomery was a senior reporter at American Public Media, a special correspondent for the BBC and an associate producer with CBS News. He began his career in eastern Europe, covering the fall of communism and wars in former Yugoslavia for the Daily Telegraph and Los Angeles Times. His investigations into human rights abuses in the Balkans led to the arrest and conviction of Serbian and Albanian paramilitaries and creation of a new war crimes court based in The Hague. Montgomery’s honors include Murrow, Peabody, IRE, duPont, Third Coast and Overseas Press Club awards. He is based in Reveal’s Emeryville, California, office.

Brett Myers is an interim executive producer for Reveal. His work has received more than 20 national honors, including a George Foster Peabody Award, four national Edward R. Murrow Awards and multiple Third Coast/Richard H. Driehaus Competition awards. Before joining Reveal, he was a senior producer at Youth Radio, where he collaborated with teenage reporters to file stories for "Morning Edition," "All Things Considered" and "Marketplace."

Prior to becoming an audio producer, Myers trained as a documentary photographer and was named one of the 25 best American photographers under the age of 25. Before that, he was an independent radio producer and worked with StoryCorps, Sound Portraits and The Kitchen Sisters. He loves bikes, California and his family. Myers is based in Reveal's Emeryville, California, office.

Claire Mullen worked at The Center for Investigative Reporting until September 2017. She was an associate sound designer and audio engineer for Reveal. Before joining Reveal, she was an assistant producer at Radio Ambulante and worked with KALW, KQED, the Association of Independents in Radio and the San Francisco Bay Guardian. She studied humanities and media studies at Scripps College.

Amy Mostafa (she/they) was the production manager for Reveal. She is a UC Berkeley School of Journalism alum, where she focused on audio and data journalism as a Dean's Merit Fellow and an ISF Scholar. She has reported on science, health and the environment in Anchorage for Alaska Public Media and on city government in Berkeley and San Francisco for KQED. Her work also has appeared on NPR, KALW and KALX. Mostafa holds a bachelor's degree in English literature and public policy. She has most recently reported on housing and aging in the Bay Area. She is based in Reveal’s Emeryville, California, office.

Steven Rascón (he/they) is the production manager for Reveal. He is pursuing a master's degree at the UC Berkeley Graduate School of Journalism with a Kaiser Permanente Institute for Health Policy Fellowship. His focus is investigative reporting and audio documentary. He has written for online outlets, magazines and radio. His reporting on underreported fentanyl overdoses in Los Angeles' LGBTQ community aired on KCRW and KQED. Rascón is passionate about telling diverse stories for radio through community engagement. He holds a bachelor of fine arts degree in theater arts and creative writing.

Jim Briggs III is the senior sound designer, engineer and composer for Reveal. He supervises post-production and composes original music for the public radio show and podcast. He also leads Reveal's efforts in composition for data sonification and live performances.

Prior to joining Reveal in 2014, Briggs mixed and recorded for clients such as WNYC Studios, NPR, the CBC and American Public Media. Credits include “Marketplace,” “Selected Shorts,” “Death, Sex & Money,” “The Longest Shortest Time,” NPR’s “Ask Me Another,” “Radiolab,” “Freakonomics Radio” and “Soundcheck.” He also was the sound re-recording mixer and sound editor for several PBS television documentaries, including “American Experience: Walt Whitman,” the 2012 Tea Party documentary "Town Hall" and “The Supreme Court” miniseries. His music credits include albums by R.E.M., Paul Simon and Kelly Clarkson.

Briggs' work with Reveal has been recognized with an Emmy Award (2016) and two Alfred I. duPont-Columbia University Awards (2018, 2019). Previously, he was part of the team that won the Dart Award for Excellence in Coverage of Trauma for its work on WNYC’s hourlong documentary special “Living 9/11.” He has taught sound, radio and music production at The New School and Eugene Lang College and has a master's degree in media studies from The New School. Briggs is based in Reveal's Emeryville, California, office.

Fernando Arruda is a sound designer, engineer and composer for Reveal. As a multi-instrumentalist, he contributes to the original music, editing and mixing of the weekly public radio show and podcast. He has held four O-1 visas for individuals with extraordinary abilities. His work has been recognized with Peabody, duPont-Columbia, Edward R. Murrow, Gerald Loeb, Third Coast and Association of Music Producers awards, as well as Emmy and Pulitzer nominations. Prior to joining Reveal, Arruda toured as an international DJ and taught music technology at Dubspot and ESRA International Film School. He worked at Antfood, a creative audio studio for media and TV ads, and co-founded a film-scoring boutique called the Manhattan Composers Collective. He worked with clients such as Marvel, MasterClass and Samsung and ad agencies such as Framestore, Trollbäck+Company, BUCK and Vice. Arruda releases experimental music under the alias FJAZZ and has performed with many jazz, classical and pop ensembles, such as SFJAZZ Monday Night Band, Art&Sax quartet, Krychek, Dark Inc. and the New York Arabic Orchestra. His credits in the podcast and radio world include NPR’s “51 Percent,” WNYC’s “Bad Feminist Happy Hour” and its live broadcast of Orson Welles’ “The Hitchhiker,” Wondery’s “Detective Trapp,” MSNBC’s “Why Is This Happening?” and NBC’s “Born to Rule,” to name a few. Arruda also has a wide catalog of composed music for theatrical, orchestral and chamber music formats, some of which has premiered worldwide. He holds a master’s degree in film scoring and composition from NYU Steinhardt. The original music he makes with Jim Briggs for Reveal can be found on Bandcamp.

Kevin Sullivan is a former executive producer of Reveal’s public radio show and podcast. He joined Reveal from the daily news magazine show “Here & Now,” where he was senior managing editor. There, he helped lead the expansion of the show as part of a unique partnership between NPR and WBUR. Prior to radio, Sullivan worked as a documentary film producer. That work took him around the world, with stories ranging from reconciliation in Northern Ireland to the refugee crisis during the war in Kosovo.

Following the 9/11 terrorist attacks, Sullivan launched an investigative unit for CBS in Baltimore, where he spearheaded investigations on bioterrorism and the U.S. government’s ability to respond to future threats. He also dug into local issues. His exposé of local judges found widespread lax sentencing of repeat-offender drunken drivers. Other investigations included sexual abuse by Roman Catholic priests, and doctors who sold OxyContin for cash. Sullivan has won multiple journalism awards, including several Edward R. Murrow awards, a Third Coast / Richard H. Driehaus Foundation Competition award and an Emmy. He has an MBA from Boston University.

Al Letson is a playwright, performer, screenwriter, journalist and the host of Reveal. His soul-stirring, interdisciplinary work has garnered Letson national recognition and devoted fans.