When Technology Can Be Used To Build Weapons, Some Workers Take A Stand

May 13, 2019

On the night of Jan. 16, Liz O'Sullivan sent a letter she'd been working on for weeks. It was directed at her boss, Matt Zeiler, the founder and CEO of Clarifai, a tech company. "The moment before I hit send and then afterwards, my heart, I could just feel it racing," she says.

The letter asked: Is our technology going to be used to build weapons?

With little government oversight of the tech industry in the U.S., it's tech workers themselves who increasingly are raising these ethical questions.

O'Sullivan often describes technology as magic. She's 34 — from the generation that saw the birth of high-speed Internet, Facebook, Venmo and Uber. "There are companies out there doing things that really look like magic," she says. "They feel like magic."

Her story began two years ago, when she started working at Clarifai. She says one of her jobs was to explain the company's product to customers. It's visual recognition technology, used by websites to identify nudity and inappropriate content. And doctors use it to spot diseases.

Clarifai was a startup, founded by Zeiler, a young superstar of the tech world. But shortly after O'Sullivan joined, Clarifai got a big break — a government contract, reportedly for millions of dollars.

Matt Zeiler, CEO of Clarifai, says his company's technology will help American soldiers and civilians.
Courtesy of Clarifai

It was all very secretive. At first, the people assigned to work on the project were in a windowless room, with the glass doors covered.

O'Sullivan would walk by and wonder: What are they doing in there?

Zeiler says the contract required secrecy, but everyone working directly on the project knew what it was about. "We got briefed before even writing a single line of code," he says. "And I also briefed everybody I asked to participate on this project."

NPR spoke to one employee who did work directly on the project. That person, who requested anonymity for fear of retaliation, says many of the workers in that room were not entirely clear what this was going to be used for. After all, the technology they were putting together was the same technology they had been working on for other projects.

In the months that followed, former employees say, information started trickling down.

They were working with the Department of Defense.

Then, people working on the project got an email that outlined some details. The text included a brief reference to something called Project Maven.

The Pentagon told NPR that the project, also called Algorithmic Warfare, was created in April 2017. Its first task was to use computer vision technology for drones in the campaign against ISIS.

"This could be more effective than humans, who might miss something or misunderstand something," explains Ben Shneiderman, a computer scientist at the University of Maryland. "The computer vision could be more accurate."

Shneiderman had serious ethical concerns about the project. And he wasn't alone. Many people in the tech world were starting to wonder: What will the technology we're building be used for down the road?

O'Sullivan says this question began to haunt her too.

The big fear among tech activists is that this will be used to build autonomous weapons: weapons programmed to find targets and kill people without human intervention.

The Department of Defense's current policy requires that autonomous weapons "allow commanders and operators to exercise appropriate levels of human judgment."

It's a definition many find murky. And last year, tech workers began to ask a lot of questions. "It's a historic moment of the employees rising up in a principled way, an ethical way and saying, we won't do this," Shneiderman says.

In 2018, Microsoft employees protested their company's work with Immigration and Customs Enforcement. And several thousand employees demanded that Google stop working on Project Maven. Google did not renew its contract with the project.

Last June, Clarifai CEO Matt Zeiler also weighed in. In a blog post, he explained why the company was working on a military project.

O'Sullivan read that with interest. "You know, the people running these companies are sort of techno-utopians. And they believe that tech is going to save the world and that we really just have to build everything that we can, and then figure out where the cards fall. But there are a lot of us out here saying, should we be building this at all?"

Former Clarifai employees told NPR that at the office, the mood got tense.

There were plenty of people who felt comfortable working on Project Maven. Others resented that it had been so secretive. And some just found it morally troubling.

As the months went by, O'Sullivan says she realized she couldn't change the direction of the company. So at the beginning of this year, she wrote that letter to Zeiler and sent it to the whole staff.

"We have serious concerns about recent events and are beginning to worry about what we are all working so hard to build," she wrote.

She went on to ask a bunch of questions. Many of them are the same ones being asked across the tech world today.

Are you going to let us know who we're selling our stuff to?

Are you going to vet how it's used?

Do we care if this is used to hurt people?

A week after she sent that letter, she says Zeiler spoke at a staff meeting. "He did say that our technology was likely to be used for weapons," O'Sullivan says, "and autonomous weapons at that."

Zeiler does not deny this. In fact, he says, countries like China are already doing it. The U.S. needs to step it up.

"We're not going to be building missiles, or any kind of stuff like that at Clarifai," he says. "But the technology ... is going to be useful for those. And through partnerships with the DOD and other contractors, I do think it will make its way into autonomous weapons."

This is where he and O'Sullivan disagree.

Should companies like Clarifai, Google and Amazon be involved in military projects?

Zeiler says Clarifai's technology will help save American soldiers. "At the end of the day, they're out there to do a mission. And if we can provide the best technology so that they can accurately do their mission, in the worst case, there might be a human life at the other end that they're targeting. But in many cases it might be a weapons cache [without] any humans around, or a bridge to slow down an enemy threat."

And, Zeiler says, it's going to help minimize civilian casualties by improving the accuracy of weapons.

O'Sullivan wasn't buying that. She quit the day after the staff meeting. She describes herself as a conscientious tech objector.

She went on to join a startup that advises companies on how to make trustworthy artificial intelligence.

She says she still thinks tech can be really wonderful — or really dangerous. Like playing with magic.

Copyright 2019 NPR. To see more, visit https://www.npr.org.

ARI SHAPIRO, HOST:

Even if you can advance technology, create the next great app or a robot that fights wars, should you? We're exploring that question on this month's All Tech Considered.

(SOUNDBITE OF ULRICH SCHNAUSS' "NOTHING HAPPENS IN JUNE")

SHAPIRO: Here in the U.S., there is little government oversight of the tech industry. So more and more it is the tech workers themselves who are raising ethical concerns. NPR's Jasmine Garsd reports on one company and an employee who says she'd had enough.

JASMINE GARSD, BYLINE: Earlier this year, on the night of January 16, Liz O'Sullivan hit send on a letter she'd been working on for weeks. It was directed at her boss, Matt Zeiler, the founder and CEO of Clarifai, a tech company.

LIZ O'SULLIVAN: The moment before I hit send, I mean, and then afterwards, my heart - I could just feel it racing.

GARSD: The letter asked the question, is our technology going to be used to build weapons? O'Sullivan is 34. She's from the generation that saw the birth of high-speed Internet, Facebook, Venmo, Uber. She often describes technology as magic.

O'SULLIVAN: There are companies out there doing things that really look like magic. They feel like magic.

GARSD: O'Sullivan's story begins two years ago, when she started working at Clarifai. She says one of her jobs was to explain the company's product to customers. It's visual recognition technology. It's used by websites to identify nudity and inappropriate content. Doctors use it to spot disease. It was a startup. But shortly after O'Sullivan joined, Clarifai got a big break - a government contract reportedly for millions of dollars. It was all very secretive.

At first, the people assigned to work on that government project were in a windowless room with the glass doors covered. O'Sullivan would walk by and wonder, what are they doing in there? Matt Zeiler, CEO of Clarifai, says the contract required secrecy. But everyone working directly on the project knew what it was about. Here's Zeiler.

(SOUNDBITE OF ARCHIVED RECORDING)

MATT ZEILER: We got briefed before even writing a single line of code. And I also briefed everybody I asked to participate on this project.

GARSD: NPR spoke to one employee who did work directly on the project. That person, who requested anonymity for fear of retaliation, said many of the workers in that room were not entirely clear what this was going to be used for. The technology they were putting together, it's the same that they had been working on for other projects. In the months that followed, former employees say information started trickling down. They were working with the Department of Defense.

Then people working on the project got an email that outlined some details. In the text, a blink-and-you'll-miss-it reference to something called Project Maven. The Pentagon told NPR that Project Maven was created in April 2017. It's also called algorithmic warfare. Its first task was to use computer vision technology for drones in the campaign against ISIS.

BEN SHNEIDERMAN: This could be more effective than humans, who might miss something or misunderstand something, that the computer vision could be more accurate.

GARSD: That's professor Ben Shneiderman, a computer scientist at the University of Maryland, talking on Skype. He had serious ethical concerns about the project. He wasn't alone. Many people in the tech world were starting to wonder, what is this technology we're building going to be used for down the road? Liz O'Sullivan says this question began to haunt her, too. The big fear among tech activists is, will this be used toward building autonomous weapons? That's weapons that are programmed to find targets and kill people without human intervention.

The Department of Defense's current policy requires that autonomous weapons, quote, "allow commanders and operators to exercise appropriate levels of human judgment." It's a definition many find murky. And in 2018, tech workers began to ask a lot of questions. Here's professor Shneiderman again.

SHNEIDERMAN: It's a historic moment of the employees rising up in a principled way, an ethical way and saying, we won't do this.

GARSD: Microsoft employees protested their company's work with Immigration and Customs Enforcement. And several thousand employees demanded that Google stop working on Project Maven. Google did not renew its contract with the project. In June of last year, Clarifai CEO Matt Zeiler also weighed in. In a blog post, he explained why the company was working on a military project. Liz O'Sullivan read that with interest.

O'SULLIVAN: You know, the people running these companies are sort of techno utopians. And they believe that tech is going to save the world and that we really just have to build everything that we can and then figure out where the cards fall. But there are a lot of us out here saying, should we be building this at all?

GARSD: Former Clarifai employees told NPR that at the office, the mood got tense. There were plenty of people who felt comfortable working on Project Maven. Others resented that it had been so secretive. And some just found it morally troubling. As the months went by, O'Sullivan says she realized she couldn't change the direction of the company. So at the beginning of this year, she wrote that letter to CEO Matt Zeiler and sent it to the whole staff. Here she is reading an excerpt.

O'SULLIVAN: (Reading) We have serious concerns about recent events and are beginning to worry about what we're all working so hard to build.

GARSD: She goes on to ask a bunch of questions. Many of them are the same questions being asked across the tech world today. Like, are you going to let us know who we're selling our stuff to? Are you going to vet how it's used? Do we care if this is used to hurt people? A week after she sent that letter, there was a staff meeting where Zeiler spoke.

O'SULLIVAN: He did say that our technology was likely to be used for weapons - and autonomous weapons at that.

GARSD: Clarifai CEO Matt Zeiler does not deny this. In fact, he says, countries like China, they're already doing it. The U.S. needs to step it up.

(SOUNDBITE OF ARCHIVED RECORDING)

ZEILER: We're not going to be building missiles or any kind of stuff like that at Clarifai. But the technology, like I was saying, is going to be useful for those. And through partnerships with the DOD and other contractors, I do think it will make its way into autonomous weapons.

GARSD: Here's where he and O'Sullivan disagree. Should companies like Clarifai, Google and Amazon be involved in military projects? Zeiler says Clarifai's technology is going to help save American soldiers.

(SOUNDBITE OF ARCHIVED RECORDING)

ZEILER: At the end of the day, they're out there to do a mission. And if we can provide the best technology so that they can accurately do their mission, you know, in the worst case, there might be a human life at the other end that they're targeting. But in many cases, it might be a weapons cache that's not - any humans around or a bridge to slow down an enemy threat.

GARSD: And Zeiler says also it's going to help minimize civilian casualties by improving the accuracy of weapons. O'Sullivan wasn't buying that. She quit the day after the staff meeting. She describes herself as a conscientious tech objector. She went on to join a startup that advises companies on how to make trustworthy artificial intelligence. She says she still thinks tech can be really wonderful or really dangerous, like playing with magic. Jasmine Garsd, NPR News, New York. Transcript provided by NPR, Copyright NPR.