“I was ready to risk it all, just to get people to talk about this issue.”
It was late 2018. Liz O’Sullivan began to wonder what was happening in a windowless room at Clarifai, the artificial intelligence startup where she was head of data operations. Information that trickled out made her fear that her company’s visual recognition technology was being used to build weaponry, specifically Lethal Autonomous Weapons Systems (LAWS), known more colloquially as “killer robots”: weapons able to select and kill people without any human control.
In 2019, Liz sent a three-page letter to the company’s CEO, asking for an assurance that this wasn’t and would never be the case. Instead, she says, “he announced that he was totally willing to sell autonomous weapons technology to our government.”
Liz resigned, and both her letter of concern and a “Why I Quit” piece written for the American Civil Liberties Union (ACLU) went public. Through that protest, Liz, 35, now co-founder and vice president of commercial operations at an AI safety startup, “discovered a calling.” She now serves as a member of the International Committee for Robot Arms Control and as technology director for the Surveillance Technology Oversight Project (STOP). Autonomous weapons systems already exist, she emphasizes, and “endanger us all.”
Did you ever anticipate that tech work would lead you to activism?
Never! But a lot of us in the tech industry who originally felt we were working for the greater good now believe so much has gone wrong: the concentration of money and power, behavioural targeting, the danger that facial recognition can pose...
Is it a swell of discontent—or a tsunami?
The latter. The industry is largely libertarian, but that doesn’t mean people are naïve. It’s painful to be working for a company that you can tell has compromised its values.
What pushed you to step forward?
In 2015, Elon Musk, Stephen Hawking and thousands of scientists signed a letter pledging that [artificial intelligence] research would be used to benefit humankind. I’d been nominated to work with the board on ethics oversight at Clarifai, and through a nerdy conversation with a friend I realized that autonomous weapons weren’t some future problem. I worried that if we didn’t work as a society to set rules on robot-human engagement, it might be too late. I began the uphill battle of trying to put some constraints on my own company’s facial recognition technology—who we’d sell it to and what projects we’d work on for the Department of Defense. My goal was to eliminate autonomous weapons from that roadmap. When I realized that wasn’t possible, I knew I needed to approach things differently. If the company was willing to make a decision, unilaterally, on behalf of humanity, I couldn’t live with that.
Was there a price in going so public?
There were months when I would wake up and, once I realized where I was, the cortisol would flood my body and my heart would race. I was in a situation where I could lose everything. I thought there was a good chance I’d never work in tech again. It may sound crazy, but I was ready to risk it all, just to get people to talk about this issue.
Why? What’s so frightening about killer robots?
Claims that weapons powered by artificial intelligence are safe and reliable are wrong. Artificial intelligence models are very brittle. They’re trained for a particular purpose and are predictive, meaning they can only understand something they’ve seen before. If anything changes, they fail, often in unpredictable ways. To simplify, let’s say you’re trying to train a machine to recognize what a cat looks like. You can present a million photos of different cats, but the minute you show it the image of a dog, the algorithm won’t perform as expected. With [killer robots], you’re not talking about sending out a single drone with a job to do, but [you are sending out] swarms of them—and when they make mistakes, it will be in weird ways we can’t imagine. I see a lot of collateral damage, and I think it will happen in the most vulnerable parts of the world.
Whether or not we decide to arm robots is a species-level question. It should be decided by the human community, not a set of lone governments, powerful CEOs, or organized groups. But we have no global policy. We’ve seen in the last five years that when you let unregulated tech operate on its own, it does harm.
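To make the brittleness point concrete, here is a minimal sketch—ours, not Liz’s, with data and model invented purely for illustration—of how a predictive model handles an input unlike anything in its training set: it doesn’t refuse or flag the input, it simply assigns one of the labels it knows, often with near-total confidence.

```python
# A toy illustration of model brittleness: a classifier trained only on
# inputs it has seen will still produce a confident prediction for an
# input from a distribution it has never encountered.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training data: two kinds of "cats" the model has seen before,
# represented here as simple 2-D feature vectors.
tabby  = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
calico = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(100, 2))
X = np.vstack([tabby, calico])
y = np.array([0] * 100 + [1] * 100)  # 0 = tabby, 1 = calico

model = LogisticRegression().fit(X, y)

# A "dog": a point far outside anything in the training set.
dog = np.array([[30.0, -30.0]])
label = model.predict(dog)[0]
confidence = model.predict_proba(dog)[0].max()

# The model has no notion of "I have never seen this"; it just picks
# one of the two labels it knows, with near-certain confidence.
print(f"predicted: {'tabby' if label == 0 else 'calico'}, "
      f"confidence: {confidence:.3f}")
```

The failure mode is exactly the one she describes: the confident answer and the meaningless input look identical from the outside, which is what makes such systems dangerous when the stakes are lethal.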
What does the Campaign to Stop Killer Robots seek?
The Campaign to Stop Killer Robots has 130 groups in 60 countries. We’re working at the international level for a United Nations treaty that would ban fully autonomous weapons, and require meaningful human control over the use of lethal force. I help as someone who has firsthand experience with artificial intelligence. I go with the policy and disarmament experts to talk to diplomats and their staff about how these systems actually work and what my fears are.
How hopeful of progress can we be?
To be truthful, I don’t think this issue will go away any time soon. Even if we have a treaty, it would need to be auditable and enforceable. Artificial intelligence can do quite a lot of good for the world, and we need to find a way to retain those benefits while we eliminate the threats. It will take time. What’s most heartening to me, and I just said this when I spoke at the UN, is that no matter who I’m talking to, whether Republican or Democrat, young or old, I’ve never met a single person who thinks that giving guns to robots is a good idea.
LEARN MORE
Read Liz's piece for the ACLU, "I Quit My Job to Protest My Company’s Work on Building Killer Robots".
Learn more about the Campaign to Stop Killer Robots, endorsed by Nobel peace laureate Jody Williams, on their website.
Watch Liz speak at the United Nations about the dangers of lethal autonomous weapons systems.