During U.N. meetings this week about the future of warfare, experts grappled with the possibility that AI and robots could take over the decisions to use lethal force.
Some autonomous weapons already exist in the field. Many U.S. naval ships, for example, carry cannons that will shoot down incoming missiles without any human input. On the border between North and South Korea, automated sentry turrets were designed to fire on targets autonomously, though their human minders ultimately decided to keep fire control for themselves.
But is any military actually proposing to give robots more fire control right now? For years, stakeholders have met in a U.N. committee to weigh the possibilities, and the debate still isn't settled.
Human Rights Watch attends the U.N. meetings to push for a hard ban on lethal robots. It's long held that countries should commit to a legal treaty that would keep future weapons systems from running themselves.
"We're talking about the loss of human, lack of human control, be it meaningful or adequate or appropriate over the selection and attacks of targets," said Mary Wareham, advocacy director for the Arms Division at Human Rights Watch.
But on the other side of the table, the U.S. has argued that more research into autonomous tools could reduce the risk of injury from warfare for the military and for civilians alike.
For what it's worth, Department of Defense policy currently allows the U.S. military to develop whatever autonomous systems it wants. It just can't deploy any of them unless "human judgment" is still involved.
In the meantime, no government will cut humans all the way out of the picture. But the trend is getting clearer: the usefulness of AI and autonomous tools in warfare may simply be too great to ignore. Robots can be tougher than humans; they don't lose focus, they process information faster, and they don't get emotional.
"We're not trying to stop autonomy," Wareham said. "We're not trying to stop artificial intelligence, robotics or their uses by militaries around the world. ... We understand that there are many benefits. We just draw the line at weaponizing artificial intelligence to the point that you do not control the weapon system."
The U.N. committee's continuing job is to define that line as clearly as it can. Because whatever role robots end up filling, their presence will eventually influence warfare, security and human rights everywhere.