A new report commissioned by the International Committee of the Red Cross raises concerns about militaries’ use of artificial intelligence systems in warfare.
The report, authored by Arthur Holland Michel, an external researcher contracted by the Red Cross, argues that current AI and computer systems introduce significant risks of “unaccountable errors” due to uncertainties, hidden assumptions, and biases, and that military personnel who act on AI-generated recommendations need to be fully aware that those qualities are inherent in AI systems.
“The discourse on military AI at the moment kind of operates on this belief that computerized systems and AI systems are either right or wrong,” says Michel. For example, he says, if an AI system mischaracterizes an ambulance as a tank, causing a human to pull the trigger on a missile to destroy that vehicle, that human can currently pass the blame on to an AI system. But they shouldn’t be able to do that, reckons Michel.
The idea that AI systems are right or wrong in a binary sense is a “faulty narrative,” he says. It’s also a damaging one, because trust in AI systems used in warfare means that AI tools are being rolled out further and more widely on the battlefield—compounding the problem of sorting AI’s good advice from the bad.
“The fact is, anytime that you put a computerized interface between a human and the thing that they’re looking at, there’s this gray area in which things can go wrong and no one can really be held accountable for it,” he says. “To think that these computerized systems that currently exist can be perfect and highly accountable, and that there is no such thing as a blameless error with the arrival of AI systems, is factually wrong at best, and very dangerous at worst.”
The issue is particularly pressing now given reporting by 972 Magazine on the Israeli military’s use of the Lavender and Gospel programs in Gaza. Both programs use AI to select targets in complicated, densely populated areas in which military personnel and civilians are alleged to intermingle, with what 972 Magazine reports are sometimes disastrous consequences. (Spokespeople for the Israel Defense Forces deny the claims of errors made in 972 Magazine.)
Michel, for his part, hopes the Red Cross report’s core findings foster greater understanding around the complexities of the AI issue. “These are uncomfortable questions about the optimization of any kind of decision in warfare,” he says. “We simply do not know [enough about current systems]. And that’s why the discourse around the use of AI in Gaza is kind of floundering.”