A new report commissioned by the International Committee of the Red Cross raises concerns about militaries’ use of artificial intelligence systems in warfare.
The report, authored by Arthur Holland Michel, an external researcher contracted by the Red Cross, argues that current AI and computer systems introduce significant risks of “unaccountable errors” due to uncertainties, hidden assumptions, and biases—and that military personnel who act on AI-reached decisions need to be fully aware that those qualities are inherent in AI systems.
“The discourse on military AI at the moment kind of operates on this belief that computerized systems and AI systems are either right or wrong,” says Michel. For example, he says, if an AI system mischaracterizes an ambulance as a tank, causing a human to pull the trigger on a missile to destroy that vehicle, that human can currently pass the blame on to an AI system. But they shouldn’t be able to do that, reckons Michel.
The idea that AI systems are right or wrong in a binary sense is a “faulty narrative,” he says. It’s also a damaging one: trust in AI systems used in warfare means that AI tools are being rolled out further and more widely on the battlefield—compounding the problem of sorting AI’s good advice from its bad.
“The fact is, anytime that you put a computerized interface between a human and the thing that they’re looking at, there’s this gray area in which things can go wrong and no one can really be held accountable for it,” he says. “To think that these computerized systems that currently exist can be perfect and highly accountable, and that there is no such thing as a blameless error with the arrival of AI systems, is factually wrong at best, and very dangerous at worst.”
The issue is particularly pressing now given reporting by 972 Magazine on the Israeli military’s use of the Lavender and Gospel programs in Gaza. Both programs use AI to select targets in complicated, densely populated areas in which military personnel and civilians are alleged to intermingle, with what 972 Magazine reports are sometimes disastrous consequences. (Spokespeople for the Israel Defense Forces deny the claims of errors made in 972 Magazine.)
Michel, for his part, hopes the Red Cross report’s core findings foster greater understanding around the complexities of the AI issue. “These are uncomfortable questions about the optimization of any kind of decision in warfare,” he says. “We simply do not know [enough about current systems]. And that’s why the discourse around the use of AI in Gaza is kind of floundering.”