Report warns AI could usher in a new era of bioweapons

There have been lots of warnings about AI’s impact on people’s jobs and mental health, as well as the technology’s ability to spread misinformation. But a new report from the RAND Corporation, a California research institute, might be the most disturbing of all.

One day after venture capitalist Marc Andreessen published a lengthy manifesto on techno-optimism, arguing that AI can save lives “if we let it,” the think tank cautioned that the technology’s rapid advancement could increase its potential to be used in the development of advanced biological weapons. While the report acknowledges that AI alone isn’t likely to provide step-by-step instructions for creating a bioweapon, it argues that the technology could fill in knowledge gaps that previously stood in bad actors’ way, and that could be all the help they need.

“The speed at which AI technologies are evolving often surpasses the capacity of government regulatory oversight, leading to a potential gap in existing policies and regulations,” the report reads. “Previous biological attacks that failed because of a lack of information might succeed in a world in which AI tools have access to all of the information needed to bridge that information gap.”

The current lack of oversight, it further argues, could be just the window of opportunity that terrorists need.

AI is already offering more help than most people would be comfortable with, even if inadvertently. In one fictional scenario the researchers ran, a large language model (LLM), the technology underpinning generative AI tools like ChatGPT and Bard, “suggested aerosol devices as a method and proposed a cover story for acquiring Clostridium botulinum while appearing to conduct legitimate research.” In another, it discussed how to cause a large number of casualties.

The report did not disclose which AI system the researchers used to run the scenarios.

Bioweapons are an especially frightening threat for officials, not only because they can spread so widely (and mutate) but also because they’re far easier to create than to cure. RAND notes that resurrecting a virus similar to smallpox can cost as little as $100,000, while developing a vaccine can run over $1 billion.

While the report did raise a red flag about AI’s potential for harm in this space, RAND also noted that it “remains an open question whether the capabilities of existing LLMs represent a new level of threat beyond the harmful information that is readily available online.”

This isn’t the first time RAND has warned about the potentially catastrophic effects of AI. In 2018, the group looked at how artificial intelligence could affect the risk of a nuclear war. That study found that AI had “significant potential to upset the foundations of nuclear stability and undermine deterrence by the year 2040,” later adding, “Some experts fear that an increased reliance on AI could lead to new types of catastrophic mistakes. There may be pressure to use it before it is technologically mature.”

RAND emphasizes that its findings only hint at potential risk and do not provide a full picture of real-world impact.

However, RAND is hardly alone in its warnings about the catastrophic potential of AI. Earlier this year, a collective of AI scientists and other notable figures signed a statement from the Center for AI Safety, saying, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

https://www.fastcompany.com/90968614/rand-report-ai-dangers-bioweapons?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Created 2y | 17. 10. 2023 21:20:07

