Can Europe actually lead the world on AI regulation?

The generative AI boom has sent governments worldwide scrambling to regulate the emerging technology, but it has also raised the risk of upending a European Union push to approve the world’s first comprehensive artificial intelligence rules.

The 27-nation bloc’s Artificial Intelligence Act has been hailed as a pioneering rule book. But with time running out, it’s uncertain whether the three EU institutions (the Parliament, member state governments in the Council, and the European Commission) can thrash out a deal Wednesday in what officials hope is a final round of closed-door talks.

Europe’s yearslong efforts to draw up AI guardrails have been bogged down by the recent emergence of generative AI systems like OpenAI’s ChatGPT, which have dazzled the world with their ability to produce humanlike work but raised fears about the risks they pose.

Those concerns have driven the U.S., U.K., China, and global coalitions like the Group of 7 major democracies into the race to regulate the rapidly developing technology, though they’re still catching up to Europe.

Besides regulating generative AI, EU negotiators need to resolve a long list of other thorny issues, such as a full ban on police use of facial recognition systems, which have stirred privacy concerns.

Chances of clinching a political agreement between EU lawmakers, representatives from member states, and executive commissioners “are pretty high partly because all the negotiators want a political win” on a flagship legislative effort, said Kris Shrishak, a senior fellow specializing in AI governance at the Irish Council for Civil Liberties.

“But the issues on the table are significant and critical, so we can’t rule out the possibility of not finding a deal,” he said.

Some 85% of the technical wording in the bill already has been agreed on, Carme Artigas, AI and digitalization minister for Spain, which holds the rotating EU presidency, said at a press briefing Tuesday in Brussels.

If a deal isn’t reached in the latest round of talks, starting Wednesday afternoon and expected to run late into the night, negotiators will be forced to pick it up next year. That raises the odds the legislation could get delayed until after EU-wide elections in June—or go in a different direction as new leaders take office.

One of the major sticking points is foundation models, the advanced systems that underpin general purpose AI services like OpenAI’s ChatGPT and Google’s Bard chatbot.

Also known as large language models, these systems are trained on vast troves of written works and images scraped off the internet. They give generative AI systems the ability to create something new, unlike traditional AI, which processes data and completes tasks using predetermined rules.

The AI Act was intended as product safety legislation, like similar EU regulations for cosmetics, cars, and toys. It would grade AI uses according to four levels of risk—from minimal or no risk posed by video games and spam filters to unacceptable risk from social scoring systems that judge people based on their behavior.

The new wave of general purpose AI systems released since the legislation’s first draft in 2021 spurred European lawmakers to beef up the proposal to cover foundation models.

Researchers have warned that powerful foundation models, built by a handful of big tech companies, could be used to supercharge online disinformation and manipulation, cyberattacks, or creation of bioweapons. They act as basic structures for software developers building AI-powered services so that “if these models are rotten, whatever is built on top will also be rotten—and deployers won’t be able to fix it,” said Avaaz, a nonprofit advocacy group.

France, Germany, and Italy have resisted the update to the legislation and are calling instead for self-regulation—a change of heart seen as a bid to help homegrown generative AI players, such as French startup Mistral AI and Germany’s Aleph Alpha, compete with big U.S. tech companies like OpenAI.

Brando Benifei, an Italian member of the European Parliament who is co-leading the body’s negotiating efforts, was optimistic about resolving differences with member states.

There’s been “some movement” on foundation models, though there are “more issues on finding an agreement” on facial recognition systems, he said.


By Kelvin Chan, Associated Press

https://www.fastcompany.com/90993812/can-europe-actually-lead-the-world-on-ai-regulation?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Created: December 6, 2023, 20:50:06
