Civil society groups issue a surprise open letter to the U.K.’s AI Safety Summit

The U.K.’s landmark AI Safety Summit, a conclave of around 120 representatives from leading AI companies, academia, and civil society taking place at Bletchley Park this week, has encountered an unexpected wrinkle: an urgent letter from nearly a dozen participants who believe the summit is targeting the wrong issues.

Representatives from 11 civil society groups, all of which are included on the government’s official guestlist for the event, have signed an open letter urging the politicians gathered to prioritize regulation to address the full range of risks that AI systems can raise, including current risks already impacting the public.

“While potential harms of ‘frontier’ models may have motivated the Summit, existing AI systems are already having significant harmful impacts on people’s rights and daily lives,” the letter explains.

The letter argues that focusing on those risks would be a better use of time than the existential risks of a superintelligent AI, a scenario whose likelihood many of those studying the field question, and one that has so far framed the schedule of the AI Safety Summit.

“Governments must do better than today’s discussions suggest,” the letter reads. “It is critical that AI policy conversations bring a wider range of voices and perspectives into the room, particularly from regions outside of the Global North.” The guest list for this week’s summit includes representatives from the U.S., Chinese, Nigerian, and Indian governments but notably lacks many academics and civil society campaigners.

The letter continues: “Framing a narrow section of the AI industry as the primary experts on AI risks further concentrating power in the tech industry, introducing regulatory mechanisms not fit for purpose, and excluding perspectives that will ensure AI systems work for all of us.”

In a statement issued to Fast Company, Alexandra Reeve Givens, one of the signatories and CEO of the Center for Democracy & Technology, says the letter was prepared “because we worried that the Summit’s narrow focus on long-term safety harms might distract from the urgent need for policymakers and companies to address ways that AI systems are already impacting people’s rights.” (Reeve Givens is attending the summit this week.)

Chinasa T. Okolo, a fellow at the Brookings Institution, which is a participant at the summit, adds that the summit has been light on any discussion around the harms that AI can cause to data labelers, “who are arguably the most essential to AI development.”

“There is much more work needed to understand the harms of AI development and increase labor protections for data workers, who are often left out of conversations on the responsible development of AI,” she adds.

The letter also asks participants at the AI Safety Summit to ensure that “companies cannot be allowed to assign and mark their own homework. Any research efforts designed to inform policy action around AI must be conducted with unambiguous independence from industry influence.”

Criticism has been levied against the summit organizers for focusing on points of discussion that favor incumbent companies that have already developed successful AI tools. The list of those invited to the event, critics argue, also skews conversations in favor of maintaining the status quo.

The letter’s signatories hope the conversation will be recast on the second day of the summit to take up more of the issues that have so far been overlooked at Bletchley Park.

“Today what we’ve seen is the U.S. and the U.K. set out voluntary policies for the management of the most powerful models we have,” says Michael Birtwistle, associate director of the Ada Lovelace Institute, a research institute represented at the summit. “The last decade of voluntary commitments from tech leaders are no meaningful replacement for hard rules.”

https://www.fastcompany.com/90976518/civil-society-groups-open-letter-uk-ai-safety-summit?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Created 2y | Nov 1, 2023, 16:40:04
