A group of U.S. senators demand OpenAI turn over safety data

A group of five U.S. senators is demanding that OpenAI submit data showing how it plans to meet the safety and security commitments it has made about its artificial intelligence systems, after a growing number of employees and researchers raised red flags about the technology and the company’s safety protocols.

In a letter to OpenAI CEO Sam Altman, the lawmakers—four Democrats and one Independent—asked a series of questions about how the company is working to ensure its AI cannot be misused to provide potentially harmful information to members of the public, such as instructions for building weapons or assistance in coding malware. The group also sought assurances that employees who raise potential safety issues would not be silenced or punished.

The warnings voiced by former employees have led to a flurry of media reports—and prompted the senators to question how the company is addressing safety concerns.

“We seek additional information from OpenAI about the steps that the company is taking to meet its public commitments on safety, how the company is internally evaluating its progress on those commitments, and on the company’s identification and mitigation of cybersecurity threats,” the five senators wrote in a letter obtained by The Washington Post.

Earlier this month, a whistleblower filed a complaint with the Securities and Exchange Commission (SEC), accusing OpenAI of preventing employees from warning officials about the risks of the company’s products. The alleged restrictions included nondisclosure (and non-disparagement) agreements, as well as a requirement that employees obtain the company’s prior consent before discussing confidential information with federal regulators.

OpenAI did not reply to Fast Company’s request for comment about the lawmakers’ letter.

The whistleblower complaint followed several public warnings from workers who opted to leave OpenAI, the maker of ChatGPT. Jan Leike, a co-leader of the company’s superalignment group (dedicated to ensuring that AI stays aligned with the goals of its makers), left the company in May and was harshly critical of Altman and the company, saying executives made it harder for him and his team to ensure the company’s AI systems aligned with human interests.

“Over the past few months, my team has been sailing against the wind. Sometimes we were struggling for compute [total computational resources] and it was getting harder and harder to get this crucial research done,” he wrote. “Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity.”

Months before Leike’s comments, another member of the superalignment team, Leopold Aschenbrenner, was fired for allegedly leaking information to journalists, according to The Information.

OpenAI has previously said it removed non-disparagement terms from staff employment agreements, but the senators asked company officials to confirm that those terms will not be enforced against current or former employees, and whether OpenAI is willing to “commit to removing any other provisions from employment agreements that could be used to penalize employees who publicly raise concerns about company practices.”

The letter also requested additional information on a number of other issues, including:

  • Whether OpenAI has procedures in place for when employees raise safety concerns
  • Details about its security protocols
  • Whether independent experts are allowed to test OpenAI systems before those systems are released
  • Which patterns of misuse and safety risks the company has observed following the release of its most recent large language models

OpenAI was reportedly given a deadline of August 13 to supply the information requested by the senators.

https://www.fastcompany.com/91161510/senators-demand-openai-sam-altman-turn-over-safety-data?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Created 1y ago | Jul 23, 2024, 23:50:05

