OpenAI promises greater transparency on model hallucinations and harmful content

OpenAI has launched a new web page called the safety evaluations hub to publicly share information such as the hallucination rates of its models. The hub will also highlight whether a model produces harmful content, how well it follows instructions and how it holds up against attempted jailbreaks.

The tech company claims this new page will provide additional transparency into OpenAI, a company that, for context, has faced multiple lawsuits alleging it illegally used copyrighted material to train its AI models. Oh, yeah, and it's worth mentioning that The New York Times claims the tech company accidentally deleted evidence in the newspaper's plagiarism case against it.

The safety evaluations hub is meant to expand on OpenAI's system cards, which only outline a model's safety measures at launch; the hub should provide ongoing updates.

"As the science of AI evaluation evolves, we aim to share our progress on developing more scalable ways to measure model capability and safety," OpenAI states in its announcement. "By sharing a subset of our safety evaluation results here, we hope this will not only make it easier to understand the safety performance of OpenAI systems over time, but also support community efforts⁠ to increase transparency across the field." OpenAI adds that its working to have more proactive communication in this area throughout the company. 

Introducing the Safety Evaluations Hub—a resource to explore safety results for our models.

While system cards share safety metrics at launch, the Hub will be updated periodically as part of our efforts to communicate proactively about safety. https://t.co/c8NgmXlC2Y

— OpenAI (@OpenAI) May 14, 2025

Interested parties can look at each of the hub's sections and see information on relevant models, such as GPT-4.1 through 4.5. OpenAI notes that the information provided in this hub is only a "snapshot" and that readers should look at its system cards, assessments and other releases for further details.

One of the big buts to the entire safety evaluations hub is that OpenAI is the entity doing these tests and choosing what information to share publicly. As a result, there isn't any way to guarantee that the company will share all of its issues or concerns with the public.

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-promises-greater-transparency-on-model-hallucinations-and-harmful-content-184545691.html?src=rss