OpenAI promises greater transparency on model hallucinations and harmful content

OpenAI has launched a new web page called the safety evaluations hub to publicly share information related to things like the hallucination rates of its models. The hub will also highlight whether a model produces harmful content, how well it follows instructions and how it holds up against attempted jailbreaks.

The tech company claims this new page will provide additional transparency into OpenAI, a company that, for context, has faced multiple lawsuits alleging it illegally used copyrighted material to train its AI models. Oh, yeah, and it's worth mentioning that The New York Times claims the tech company accidentally deleted evidence in the newspaper's plagiarism case against it.

The safety evaluations hub is meant to expand on OpenAI's system cards. Those only outline a model's safety measures at launch, whereas the hub should provide ongoing updates.

"As the science of AI evaluation evolves, we aim to share our progress on developing more scalable ways to measure model capability and safety," OpenAI states in its announcement. "By sharing a subset of our safety evaluation results here, we hope this will not only make it easier to understand the safety performance of OpenAI systems over time, but also support community efforts⁠ to increase transparency across the field." OpenAI adds that its working to have more proactive communication in this area throughout the company. 

Introducing the Safety Evaluations Hub—a resource to explore safety results for our models.

While system cards share safety metrics at launch, the Hub will be updated periodically as part of our efforts to communicate proactively about safety. https://t.co/c8NgmXlC2Y

— OpenAI (@OpenAI) May 14, 2025

Interested parties can look at each of the hub's sections and see information on relevant models, such as GPT-4.1 through 4.5. OpenAI notes that the information provided in the hub is only a "snapshot," and that readers should consult its system cards, assessments and other releases for further details.

One of the big caveats to the entire safety evaluations hub is that OpenAI is the entity running these tests and choosing what information to share publicly. As a result, there isn't any way to guarantee that the company will share all of its issues or concerns with the public.

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-promises-greater-transparency-on-model-hallucinations-and-harmful-content-184545691.html?src=rss