A group of internet businesses, including Roblox, Google, OpenAI, and Discord, have cofounded a nonprofit called Robust Open Online Safety Tools (ROOST).
The new organization will fund free, open-source tools for online businesses to promote online safety, says Naren Koneru, Roblox’s vice president of engineering, trust, and safety. The move follows years of efforts by Roblox to restrict inappropriate messaging on its platform, which is widely used by children and has at times come under fire for not doing enough to combat sexual content and adult sexual predators.
And while human moderators are part of that equation, AI and automation have become critical for intercepting real-time unwanted messages across the platform’s 85 million daily active users, Koneru says.
“These decisions need to happen within milliseconds,” he says.
Among the tools Roblox has developed is an open-source AI model that analyzes audio clips to detect profanity, racism, bullying, sexting, and other disallowed content. The model was released to the public last July, available on GitHub and the AI platform Hugging Face, and it has since been downloaded more than 20,000 times. The company has since developed a new version of the model, with support for additional languages including Spanish, French, German, and Japanese, as well as additional infrastructure for fine-tuning the model to particular needs. That will likely be open-sourced by the end of the first quarter of 2025, and the company anticipates unveiling other open-source tools for classifying content later in the year.
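The shape of such a classifier can be pictured with a toy sketch. This is illustrative only, not Roblox's actual model (whose weights and real categories live on Hugging Face): a multi-label classifier maps an input to per-category scores, and anything above a threshold gets flagged. The category names, keyword lists, and threshold below are invented assumptions.

```python
# Toy sketch of a multi-label safety classifier's decision logic.
# NOT Roblox's model: categories, keywords, and threshold are
# illustrative assumptions. A real system scores audio (or its
# transcript) with a trained model; here scores come from a crude
# keyword heuristic just to show the flagging step.

CATEGORIES = ["profanity", "bullying", "sexting", "racism"]

def score_clip(transcript: str) -> dict[str, float]:
    """Return a fake per-category score in [0, 1] for a transcript."""
    keywords = {
        "profanity": {"damn"},
        "bullying": {"loser", "idiot"},
        "sexting": set(),
        "racism": set(),
    }
    words = set(transcript.lower().split())
    return {c: (1.0 if words & keywords[c] else 0.0) for c in CATEGORIES}

def flagged(transcript: str, threshold: float = 0.5) -> list[str]:
    """List every category whose score meets the threshold."""
    scores = score_clip(transcript)
    return [c for c, s in scores.items() if s >= threshold]
```

For example, `flagged("you are such a loser")` returns `["bullying"]`, while a benign transcript returns an empty list and passes through unmoderated.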
By contributing to ROOST (and acting as co-chair of its technical advisory committee), Roblox will also be able to support such open-source efforts, aiming to create AI models that organizations of all sizes can use to moderate content, especially around child safety.
“While large companies like us can invest in systems like this,” says Koneru, “if you’re a small game developer and you want to build all these safety systems, it’s next to impossible today to do it right.”
Some AI systems may even be hosted by ROOST itself, allowing outside companies to easily integrate them via API calls rather than handling complex infrastructure, Koneru says.
“They may actually not just open-source models, but they will also possibly run these hosted services where you can just call an API, as opposed to even worrying about all of these nitty-gritty details of, how do you run this model in a super efficient way,” he says.
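In practice, a hosted service like the one Koneru describes would reduce moderation to a request-and-response exchange. The sketch below is purely hypothetical: no such public API has been announced, and the endpoint, payload shape, and response fields are all invented for illustration.

```python
# Hypothetical sketch of what calling a ROOST-hosted moderation
# service might look like. The payload shape and the "decision"
# response field are invented assumptions; no real API is described
# here, so the exchange is simulated with plain JSON strings.
import json

def build_request(text: str, languages: list[str]) -> str:
    """Serialize a moderation request as a JSON payload."""
    return json.dumps({"content": text, "languages": languages})

def is_allowed(raw_response: str) -> bool:
    """Parse a (simulated) service response; True means content passes."""
    resp = json.loads(raw_response)
    return resp.get("decision") == "allow"
```

The point of the hosted model is exactly this small surface area: a developer handles two JSON messages instead of provisioning GPUs and tuning inference throughput.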
ROOST may also release open-source infrastructure for labeling sample training data, such as examples of allowed and disallowed content, and for managing how that data is used to train and refine AI systems. That includes technology to effectively manage large-scale human moderation efforts and ensure consistency in decisions around rules, so that AI models are trained on reliable samples.
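One simple consistency check such labeling infrastructure might run is percent agreement between two human moderators over the same samples; disagreements get escalated before the labels feed a training set. The metric below is a generic sketch of that idea, not a description of any announced ROOST tooling.

```python
# Sketch of a labeling-consistency check: the fraction of samples on
# which two moderators assigned the same label. A generic illustration,
# not ROOST's (unannounced) tooling; the labels are invented examples.

def agreement_rate(labels_a: list[str], labels_b: list[str]) -> float:
    """Percent agreement between two moderators' label lists."""
    if len(labels_a) != len(labels_b):
        raise ValueError("label lists must be the same length")
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)
```

Two moderators labeling three clips as `["allow", "block", "block"]` and `["allow", "block", "allow"]` agree on two of three, an agreement rate of about 0.67; production systems typically use more robust statistics (such as chance-corrected agreement), but the goal is the same: reliable labels in, reliable models out.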
In addition to the for-profit companies, ROOST is backed by a variety of philanthropic organizations, including the Future of Online Trust and Safety Fund, Knight Foundation, AI Collaborative and the McGovern Foundation. It’s raised over $27 million to support its first four years of operation.