Most U.S.-based companies have no idea how to mitigate AI risk. Credo AI wants to change that

Companies are at a crossroads when it comes to AI adoption: either embrace the technology, with all of its flaws, unknowns, and alarming capacity to spread disinformation, or risk falling into obsolescence.

Navrina Singh, founder and CEO of AI governance platform Credo AI, told attendees of Fast Company’s Impact Council annual meeting earlier this month that we have entered a “state of reinvention.” Embracing the opportunities artificial intelligence promises is no longer optional for companies; it’s essential to their survival and success. Just as crucial is understanding the risks the technology poses to their organizations.

“It’s really important to think about this lens of how is trust going to be built for responsible practices, rather than just trying to give in to the sphere of regulations?” Singh said.

[Photo: Alyssa Ringler for Fast Company]

Understanding the risks

Singh, who founded Credo AI in 2020, was working in the robotics industry around 2010 when machine learning began to hit its stride. While companies were understandably bullish about the technology’s capabilities, Singh was concerned by the lack of discussion around potential dangers.

“My daughter was born 10 years ago, and I was seeing these very, very powerful AI systems evolve, I would say, as quickly as human brains. And there was this realization that as engineers, we don’t take responsibility,” Singh said. “We are just excited by the thrill of innovation and we are excited by putting our technology out in the market and making a lot of money, which is great. But now, we can’t take [that] chance on AI.” 

Credo AI helps businesses understand what risks the technology poses to their organization, how to mitigate those risks, and how to stay in compliance with government standards. Singh said the company has partnered with the European Commission, the politically independent executive arm of the European Union, and the Biden Administration to advise both institutions on rights-based and risk-based regulations.

In Europe, where the EU AI Act passed in March, Singh said there’s an understanding that new technology allows for progress. At the same time, companies at the forefront of the AI revolution are not only ensuring compliance with current and future government standards, but also prioritizing the rights of users and cultivating a sense of trust. 

“In order to enable innovation in Europe, they’re going to put European citizens front and center, and the rights of those citizens front and center,” she said. In the U.S., the path to regulation has proven more complex, with rules taking shape at the state level rather than the federal one.

[Photo: Alyssa Ringler for Fast Company]

Developing AI literacy

Although the U.S. lacks concrete federal regulation around AI, the Biden Administration issued an executive order in October 2023 that included a mandate for agencies to hire a chief artificial intelligence officer. Singh said that at this point, most of those officers have been hired or are in the process of being recruited.

While it’s important to have a chief AI officer at the helm, Singh stressed the need for AI proficiency and literacy across job titles. 

“We really need a multi-stakeholder oversight mechanism for artificial intelligence,” she said. “What we are seeing is if you just put in AI experts as the stakeholders of managing oversight, they are going to be so far removed from the business outcomes like reputational damage, regulatory risk, impact, [and] mission.”

Acting, not reacting

According to Singh, the U.S. has fallen behind on AI literacy because of this lack of government oversight and the treatment of regulation as an afterthought. The risk comes in when companies that are less technologically advanced outsource their AI adoption to third-party providers.

Singh argued that when companies deploy technology like ChatGPT, they need to ask what the risk implications are, which could range from chatbots producing hallucinations to live agents not understanding how AI adoption will affect their roles. Without a standard approach to risk management, companies are forced into reactive positions.

“Governance needs to be front and center,” Singh said. “The organizations who are able to tackle that very proactively have a very good sense of where true artificial intelligence or generative AI is actually used in their organization.”

https://www.fastcompany.com/91137361/most-u-s-based-companies-have-no-idea-how-to-mitigate-ai-risk-credo-ai-wants-to-change-that

Published: June 21, 2024

