Responsible AI governance: The top 3 myths holding us back

There’s nothing more overhyped and less understood in the business world right now than artificial intelligence (AI). At a time when every flashy new startup claims to be an AI vendor, plenty of myths need busting before we can apply AI where it matters most. Perhaps even more concerning: many of these myths obscure what we should actually be learning about—responsible use of AI—and who should have a hand in building it.

As one of the first companies to bring AI to many of the world’s biggest brands, LivePerson (where I work as CMO) has been deeply involved in the fight against bias for years, including as a founding member of EqualAI. Since EqualAI’s launch five years ago, I’ve found that this nonprofit—which brings together experts across disciplines to reduce bias in AI—always has new things to teach us about what’s real and what’s not in the AI space.

Recently, one of our LivePerson HR leaders, Catherine Goetz, completed EqualAI’s Badge Program, which educates leaders across wildly different functional areas and industries about how they can establish responsible AI governance frameworks at their own organizations. Catherine’s cohort included experts and executives from across telco, consumer packaged goods, defense, security, and tech companies, just to name a few. Together, they learned about some AI myths that all of us need to understand as we apply AI to our own spheres of influence:

Myth 1: AI isn’t my problem

Reality: AI is an “everyone” problem and an “everyone” opportunity. All companies should now consider themselves AI companies to some degree, because we should all be exploring and testing how it can help us do what we do best. But with AI more accessible than ever, there is no company that can avoid thinking deeply about the potential misuse of AI and its negative impacts.

Myth 2: Okay, so we need to do something, but our tech guys will handle it

Reality: We’re not going to code or program our way to responsible AI. Leaders across all functions (not just tech) need to play a part in establishing governance frameworks within their organizations. This obviously includes putting standards in place for product design, data integrity, and testing, among other things, but it also includes areas for teams like legal, HR, and recruitment to lead. Have you considered applicable laws around privacy? Do you have a designated point of contact for employees and customers? All functional areas have a role to play in making sure that you can safely stand by any AI you put out into the world.

Myth 3: We don’t need DEI to make AI

Reality: Diversity, equity, and inclusion (DEI) help make AI better. Full stop. One of the ways we can make sure we don’t perpetuate historical and new forms of bias in AI is by ensuring that the people developing these systems reflect the world at large, especially the populations that will use them to live, work, and play. Do you have the diverse workforce necessary to understand how your products and services impact the different kinds of people who will—if you’re a successful business—use them every day?

Putting these myths to bed requires buy-in and action from cross-functional leaders at all levels of your business. That’s why several LivePerson leaders like Catherine are now badge-certified in responsible AI governance. They’ve learned about operationalizing AI principles, implementing tools to detect risks and biases, ensuring accountability, and creating a cohesive process to address potential harms. And their roles at our company are similarly wide-ranging, including leaders from HR, legal, product development, and engineering.

Today, there’s a serious lack of consensus when it comes to creating (let alone following) responsible AI standards, but leaders like Catherine are helping us make progress. Most recently, she coauthored a first-of-its-kind whitepaper from EqualAI called An Insider’s Guide to Designing and Operationalizing a Responsible AI Governance Framework. Working with cross-sectoral leaders in business (including Google DeepMind and Microsoft), government, and civil society, she helped develop a framework meant to apply to organizations of any size, industry, and maturity level. Their hope is that this framework can serve as a resource for any professional on the journey toward making the world better through more responsible AI.

I think this new whitepaper is also a powerful sledgehammer for busting persistent myths about AI in general, and about who is responsible for making it responsible. AI can serve as a force for good in our world, and for our businesses, but there are profound implications if we fail to govern it effectively. Understanding that we’re all in this together will help usher us into a safer and more responsible, AI-enabled future.

Ruth Zive is chief marketing officer at LivePerson and host of the Generation AI podcast.

https://www.fastcompany.com/90968258/responsible-ai-governance-the-top-3-myths-holding-us-back

Originally published by Fast Company on Oct 18, 2023.