Biden still won’t hold AI companies’ feet to the fire

This might be the busiest and longest AI-related week to date: The Senate AI forum held two meetings in one day, the U.K. hosted a two-day AI Safety Summit, G7 leaders released an AI governance code, and the Biden administration issued its long-awaited executive order on safe, secure, and trustworthy AI. The only thing longer than the week itself is President Joe Biden’s 100-plus-page executive order, a document that looks good on paper but falls short on both expectations and enforcement.

Monday’s presentation, delivered with anticipation and fanfare, played well in the room. Since then, though, many, myself included, have questioned the noticeable holes and overreach in the nearly 20,000-word order, particularly its suggested mechanisms for enforcement and implementation. The intent is there, but this latest effort feels like all speed and no teeth.

Let’s start with the topic on everyone’s mind: the risks associated with AI. On Monday, Biden said that to realize the promise of AI and avoid its risks, we need to govern the technology. But that doesn’t get it quite right. To truly address the risks associated with AI, we need to govern the technology’s use cases.

Open-source vs. closed-source AI: We need a combination of the two

By mandating government review of large language models before they can be made public, the executive order calls into question the future legality of open-source AI. Losing the ability to leverage open-source AI would be a missed opportunity.

The debate over open-source software and tools has raged for decades, and it has two sides: those who favor open-source AI or software, which the public can use and modify without running into licensing issues, and those who prefer closed-source (or proprietary) software, whose source code is not available to the public and therefore can’t be modified.

My fear is that the mandates set forth in the order will mean we lose out on the benefits of open-source AI, particularly the speed of innovation, the community best practices it builds, and the learning it offers. Case in point: Open-source can help eliminate instances of bias, because developers are able to identify potential sources of it by inspecting the code.
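To make that concrete, here is a minimal sketch of the kind of audit that open access enables: comparing a model’s positive-prediction rates across groups to flag potential bias. The predictions and group labels below are hypothetical placeholders, not real model output.

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Return each group's rate of positive (1) predictions."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical model outputs for applicants from two groups.
preds = [1, 0, 1, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(selection_rates(preds, groups))
# Expected output: {'A': 0.75, 'B': 0.25}, a disparity worth investigating
```

This sort of check is only possible when researchers can actually run and inspect the model in question.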

Coincidentally, preventing bias in AI algorithms was a major point raised on Monday. It’s true that ungoverned AI algorithms can be harmful and carry many risks, but how will we learn if all AI is closed to the public? Perhaps even more important, what will the impact on research be if scientists and academics are unable to drive innovation and build cutting-edge tools and technologies? The truth is that open-source democratizes AI, and that matters for a secure AI future.

This isn’t to suggest that there should be no closed-source AI. We can and should have a combination of the two. Some companies can keep proprietary algorithms and datasets while others, like Hugging Face, help users build and train machine learning models in the open. According to a report by Gartner, 70% of new and internally developed applications will incorporate AI- or ML-based models by 2025. All the more reason to embrace a combination of open-source and closed-source AI.
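For a sense of what that openness looks like in practice, here is a minimal sketch that runs a publicly released model through Hugging Face’s transformers library. The checkpoint named below is one real, openly available example; any open model could stand in for it.

```python
# A minimal sketch of using an openly released model via Hugging Face's
# transformers library. Requires: pip install transformers torch
from transformers import pipeline

# Load a publicly available sentiment-analysis checkpoint. Because the
# weights are open, anyone can inspect, fine-tune, or audit this model.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("Open models let researchers inspect and build on the work."))
```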

And we don’t need to accept the loss of open-source AI to properly control the worst implications of AI. Here’s how.

We need the right approach to AI governance: Start with the use case, focus on data

To govern AI, we need to focus enforcement efforts on the use of AI, not the underlying R&D. Why? The risk associated with AI fundamentally depends on what it is used for. AI governance is crucial in mitigating risks and ensuring AI initiatives are transparent, ethical, and trustworthy. Think of it like a system of checks and balances for AI models.

AI governance is a necessary framework that sets the right policies and ensures organizational accountability. Companies should reuse their existing data governance framework as a starting point; doing so ensures that AI models are subject to the necessary data quality, trust, and privacy standards. Using the same blueprint for both data and AI governance ensures that AI is used responsibly, with clear rules and accountability in its development and deployment.

Just look at the order’s mandates that new AI governance boards be created and that all federal agencies appoint chief AI officers. Also consider the requirement that developers share data and training information before publicly releasing future large AI models or updated versions of those models.

For the U.S. government to get AI governance right, regulators first must insist that organizations get AI governance right. That means defining use cases, identifying and understanding the data, documenting models and results, and verifying and monitoring the model, as the sketch below illustrates. The right approach to AI governance provides best practices to build on and learn from. With that approach, we will all win at AI.
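As a purely illustrative sketch, the governance record below mirrors those steps in code. The class and field names are hypothetical, not a standard schema or any particular product’s API.

```python
# A hypothetical, use-case-first AI governance record. Field names are
# illustrative only; they mirror the steps named above, not a real standard.
from dataclasses import dataclass

@dataclass
class AIGovernanceRecord:
    use_case: str                  # what the model is used for: the unit of risk
    data_sources: list[str]        # the datasets feeding the model
    model_documentation: str       # where the model and its results are documented
    monitoring_checks: list[str]   # ongoing verification (drift, bias, accuracy)
    approved: bool = False         # sign-off from a governance board

record = AIGovernanceRecord(
    use_case="Loan-application triage",
    data_sources=["applications_2023", "credit_bureau_feed"],
    model_documentation="https://example.com/model-cards/loan-triage-v2",
    monitoring_checks=["weekly drift report", "quarterly bias audit"],
)
print(record)
```

Starting the record with the use case, rather than the model, keeps enforcement focused where the risk actually lives.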


Felix Van de Maele is the CEO of Collibra.
