Biden still won’t hold AI companies’ feet to the fire

This might be the busiest and longest AI-related week to date: The Senate AI forum held two meetings in one day, there was a two-day AI Safety Summit in the U.K., G7 leaders released an AI governance code, and the Biden administration issued a long-awaited executive order on safe, secure, and trustworthy AI. The only thing longer than the week itself is President Joe Biden’s 100-plus-page executive order—a document that looks good on paper but falls short of expectations, particularly on enforcement.

Monday’s presentation was met with anticipation and fanfare, and it went over well with those in the room. But since then, many—including me—have questioned the noticeable holes and overreach in the nearly 20,000-word order, particularly its suggested mechanism for enforcement and implementation. The intent is there, but this latest effort feels like all speed and no teeth.

Let’s start with the topic on everyone’s mind: the risks associated with AI. On Monday, Biden said that to realize the promise of AI and avoid its risks, we need to govern the technology. But he doesn’t get it quite right. To truly address the risks of AI, we need to govern the technology’s use cases.

Open-Source vs. Closed-Source AI: We need a combination of the two

By mandating government review of large language models before they can be made public, the executive order calls into question the future legality of open-source AI. Losing the ability to leverage open-source AI would be a missed opportunity.

The debate around open-source software and tools has been going on for decades, with two sides: those who favor open-source AI or software, meaning code that is open to the public to use and modify without licensing restrictions, and those who prefer closed-source (or proprietary) software, whose source code is not available to the public and therefore can’t be modified.

My fear is that the mandates set forth in the order will mean we lose out on the benefits of open-source AI, particularly the speed of innovation, the community best practices it builds, and the lessons it offers. Case in point: Open source can help eliminate instances of bias, because developers can inspect the code and identify potential sources of it, as the sketch below illustrates.
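
To make this concrete, here is a minimal sketch of the kind of audit open access enables; the function and toy data are hypothetical, purely for illustration. Because the code and outputs of an open model are public, anyone can run a simple fairness check such as a demographic parity gap:

```python
# Minimal, hypothetical sketch of a bias audit that open access makes
# possible; the data and variable names are illustrative only.

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates across groups."""
    rates = {}
    for label in set(groups):
        outcomes = [p for p, g in zip(predictions, groups) if g == label]
        rates[label] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())

# Toy example: group A gets positive predictions far more often than group B.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # 0.5 -- worth investigating
```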

Fittingly, this was a major point raised on Monday: preventing bias in AI algorithms. It’s true that ungoverned AI algorithms can be harmful and carry many risks, but how will we learn if all AI is closed to the public? Perhaps even more important, what will the impact on research be if scientists and academics are unable to drive innovation and build cutting-edge tools and technologies? The truth is that open source democratizes AI, and that matters for a secure AI future.

This isn’t to suggest that there should be no closed-source AI. We can and should have a combination of the two: some companies maintaining proprietary algorithms and datasets while others—like Hugging Face—help users build and train machine learning models in the open. According to a report by Gartner, 70% of new and internally developed applications will incorporate AI- or ML-based models by 2025. All the more reason to embrace a combination of open-source and closed-source AI.
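
As a minimal sketch of what that open ecosystem looks like in practice, the snippet below loads a publicly available model from the Hugging Face Hub using the transformers library; the checkpoint named here is just one example of an openly published model:

```python
# Requires: pip install transformers torch
from transformers import pipeline

# Load an openly published checkpoint from the Hugging Face Hub; the
# model name is just one example of a publicly inspectable model.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("Open models let anyone inspect, test, and build on them."))
# e.g., [{'label': 'POSITIVE', 'score': 0.99...}]
```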

And we don’t need to accept the loss of open-source AI in order to properly control the worst implications of AI. Here’s how.

We need the right approach to AI governance: Start with the use case, focus on data

To govern AI, we need to focus enforcement efforts on the use of AI, not the underlying R&D. Why? The risk associated with AI fundamentally depends on what it is used for. AI governance is crucial in mitigating risks and ensuring AI initiatives are transparent, ethical, and trustworthy. Think of it like a system of checks and balances for AI models.

AI governance is a necessary framework that sets the right policies and ensures organizational accountability. Companies should reuse their existing data governance framework as a starting point; doing so ensures that AI models are subject to the necessary data quality, trust, and privacy standards. Using the same blueprint for both data and AI governance means AI is used responsibly, with clear rules and accountability in its development and deployment.

Just look at the order’s mandates that new AI governance boards be created and that all federal agencies appoint chief AI officers. Also consider the requirement that developers share data and training information before publicly releasing future large AI models or updated versions of those models.

For the U.S. government to get AI governance right, regulators first must insist that organizations get AI governance right. That means defining use cases, identifying and understanding data, documenting models and results, and verifying and monitoring those models. The right approach to AI governance provides best practices to build on and learn from. With that approach, we will all win at AI.
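
What might that look like inside an organization? Purely as an illustrative sketch (the field names are hypothetical, not drawn from the executive order or any standard), a per-model governance record could capture each of those steps in one place:

```python
# Hypothetical sketch of a per-model governance record; field names are
# illustrative, not taken from the executive order or any standard.
from dataclasses import dataclass, field

@dataclass
class ModelGovernanceRecord:
    use_case: str                        # what the model is approved for
    data_sources: list[str]              # where the training data came from
    documentation_url: str               # model card, results, known limits
    monitored_metrics: dict[str, float] = field(default_factory=dict)

    def drifted(self, metric: str, value: float, tolerance: float) -> bool:
        """Return True if a monitored metric has moved past its tolerance."""
        baseline = self.monitored_metrics.get(metric)
        return baseline is not None and abs(value - baseline) > tolerance

record = ModelGovernanceRecord(
    use_case="loan pre-screening",
    data_sources=["internal_applications_2022"],
    documentation_url="https://example.com/model-card",
    monitored_metrics={"demographic_parity_gap": 0.02},
)
print(record.drifted("demographic_parity_gap", 0.11, tolerance=0.05))  # True
```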


Felix Van de Maele is the CEO of Collibra.

https://www.fastcompany.com/90977398/biden-still-wont-hold-ai-companies-feet-to-the-fire?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Created 2y | Nov 3, 2023, 9:10:11

