Biden still won’t hold AI companies’ feet to the fire

This might be the busiest and longest AI-related week to date: The Senate AI forum held two meetings in one day, there was a two-day AI Safety Summit in the U.K., G7 leaders released an AI governance code, and the Biden administration issued a long-awaited executive order on safe, secure, and trustworthy AI. The only thing longer than the week itself is President Joe Biden's 100-plus-page executive order: a document that looks good on paper but falls short on both expectations and enforcement.

Monday's presentation was met with anticipation and fanfare, and it was well received by those in the room. Since then, though, many observers, including me, have questioned the noticeable holes and overreach in the nearly 20,000-word order, particularly its proposed mechanisms for enforcement and implementation. The intent is there, but this latest effort feels like lots of speed and no teeth.

Let's start with the topic on everyone's mind: the risks associated with AI. On Monday, Biden said that to realize the promise of AI and avoid its risks, we need to govern the technology. But he doesn't get it quite right. To truly address the risks associated with AI, we need to govern the use cases of the technology.

Open-Source vs. Closed-Source AI: We need a combination of the two

By mandating government review of large language models before they can be made public, the executive order calls into question the future legality of open-source AI. Losing the ability to leverage open-source models would be a missed opportunity.

The debate over open-source software and tools has been going on for decades, with two camps: those who favor open-source AI or software, meaning code the public can use and modify without running into licensing issues, and those who prefer closed-source (or proprietary) software, whose source code is not available to the public and therefore can't be modified.

My fear is that the mandates set forth in the order will mean we lose out on the benefits of open-source AI: the speed of innovation, the community best practices it builds, and the lessons it offers. Case in point: Open-source can help reduce bias, because developers are able to identify its potential sources by inspecting the code, as the sketch below illustrates.
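To make that concrete, here is a hypothetical sketch of the kind of audit open access enables: anyone, not just the vendor, can compare a model's error rate across groups. The toy data, group labels, and function name are all invented for illustration, not drawn from any real benchmark.

    # Hypothetical bias audit. Because the model and code are open,
    # anyone can measure error rates across groups, not just the vendor.
    def group_error_rates(predictions, labels, groups):
        """Return the share of wrong predictions per group so skews stand out."""
        rates = {}
        for group in set(groups):
            outcomes = [
                pred != truth
                for pred, truth, g in zip(predictions, labels, groups)
                if g == group
            ]
            rates[group] = sum(outcomes) / len(outcomes)
        return rates

    # Toy inputs: two groups of three examples each.
    preds  = [1, 0, 1, 1, 0, 0]
    truth  = [1, 0, 0, 1, 1, 0]
    groups = ["a", "a", "a", "b", "b", "b"]
    print(group_error_rates(preds, truth, groups))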

Coincidentally, this was a major point raised on Monday—preventing bias in AI algorithms. It’s true that ungoverned AI algorithms can be harmful and come with many risks, but how will we learn if all AI is closed to the public? Perhaps even more important, what will the impact be on research if scientists and academics are unable to drive innovation and build cutting-edge tools and technologies? The truth is that open-source democratizes AI, which is important to a secure AI future.

This isn’t to suggest that there should be no closed-source AI. We can and should have a combination of the two. It’s possible for some companies to have proprietary algorithms and datasets while other companies—like Hugging Face—help users build and train machine learning models. According to a report by Gartner, 70% of new and internally developed applications will incorporate AI- or ML-based models by 2025. All the more reason to embrace a combination of open-source and closed-source AI.
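For a sense of what that openness looks like in practice, here is a minimal sketch using Hugging Face's open-source transformers library; the specific model and task are illustrative choices on my part, not recommendations.

    # Pull a publicly published model from the Hugging Face Hub and run it
    # locally. The weights and code are inspectable end to end.
    from transformers import pipeline

    classifier = pipeline(
        "sentiment-analysis",
        model="distilbert-base-uncased-finetuned-sst-2-english",
    )
    print(classifier("Open models let researchers inspect what they study."))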

And we don’t need to accept the loss of open-source AI in order to properly control the worst implications of AI. Here’s how.

We need the right approach to AI governance: Start with the use case, focus on data

To govern AI, we need to focus enforcement efforts on the use of AI, not the underlying R&D. Why? The risk associated with AI fundamentally depends on what it is used for. AI governance is crucial in mitigating risks and ensuring AI initiatives are transparent, ethical, and trustworthy. Think of it like a system of checks and balances for AI models.

AI governance is a necessary framework that sets the right policies and ensures organizational accountability. Companies should reuse their existing data governance framework as a starting point: Doing so subjects AI models to the same data quality, trust, and privacy standards the organization already enforces. Using one blueprint for both data and AI governance ensures that AI is used responsibly and provides clear rules and accountability for its development and deployment.
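What might reusing that blueprint look like? A hypothetical sketch, assuming an organization already expresses its data-governance rules as programmatic checks; every rule, field name, and message below is invented for illustration.

    # Hypothetical: the same checks that gate a dataset for reporting can
    # gate it for model training. Rules and field names are illustrative only.
    def passes_governance(dataset: list[dict]) -> bool:
        rules = [
            lambda row: row.get("consent") is True,     # privacy standard
            lambda row: row.get("source") is not None,  # lineage/trust standard
            lambda row: row.get("income") is not None,  # completeness/quality standard
        ]
        return all(rule(row) for row in dataset for rule in rules)

    training_data = [
        {"consent": True, "source": "crm_export", "income": 52000},
        {"consent": True, "source": "crm_export", "income": 61000},
    ]
    if passes_governance(training_data):
        print("Dataset cleared for model training.")
    else:
        print("Blocked: fix data issues before training.")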

Just look at the order's mandates that agencies create new AI governance boards and that every federal agency appoint a chief AI officer. Also consider the requirement that developers share data and training information before publicly releasing future large AI models or updated versions of those models.

For the U.S. government to get AI governance right, regulators must first insist that organizations get AI governance right. That means defining use cases, identifying and understanding the data, documenting models and results, and verifying and monitoring each model. The right approach to AI governance provides best practices to build on and learn from. With that approach, we will all win at AI.


Felix Van de Maele is the CEO of Collibra.
