The House’s AI Task Force leader says regulation can’t be rushed

California Congressman Jay Obernolte has his work cut out for him.

In February, Obernolte, a Republican, and Ted Lieu, a Democrat, were named co-chairs of a new task force in the House of Representatives that will define the House's approach to regulating artificial intelligence. Its goal is to come up with proposals to protect consumers from, and spur investment in, emerging AI technology.

For Rep. Obernolte, who has a master's degree in artificial intelligence from UCLA and later became a video game entrepreneur, the assignment is, in some ways, the fulfillment of a long-held dream. "It's a little-known fact that I never wanted to be in Congress," he told an audience of tech leaders, investors, and policymakers who gathered Wednesday at the Hill & Valley Forum, an event in Washington, D.C., focused on the national security implications of AI. "My ambition in life was to be an academic and do cutting-edge research in AI."

While leading a congressional task force on AI isn't exactly cutting-edge research, it stands to have a big impact on the field, and on everyone whose lives that field will touch. Fast Company spoke with Obernolte about the task force's mission and his case for going slow. This interview has been lightly edited for clarity.

We’re at this event with lots of tech people—companies, investors with a lot on the line in this AI discussion. There’s a natural reason that you want to consult with them. There’s also a natural fear from some folks that they might drive the discussion too much, and that industry influence is going to lead to a lax approach to AI. So how do you thread that needle as head of this task force?

Engagement with all different corners of the stakeholder universe is critically important to accomplishing our mission. We say in Congress that our knowledge may look like it's a mile wide, but it's only an inch deep, because we legislate on such a wide variety of topics. So we really need to find the experts in the fields that we regulate, to make sure that we're doing things that are thoughtful, deliberate, and well-founded. It's critically important that the people who are actually deploying artificial intelligence in industry in the United States are part of that discussion.

But you raise an interesting counterpoint, which is the fear of industry capture: sometimes when the federal government establishes regulation over an industry, the industry seeks to turn that regulation into a barrier to entry for smaller competitors. That can easily combine with other factors at work in the AI space, namely the scarcity of compute and the vast amount of it required to train frontier large language models, to create a monopoly-like atmosphere. Obviously, that's not something that would be good for consumers.

What about concerns that the U.S. is in this intense escalating competition with China on emerging technologies, and there’s this idea that anything we do to slow down our industry gives an edge to China? Given your mandate is to establish guardrails, how do you navigate that? 

These two ideas are certainly in tension. On the one hand, you have to have a reason for regulating. We're not regulating just for the purpose of regulation; that's a fool's errand. So we have to have a North Star when we regulate, and our North Star is the protection of our consumers and our society against the potential harms of the malicious use of AI.

At the same time, though, you don't want to prevent AI from being successful. And sometimes that's a temptation, right? Technological revolutions are always disruptive, but the fact that something is disruptive shouldn't be pejorative. Look at the most recent example: the internet created a huge expansion of our economy and a rising wave of prosperity, along with disruption. That's what AI promises. And our geopolitical rivals certainly are not stopping in their efforts to develop the world's best AI.

What are some examples of regulations that might align with that North Star that you just laid out?

One of the early decisions we're going to have to make is whether we follow the lead of entities like the European Union and create a universal licensing requirement for AI, spinning up a brand-new bureaucracy to administer it, or whether we empower our existing sectoral regulators to regulate AI within their sectoral spaces. I actually think the latter makes a lot more sense. If you look at the risk management framework that NIST published last year, which I thought was excellent work, it makes it pretty clear that the risks of AI are highly contextual: something that is unacceptably risky in one context might be completely benign in another.

Look at the FDA, which has already processed over 500 applications for the use of AI in medical devices. It's hard to imagine a riskier context than AI in medical devices; that's important to get right. But I think it illustrates the difference in approach, because it's a lot easier to teach the FDA what it might not already know about AI and how to regulate it than it is to teach a brand-new agency everything the FDA has learned over the past 50 years about ensuring patient safety.

It doesn’t minimize the work that has to be done. We need to create testing and evaluation standards for our sectoral regulators. We need to create regulatory test beds for the testing of potentially malicious AI. We need to create pools of technical talent to inform these agencies. All of those things are big tasks, and I don’t want to minimize them. But it’s still a much easier task than trying to spin up a brand new bureaucracy and teach them all of the different sectoral risks.

The counterpoint is that the one thing Europe has on the U.S. is that it has acted on AI. Your task force in the House just got stood up. Senator Schumer’s task force has been around for a year. How did Europe get so far ahead? What has been holding the U.S. back?

Your question is a very pertinent one, and it speaks to the fears that a lot of people have about AI. And the perception right now is that AI is largely unregulated in the United States. That is absolutely untrue. AI is a tool. It is not an outcome itself. And we already regulate outcomes.

A couple of years ago, there was a well-publicized case where AI was being used for the automated screening of résumés, and it turned out that the algorithm had some significant biases built into it, not intentionally, but as a result of inattention to the data used to train it. Those biases were discovered, as they should be, and fixed, as they should be. But that shouldn't distract from the fact that discrimination in hiring is already illegal. It doesn't matter whether you use AI or something else to do it.

So yes, it's good that we are paying attention to algorithmic transparency in that way. But it's not as though that potentially bad outcome wasn't already addressed. The same is true in a lot of other areas, like cyber fraud. That's one of the things I have a lot of concern about: the use of AI to enable cybercriminals to steal from people. But that behavior is already illegal. We need to equip our law enforcement agencies with the tools to detect it and combat it, absolutely. But we don't need new laws saying it should be illegal, because it already is.

What are your first priorities for the task force?

Things are actually going very well in the task force. We've got 24 extremely engaged, intelligent people who are working well together. This is all working toward the delivery of our work product, which will be a report, due by the end of the year, that details and proposes a federal regulatory structure for AI. I view that report as a to-do list. It's not going to be legislation, although I do hope there is some legislation we can get moving and passed this year. But I think it would be a fallacy for people to expect a 3,000-page AI Act like the EU's. That's not the way to do regulation. The job of regulating AI is not one 3,000-page bill next year; it's a few bills a year for 20 years. And that's appropriate, because there are some things we know enough about to regulate thoughtfully now, some things that are urgent and need to be done now, and other things that are blurrier.

What’s something you would say is blurry?

Well, IP issues. Congress absolutely needs to act on that, because we've put our courts in the impossible position of having to interpret written law that doesn't exist. They're going to have to make something up out of whole cloth unless Congress acts to clarify. But it's not a simple issue. There are a lot of contentious voices on both sides, and it's going to be difficult and time-consuming to sort through them. But we have an obligation to do it.

https://www.fastcompany.com/91117132/jay-obernolte-interview-house-ai-task-force
