For AI to really succeed, we need to protect private data

A few days ago, 13 current and former employees of leading AI companies published an open letter calling for greater transparency into how AI is developed and operated. They understand that this is a turning point in the evolution of AI, and that the future of our industry depends on maintaining public confidence.

The AI industry is approaching a transformation in which user data will become paramount, and trust will be the most important commodity. I call this “AI 2.0,” a parallel to Web 2.0, the second wave of the internet. Starting around 1999, Web 2.0 made the internet participatory, creating two-way relationships between websites and their audiences. AI is set for a similar revolution. To be useful, AI needs to be personal; AI 2.0 is going to be about us.

Generative AI 1.0 is impersonal. The algorithms offer the same answers regardless of who asks the questions. These answers are more amazing than useful because they’re unconnected to our identity and interests.

We can ask AI to write a letter, but not to write it in our own personal style. And what good are recommendations if AI doesn’t know our preferences? What’s true for people is also true for businesses: AI that doesn’t know the user isn’t very useful.

For AI to understand us, it must have data about us, and before we allow that, we must have trust. Today, a lack of trust is the most important factor blocking the industry's progress.

All new technologies make errors, and AI, with its hallucinations, makes many. But AI appears not merely to err; it appears to breach trust. Sometimes it conveys a distorted truth, as when Google Gemini produced historically inaccurate images that fit the political biases of its programmers. Other times it appears to take what belongs to others, such as the content of copyrighted images and articles, or the sound of a famous actress's voice.

We can build trust with regulation that protects private data and promotes transparency. Unfortunately, this is not the regulation that Washington has proposed so far.

The White House executive order issued in October and the Schumer report released last month say almost nothing about transparency or data rights. Both are heavily influenced by the AI elite, whose priorities differ from those of the rest of the AI economy. Most executives in this business understand that AI will be a stronger industry when it stops taking liberties with other people's intellectual property. We need to convince our customers that AI will respect their private information.

I propose the following rules for transparency and data privacy: AI companies must disclose the data their models are trained on, obtain permission before using private or copyrighted material, and compensate the people who own it.

These points should be written into legislation and enforced by the government, not left to voluntary industry cooperation. Firms have too much incentive to break these rules unless they carry the force of law. Today it's common for AI firms to process terabytes of private or copyrighted information without permission, disclosure, or compensation. Until regulation forbids that practice, every company will feel pressure to do the same.

If AI firms need to train on copyrighted information, they can pay for it. Their budgets run to tens and hundreds of billions of dollars, allocated to infrastructure, energy, and personnel. If required to, they would pay for training data as well.

Some people worry that regulating data privacy rights would restrain American AI innovation and give China an edge. In fact, it would do the opposite. Once AI is trusted with personal information it will be more useful and more broadly used. An American AI model based on clear property rights and data privacy will inspire more participation than a Chinese AI model with its data controlled by the CCP.

AI 2.0 will deliver greater value and require a new relationship with the public. The data-centric race is ending; the trust-centric race is beginning. America has a distinct advantage in this race, but we need the right regulations to win.

https://www.fastcompany.com/91137724/for-ai-to-really-succeed-we-need-to-protect-private-data

Created June 9, 2024

