California lawmakers are considering legislation that would require artificial intelligence companies to test their systems and add safety measures so they can’t be potentially manipulated to wipe out the state’s electric grid or help build chemical weapons — scenarios that experts say could be possible in the future as technology evolves at warp speed.
Legislators plan to vote Tuesday on this first-of-its-kind bill, which aims to reduce risks created by AI. It is fiercely opposed by tech companies, including Google and Meta, the parent company of Facebook and Instagram. They say the regulations take aim at developers and should instead focus on those who use and exploit AI systems for harm.
Democratic state Sen. Scott Wiener, who authored the bill, said the proposal would provide reasonable safety standards by preventing “catastrophic harms” from extremely powerful AI models that may be created in the future. The requirements would apply only to systems that cost more than $100 million in computing power to train. No current AI model had hit that threshold as of July.
“This is not about smaller AI models,” Wiener said at a recent legislative hearing. “This is about incredibly large and powerful models that, as far as we know, do not exist today but will exist in the near future.”
Democratic Gov. Gavin Newsom has touted California as an early AI adopter and regulator, saying the state could soon deploy generative AI tools to address highway congestion, make roads safer and provide tax guidance. At the same time, his administration is considering new rules against AI discrimination in hiring practices. He declined to comment on the bill but has warned that overregulation could put the state in a “perilous position.”
The proposal, supported by some of the most renowned AI researchers, would also create a new state agency to oversee developers and provide best practices. The state attorney general also would be able to pursue legal actions in case of violations.
A growing coalition of tech companies argues that the requirements would discourage companies from developing large AI systems or keeping their technology open-source.
“The bill will make the AI ecosystem less safe, jeopardize open-source models relied on by startups and small businesses, rely on standards that do not exist, and introduce regulatory fragmentation,” Rob Sherman, Meta vice president and deputy chief privacy officer, wrote in a letter sent to lawmakers.
The proposal could also drive companies out of state to avoid the regulations, the state’s Chamber of Commerce said.
Opponents want to wait for more guidance from the federal government. Proponents of the bill said California cannot wait, citing hard lessons learned from not acting soon enough to rein in social media companies.
State lawmakers were also considering Tuesday another ambitious measure to fight automation discrimination when companies use AI models to screen job resumes and rental apartment applications.
—Trân Nguyễn, Associated Press