Governor Gavin Newsom signed a flurry of AI bills—but not the most high-profile one

Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

Newsom signs pile of AI bills as the SB 1047 deadline approaches

California Governor Gavin Newsom signed a pile of AI bills into law on Tuesday. Two of those bills concern the rights of actors in a world where studios have the option to use an AI-generated version of an actor rather than the genuine article. AB 2602 requires studios to state explicitly in contracts with actors that they’re claiming the right to create an AI-generated likeness of their body or voice. AB 1836 requires studios, for 70 years after an actor’s death, to get consent from the actor’s estate before generating an AI likeness. (Both bills build on AI-related concessions that actors won during the writers’ strike.)

Another trio of bills signed into law by Newsom deals with the use of AI in politics. AB 2655 requires online platforms to remove or label deepfakes that misrepresent political candidates during election season. AB 2839, meanwhile, expands the time period around elections in which individuals are prohibited from knowingly sharing deepfakes and other AI-generated disinformation. And AB 2355 requires campaigns to disclose any use of AI-generated or AI-manipulated ad content.

Tuesday’s news is just as notable for what it didn’t include: SB 1047, which would impose a basic set of safety and reporting requirements on companies developing large “frontier” models. The bill intends to get the state more involved in ensuring that AI companies don’t create unsafe models that could cause or enable catastrophic harm (for example, the creation of a bioweapon).

Many Silicon Valley venture capital and startup people, along with some powerful political allies, claim the bill’s requirements would slow research progress on a technology that could revolutionize business. Proponents of the bill argue that frontier AI models may soon pose severe risks and that developers of such models should implement reasonable safeguards against those risks.

Tuesday’s signing of the politics- and entertainment-related AI bills is no indicator that Newsom will indeed sign SB 1047 (he has until September 30 to decide). In deciding SB 1047, Newsom must size up the real risk of catastrophic harm from large AI models, then balance that threat against the need of the state’s biggest industry to push forward—and profit from—a transformative technology.

Microsoft rolls out the second “wave” of AI at work

Microsoft rolled out on Monday a new set of AI features within its productivity and collaboration apps. The showcase event, called Microsoft 365 Copilot: Wave 2, was meant to display the second phase of Copilot’s integration into modern business workflows. As demonstrated on Monday, the AI Copilot is ever-present in the interface, and has become more adept at fetching relevant contextual information, including proprietary or company-specific information from a knowledge graph.

“Copilot Pages” is a good example. The tool is something like Google Docs, with the AI copilot acting as a coworker in a collaboration group. One demo video shows a user asking Copilot to fetch information about a potential project. The user can then chat with the AI, and finally move all the AI’s responses onto a “Page,” where other users are invited to weigh in. As the team iterates and fleshes out the idea, it can use the Copilot to pull in documents that might help advance the work, like proposal templates or project plans from the past.

Another feature, Narrative Builder, takes a similar approach, but within the PowerPoint environment. The tool starts by generating a sample presentation outline based on a small amount of information provided by the user, then pulls in presentation templates and art that fit the company’s style. The tool lets a user get to a reasonably good draft quickly, then begin reacting to it, instead of staring at blank pages. A new Prioritize my Inbox tool uses AI to prioritize emails based on the body of the email itself. For instance, it can glean from an email if an action needs to be taken by the recipient, and how urgently the recipient needs to act.

Perhaps most interesting of all, Microsoft is now rolling out a new tool called Copilot Studio where regular worker-users (not coders or people with AI skills) can build their own AI agents. Microsoft believes such agents are the way of the future for businesses. For instance, an HR department might use Copilot to build a “new employee assistant” that can guide a new hire through all the paperwork, orientation, and training that happens on the first day of a new gig. Such a bot would know the employee handbook and onboarding procedures and could answer any questions the employee might have.

Or a customer service department might use Copilot to build an agent that assists field service workers during customer calls. Such an agent would come armed with all the company’s product information, information on specific customers, and protocols for repairs and troubleshooting. Microsoft originally announced the Copilot agents back in May, but the user-friendly agent builder is new, and will become available to businesses that subscribe to the Microsoft 365 Copilot services within the next few weeks.

NewsGuard: Two-thirds of top news sites block AI crawlers

Large language models are trained using massive amounts of data scraped from the public internet without explicit permission—and without paying for it. As this has become better understood, many publishers have added a line of code to their websites telling web crawlers “do not scrape.” In a new report, NewsGuard says that 67% of news websites it rates as “top quality” now block web crawlers’ access to their content.
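In practice, that “do not scrape” line typically lives in a site’s robots.txt file, which lists rules per crawler. As an illustrative sketch (GPTBot is the user-agent token OpenAI publishes for its training crawler; the rule shown here is a generic example, not any specific publisher’s file), a news site’s robots.txt might include:

```
# Block OpenAI's training crawler from the entire site
User-agent: GPTBot
Disallow: /

# Allow all other crawlers (e.g., search engines) as usual
User-agent: *
Allow: /
```

Compliance is voluntary—robots.txt is a convention, not an enforcement mechanism—which is partly why publishers have also turned to licensing deals and litigation.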

NewsGuard, which provides anti-misinformation tools, deduces that AI model developers must then rely disproportionately on news data from low-quality sources such as The Epoch Times and ZeroHedge, which may publish rumors or conspiracy theories, or have a political agenda. Among these low-quality sites, 91% allow the crawlers, NewsGuard finds. “This helps explain why chatbots so often spread false claims and misinformation,” the report states.

Of course, NewsGuard has no way of knowing what news data is used to train AI models because AI companies don’t disclose that (it acknowledges this). But it’s true that publishers are blocking crawlers used by AI companies, and that AI companies now routinely sign content deals with publishers so that they can train models using the publisher’s content. OpenAI, for example, has now signed such agreements with Time, The Atlantic, Vox Media, and others.

The AI industry is working on ways of fixing the news problem. Perplexity and OpenAI Search, for example, can call on an index of current web content to help inform the answers the AI generates. And a growing area of research focuses on “recursive” learning, or the ability of an AI model to constantly learn new information and integrate it into its training data.

More AI coverage from Fast Company: 

Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.


Created 10 months ago | Sept. 19, 2024, 15:20:04
