What to make of JD Vance’s speech at the Paris AI summit 

Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

Vance’s Paris speech shows a brash American exceptionalism for the AI age

Vice President JD Vance’s speech to world leaders at the Artificial Intelligence Action Summit was by turns warm and conciliatory, and strident to the point of offensiveness. Vance emphasized that AI has the potential to bring significant benefits to the world, and that its risks can be effectively managed—provided that the U.S. and its tech companies take the lead.

Vance argued that the U.S. remains the leader in developing cutting-edge AI models, and suggested that other countries should collaborate with the U.S. on AI rather than compete against it. (Vance also said AI companies shouldn’t try to dictate the political tenor of the content or dialogue their models will accept, citing the Google Gemini model’s failed attempt at generating “correct” images, which resulted in a Black George Washington and female popes.)

“This administration will ensure that AI developed in the United States continues to be the gold standard worldwide,” he said. And key to the U.S.’s approach, according to Vance: leaving tech companies to regulate themselves on safety and security issues. 

Vance said the U.S. would take a “collaborative” and “open” approach to AI with other Western countries, but stressed that the world needed “an international regulatory regime that fosters the creation of revolutionary AI technology rather than strangles it.” Vance went on to criticize the E.U. for its more intrusive regulatory approach. It would be a “terrible mistake for your own countries,” he advised the assembly, if they “tightened the screws on U.S. tech companies.”

But not everyone agrees: Every country attending the Paris summit signed a declaration ensuring artificial intelligence is “safe, secure, and trustworthy”—except for the U.S. and the U.K.

Vance said that his administration would take a different approach—using protectionist tactics to favor U.S. AI companies. The White House will continue the Biden-era chip bans, which restrict the sale of the most advanced AI chips to other countries. (The goal right now for the Trump administration is to hinder Chinese companies like DeepSeek.) It’s possible that the Trump administration could tighten these restrictions further or explore additional measures to slow down foreign AI competitors.

“To safeguard America’s advantage, the Trump administration will ensure that the most powerful AI systems are built in the U.S. with American designed and manufactured chips,” he said.

OpenAI’s models will no longer shy away from sensitive topics

In his Paris speech, JD Vance said his administration believes that AI companies shouldn’t try to restrict speech—even disinformation or outright propaganda—from their models and chatbots. That’s music to Silicon Valley bigwigs’ ears, many of whom don’t love the expensive, demanding, and human-intensive work of content moderation. Two days after the speech, OpenAI announced that it’s pushing a new, more permissive code of conduct (a “model spec”) into its AI models. Going forward, its models will be less conservative about what they will and won’t talk about.

“The updated Model Spec explicitly embraces intellectual freedom–the idea that AI should empower people to explore, debate, and create without arbitrary restrictions–no matter how challenging or controversial a topic may be,” the company said in a blog post published Wednesday. As an example, OpenAI said that an AI model should be kept from outputting detailed instructions for building a bomb or violating personal privacy, but should be trained not to default to simply saying “I can’t help you with that” when given politically or culturally sensitive questions. “In essence, we’ve reinforced the principle that no idea is inherently off limits for discussion,” the blog post said, “so long as the model isn’t causing significant harm to the user or others (e.g., carrying out acts of terrorism).”

This policy shift sounds very much in line with the permissive posture adopted by right-wing sites such as Gab and Parler, then by X, and, more recently, by Meta’s Facebook. Now OpenAI is getting in on Big Tech’s vibe shift on content moderation. Stay tuned for the results.

PwC champions agentic AI as the next major workplace disruptor

The professional services firm PwC recently released a report asserting that AI agents could “dwarf even the transformative effects of the internet.” PwC predicts these agents will reshape workforce strategies, business models, and competitive advantages, while combining with human creativity to form “augmented intelligence,” enabling unprecedented innovation and productivity. The report emphasizes collaboration between humans and AI: “While AI agents offer remarkable autonomy, an effective model is one of collaboration and dynamic oversight. This principle of human-at-the-helm can guide the development of clear protocols that define the boundaries of AI autonomy and enable appropriate human intervention.”

PwC warns that businesses must reimagine work to adapt to this agentic world. But, the PwC authors stress, that shift is a necessary one, as evidenced by AI agents’ successful deployment in areas like software development and customer service. To facilitate this transition, PwC suggests a five-step approach: strategize, reimagine work, structure the workforce, help workers redefine their roles, and unleash responsible AI. “The question is,” the report states, “have you transformed to become a winner in the age of AI-enhanced work, or are you racing and perhaps too late to catch up?”

More AI coverage from Fast Company: 

Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.
