What to make of JD Vance’s speech at the Paris AI summit 

Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

Vance’s Paris speech shows a brash American exceptionalism for the AI age

Vice President JD Vance’s speech to world leaders at the Artificial Intelligence Action Summit was by turns warm and conciliatory, and strident to the point of offensiveness. Vance emphasized that AI has the potential to bring significant benefits to the world, and its risks can be effectively managed—provided that the U.S. and its tech companies take the lead.

Vance argued that the U.S. remains the leader when it comes to developing cutting-edge AI models, and suggested that other countries should collaborate with the U.S. on AI rather than competing against it. (Vance also said AI companies shouldn’t try to dictate the political tenor of content or dialog their models will accept, citing the Google Gemini model’s failed attempt at generating “correct” images that resulted in a Black George Washington and female popes.)

“This administration will ensure that AI developed in the United States continues to be the gold standard worldwide,” he said. And key to the U.S.’s approach, according to Vance: leaving tech companies to regulate themselves on safety and security issues. 

Vance said the U.S. should take a “collaborative” and “open” approach to AI with other Western countries, but stressed that the world needed “an international regulatory regime that fosters the creation of revolutionary AI technology rather than strangles it.” Vance went on to criticize the E.U. for its more intrusive regulatory approach. It would be a “terrible mistake for your own countries” if they “tightened the screws on U.S. tech companies,” he advised the assembly.

But not everyone agrees: Every country attending the Paris summit signed a declaration ensuring artificial intelligence is “safe, secure, and trustworthy”—except for the U.S. and the U.K.

Vance said that his administration will take a different approach—using protectionist tactics to favor U.S. AI companies. The White House will continue the Biden-era chip bans, which restrict the sale of the most advanced AI chips to other countries. (The goal right now for the Trump administration is to hinder Chinese companies like DeepSeek.) It’s possible that the Trump administration could tighten these restrictions further or explore additional measures to slow down foreign AI competitors.

“To safeguard America’s advantage, the Trump administration will ensure that the most powerful AI systems are built in the U.S. with American designed and manufactured chips,” he said.

OpenAI’s models will no longer shy away from sensitive topics

In his Paris speech, JD Vance said his administration believes that AI companies shouldn’t try to restrict speech—even disinformation or outright propaganda—from their models and chatbots. That’s music to Silicon Valley bigwigs’ ears, many of whom don’t love the expensive, demanding, and human-intensive work of content moderation. Two days after the speech, OpenAI announced that it’s pushing a new, more permissive code of conduct (a “model spec”) into its AI models. Going forward, its models will be less conservative about what they will and won’t talk about.

“The updated Model Spec explicitly embraces intellectual freedom–the idea that AI should empower people to explore, debate, and create without arbitrary restrictions–no matter how challenging or controversial a topic may be,” the company said in a blog post published Wednesday. As an example, OpenAI said that an AI model should be kept from outputting detailed instructions for building a bomb or violating personal privacy, but should be trained not to default to simply saying “I can’t help you with that” when given politically or culturally sensitive questions. “In essence, we’ve reinforced the principle that no idea is inherently off limits for discussion,” the blog post said, “so long as the model isn’t causing significant harm to the user or others (e.g., carrying out acts of terrorism).”

This policy shift sounds very much in line with the permissive posture adopted by right-wing sites such as Gab and Parler, then by X, and, more recently, by Meta’s Facebook. Now OpenAI is getting in on Big Tech’s vibe shift on content moderation. Stay tuned for the results.

PwC champions agentic AI as the next major workplace disruptor

The professional services firm PwC recently released a report asserting that AI agents could “dwarf even the transformative effects of the internet.” PwC predicts these agents will reshape workforce strategies, business models, and competitive advantages, while combining with human creativity to form “augmented intelligence,” enabling unprecedented innovation and productivity. The report emphasizes collaboration between humans and AI: “While AI agents offer remarkable autonomy, an effective model is one of collaboration and dynamic oversight. This principle of human-at-the-helm can guide the development of clear protocols that define the boundaries of AI autonomy and enable appropriate human intervention.”

PwC warns that businesses must reimagine work to adapt to this agentic world. But, the PwC authors stress, that shift is a necessary one, as evidenced by AI agents’ successful deployment in areas like software development and customer service. To facilitate this transition, PwC suggests a five-step approach: strategize, reimagine work, structure the workforce, help workers redefine their roles, and unleash responsible AI. “The question is,” the report states, “have you transformed to become a winner in the age of AI-enhanced work, or are you racing and perhaps too late to catch up?”


