The New York Times’s OpenAI lawsuit could put a damper on AI’s 2024 ambitions

Welcome to AI Decoded, Fast Company’s weekly LinkedIn newsletter that breaks down the most important news in the world of AI. If a friend or colleague shared this newsletter with you, you can sign up to receive it every week here.

What The New York Times suit against OpenAI could mean for AI

The New York Times filed a lawsuit against OpenAI and Microsoft late last month, alleging the companies used its content to train their respective AI models without permission or compensation. Developers of large language models have routinely scraped huge batches of data from the internet, then trained their models by having them process and find patterns in that data. But the Times's claims go deeper: the suit says OpenAI and Microsoft encoded the newspaper's articles into their language models' memory so that ChatGPT and Bing Chat (now called Copilot) could access and regurgitate the information—in some cases verbatim, and without proper citation (the suit contains numerous examples of this). The lawsuit demands that any chatbot trained on the data be taken offline. It came as a surprise to OpenAI: a company spokesperson told Axios that the two sides had been discussing content licensing terms.

The suit marked a sobering coda to 2023, a year in which the AI industry sprinted forward largely unchecked and mostly unregulated. Many in the tech industry had hoped that 2024 would bring far wider application of AI systems. But lawsuits over copyright could slow everything down, as legal exposure becomes a bigger factor in AI companies' decisions about how and when to release new models. Could training data—not safety concerns or fears of job destruction—become the AI industry's Achilles' heel?

OpenAI's lawyers may argue that an AI model isn't much different from a human who ingests a bunch of information from the web and then uses it as a basis for their own thoughts. That whole debate may be moot if the Times can prove that it was financially harmed when OpenAI's and Microsoft's AI models spat out line-for-line text lifted from the paper's coverage. But the larger issue is that this is all uncharted legal territory; a high-profile trial may begin to establish how copyright law applies to the training of AI models. Even if OpenAI ends up paying damages, the two parties may still come to an accommodation allowing the AI company to use Times content for training.

News publishers’ posture toward AI companies runs the gamut: The Wall Street Journal, News Corp, and Gannett want to license their stories to AI developers, while others such as Reuters and CNN have begun blocking AI companies from accessing their content. Meanwhile, it’s still not outside the realm of possibility that the courts or the Federal Trade Commission could order AI companies to delete training data they’ve already scraped from the web. (The FTC did, after all, open an inquiry on OpenAI’s training data acquisition practices last summer.)

“In the months ahead, we’ll continue to see additional licensing agreements between credible publishers and AI companies,” says Alon Yamin, the cofounder and CEO of Copyleaks, which makes an AI plagiarism detection tool. “And yes, additional lawsuits.”

Ready for another buzzy AI smartphone killer?

First there was Humane’s Ai Pin, an AI device you can wear on your lapel. Now, another company, L.A.-based Rabbit, is set to reveal its own AI-centered device, called the r1, during next week’s CES trade show. The demo video shows people instructing a device to order an Uber, find a new podcast, and “tell everybody I’m going to be a little late.”

The r1 (which appears as a mysterious pixelated blob in the video) uses a large language model to understand spoken requests in both content and context. But that's just the front door. Rabbit's big idea is a foundation model (called the Large Action Model, or LAM) that orchestrates the actions of a number of different apps (including Uber) in order to fulfill a user's request.

I’m skeptical of the work of companies like Humane and Rabbit only because we’re still in the early days of foundation models that are actually useful in everyday life. But I also love where this is all headed. These companies are early players in an AI-powered evolution away from smartphones to something far more personal and functional. As the models that power these devices improve, personal AI devices will only get better.

Vinod Khosla, who is a major investor in Rabbit, says the startup’s concept points to a future of autonomous AI agents. “In a decade, we could have tens of billions more agents than people on the planet running around the net doing things on our behalf,” he says in a statement. “The Rabbit team is bringing powerful new consumer experiences for every human to have an agent in their pocket.”

What IT decision-makers think about generative AI

The following insights come from new research conducted by the research and consulting firm Creative Strategies:

  • “The survey reveals a significant engagement with AI, with 76.47% of organizations either evaluating or deploying generative AI technologies.”
  • “[A] majority of organizations (59.4%) are likely to implement generative AI technologies in the near future. However, some resistance or hesitation is also present, with 15.3% viewing it as extremely unlikely.” 
  • “Integrating AI with existing systems and the lack of skilled personnel are the top challenges faced by businesses in AI projects.”
  • “Business analytics is the leading application of generative AI that captures the interest of IT decision-makers, with 15.52% looking to leverage AI for data-driven decision-making. This is closely followed by software coding and customer service applications.”
  • “Security remains the paramount concern for businesses considering AI implementation, with 22.53% of IT decision-makers highlighting it. The quality and accuracy of AI outputs, along with privacy concerns, are also top of mind.”


https://www.fastcompany.com/91004693/new-york-times-openai-lawsuit

Published Jan. 3, 2024, 20:10:04

