The New York Times’s OpenAI lawsuit could put a damper on AI’s 2024 ambitions

Welcome to AI Decoded, Fast Company’s weekly LinkedIn newsletter that breaks down the most important news in the world of AI. If a friend or colleague shared this newsletter with you, you can sign up to receive it every week here.

What The New York Times suit against OpenAI could mean for AI

The New York Times filed a lawsuit against OpenAI and Microsoft late last month, alleging the companies used its content to train their respective AI models without permission or compensation. Developers of large language models routinely scrape huge batches of data from the internet and train their models by having them process and find patterns in that data. But the Times’s claims go deeper: the suit says OpenAI and Microsoft encoded the newspaper’s articles into their language models’ memory so that ChatGPT and Bing Chat (now called Copilot) could access and regurgitate the information—in some cases verbatim, and without proper citation (the suit contains numerous examples of this). The lawsuit demands that any chatbot trained on the data be taken offline. The suit came as a surprise to OpenAI: A company spokesperson told Axios that the two sides had been discussing content licensing terms.

The suit marked a sobering coda to 2023, a year in which the AI industry sprinted forward largely unrestrained and mostly unregulated. Many in the tech industry had hoped that 2024 would bring far wider application of AI systems. But lawsuits over copyright could slow everything down, as legal exposure becomes a bigger factor in AI companies’ plans for how and when to release new models. Could training data—not safety concerns or fears of job destruction—become the AI industry’s Achilles’ heel?

OpenAI’s lawyers may argue that an AI model isn’t much different from a human who ingests a large amount of information from the web and then uses it as a basis for their own thoughts. That whole debate may be moot if the Times can prove that it was financially harmed when OpenAI’s and Microsoft’s AI models spat out line-for-line text lifted from the paper’s coverage. But the main issue is that this is all uncharted legal territory; a high-profile trial may begin to establish how copyright law applies to the training of AI models. Even if OpenAI ends up paying damages, the two parties may still come to an accommodation allowing the AI company to use Times content for training.

News publishers’ posture toward AI companies runs the gamut: The Wall Street Journal, News Corp, and Gannett want to license their stories to AI developers, while others such as Reuters and CNN have begun blocking AI companies from accessing their content. Meanwhile, it’s still not outside the realm of possibility that the courts or the Federal Trade Commission could order AI companies to delete training data they’ve already scraped from the web. (The FTC did, after all, open an inquiry on OpenAI’s training data acquisition practices last summer.)
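Publishers that block AI companies typically do so through the Robots Exclusion Protocol, listing the crawlers’ published user-agent tokens in the site’s robots.txt file. A minimal sketch of what such a file looks like (GPTBot and CCBot are the tokens OpenAI and Common Crawl document for their crawlers; any given publisher’s actual file will differ):

```text
# Block OpenAI's web crawler from the entire site
User-agent: GPTBot
Disallow: /

# Block Common Crawl's bot, whose corpus is widely used for LLM training
User-agent: CCBot
Disallow: /
```

Note that robots.txt is advisory: compliance is voluntary on the crawler’s part, which is one reason publishers are also pursuing licensing deals and litigation.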

“In the months ahead, we’ll continue to see additional licensing agreements between credible publishers and AI companies,” says Alon Yamin, the cofounder and CEO of Copyleaks, which makes an AI plagiarism detection tool. “And yes, additional lawsuits.”

Ready for another buzzy AI smartphone killer?

First there was Humane’s Ai Pin, an AI device you can wear on your lapel. Now, another company, L.A.-based Rabbit, is set to reveal its own AI-centered device, called the r1, during next week’s CES trade show. The demo video shows people instructing a device to order an Uber, find a new podcast, and “tell everybody I’m going to be a little late.”

The r1 (which appears as a mysterious pixelated blob in the video) uses a large language model to understand spoken requests in both content and context. But that’s just the front door. Rabbit’s big idea is a foundation model (called the Large Action Model, or LAM) that orchestrates the actions of a number of different apps (including Uber) in order to meet a user’s demand.

I’m skeptical of the work of companies like Humane and Rabbit only because we’re still in the early days of foundation models that are actually useful in everyday life. But I also love where this is all headed. These companies are early players in an AI-powered evolution away from smartphones to something far more personal and functional. As the models that power these devices improve, personal AI devices will only get better.

Vinod Khosla, who is a major investor in Rabbit, says the startup’s concept points to a future of autonomous AI agents. “In a decade, we could have tens of billions more agents than people on the planet running around the net doing things on our behalf,” he says in a statement. “The Rabbit team is bringing powerful new consumer experiences for every human to have an agent in their pocket.”

What IT decision-makers think about generative AI

The following insights come from new research conducted by the research and consulting firm Creative Strategies:

  • “The survey reveals a significant engagement with AI, with 76.47% of organizations either evaluating or deploying generative AI technologies.”
  • “[A] majority of organizations (59.4%) are likely to implement generative AI technologies in the near future. However, some resistance or hesitation is also present, with 15.3% viewing it as extremely unlikely.” 
  • “Integrating AI with existing systems and the lack of skilled personnel are the top challenges faced by businesses in AI projects.”
  • “Business analytics is the leading application of generative AI that captures the interest of IT decision-makers, with 15.52% looking to leverage AI for data-driven decision-making. This is closely followed by software coding and customer service applications.”
  • “Security remains the paramount concern for businesses considering AI implementation, with 22.53% of IT decision-makers highlighting it. The quality and accuracy of AI outputs, along with privacy concerns, are also top of mind.”

More AI coverage from Fast Company:

From around the web: 

https://www.fastcompany.com/91004693/new-york-times-openai-lawsuit

Published January 3, 2024, 20:10:04

