Why the new rulings on AI copyright might actually be good news for publishers

The AI copyright courtroom is heating up.

Back-to-back rulings last week significantly shifted the ongoing legal war between AI companies and content creators, seemingly in favor of the former. First, Anthropic prevailed on the central question in a case examining whether it could claim “fair use” over its ingestion of large book archives to feed its Claude AI models. In another case, a federal judge said Meta did not violate the copyright of several well-known authors who sued the company for training its Llama models on their books.

At a glance, this looks bad if you’re an author or content creator. Although neither case necessarily sets a precedent (the judge in the Meta case even went out of his way to emphasize how narrowly focused it was), two copyright rulings coming down so quickly and definitively on the side of AI companies is a signal—one that suggests “fair use” will be an effective shield for them, potentially even in higher-stakes cases like the ones involving The New York Times and News Corp.

As always, the reality is a little more complicated. The outcomes of both cases were more mixed than the headlines suggest, and they are also deeply instructive. Far from closing the door on copyright holders, they point to places where litigants might find a key.

What the Anthropic ruling says about AI inputs vs. outputs

Before I get going, I need to point out that I’m not a lawyer. What I offer here is my analysis of the cases based on my experience as a journalist and media executive and what I’ve learned following this space for the past two years. Consider this general guidance for those curious about what’s going on, but if you or your company is in the process of arguing one of these cases or thinking about legal action, you should consult a lawyer, preferably one who specializes in copyright law.

Speaking of, here’s a little refresher on that: Copyright law is well defined in the U.S., and it provides a defense against certain infringement claims, known as fair use. Almost all of the AI companies at the forefront of building models rely on this defense. Determining whether a fair-use defense holds water comes down to four factors:

  1. The purpose of the use, or whether it was for commercial or noncommercial purposes. Courts tend to be more forgiving of the latter, but what the AI companies are doing is obviously a massively commercial exercise. This factor also covers whether the allegedly infringing work is a direct copy or “transformative.” Many have argued that AI outputs are transformative because they aren’t word-for-word copies and usually draw on many different sources.
  2. The nature of the copyrighted work: More protection usually goes to creative works than factual ones. AI systems often deal with both.
  3. How much of the original work was copied: Reproducing short excerpts is usually OK, but AI companies typically ingest entire works for training. Courts have sometimes tolerated full copying as long as the output doesn’t reproduce the entire work or big chunks verbatim.
  4. Whether the violation caused market harm: This is a large focus in these cases and other ongoing litigation.

The outcome of the Anthropic case drew some lines between what was OK and what wasn’t. The fact is, anyone can buy a book, and for the books that were legally obtained, the judge said that training an AI on them qualified as fair use. However, if those books were illegally obtained—that is, pirated—that would amount to a copyright violation. Since many of them undoubtedly were, Anthropic might still pay a price for training on the illegally copied books that happened to be in the archives.

An important aspect of the Anthropic case is that it focuses on the inputs of AI systems as opposed to the outputs. In other words, it answers the question, “Is copying a whole bunch of books a violation, independent of what you’re doing with them?” with “No.”

In his ruling, the judge cited the precedent-setting case of Authors Guild, Inc. v. Google, Inc. from 2015. That case concluded Google was within its rights to copy books for an online database, and the Anthropic ruling is a powerful signal that extends the concept into the AI realm. However, the Google case came out in favor of fair use in large part because the outputs of Google Books are limited to excerpts, not entire books.

This is important, because a surface-level reading of the Anthropic case might make you think that, if an AI service pays for a copy of something, it can do whatever it wants with it. For example, if you wanted to use the entire archive of The Information, all you’d need to do is pay the annual subscription. But for digital subscriptions, the permission is to access and read, not to copy and repurpose. Courts have not ruled that buying a digital subscription alone licenses AI training, even though many might read it that way.

The missing piece in the Meta case: harm

The Meta case has a little bit to say about that, and it has to do with the fourth factor of the fair-use defense: market harm. The judge ruled in favor of Meta because the authors, who include comedian Sarah Silverman and journalist Ta-Nehisi Coates, weren’t able to prove that they had suffered a decline in book sales. While that gives a green light for an AI to train on copyrighted works as long as it doesn’t negatively affect their commercial potential, the reverse is also true: content creators will be more successful in court if they can show that it does.

In fact, that’s exactly what happened earlier this year. In February, Thomson Reuters scored a win against a now-defunct AI company called Ross Intelligence in a ruling that rejected Ross’s claims of fair use for training on material derived from Thomson Reuters’ content. Ross’s business model centered around a product that competed directly with the source of the content, Westlaw, Thomson Reuters’s online legal research service. That was clear market harm in the judge’s eyes.

Taken together, the three cases point to a clearer path forward for publishers building copyright cases against Big AI:

  1. Focus on outputs instead of inputs: It’s not enough that someone hoovered up your work. To build a solid case, you need to show that what the AI company did with it reproduced it in some form. So far, no court has definitively decided whether AI outputs are meaningfully different enough to count as “transformative” in the eyes of copyright law, but it should be noted that courts have ruled in the past that copyright violation can occur even when small parts of the work are copied—if those parts represent the “heart” of the original.
  2. Show market harm: This looks increasingly like the main battle. Now that we have a lot of data on how AI search engines and chatbots—which, to be clear, are outputs—are affecting the online behavior of news consumers, the case that an AI service harms the media market is easier to make than it was a year ago. In addition, the emergence of licensing deals between publishers and AI companies is itself evidence of a market: generating outputs from a publisher’s content without striking such a deal arguably deprives the publisher of that licensing revenue.
  3. Question source legitimacy: Was the content legally acquired or pirated? The Anthropic case opens this up as a possible attack vector for publishers. If they can prove scraping occurred through paywalls—without subscribing first—that could be a violation even absent any outputs.

The case for a better case

This area of law is evolving rapidly. There will certainly be appeals for these cases and others that are still pending, and there’s a good chance this all ends up at the Supreme Court. Also, regulators or Congress could change the rules. The Trump administration has hardly been silent on the issue: It recently fired the head of the U.S. Copyright Office, ostensibly over its changing stance on AI, and when it solicited public comment on its AI action plan, both OpenAI and Google took the opportunity to argue for signing their interpretation of fair use into law.

For now, though, publishers and content creators have a better guide to strengthening their copyright cases. The recent court rulings don’t mean copyright holders can’t win; they mean the broad “AI eats everything” narrative won’t win by itself. Plaintiffs will need to show that outputs are market substitutes, that the financial harm is real, or that the AI companies used pirated sources in their training sets. The rulings aren’t saying “don’t sue”—they show how to sue well.

https://www.fastcompany.com/91362983/ai-copyright-cases-anthropic-meta-publishers

Created: 09.07.2025, 09:50:03
