Big tech scored a major victory this week in the battle over using copyrighted materials to train AI models. Anthropic won a partial judgment on Tuesday in a case brought by three authors who alleged the company violated their copyright by storing their works in a library used to train its Claude AI model.
Judge William Alsup of the U.S. District Court for the Northern District of California ruled that Anthropic’s use of copyrighted material for training was fair use. His decision carries weight. “Authors cannot rightly exclude anyone from using their works for training or learning as such,” Alsup wrote. “Everyone reads texts, too, then writes new texts. They may need to pay for getting their hands on a text in the first instance. But to make anyone pay specifically for the use of a book each time they read it, each time they recall it from memory, each time they later draw upon it when writing new things in new ways would be unthinkable.”
Alsup called training Claude “exceedingly transformative,” comparing the model to “any reader aspiring to be a writer.”
That language helps explain why tech lobbyists were quick to call it a major win. Experts agreed. “It’s a pretty big win actually for the future of AI training,” says Andres Guadamuz, an intellectual property expert at the University of Sussex who has closely followed AI copyright cases. But he adds: “It could be bad for Anthropic specifically depending on authors winning the piracy issue, but that’s still very far away.”
In other words, it’s not as simple as tech companies might wish.
“The fair use ruling looks bad for creators on its surface, but this is far from the final word on the matter,” says Ed Newton-Rex, a former AI executive turned copyright campaigner and founder of Fairly Trained, a nonprofit certifying companies that respect creators’ rights. The case is expected to be appealed — and even at this stage, Newton-Rex sees weaknesses in the ruling’s reasoning. “The judge makes assertions about training not deincentivizing creation, and about AI learning like humans do, that feel easy to rebut,” he says. “This is on balance a bad day for creators, but it’s just the opening move in what will be a long game.”
While the judge approved training AI models on copyrighted works, other elements of the case weren’t so favorable for Anthropic. Guadamuz says Alsup’s decision hinges on a “solid fair use argument on the transformative nature of AI training.” The judge thoroughly applied the four-factor test for fair use, Guadamuz noted, and the ruling could reshape broader copyright approaches. “We may start seeing the beginnings of rules for the new world, [where] having legitimate access to a work would work strongly in proving fair use, while using shadow libraries would not,” he says.
And that’s the catch: this wasn’t an unvarnished win for Anthropic. Like other tech companies, Anthropic allegedly sourced training materials from piracy sites for ease—a fact that clearly troubled the court. “This order doubts that any accused infringer could ever meet its burden of explaining why downloading source copies from pirate sites that it could have purchased or otherwise accessed lawfully was itself reasonably necessary to any subsequent fair use,” Alsup wrote, referring to Anthropic’s alleged pirating of more than seven million books.
That alone could carry billions in liability, with statutory damages starting at $750 per book—a trial on that issue is still to come.
So while tech companies may still claim victory (with some justification, given the fair use precedent), the same ruling also implies that companies will need to pay substantial sums to legally obtain training materials. OpenAI, for its part, has in the past argued that licensing all the copyrighted material needed to train its models would be practically impossible.
Joanna Bryson, a professor of AI ethics at the Hertie School in Berlin, says the ruling is “absolutely not” a blanket win for tech companies. “First of all, it’s not the Supreme Court. Secondly, it’s only one jurisdiction: The U.S.,” she says. “I think they don’t entirely have purchase over this thing about whether or not it was transformative in the sense of changing Claude’s output.”