This is another preliminary ruling in the copyright battle over generative AI. The stakes couldn’t be higher: copyright law has the capacity to nix the entire generative AI category. Fortunately, Judge Chhabria easily rejects the copyright owners’ overclaims.
The defendant in this case is Facebook (Meta), whose LLaMa is a “foundational, 65-billion-parameter large language model” that is “available for free for research and commercial use.” The plaintiff copyright owners are three book authors suing on behalf of a class, including GenXer icon Sarah Silverman. (Remember her appearance in Star Trek: Voyager? And who can forget her in The Aristocrats?)
This short opinion squarely addresses when AI training models constitute derivative works. Simply indexing copyrighted books into the model doesn’t create derivative works (the judge calls the argument “nonsensical”) because the training model doesn’t recast or adapt the books. Instead, “the plaintiffs would indeed need to allege and ultimately prove that the outputs ‘incorporate in some form a portion of’ the plaintiffs’ books.” Specifically,
To the extent that they are not contending LLaMa spits out actual copies of their protected works, they would need to prove that the outputs (or portions of the outputs) are similar enough to the plaintiffs’ books to be infringing derivative works
The plaintiffs can amend their complaint to make these allegations, but in practice this legal standard functionally ends their derivative works argument. Given the way LLMs work, it’s highly unlikely they can find examples where the outputs are “similar enough” to their source material. Furthermore, the plaintiffs would need to do this for each work they claim is infringed, which makes it difficult or impossible to form a class. If this legal standard holds (and it should), the derivative works arguments against LLMs should be a dead end.
The post Facebook’s LLaMa Defeats Copyright Claims–Kadrey v. Meta appeared first on Technology & Marketing Law Blog.