The New York Times vs. OpenAI: AI’s Impact on Copyright and Traditional Media

by Clyde E. Findley | February 27, 2024 | Intellectual Property

The New York Times vs. OpenAI copyright lawsuit could serve as a litmus test for how AI advancements affect traditional media and established intellectual property law. As the case moves through the courts, it may raise more questions than answers, given how quickly artificial intelligence evolves compared with slow-moving legal precedent and existing copyright statutes.

Case Background

The New York Times is suing OpenAI and Microsoft for copyright infringement, arguing that their artificial intelligence software, ChatGPT, creates new content that includes copies of original works published online by The New York Times. Procedurally, it will be interesting to see how this case moves past the initial pleading stage. To maintain a copyright infringement case, a plaintiff must normally first obtain a registration for its copyrighted material. NYT has not yet published evidence that it met that threshold, meaning OpenAI could pursue this avenue as grounds for dismissal. However, there are some exceptions to this rule, as well as some policy problems for authors who create large volumes of content. It will be interesting to see how NYT addresses this procedural hurdle.

Moving The Case Forward

Assuming the case proceeds past the procedural stage, it will be interesting to see how OpenAI is treated as a defendant. Can OpenAI argue that its system creates new, original content from available news resources just as a human would? It is perfectly fine for a human to read publicly available content. It is also fine for a human to use that content to create original works. So far, so good. What a human cannot do is copy the original content in the process of creating a new work, especially when selling the new work. Humans typically understand this as plagiarism. Taking this simplistic view of the issues, a reasonable outcome of the case could be a ruling that finds OpenAI liable for infringement wherever ChatGPT has actually copied NYT content, but not where the content has been sufficiently reworded, and not where the content has been quoted and properly cited. Such a ruling would likely encourage OpenAI and other AI system developers to ensure their software respects copyright laws and does not copy content created by others.

Liability for AI vs. Its Creator

Separately, there is an interesting question of whether someone can be sued for copyright infringement when they (in this case, the software developers) did not perform the copying; rather, their software did. Under existing law, this is considered indirect infringement, similar to being indirectly liable when your dog bites someone. In the human world, we want people to train their dogs not to bite, and we have laws to hold owners accountable when they do. We are not yet at the stage where AI systems fully understand the social and legal implications of their actions, but perhaps The New York Times vs. OpenAI is an opportunity for a court to rule that developers of AI systems should be held accountable when their software violates someone else's rights, especially when the developers train their AI systems on content created by others. In other words, if we allow AI systems to function in the human world, we should probably train the AI systems to obey the law.

What Can The New York Times Do?

If NYT does not succeed in proving actual copying, they could try a different approach: arguing that the issue is not so much copying the exact wording of the content as unfairly profiting from misusing the raw subject matter. Why should NYT employ thousands of reporters to research and produce valuable content, only to have an AI system reword that content just enough to avoid copyright liability? Questions like these highlight the need for updates to intellectual property laws so they can keep up with technological developments. Otherwise, creators like NYT will either conclude that original content is not worth producing or place it behind a paywall. One potential outcome of this case could be a finding of indirect infringement paired with an injunction or a compulsory license arrangement under which OpenAI must pay to use NYT subject matter in creating new content.

The New York Times vs. OpenAI lawsuit throws into sharp focus how technology is outpacing legal precedent. Every question that arises out of cases like this one is an opportunity to redefine the legal landscape of intellectual property in the digital age. If these questions remain unanswered, companies will continue developing their technologies in an unregulated space, and traditional media companies like The New York Times will be left wondering where their legal protections went.

Clyde Findley is Special Counsel and a registered patent attorney in the Intellectual Property practice at Berenzweig Leonard. He can be reached at cfindley@berenzweiglaw.com.