Generative AI Faces Legal Challenges as The New York Times Sues Microsoft and OpenAI

The rapid growth of generative artificial intelligence (AI) has hit a roadblock as The New York Times takes legal action against Microsoft and OpenAI for copyright infringement. In its lawsuit, The Times accuses the companies of using millions of its articles without permission to train their AI models, including the popular ChatGPT tool developed by OpenAI.

While other creators have filed similar lawsuits in recent months, legal experts believe that The Times has crafted a stronger case. Robert Brauneis, an intellectual property law professor, commended the precision of the complaint, noting that it focuses on specific causes of action rather than taking a scattered approach.

Generative AI models rely on extensive training data to produce human-like responses. These models, such as OpenAI’s ChatGPT and Microsoft’s Copilot, are designed to be transformative rather than to reproduce existing content. However, The Times claims that the models developed by the defendants have “memorized” and sometimes reproduce portions of its copyrighted material.
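To make the “memorized and sometimes reproduced” claim concrete, the sketch below shows one naive way to flag long verbatim overlaps between a model’s output and a reference article. It is purely illustrative: the function name, the word-count threshold, and the matching approach are assumptions for this example, not a description of how The Times or OpenAI actually measure memorization.

```python
# Toy sketch: flag long verbatim overlaps between a model's output and a
# reference article. Illustrative only; not the method used by either party.

def verbatim_spans(output: str, reference: str, min_words: int = 8):
    """Return word spans of at least `min_words` that appear verbatim in both texts."""
    out_words = output.lower().split()
    ref_text = " ".join(reference.lower().split())  # normalize case and whitespace
    spans = []
    i = 0
    while i < len(out_words):
        j = i + min_words
        if j <= len(out_words) and " ".join(out_words[i:j]) in ref_text:
            # Grow the window for as long as the phrase still appears in the reference.
            while j < len(out_words) and " ".join(out_words[i:j + 1]) in ref_text:
                j += 1
            spans.append(" ".join(out_words[i:j]))
            i = j
        else:
            i += 1
    return spans


if __name__ == "__main__":
    article = "The quick brown fox jumps over the lazy dog near the quiet riverbank at dawn"
    model_output = ("According to the report, the quick brown fox jumps over the lazy dog "
                    "near the quiet riverbank at dawn, which was widely discussed")
    for span in verbatim_spans(model_output, article, min_words=6):
        print("verbatim overlap:", span)
```

A check of this kind only catches exact copying; paraphrased or lightly edited reproductions would need fuzzier similarity measures.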

The lawsuit argues that this unauthorized use of content threatens The Times’ revenue streams from subscriptions, advertising, licensing, and affiliates. OpenAI responded by expressing its commitment to working with content creators and ensuring fair and mutually beneficial use of AI technology.

The Times’ case stands out due to the numerous examples it provides of the AI models reproducing its material almost verbatim. Previous copyright lawsuits against AI models have struggled to demonstrate substantial similarity between a model’s output and the copyrighted work.

Legal experts suggest that the court could rule in favor of limiting certain prompts or outputs of generative AI models to prevent copyright infringement. However, they believe that the tech companies will develop filters and safeguards to reduce such incidents, making it a manageable issue for the industry.

OpenAI has already implemented measures to minimize verbatim repetition, such as removing duplicates from training data and declining prompts aimed at reproducing copyrighted works. The company acknowledges that it is difficult to recognize and decline every such request, but has emphasized that “memorization” is rare.
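The general idea behind removing duplicates from training data can be illustrated with a short sketch. The code below is an assumption-laden toy: it only shows exact-duplicate removal via content hashing, whereas production pipelines (whose details OpenAI has not disclosed here) typically also use near-duplicate techniques such as MinHash.

```python
# Toy sketch of exact-duplicate removal from a text corpus.
# Not OpenAI's pipeline; real systems also handle near-duplicates.
import hashlib


def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so trivially different copies hash alike."""
    return " ".join(text.lower().split())


def deduplicate(documents):
    """Keep the first occurrence of each distinct (normalized) document."""
    seen = set()
    unique = []
    for doc in documents:
        digest = hashlib.sha256(normalize(doc).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique


if __name__ == "__main__":
    corpus = [
        "Breaking news: markets rallied today.",
        "Breaking   news: Markets rallied today.",  # same article, different spacing/case
        "An unrelated opinion column.",
    ]
    print(len(deduplicate(corpus)))  # -> 2
```

Fewer duplicated passages in the training set means the model sees any given sentence less often, which is one reason deduplication is thought to reduce verbatim regurgitation.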

As this legal battle unfolds, it sparks a broader discussion about the intersection of media and AI. Critics warn that lawsuits like this one could hinder the industry’s growth and impede public access to information. It remains to be seen how the court will respond and how the AI industry will adapt to guard against copyright infringement while continuing to innovate.

Source: guambia.com.uy
