New Legislation Proposed to Address Concerns Over AI Content Use

A group of media executives recently appealed to lawmakers to enact new legislation that would require artificial intelligence developers to compensate publishers for using their content to train AI models. The call comes in response to the growing use of AI chatbots, such as OpenAI’s ChatGPT, which has raised concerns among media organizations and coincided with significant job cuts in the industry.

During a hearing before the US Senate, Roger Lynch, Chief Executive of Condé Nast, argued that current AI models were built on “stolen goods,” as chatbots scrape and display news articles from publishers without permission or compensation. He added that news organizations rarely have any control over how their content is used to train AI models or how it appears in AI-generated output.

While a recent lawsuit by The New York Times shed light on news publishers’ desire to prevent AI models from scraping their articles without compensation, the issue extends beyond the news media industry. In 2023, several lawsuits were filed against AI companies by notable authors, including Sarah Silverman, Margaret Atwood, and Dan Brown, highlighting broader concerns about how content is used.

To address the problem of content pilfering, Lynch proposed that AI companies use licensed content and compensate publishers when that content appears in training data or model output. This approach, he argued, would ensure a sustainable and competitive ecosystem in which high-quality content continues to be produced.

Danielle Coffey, President and CEO of the News Media Alliance, also emphasized the need to protect publishers’ content. She noted that AI models have introduced inaccuracies and produced misleading information by scraping content from unreliable sources. This not only risks misinforming the public but also damages the reputation of publishers.

Curtis LeGeyt, President and CEO of the National Association of Broadcasters, raised concerns about the use of AI to create deepfakes and spread misinformation, which undermines audiences’ trust in local news personalities.

While legal safeguards to protect news publishers from content misuse by AI are essential, they could also benefit developers in the long run. As Coffey explained, generative AI models and products cannot be sustainable if they undermine the quality content they depend on.

In conclusion, the proposed legislation aims to address the concerns surrounding the use of AI in scraping and generating content without proper compensation or permission. It seeks to establish a balanced and sustainable ecosystem where both AI development and high-quality content creation can coexist.

Source: the blog elperiodicodearanjuez.es