Meta Announces New Rules on AI-Generated Content and Manipulated Media

Meta, the social networking giant, has unveiled changes to its guidelines for AI-generated content and manipulated media. The move comes in response to criticism from the company’s Oversight Board and aims to address concerns about misleading information and deception.

Starting next month, Meta will introduce a “Made with AI” badge for deepfakes, a form of manipulated media created using artificial intelligence. When content has been manipulated in other ways that could deceive the public on important issues, Meta will also provide additional contextual information to users.

This policy change has significant implications, especially in a year filled with elections across the globe. It means that Meta may label more content that has the potential to mislead, ensuring transparency for users. However, the labeling of deepfakes will be limited to content that either exhibits “industry standard AI image indicators” or has been disclosed by the uploader as AI-generated.

While this shift may leave more AI-generated content and manipulated media on Meta’s platforms, the company believes that providing transparency and additional context is a better way to handle such content than removing it altogether.

To facilitate this change, Meta will stop removing content solely on the basis of its current manipulated video policy in July. This timeline gives users time to become familiar with the self-disclosure process before such content stops being taken down.

It is likely that the change in approach is a response to increasing demands placed on Meta regarding content moderation and systemic risk. The European Union’s Digital Services Act, which applies rules to Meta’s social networks, has prompted the company to navigate the delicate balance between removing illegal content and protecting free speech.

The Oversight Board’s criticism played a crucial role in Meta’s decision to revise its policies. The Board, which is funded by Meta but operates independently, scrutinized Meta’s response to AI-generated content. In particular, the Board highlighted the inconsistency of Meta’s policy, which only applied to AI-created videos, leaving other forms of manipulated media untouched.

Meta has acknowledged the Board’s feedback and intends to broaden its approach to include AI-generated audio, photos, and other forms of manipulated media. The company plans to use industry-shared signals to detect AI content, and users will have the option to disclose whether their uploads contain AI-generated content.
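To make that mechanism concrete, here is a minimal Python sketch of how a labeling check along these lines could combine industry-shared provenance signals with uploader self-disclosure. The signal strings (modeled on IPTC’s trainedAlgorithmicMedia marker and C2PA Content Credentials, two standards Meta has pointed to publicly), the Upload structure, and the function name are illustrative assumptions, not Meta’s actual implementation.

```python
# Illustrative sketch only: the field names, signal values, and badge logic
# are assumptions based on public provenance standards (IPTC, C2PA),
# not Meta's real code.

from dataclasses import dataclass

# Hypothetical set of industry-shared provenance signals an upload might carry.
AI_PROVENANCE_SIGNALS = {
    "iptc:DigitalSourceType=trainedAlgorithmicMedia",  # IPTC photo metadata marker
    "c2pa:manifest-present",                           # C2PA Content Credentials
}

@dataclass
class Upload:
    provenance_signals: set[str]   # metadata signals extracted from the file
    self_disclosed_ai: bool        # uploader checked the AI-disclosure option

def should_apply_made_with_ai_badge(upload: Upload) -> bool:
    """Label when industry-standard AI indicators are detected
    or the uploader has disclosed the content as AI-generated."""
    has_indicator = bool(upload.provenance_signals & AI_PROVENANCE_SIGNALS)
    return has_indicator or upload.self_disclosed_ai

# Example: an image carrying an IPTC AI marker gets the badge
# even without self-disclosure.
photo = Upload(
    provenance_signals={"iptc:DigitalSourceType=trainedAlgorithmicMedia"},
    self_disclosed_ai=False,
)
assert should_apply_made_with_ai_badge(photo)
```

The key design point is the “or”: either signal path alone is enough to trigger the label, which is why undisclosed content with no embedded indicators can still slip through.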

In cases where digitally created or altered media poses a significant risk of deceiving the public on important matters, Meta may use more prominent labels to provide additional information and context. The goal is to equip users with the tools they need to assess content accurately and understand its context across different platforms.

It is worth noting that Meta will not remove manipulated content, AI-based or otherwise, unless it violates other policies, such as those against voter interference or harassment. Instead, the company will add informational labels and context in situations of high public interest.

To ensure the accuracy of content, Meta has partnered with nearly 100 independent fact-checkers who will help identify risks associated with manipulated media. These external entities will continue to review false and misleading AI-generated content. When content is classified as “False or Altered,” Meta will implement algorithm changes to reduce its reach and display an overlay label with additional information for users who come across it.
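Taken together, the last few paragraphs describe a tiered response: removal only when a separate policy is violated, demotion plus an overlay label for content rated “False or Altered,” a more prominent label for high-risk deception, and a standard informational badge otherwise. The following Python sketch makes that ordering explicit; the enum names, function signature, and tiering are assumptions made for illustration, not Meta’s actual moderation code.

```python
# Hedged sketch of the tiered enforcement flow described in the article.
# All names and the exact ordering are illustrative assumptions.

from enum import Enum, auto

class Action(Enum):
    REMOVE = auto()              # violates a separate policy (e.g., voter interference)
    DEMOTE_AND_OVERLAY = auto()  # fact-checked as "False or Altered"
    PROMINENT_LABEL = auto()     # high risk of deceiving the public on important matters
    INFO_LABEL = auto()          # standard "Made with AI" style context
    NO_ACTION = auto()

def moderate(violates_other_policy: bool,
             fact_check_rating: str | None,
             high_deception_risk: bool,
             ai_indicators_or_disclosure: bool) -> Action:
    if violates_other_policy:
        return Action.REMOVE
    if fact_check_rating == "False or Altered":
        # Ranking changes reduce reach; an overlay adds context for viewers.
        return Action.DEMOTE_AND_OVERLAY
    if high_deception_risk:
        return Action.PROMINENT_LABEL
    if ai_indicators_or_disclosure:
        return Action.INFO_LABEL
    return Action.NO_ACTION

# Example: a fact-checked fake is demoted and overlaid rather than removed.
assert moderate(False, "False or Altered", True, True) is Action.DEMOTE_AND_OVERLAY
```

The ordering matters: the removal check runs first, so labeling never substitutes for enforcement against voter interference or harassment, and a fact-check rating outranks the softer contextual labels.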

As Meta takes steps to address concerns surrounding AI-generated content and manipulated media, it is attempting to guard against misinformation without abandoning its commitment to free speech.

The changes in Meta’s guidelines regarding AI-generated content and manipulated media are significant for the industry and have broader implications for the social media landscape. This move comes as Meta faces increased pressure from regulators, lawmakers, and the public to address the spread of misinformation and deceptive content on its platforms.

In recent years, there has been growing concern about the use of AI technology to create deepfake videos and other forms of manipulated media. Deepfakes, in particular, can deceive the public by convincingly superimposing someone’s face onto another person’s body or fabricating realistic audio. By introducing a “Made with AI” badge for deepfakes and providing additional context for manipulated content, Meta aims to increase transparency and enable users to make informed judgments about what they encounter.

The timing of these changes is crucial, as several countries are heading into elections. With increased labeling and contextualization of potentially misleading content, Meta aims to minimize the impact of deceptive information during these high-stakes events. By relying on uploader disclosures and industry-standard AI image indicators, Meta hopes to mitigate the risks associated with manipulated media while ensuring free expression is protected.

This policy change also aligns with the European Union’s Digital Services Act, which imposes regulations on social media platforms like Meta. The Act requires platforms to remove illegal content promptly while upholding principles of freedom of speech. Meta’s new approach strikes a balance between removing harmful content and promoting transparency, aligning with the regulatory demands placed on the company.

As Meta navigates the challenges of content moderation and systemic risk, these policy changes signal the company’s commitment to tackling manipulated media responsibly. By prioritizing transparency, contextualization, and collaboration, Meta aims to strike a balance between safeguarding against misinformation and preserving free speech on its platforms.

Source: guambia.com.uy
