New Policies Unveiled by Meta to Address Deceptive Content

In light of the forthcoming election season, Meta, the parent company of Facebook, has announced a series of significant policy changes aimed at tackling digitally manipulated media. With the rise of cutting-edge artificial intelligence technologies, the challenge posed by deceptive content has become increasingly urgent.

Monika Bickert, Vice President of Content Policy at Meta, revealed that the company will introduce “Made with AI” labels starting in May. These labels will be prominently displayed on AI-generated videos, images, and audio shared across Meta’s platforms. The scope of Meta’s policy on doctored videos will be expanded to encompass a wider range of digitally altered media.

Rather than focusing solely on removing manipulated content, Meta’s new approach is to maintain the content while providing viewers with information about its creation process. This represents a strategic shift and a more transparent way of addressing deceptive media.

The implementation of these new policies will occur gradually. Bickert explained, “We plan to start labeling AI-generated content in May 2024, and we’ll stop removing content solely based on our manipulated video policy in July. This timeline allows people to become familiar with the self-disclosure process before we cease removing the smaller subset of manipulated media.”

Meta had previously disclosed plans to identify images generated using third-party generative AI tools by embedding invisible markers within the files. However, no specific date for the commencement of this initiative was provided at the time of the announcement.

According to a spokesperson for Meta, the revised labeling strategy will apply to content shared on Facebook, Instagram, and Threads. Meta’s other services, including WhatsApp and its Quest virtual reality headsets, are governed by different rules.

These policy changes come ahead of the highly anticipated US presidential election scheduled for November, as well as elections in other countries, including India. Researchers in the tech industry have expressed concerns about the potential impact of emerging generative AI technologies on the electoral landscape. With political campaigns already utilizing AI tools, there is a need for clearer guidelines from providers like Meta and industry leader OpenAI.

Meta’s existing rules on manipulated media had been criticized by the company’s oversight board, which described them as “incoherent.” This assessment came after the board reviewed a manipulated video featuring US President Joe Biden, which falsely depicted inappropriate behavior. Despite the inaccuracies, the video remained accessible on the platform.

Currently, Meta’s policy on “manipulated media” primarily addresses videos that have been misleadingly altered by AI or that make people appear to say things they did not say. However, the oversight board recommended expanding these guidelines to cover non-AI content, arguing that such content can be equally misleading. Furthermore, the board emphasized the importance of applying these standards to audio-only content and to videos depicting fabricated actions.

FAQ

What are Meta’s new policies regarding digitally manipulated media?

Meta will introduce “Made with AI” labels on AI-generated videos, images, and audio shared across its platforms. The company aims to provide transparency about the creation process of digitally altered media.

When will Meta start labeling AI-generated content?

Meta plans to start labeling AI-generated content in May 2024.

Will Meta continue to remove manipulated media?

Meta will stop removing content solely on the basis of its manipulated video policy in July. This gives users time to become familiar with the self-disclosure process before Meta ceases removing that smaller subset of manipulated media.

How will Meta identify images produced using generative AI tools?

Meta previously announced plans to identify such images by embedding invisible markers within the files. However, no specific commencement date has been provided.

Which platforms will be affected by Meta’s revised labeling strategy?

The revised labeling strategy will apply to content shared on Facebook, Instagram, and Threads platforms.

What concerns have been raised about generative AI technologies?

Researchers in the tech industry are concerned about the potential impact of these technologies on elections, including the upcoming US presidential election. There is a need for clearer guidelines from providers like Meta and industry leader OpenAI.

Sources:
– Reuters: https://www.reuters.com/technology/meta-roll-out-made-with-ai-labels-detect-doctored-media-2027-12-10/

