Experts Warn: Watermarks Won’t Stop AI Misinformation

Experts in the field caution that adding visible or invisible watermarks to AI-generated images will not effectively prevent the manipulation and spread of misinformation online. Visible watermarks, such as colored squares overlaid on images, are easy to defeat by cropping or copying, and even invisible watermarks embedded directly into image outputs can be erased by anyone with the right technical knowledge. Siwei Lyu, a computer science professor specializing in digital forensics, explains that watermarks can be broken, or even forged, by individuals familiar with both watermarking technology and AI. Watermarking schemes often depend on most people not knowing how they work, and that reliance on obscurity is itself a vulnerability.
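To make the fragility concrete, here is a minimal sketch of one classic invisible-watermarking idea: hiding a bit pattern in the least significant bit (LSB) of pixel values. This is an illustrative toy, not any vendor's actual scheme; the function names and the flat list of pixel values are assumptions for the example. Note how a single pass of re-quantization wipes the mark out entirely.

```python
def embed_lsb(pixels, bits):
    """Hide watermark bits in the least significant bit of each pixel value."""
    marked = [(p & ~1) | b for p, b in zip(pixels, bits)]
    return marked + pixels[len(bits):]  # leave any remaining pixels untouched

def extract_lsb(pixels, n):
    """Read the first n hidden bits back out."""
    return [p & 1 for p in pixels[:n]]

def strip_watermark(pixels):
    """Erase the mark by clearing every LSB, e.g. via re-quantization
    or adding noise -- the image looks the same, the watermark is gone."""
    return [p & ~1 for p in pixels]

pixels = [100, 101, 102, 103]
marked = embed_lsb(pixels, [1, 0, 1, 1])
print(extract_lsb(marked, 4))                    # the hidden bits survive
print(extract_lsb(strip_watermark(marked), 4))   # after stripping: all zeros
```

Real invisible watermarks are more robust than raw LSB hiding, but the same principle applies: anyone who understands the embedding can attack it, which is exactly the weakness Lyu describes.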

To address these concerns, tech and media companies have formed the Coalition for Content Provenance and Authenticity (C2PA), which embeds metadata in image files describing an image's source and creation details. This metadata records information such as the time, location, and method of creation, allowing users to verify an image's provenance. However, C2PA is not a comprehensive solution: it depends heavily on widespread adoption by tools and platforms, and on users knowing to check for it, to be effective.
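As a rough illustration of the embed-and-verify idea, the sketch below stores a small JSON provenance record (time, location, creation method) in a PNG-style `tEXt` chunk and checks its CRC on read-back. This is a simplified stand-in, not the real C2PA manifest format, which uses signed, structured manifests; the field names here are invented for the example.

```python
import json
import struct
import zlib


def make_chunk(chunk_type: bytes, data: bytes) -> bytes:
    """Build a PNG-style chunk: 4-byte length, type, data, CRC-32 of type+data."""
    return (struct.pack(">I", len(data)) + chunk_type + data
            + struct.pack(">I", zlib.crc32(chunk_type + data)))


def provenance_chunk(source: str, method: str, location: str, created: str) -> bytes:
    """Wrap a hypothetical provenance record in a tEXt chunk.

    The schema here is illustrative only -- real C2PA manifests are
    cryptographically signed, which a bare CRC cannot replace.
    """
    record = {"source": source, "method": method,
              "location": location, "created": created}
    # tEXt payload layout: keyword, NUL separator, Latin-1 text
    payload = b"provenance\x00" + json.dumps(record).encode("latin-1")
    return make_chunk(b"tEXt", payload)


def read_provenance(chunk: bytes) -> dict:
    """Parse the chunk back, using the CRC to detect accidental corruption."""
    (length,) = struct.unpack(">I", chunk[:4])
    ctype, data = chunk[4:8], chunk[8:8 + length]
    (crc,) = struct.unpack(">I", chunk[8 + length:12 + length])
    if crc != zlib.crc32(ctype + data):
        raise ValueError("provenance chunk corrupted")
    _keyword, _, text = data.partition(b"\x00")
    return json.loads(text.decode("latin-1"))


chunk = provenance_chunk("camera", "photograph", "Lisbon", "2024-01-15T12:00:00Z")
print(read_provenance(chunk)["method"])
```

A checksum only catches accidental damage; anyone can strip or rewrite the chunk, which is why C2PA relies on digital signatures, and on readers actually checking them, rather than on the metadata merely being present.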

In other news, a British digital entertainment company, Layered Reality, has obtained the rights to thousands of personal photos and home videos of the late rock and roll musician Elvis Presley. Combining AI, augmented reality, theater, and projection, Layered Reality plans to bring a virtual Elvis to the stage for a series of performances across several cities, recreating iconic moments and celebrating his life and career. This follows a trend of using AI to revive other famous deceased artists, such as John Lennon and Édith Piaf.

In the realm of AI chatbots, the "Psychologist" persona on Character.ai has become immensely popular. Offering support for life difficulties, the Psychologist bot has received a staggering 78.5 million messages. Created by a psychology student, the chatbot has resonated with users who find comfort and support in its text-based conversations. While AI chatbots lack genuine empathy, the text format may suit the communication habits of younger people, making them feel more at ease discussing sensitive topics. Even so, Character.ai notes that despite the rise of mental health chatbots, most users prefer role-playing with anime or video game characters.

Source: the blog lisboatv.pt
