Portuguese Political Leader’s Voice Cloned by AI for Misleading Ads

An unprecedented incident in Portugal has been brought to light by experts at MediaLab, part of ISCTE-IUL’s communication sciences department. They uncovered that André Ventura, a prominent political figure, had his voice convincingly replicated by artificial intelligence (AI) to promote a deceptive financial scheme in an advertisement.

The advertisement, viewed by nearly 17,000 Portuguese users on Facebook and Instagram, showed Ventura urging viewers to invest in a dubious money-making platform. While the ad appeared to have commercial motives, MediaLab researchers Gustavo Cardoso and José Moreno warned of its potential political ramifications, especially during elections. By fraudulently implying Ventura's endorsement of questionable online wealth-building schemes, it could inadvertently shape political perceptions.

Analysts at MediaLab also discovered other misleading political advertisements using imagery related to the Chega party, including both Ventura and another candidate, António Tanger Corrêa, and even misrepresenting statements by António Costa. These fabricated narratives have reportedly reached more than 66,000 people across Portugal, yet the advertisers’ identities remain a mystery.

The use of public figures in these scams typically serves a dual purpose: to profit from those misled and to discredit the personalities involved. Advertisements directed users to cloned copies of the Correio da Manhã and Galp websites, where fabricated stories involving these political figures lured readers in.

MediaLab’s investigation revealed that the content creation is automated: algorithms identify popular social media figures and generate misleading narratives around them. José Moreno noted the irony of Ventura falling victim to his own substantial social media presence. Furthermore, efforts to trace the ads’ financiers or the websites’ owners led to dead ends, with anonymous registry entries pointing to servers in California and registrations in Reykjavik, Iceland.

The use of AI to clone voices and create misleading advertisements poses significant ethical, legal, and political challenges. The incident involving Portuguese political leader André Ventura highlights key questions and concerns in this domain:

Key Questions and Answers:
How can society protect itself against the misuse of AI in creating misleading content? Regulations and advanced detection technologies must be developed to identify and combat synthetic media.
What legal repercussions exist for the creators of such fraudulent advertisements? Currently, laws may vary by country, but there is a growing need for international legal frameworks to prosecute such cases of digital impersonation and fraud.
How significant is the threat of deepfake technology to the integrity of political processes? The threat is substantial, as it can distort voters’ perceptions and undermine trust in democratic institutions.
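The detection technologies mentioned above range from forensic audio analysis to simple content screening. As a minimal sketch only, assuming illustrative name and keyword lists rather than any real detection product, the following toy heuristic flags ad copy that pairs a public figure's name with get-rich-quick language, so that candidate impersonation scams can be surfaced for human review:

```python
# Toy screening heuristic (illustrative only, not a production detector).
# Flags ad text that mentions a public figure alongside investment-scheme
# language, a pattern seen in the scams described in this article.

# Assumption: these lists are hypothetical examples for demonstration.
PUBLIC_FIGURES = {"André Ventura", "António Costa", "António Tanger Corrêa"}
SCHEME_KEYWORDS = {"investment platform", "guaranteed returns",
                   "passive income", "double your money"}

def flag_suspect_ad(text: str) -> bool:
    """Return True if the ad text pairs a known figure with scheme language."""
    lowered = text.lower()
    mentions_figure = any(name.lower() in lowered for name in PUBLIC_FIGURES)
    mentions_scheme = any(kw in lowered for kw in SCHEME_KEYWORDS)
    return mentions_figure and mentions_scheme

print(flag_suspect_ad(
    "André Ventura recommends this investment platform with guaranteed returns"
))  # True
print(flag_suspect_ad("André Ventura gave a speech in parliament"))  # False
```

A real system would of course need far more than keyword matching, such as audio forensics for cloned voices and provenance checks on the landing pages, but even crude filters like this can help triage the volume of ads platforms review.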

Key Challenges or Controversies:
Detection and Regulation: Developing reliable tools to detect deepfakes and establishing regulations that keep pace with rapidly advancing technology are both challenging.
Free Speech vs. Misinformation: Balancing the protection of free speech with the need to prevent the spread of misinformation is a contentious issue.
Political Manipulation: There are concerns that deepfake technology could be used to manipulate electoral outcomes or create political unrest.

Advantages and Disadvantages of AI in Media:

Advantages:
– AI can be employed for innovative content creation, reducing costs and time in media production.
– AI technology can enhance the personalization and accessibility of content for users with disabilities.

Disadvantages:
– The technology can be misused to create deceptive and misleading content.
– AI-generated fake content can erode public trust in media and institutions.
– Legal systems may struggle to keep pace with the challenges posed by synthetic media.

For further information about AI and its ethical implications, readers can consult reputable organizations such as the Partnership on AI (partnershiponai.org) or academic institutions such as Stanford’s Human-Centered AI Institute (hai.stanford.edu), both of which provide resources on the responsible use of artificial intelligence.

The revelations from ISCTE-IUL’s MediaLab underscore the urgent need for both technological and legislative solutions to safeguard against the malicious use of deepfake technology and to uphold the integrity of information online.
