AI Deepfakes: A Threat to Elections or a Tool for Good?

As the election season in India approaches, concerns have been growing about the impact of artificial intelligence (AI) deepfakes on political campaigns. Deepfakes, synthetic media that convincingly imitate real images, video, and audio, have become more prevalent and sophisticated in recent years. They can fabricate politicians’ speeches and appearances, sowing doubt and mistrust among voters.

With the availability of cheap mobile data and high smartphone penetration in India, there are worries that deepfakes could be disseminated through platforms like WhatsApp, influencing voters’ decisions or making them question the significance of their votes.

To address these concerns, the Indian Ministry of Electronics and Information Technology recently issued an advisory to generative AI companies. The advisory states that if these companies offer AI systems that are still under testing or unreliable, they must obtain explicit permission from the government and label the outputs accordingly. The goal is to prevent the generation of illegal or misleading content that could undermine the electoral process.

However, this advisory has faced severe backlash from both national and international AI players, many of whom argue that it hampers innovation and freedom of expression. Minister of State for Electronics and IT, Rajeev Chandrasekhar, clarified that the advisory does not apply to start-ups, offering them some relief. Nevertheless, concerns about the misuse of AI technology persist.

In response to these concerns, Senthil Nayagam, the founder of start-up Muonium AI, has created an ‘Ethical AI Coalition Manifesto’ ahead of the elections. This manifesto aims to ensure that AI technologies are used responsibly and do not manipulate elections, spread misinformation, or erode public trust in political institutions. Nayagam has reached out to around 30 companies, with two already signing the manifesto and three in the process of signing.

Not all AI creations are sinister deepfakes meant to slander opponents. Some applications, such as personalised interactive phone calls, serve as novelties rather than threats to electoral integrity. For example, political parties in India have used synthesized voices to send out pre-recorded messages addressing individual voters by name. While these applications may seem harmless, it is important to consider the potential for misuse.

Divyendra Singh Jadoun, an AI creator known as The Indian Deepfaker, has trained voice and video AI models to distribute tailored messages for political parties. He claims to work on “ethical” AI creations, such as authorized translations and revivals of deceased leaders, with the party’s approval. However, he acknowledges that as deepfake technology becomes more accessible, parties are increasingly producing their own deepfakes, making it easier to spread misinformation.

Experts in the field, such as Karen Rebelo from the fact-checking website BOOM, have noted a significant increase in AI deepfakes during recent assembly elections in India. The technology has become cheaper and more accessible, allowing anyone with a laptop to create deepfakes. This raises concerns about the potential impact of deepfakes on the fairness and transparency of elections.

While AI deepfakes are a cause for concern, they also have the potential to be used for good. AI technologies can play a positive role in enhancing the democratic process, such as improving accessibility for disabled voters or providing accurate translations of political speeches. The key lies in ensuring that these technologies are used ethically and responsibly.

As India prepares for the upcoming elections, it is essential for the government, AI companies, and political parties to work together to establish clear guidelines and regulations for the use of AI in political campaigns. By prioritizing transparency, accountability, and the integrity of the electoral process, India can harness the potential of AI while mitigating the risks it poses.

The impact and concerns surrounding deepfakes in political campaigns are not limited to India alone. The use of deepfake technology has become a global issue, with implications for various industries and markets.

The deepfake industry is poised for significant growth in the coming years. According to a report by MarketsandMarkets, the global deepfake market is projected to reach a value of $304.3 million by 2026, growing at a compound annual growth rate (CAGR) of 37.3% from 2021 to 2026. This market growth can be attributed to the increasing use of AI technology and the growing awareness of deepfakes.
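As a back-of-envelope sanity check on those figures (the $304.3 million projection and 37.3% CAGR come from the cited MarketsandMarkets report; the implied 2021 base below is our own arithmetic, not a reported number), the compound-growth formula can be inverted to estimate the market's starting size:

```python
# Sanity-check the cited projection: a market reaching $304.3M in 2026
# at a 37.3% CAGR over 2021-2026 implies a 2021 base of roughly $62M.
final_value = 304.3   # USD millions, projected for 2026
cagr = 0.373          # compound annual growth rate
years = 5             # 2021 -> 2026

base_2021 = final_value / (1 + cagr) ** years
print(round(base_2021, 1))  # 62.4
```

The same formula, run forward (`base * (1 + cagr) ** years`), reproduces the 2026 projection.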

The potential impact of deepfakes extends beyond political campaigns. Industries such as entertainment, cybersecurity, and journalism also face challenges related to deepfake technology. In the entertainment industry, for example, deepfakes can be used to manipulate actors’ appearances or create realistic virtual avatars. This raises concerns about privacy, intellectual property rights, and the authenticity of content.

In the cybersecurity realm, deepfakes pose a significant threat. Fake videos and images can be used to deceive individuals or gain unauthorized access to systems. Cybercriminals can exploit deepfakes to create convincing phishing attacks or to manipulate digital evidence during legal proceedings.

Journalism is another industry affected by deepfakes. The spread of misinformation and the erosion of public trust are significant challenges faced by journalists in the age of deepfakes. News organizations need to develop robust fact-checking techniques and educate the public on detecting and verifying the authenticity of content.

In response to the growing threat of deepfakes, various initiatives and organizations have emerged to combat their negative effects. One such initiative is the Deepfake Detection Challenge (DFDC), organized by a collaboration between Facebook, Microsoft, the Partnership on AI, and several academic institutions. The DFDC aims to develop and advance detection methods to identify deepfakes, helping to limit their spread.

The responsibility to address the issue of deepfakes extends beyond government regulations and industry initiatives. Individuals also need to be aware of the existence of deepfakes and be cautious of the content they consume and share. Fact-checking information, verifying sources, and promoting media literacy are essential steps in combating the spread of deepfakes.

While tackling deepfakes is a complex task, advancements in AI technology can also be leveraged to develop tools for detecting and mitigating their impact. Machine learning algorithms can be trained to identify anomalies in videos or images and distinguish between authentic and manipulated content. Ongoing research and collaboration between technology companies, researchers, and policymakers are crucial in effectively addressing deepfake-related challenges.
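One family of detection techniques looks for statistical anomalies in the frequency domain, since some generative upsamplers leave periodic high-frequency artifacts in their output. The sketch below is a deliberately toy illustration of that idea, not any production detector: the "real" and "fake" frames are synthetic stand-ins we generate ourselves, and the checkerboard artifact and 64×64 size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def highfreq_energy(img):
    # Fraction of total spectral energy outside a central low-frequency band.
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    cy, cx = img.shape[0] // 2, img.shape[1] // 2
    r = min(img.shape) // 8
    low = spec[cy - r:cy + r, cx - r:cx + r].sum()
    return 1.0 - low / spec.sum()

def real_frame():
    # Stand-in for an authentic frame: a smooth gradient plus mild sensor noise.
    base = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
    return base + 0.01 * rng.standard_normal((64, 64))

def fake_frame():
    # Stand-in for a manipulated frame: the same scene plus a faint
    # checkerboard artifact, the kind some generative upsamplers leave behind.
    checker = np.indices((64, 64)).sum(axis=0) % 2
    return real_frame() + 0.05 * checker

real_scores = [highfreq_energy(real_frame()) for _ in range(20)]
fake_scores = [highfreq_energy(fake_frame()) for _ in range(20)]

# In this toy setting the two score populations separate cleanly, so a
# midpoint threshold classifies every sample correctly.
threshold = (max(real_scores) + min(fake_scores)) / 2
print(max(real_scores) < threshold < min(fake_scores))  # True
```

Real detectors replace the hand-built threshold with a trained classifier over learned features, precisely because genuine deepfake artifacts are subtler and shift as generation methods evolve.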

Related links:
MarketsandMarkets Deepfake Market Report
Deepfake Detection Challenge (DFDC)
