State-Backed Cyber Attacks and Disinformation: The Risks for Elections in 2024

As Britain gears up for local and general elections this year, cybersecurity experts warn that state-backed cyber attacks and disinformation campaigns pose significant risks, with the use of artificial intelligence (AI) in these operations a particular concern. The elections will be held against a backdrop of a rising cost of living and tensions over immigration, adding to the complexity of the political landscape.

Most cybersecurity risks are expected to materialize in the months leading up to election day, and there is precedent. The 2016 U.S. presidential election and the U.K. Brexit referendum were both marred by disinformation spread through social media platforms, disruptions blamed on state-affiliated groups allegedly linked to Russia. Since then, state actors have carried out such attacks routinely in elections around the world, growing increasingly adept at attempts to manipulate their outcomes.

Recently, the U.K. accused a Chinese state-affiliated hacking group, known as APT 31, of attempting to access lawmakers’ email accounts. Although these attempts were unsuccessful, the incident highlights the growing concern over state-sponsored hacking. In response, London imposed sanctions on Chinese individuals and a technology firm believed to be associated with APT 31, and the U.S., Australia, and New Zealand followed with sanctions of their own. China has denied the allegations.

One of the major cybersecurity threats to the upcoming elections is AI-powered disinformation. Experts predict that disinformation campaigns will be amplified this year by the increasing availability of AI tools. Synthetic images, videos, and audio, commonly referred to as “deepfakes,” are likely to proliferate as content-generation tools become accessible to anyone. Threat actors, including nation-states and cybercriminals, are expected to use AI to launch identity-based attacks, such as phishing, social engineering, ransomware, and supply chain compromises, against politicians, campaign staff, and election-related institutions. AI- and bot-driven content will also spread misinformation at a far larger scale than in previous election cycles.

The cybersecurity community emphasizes the need for heightened awareness and international cooperation to counter AI-generated misinformation. Adam Meyers, who leads counter adversary operations at the security firm CrowdStrike, has identified AI-powered disinformation as a top risk for elections in 2024. He cautions that hostile nation-states such as China, Russia, and Iran can leverage AI and deepfakes to craft persuasive narratives that exploit confirmation biases, posing a grave threat to the democratic process.

AI also lowers the barrier to entry for cybercriminals. Scam emails crafted with AI tools to read convincingly have already emerged, and hackers are leveraging AI models trained on social media data to launch more personalized attacks. The circulation of fake AI-generated audio clips, such as the one targeting Keir Starmer, leader of the opposition Labour Party, in October 2023, underscores the growing concern around deepfakes and their impact on elections.
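On the defensive side (a counterpart the source does not describe), one basic check against convincingly written phishing is to read the Authentication-Results header that receiving mail servers attach, which records SPF, DKIM, and DMARC verdicts for the claimed sender. The sketch below uses only Python's standard library; the function name and the exact header layout it expects are simplifying assumptions for illustration, not a complete anti-phishing solution.

```python
from email import message_from_string

def email_auth_summary(raw_message: str) -> dict:
    """Extract SPF/DKIM/DMARC verdicts from the Authentication-Results
    header added by the receiving mail server. A 'fail' or missing
    verdict is a red flag for a spoofed sender, though a 'pass' alone
    does not prove a message is safe."""
    msg = message_from_string(raw_message)
    results = msg.get("Authentication-Results", "")
    verdicts = {}
    # The header is a semicolon-separated list, e.g.
    # "mx.example.com; spf=pass ...; dkim=pass ...; dmarc=pass ..."
    for part in results.split(";"):
        part = part.strip()
        for mech in ("spf", "dkim", "dmarc"):
            if part.startswith(mech + "="):
                verdicts[mech] = part.split("=", 1)[1].split()[0]
    return verdicts
```

Content analysis can then focus on the messages this filter cannot clear, since AI-written text itself offers few reliable tells.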

As deepfake technology becomes more sophisticated, tech companies are locked in an ongoing battle against its misuse. AI is now being used to detect deepfakes, but the line between reality and manipulation is increasingly blurred. Certain clues, such as discrepancies left behind by flaws in the generation process, can help identify manipulated content: if a spoon suddenly disappears mid-scene in a video, for instance, that is a sign the footage is AI-generated.
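The source names no specific tooling, but the "disappearing spoon" clue can be expressed as a crude heuristic: sudden, large frame-to-frame pixel changes in an otherwise continuous shot. The Python sketch below (using OpenCV; the function name and threshold are illustrative assumptions) flags such discontinuities. Real deepfake detectors rely on trained models; this is only a toy consistency check.

```python
import cv2  # pip install opencv-python
import numpy as np

def flag_abrupt_changes(video_path: str, threshold: float = 40.0):
    """Naive tamper heuristic: flag frame pairs whose mean pixel
    difference spikes, e.g. when an object vanishes between
    consecutive frames. Returns (frame_index, score) pairs."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    suspects, frame_idx = [], 0
    while ok:
        ok, frame = cap.read()
        if not ok:
            break
        frame_idx += 1
        # Mean absolute difference between consecutive grayscale frames;
        # legitimate cuts also score high, so results need human review.
        a = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        b = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        score = float(np.mean(cv2.absdiff(a, b)))
        if score > threshold:
            suspects.append((frame_idx, score))
        prev = frame
    cap.release()
    return suspects
```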

As the U.K. approaches its elections, industry experts emphasize the importance of verifying the authenticity of information to combat the risks of state-backed cyber attacks and disinformation. The use of AI in these malicious activities presents a formidable challenge, making it essential for individuals, organizations, and governments to remain vigilant to protect the integrity of the democratic process.

FAQ

What are the major risks for elections in 2024?

The major risks for elections in 2024 include state-backed cyber attacks and disinformation campaigns. These activities are expected to be amplified by the use of artificial intelligence (AI).

How are cybercriminals utilizing AI in election interference?

Cybercriminals are likely to use AI to power identity-based attacks such as phishing, social engineering, ransomware, and supply chain compromises. They may also employ AI- and bot-driven content to spread misinformation at scale.

What is the concern around deep fake technology?

Deep fake technology enables the creation of AI-generated synthetic images, videos, and audio that can be used to manipulate public opinion. This technology poses a significant threat to the democratic process, as it can be used to craft persuasive narratives and exploit confirmation biases.

How can individuals and organizations combat the risks of state-backed cyber attacks?

To combat the risks of state-backed cyber attacks, it is crucial to verify the authenticity of information and remain vigilant. It is also important to support international cooperation and awareness initiatives to mitigate the impact of malicious activities.
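The source does not specify verification techniques, but one concrete and widely used form of authenticity checking is comparing a file's cryptographic digest against a value published by the original source. The helper name below is an assumption for illustration:

```python
import hashlib

def sha256_matches(path: str, published_hex: str) -> bool:
    """Compare a local file's SHA-256 digest against a hash published
    by the original source. A mismatch means the file was altered or
    corrupted; a match only ties it to that published value."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large media files don't exhaust memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() == published_hex.lower()
```

Content-provenance standards such as C2PA extend the same idea by embedding signed metadata directly in media files.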

Source: [Cyber experts warn of state-backed cyber attacks and disinformation risks for 2024 elections](https://www.cnbc.com/2024/04/25/state-backed-cyber-attacks-disinformation-risks-for-2024-elections.html)

