AI Chatbots: A Growing Concern for Cybersecurity

In recent years, the rise of AI chatbots has presented society with a range of risks and rewards. While most concerns thus far have focused on mundane tasks like assisting students with their homework or business projects, a new issue has emerged: the potential for AI chatbots to become recruiting tools for violent extremists. As highlighted by Jonathan Hall KC, the UK’s independent reviewer of terrorism legislation, urgent action is needed to address this threat.

Traditionally, the main cybersecurity risks associated with AI chatbots have revolved around large language models, such as ChatGPT and Google Bard, which can be vulnerable to prompt injection attacks. However, the focus has now shifted to the dangers posed by these chatbots in terms of radicalization. Hall conducted an experiment where he engaged with an AI chatbot, which responded with glorifying statements about Islamic State. Unfortunately, due to the non-human nature of chatbots, no crime was committed.

Hall argues that current terror legislation is ill-suited to cope with the potential consequences of AI chatbots. The recently introduced Online Safety Act, while commendable, fails to adequately address the fact that chatbots generate their own material rather than relying on pre-scripted responses under human control. As a result, Hall suggests the need for new laws that can hold both the individuals who create radicalizing chatbots and the tech companies hosting them accountable.

The recognition of these risks has prompted concerns from cybersecurity experts. Suid Adeyanju, CEO of RiverSafe, emphasizes the detrimental impact AI chatbots could have on national security, enabling hackers to train the next generation of cybercriminals and facilitate data theft. Adeyanju stresses the urgent need for businesses and government to implement safeguards to mitigate these risks.

While acknowledging the national security threat posed by AI, Josh Boer, director at tech consultancy VeUP, also highlights the importance of nurturing innovation. Boer argues that the UK should focus on building a strong talent pipeline in digital skills to empower the next generation of cyber and AI businesses. Neglecting this issue could not only harm the future of the UK’s tech sector but also play into the hands of cybercriminals.

In conclusion, the emergence of AI chatbots has raised significant concerns regarding cybersecurity and national security. It is clear that current legislation and safety measures require reassessment to effectively address the risks posed by these chatbots. By implementing appropriate regulations and supporting technological innovation, society can strike a balance between security and progress in the AI era.

Privacy policy
Contact