Protecting Against AI-Powered Financial Scams Requires More Than Employee Bans

Financial scams are evolving, driven by criminals' increasing use of generative artificial intelligence (AI). Companies that ban employees from using AI are not immune to these scams, because criminals use AI tools like ChatGPT and FraudGPT to create convincing deepfakes and realistic phishing emails. The prevalence of these scams is evident: 65% of organizations experienced attempted or actual payments fraud in 2022, according to a survey by the Association for Financial Professionals.

Phishing emails remain a common tactic, with criminals impersonating trusted sources to trick recipients into sharing sensitive information or making fraudulent payments. Generative AI makes it harder to distinguish authentic emails from fakes. Previously, grammar mistakes or odd phrasing would raise suspicion, but now criminals can use AI to craft polished emails that convincingly impersonate company executives.

One notorious case in Hong Kong serves as a cautionary tale. A finance employee was tricked into authorizing a $25.6 million transfer after joining a video call in which the company's UK-based CFO and other colleagues turned out to be AI-generated deepfakes. Only after contacting the head office did the employee realize the deceit. The incident shows the alarming level of credibility that AI-powered deepfakes can achieve.

The underlying challenge lies in the accessibility of generative AI tools and the vast amount of information available online that can be used to create convincing phishing emails. Furthermore, the proliferation of APIs and financial transaction platforms has expanded the attack surface for criminals. Automation also plays a significant role, allowing fraudsters to scale up attacks quickly and increase their chances of success.

To combat AI-powered financial scams, the financial industry is turning to its own generative AI models. Companies like Mastercard are investing in AI technology to detect and prevent fraud. However, addressing this issue requires a multi-faceted approach, including employee education, robust security measures, and continuous adaptation to evolving tactics.

In conclusion, companies must recognize that merely banning employees from using generative AI is not enough to protect against AI-powered financial scams. It is crucial to implement comprehensive strategies that combine technology, employee awareness, and industry collaboration to stay ahead of ever-evolving threats.

FAQ on AI-Powered Financial Scams

Q: What technology are criminals using to create convincing scams?
A: Criminals are using generative artificial intelligence (AI) technology, such as ChatGPT and FraudGPT, to create convincing deepfakes and realistic phishing emails.

Q: Are companies that ban employees from using AI immune to these scams?
A: No, companies that ban employees from using AI are not immune to these scams. Criminals can still use AI tools to create scams, even if employees are not allowed to use AI themselves.

Q: How prevalent are these scams?
A: According to a survey by the Association for Financial Professionals, 65% of organizations experienced attempted or actual payments fraud in 2022.

Q: What is a common tactic used in these scams?
A: Phishing emails remain a common tactic, with criminals impersonating trusted sources to trick recipients into sharing sensitive information or making fraudulent payments.

Q: How does generative AI make it harder to detect fake emails?
A: Generative AI allows criminals to create convincing emails that even impersonate company executives, making it more difficult to discern between authentic and fake emails.

Q: Can you give an example of a case where AI-powered deepfakes were used in a scam?
A: In Hong Kong, a finance employee was tricked into authorizing a $25.6 million transfer after joining a video call in which the company's CFO and colleagues were deepfaked. The deceit was discovered only after the employee contacted the head office.

Key Terms:
– Generative artificial intelligence (AI): Technology that can generate new content (such as text or images) based on patterns and examples it has learned.
– Deepfakes: Manipulated videos or photos created using AI to make it appear as though someone is saying or doing something they never actually did.

Suggested Related Links:
Mastercard: Learn more about the AI technology being used by companies like Mastercard to detect and prevent fraud.
