Generative AI – fraud friend or foe?

According to the Office for National Statistics, 1.15 million fraud and computer misuse offences were recorded in England and Wales in 2022/23, the highest figure for any reporting year. As generative artificial intelligence (AI) becomes more accessible and more convincing in its output, fraudsters are exploiting its power to deceive and harm. This trend could have devastating consequences for businesses.

AI fraud defensive

AI already plays an important part in fraud detection within financial institutions and will become even more prominent with the growing use of generative AI, for example by analysing large volumes of transaction data to flag fraudulent activity or by applying device analytics during the onboarding process. These approaches can be further enhanced by generating synthetic data to train fraud detection systems, with a particular focus on new fraud methods, to stay ahead of the curve. This forward-thinking strategy helps banks and building societies reduce potential financial harm and builds trust among customers, who can be confident in the security of their financial details.
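To make the synthetic-data idea concrete, the sketch below augments a small number of known fraud cases with jittered synthetic copies before training a classifier. It assumes scikit-learn, and every feature name, distribution and parameter is an illustrative assumption rather than a real institution's pipeline.

```python
# Minimal sketch: augmenting scarce fraud examples with synthetic data.
# All features and parameters are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Simulated historical transactions: amount, hour of day, merchant risk score.
n_legit, n_fraud = 5000, 50
legit = np.column_stack([
    rng.lognormal(3.0, 1.0, n_legit),   # typical transaction amounts
    rng.integers(6, 23, n_legit),       # mostly daytime hours
    rng.uniform(0.0, 0.3, n_legit),     # low merchant risk scores
])
fraud = np.column_stack([
    rng.lognormal(5.0, 1.5, n_fraud),   # unusually large amounts
    rng.integers(0, 6, n_fraud),        # night-time hours
    rng.uniform(0.5, 1.0, n_fraud),     # high merchant risk scores
])

# Synthetic augmentation: jitter the scarce fraud cases to mimic an
# emerging fraud pattern the detector has rarely seen in the wild.
n_synth = 500
synthetic_fraud = (fraud[rng.integers(0, n_fraud, n_synth)]
                   * rng.normal(1.0, 0.1, (n_synth, 3)))

X = np.vstack([legit, fraud, synthetic_fraud])
y = np.array([0] * n_legit + [1] * (n_fraud + n_synth))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test),
                            target_names=["legitimate", "fraud"]))
```

The point of the augmentation step is that fraud is heavily class-imbalanced: genuine fraud examples are rare, so synthetic variants give the model more of the minority class to learn from.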

Cyber security teams are also increasingly relying on defensive AI, where machine learning continuously builds a picture of what is ‘normal’ for a business, around the clock. When abnormalities are detected, they serve as a red flag for potential malicious activity. The technology can identify these abnormalities rapidly and autonomously at the earliest stage of an attack, when the situation is most easily contained. Bad actors will often compromise a network and then wait for the best opportunity to launch their attack; it is at this point of compromise that AI defences come into their own, protecting the security of data and assets. Human defences alone cannot match this speed and coverage.
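The core mechanism, learn what is normal and flag what is not, can be illustrated with a short sketch. This uses scikit-learn's IsolationForest as a stand-in for whatever proprietary models a vendor would use; the telemetry features and the contamination setting are invented for illustration.

```python
# Minimal sketch of the anomaly-detection idea behind defensive AI:
# learn a baseline of "normal" behaviour, then flag outliers.
# Features and parameters are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline telemetry: data transferred (KB) and login attempts per hour.
normal_traffic = np.column_stack([
    rng.normal(500, 50, 2000),   # typical hourly data volume
    rng.poisson(3, 2000),        # typical hourly login attempts
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# New observations: one ordinary hour, one resembling data exfiltration.
observations = np.array([[510, 2], [5000, 40]])
flags = detector.predict(observations)  # +1 = normal, -1 = anomaly
for obs, flag in zip(observations, flags):
    print(obs, "ANOMALY" if flag == -1 else "normal")
```

Because the detector is trained only on normal behaviour, it needs no examples of any specific attack, which is what lets it surface novel compromises at the earliest stage.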

AI fraud offensive

The fraud landscape is changing at a rapid pace. Criminals leveraging the power of offensive AI add a further layer of complexity, with tools for creating emails indistinguishable from genuine communications, and deepfakes, becoming more widely available. Without increasingly rigorous protection and prevention, banks and building societies face a far higher chance of falling victim to this threat.

Emails

Many organisations are training their staff, their front line of defence, to be on high alert for suspicious emails, as email remains a preferred channel for fraudsters. Most employees know to be wary of communications addressed to ‘dear sirs’, riddled with obvious spelling and grammatical errors and containing hyperlinks to questionable sites. Since most malware is delivered by email, and email is the easiest route in for social engineering, it makes sense to educate employees in this way. However, since the pandemic, suspicious emails directed at specific individuals, known as spear phishing, have become more sophisticated: far less obviously suspect and far more targeted, tailored and frequent.
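The red flags staff are taught to spot can be expressed as simple rules, as in the toy sketch below. Real email filters rely on far richer signals and machine-learned models; the keywords, patterns and function here are hypothetical and purely illustrative.

```python
# Toy rule-based check for classic phishing red flags.
# Keywords and rules are illustrative assumptions, not a real filter.
import re

SUSPICIOUS_GREETINGS = ("dear sirs", "dear customer", "dear user")
URGENCY_PHRASES = ("act now", "verify immediately", "account suspended")

def phishing_red_flags(subject: str, body: str, sender: str) -> list[str]:
    flags = []
    text = f"{subject} {body}".lower()
    if any(g in text for g in SUSPICIOUS_GREETINGS):
        flags.append("generic greeting")
    if any(p in text for p in URGENCY_PHRASES):
        flags.append("urgency pressure")
    # Links pointing at a raw IP address rather than a named domain.
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):
        flags.append("raw IP address link")
    # Free-mail sender claiming to represent a bank.
    if sender.lower().endswith(("@gmail.com", "@outlook.com")) and "bank" in text:
        flags.append("free-mail sender posing as a bank")
    return flags

print(phishing_red_flags(
    subject="Account suspended - verify immediately",
    body="Dear Sirs, click http://192.168.4.12/login to restore access.",
    sender="security@gmail.com",
))
```

Spear phishing is dangerous precisely because well-crafted, AI-generated messages trip none of these simple rules, which is why the obvious-red-flag training described above is no longer sufficient on its own.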

AI on side

AI technology is enabling sophisticated forms of fraud that pose significant detection challenges. Fraudsters are capitalising on AI's capabilities to carry out criminal activity, reshaping the landscape of fraudulent practice. Real-world cases highlight the alarming reality and pace of AI-driven fraud. To address this emerging threat, organisations must take a proactive approach: building awareness and education around AI's ongoing advancements, while investing in next-generation AI technology to counter the increased risk of fraud.
