
Fintech fraud: AI as both a weapon and a combatant

Posted: 23/10/2025


Over the last decade, we have witnessed the unprecedented growth of financial technology (fintech). Innovations such as digital banking, cryptocurrency exchanges, mobile wallets, robo-advice, online brokerage platforms and peer-to-peer lending are being embraced by consumers globally. Whilst fintech offers consumers convenience and speed, it has also expanded the attack surface for fraudsters, and the sheer volume and velocity of fintech transactions create a fertile breeding ground for fraudulent activity.

As digital financial ecosystems grow more complex, so too do the methods used both to exploit and to protect them. Driving both is the dual-edged power of artificial intelligence (AI), which poses a significant threat to fintech companies and, at the same time, offers them a vital line of defence.

AI as a weapon: how fraudsters are leveraging artificial intelligence

Research by Stop Scams UK and PwC found that AI-driven fintech fraud is still in its infancy, but that it is only a matter of time before it becomes more prevalent.

Andrew Bailey, Governor of the Bank of England, has said: "While there is limited evidence that AI is behind the large numbers of fraud attacks now, it will very likely drive an increase in the number and sophistication of fraud threats."

Examples of how AI has the potential to amplify the scale and sophistication of fintech fraud include:

Account takeovers 

Criminals employ a range of tactics, often enhanced by AI, to gain unauthorised access to an individual's bank account. These may include:

  • Phishing emails – in fintech-targeted attacks, fraudsters send emails impersonating financial institutions to trick individuals into revealing sensitive banking information, which can then be used to take over their accounts. AI bolsters this process by making phishing emails harder to spot: it can mimic the tone and style of genuine communications and ensure that messages are grammatically correct and fluent.
  • Spear phishing – a more targeted form of phishing that uses personalised messages aimed at specific individuals or businesses rather than a generic audience. AI can analyse vast amounts of publicly available data to generate spear-phishing emails tailored to the target's interests, job role or recent activities.
  • Brute force attacks – criminals use trial and error to guess username and password combinations. AI could make these attacks far more efficient by rapidly testing vast numbers of credential combinations.
  • Voice cloning and deepfakes – as this technology becomes more sophisticated, it is easy to imagine it being used to bypass online banking systems that rely on voice or facial recognition for security and verification.

 
Payment fraud

Mimicking a victim's transaction patterns

Once AI has enabled a criminal to break into a victim's account, the fraudster can use machine learning to analyse the victim's transaction history and spending habits. Armed with this insight, the fraudster can mimic the victim's typical purchase and withdrawal behaviour, making it less likely that a financial institution's fraud detection system will flag the activity as unusual.

Authorised Push Payment (APP) Fraud

Authorised Push Payment (APP) scams are those in which an individual is deceived into sending money to a fraudster posing as a genuine payee. These scams thrive on emotional manipulation: tugging at a victim's heartstrings in a romance scam, impersonating a CEO to exploit an employee's loyalty, or triggering panic in parents with messages like 'Hi Mum, my phone's broken and I need money'.

We are slowly starting to see fraudsters leverage AI to make APP scams even more convincing, for example by cloning the voice of a loved one or using online footage to create a deepfake video of a CEO. Santander's head of fraud risk management warns that 'Hi Mum' scams are evolving at "breakneck speed" and cautions that AI is being used to impersonate relatives.

Meanwhile, in a sophisticated scam impersonating the CEO of WPP (the world's largest advertising agency), criminals used voice cloning and YouTube footage to set up a Microsoft Teams meeting with senior WPP executives in an attempt to extract money and confidential information. Although the scam was unsuccessful, it highlights how criminals may use publicly available footage, audio recordings and images of high-level executives in targeted attacks to convince employees to transfer large sums of money by direct bank transfer.

Synthetic identity fraud

Synthetic identity fraud involves creating a fake identity by blending real and fabricated information. Fraudsters can harness AI both to forge counterfeit identity documents that are nearly indistinguishable from legitimate ones and to trawl the dark web for stolen personal data far faster than a human criminal could.

Synthetic identities are a growing concern for fintech companies because they can pass identity verification checks, open bank accounts and behave like real customers, building transaction histories and growing their credit scores. Once credibility is established, the fraudster strikes, either taking out loans they never intend to repay or maxing out credit limits before the synthetic identity disappears. According to UK Finance, UK financial institutions are losing over £300 million annually to synthetic identity fraud.

AI as a combatant: fighting AI with AI

Traditional fraud detection systems, reliant on static rules and manual reviews, can no longer keep pace with the speed and complexity of modern fraud. AI offers a dynamic, scalable, and efficient defence and is increasingly being adopted by financial institutions. 

With the recently launched APP Reimbursement Policy requiring both sending and receiving banks to share the cost of reimbursing APP fraud victims (subject to limited exceptions), there is increasing pressure on fintech platforms to bolster fraud controls and proactively identify scams before they occur. In response to this greater responsibility, Visa and Mastercard (in partnership with 11 UK banks) are both using AI solutions to monitor account-to-account payments and prevent fraud in real time.

Mastercard and Visa, among other financial institutions, are implementing predictive risk scoring: machine learning models score each transaction on the likelihood that it is fraudulent, based on factors such as the amount, the location of the receiving bank account, and the customer's past transactional behaviour. This helps financial institutions automate decision-making and prioritise the investigation of transactions with high risk scores.
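To make this concrete, below is a minimal sketch of how such a risk-scoring model might look in Python using scikit-learn. The feature names, toy data and review threshold are illustrative assumptions, not a reconstruction of any card scheme's or bank's actual system.

    # Minimal sketch of predictive transaction risk scoring.
    # All features, data and thresholds are illustrative assumptions.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(42)
    n = 2000

    # Toy features per transaction: amount (GBP), new payee (0/1),
    # foreign receiving account (0/1), deviation from typical spend.
    X = np.column_stack([
        rng.uniform(1, 5000, n),
        rng.integers(0, 2, n),
        rng.integers(0, 2, n),
        rng.uniform(0, 10, n),
    ])
    # Toy labels: mark combinations resembling classic APP fraud.
    y = ((X[:, 1] == 1) & (X[:, 2] == 1) & (X[:, 3] > 5)).astype(int)

    model = GradientBoostingClassifier().fit(X, y)

    def risk_score(amount, new_payee, foreign, deviation):
        """Probability in [0, 1] that a single transaction is fraudulent."""
        return model.predict_proba([[amount, new_payee, foreign, deviation]])[0, 1]

    # High-scoring transactions are queued for investigation first.
    score = risk_score(2500, new_payee=1, foreign=1, deviation=7.5)
    print(f"risk: {score:.2f} -> {'investigate' if score > 0.8 else 'allow'}")

In practice, such a model would be trained on millions of labelled historical transactions rather than synthetic data, but the principle is the same: convert each payment into features and let the model output a risk score that drives automated decisions.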

Another benefit of machine learning algorithms is that they can analyse large datasets and spot suspicious patterns and anomalies that diverge from normal customer behaviour as they happen, reducing the chance that fraudulent transactions are completed. These models continuously learn from new data and can adapt to emerging AI-enabled fraud tactics, improving their accuracy over time and reducing false positives (the incorrect flagging of legitimate transactions).
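As a rough illustration of this anomaly detection approach, the sketch below (again in Python with scikit-learn, and again using assumed features and figures) fits an isolation forest to a customer's past behaviour and flags new transactions that diverge from it.

    # Minimal sketch of behavioural anomaly detection; the features
    # and data are illustrative assumptions, not a production system.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(7)

    # A customer's normal behaviour: modest amounts at familiar hours.
    normal = np.column_stack([
        rng.normal(40, 15, 500),   # typical amount (GBP)
        rng.normal(14, 3, 500),    # typical hour of day
    ])
    detector = IsolationForest(contamination=0.01, random_state=7).fit(normal)

    # Score new transactions as they happen; -1 marks an anomaly.
    new_txns = np.array([
        [35, 13],    # in line with past behaviour
        [4800, 3],   # large amount at 3am: diverges from the norm
    ])
    for txn, label in zip(new_txns, detector.predict(new_txns)):
        status = "flag for review" if label == -1 else "allow"
        print(f"amount £{txn[0]:.0f} at {txn[1]:.0f}h -> {status}")

The continuous learning described above would correspond to periodically refitting the detector on fresh, verified transaction data so that it adapts to new fraud patterns and produces fewer false positives.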

However, we must not forget that the power of AI in fraud prevention lies in the data it learns from. Unfortunately, many institutions still struggle with inadequate data integration and departmental silos, which limit the effectiveness of AI fraud detection tools. To truly unlock AI's potential, institutions must break down these silos and upskill employees on the benefits of AI in fraud detection.

Conclusion

AI is reshaping the fintech fraud landscape on both sides of the battlefield. While it enables more sophisticated and scalable attacks, it also offers powerful tools for defence. The key for fintechs lies in staying ahead of the curve: investing in cutting-edge fraud detection systems, fostering partnerships with trusted AI providers, and upskilling their workforce. 

 

This article was co-authored by Georgia Morris, a trainee solicitor in the commercial dispute resolution team.

 

