
The dark side of AI - AI-enabled fraud

Posted: 11/07/2025


While the opportunities that AI presents for society and businesses are extensive, we must be mindful of the ways fraudsters can leverage these advancements to increase the sophistication and scale of fraud. This article explores the common methods of AI-enabled fraud of which our clients should be aware.

AI and financial scams

Before AI, criminals may have spent a significant amount of time reviewing publicly available data, scouring through people's bins or searching the 'dark web'. With the advent of AI, the cost/benefit analysis of fraud is clearly tipping in the scammers' favour. They can leverage AI to automate their processes to rapidly create deepfakes, fraudulent websites and synthetic identities on a much larger scale than before.

Authorised push payments (APP)

In the UK, authorised push payment (APP) fraud represents over 40% of payment fraud, according to a 2024 report by UK Finance. APP fraud is a particular kind of fraud in which individuals are targeted, drawn into a fraudster's web and deceived into transferring money to the fraudster. The fraudster might impersonate a trustworthy figure in a person's life or job, using tactics such as emotional appeals apparently from family members or friends, or urgent requests purportedly from colleagues, senior management and CEOs. The proliferation of AI, combined with the convenience of online banking and e-commerce, has created a fertile breeding ground for APP fraud.

The Authorised Push Payment Reimbursement Scheme, which came into force on 7 October 2024, was introduced by the government as part of its strategy to combat APP fraud. The scheme introduces mandatory reimbursements and requires payment service providers (PSPs) to reimburse customers (including individuals and charities but excluding businesses except for micro-enterprises) who are victims of APP fraud. Provided they have not been grossly negligent, victims can claim up to £85,000 in losses if the payment was made by either Faster Payments or CHAPS and specific criteria are met.

Although this is a welcome protection for consumers, the impact will be felt by financial institutions, which will need to implement better fraud detection and prevention measures, especially as the deepfakes and AI-generated phishing emails described below could be used to perpetrate APP fraud and increase the number of reimbursement claims.

Deepfakes

As deepfake technology becomes more accessible and sophisticated, it has become easier for scammers to create convincing audio and video recordings, and harder for individuals and businesses to tell if a request is genuine and to identify scams. For example, there have been several instances of deepfake videos of celebrities being used to endorse fake financial investments, including a fraudulent cryptocurrency scheme featuring the trusted consumer champion Martin Lewis.

The most widely reported example of deepfakes in APP fraud occurred last year when an employee at a multinational company in Hong Kong was duped into authorising payments totalling £20 million to fraudsters. The employee was tricked into participating in a video call with individuals he thought were the company's chief financial officer and other senior leaders. None of the individuals were real: the fraudsters had used deepfake technology to imitate their likenesses.

There is also the cautionary tale of a woman in France who was defrauded of €830,000 after falling in love with an AI-generated Brad Pitt. The fraudsters used deepfake images of the actor, including in a hospital bed, to convince the woman that he needed money to fund cancer treatment as his bank accounts had been frozen during his divorce proceedings with Angelina Jolie. This story highlights how scammers can take advantage of those who are vulnerable by creating a sense of intimacy and connection.

Phishing emails

Traditionally, phishing emails were easy to spot due to their poor spelling, grammar and clunky formatting. However, AI has made it harder for individuals to distinguish between legitimate and fraudulent communications, because Natural Language Generation (NLG) models, like GPT, can be used to generate professionally crafted and fluently written messages that mimic a genuine business's or individual's tone or writing style.

AI can also be used to analyse the recipient's online and social media presence to identify their interests. The result is highly targeted and personalised emails (known as spear phishing) sent en masse to deceive people into revealing sensitive information, making direct payments or installing malware.

AI-generated fraudulent websites

Phishing emails may also direct a victim to a fake website, and these counterfeit websites can now be generated easily using AI. AI has democratised website building, allowing users to create professional-looking websites quickly and without the need for sophisticated coding or programming skills. However, scammers are able to capitalise on this accessibility by using AI to create fraudulent websites that mimic legitimate companies or services. Large Language Models (LLMs) can be used to produce convincing text and product descriptions, accompanied by realistic AI-generated images, videos and audio.

Key warning signs that a website may be fraudulent include 'too good to be true' discounts and limited-time offers on popular items, which create a sense of urgency and entice individuals to make payments or share their personal details.

Synthetic identities

AI could also heighten 'synthetic fraud', a type of identity fraud in which cyber-criminals create fraudulent identities by combining real and fictitious personal information: for example, a genuine identity card number combined with a fictitious name and date of birth to circumvent credit checks and carry out high-value fraudulent transactions. Financial institutions and businesses may find it difficult to combat this type of fraud because the synthetic identity is not connected to a real person, so there is no one to alert the organisation to suspicious activity. Criminals will also build up the credit score of the synthetic identity to make it appear to be a genuine customer before committing fraud.

AI fraud in legal proceedings 

There are concerns about how deepfake evidence could be used in disputes to damage the credibility and reputation of the other side. A recent article by our family team examines how high-stakes divorces and custody battles can tempt individuals to manipulate evidence. It notes how we are beginning to witness the use of sophisticated fake evidence in these proceedings. For example, in a child custody case in 2020, a deepfake audio clip was submitted as evidence to falsely portray the father as a threat.

The potential reputational damage caused by the spread of AI-generated misinformation and deepfakes is investigated in an article written by our reputation management team. The article warns of how the circulation of this content may give rise to potential defamation or data protection rights claims.

Another detrimental side effect of AI is that it could be used fraudulently to undermine the integrity of evidence and the judicial process. In the High Court case Crypto Open Patent Alliance v Craig Steven Wright, the court found that the defendant had fabricated 47 documents in his quest to convince the court that he was the creator of Bitcoin. The judge ruled that this constituted a serious abuse of the court's process. While it is unclear whether the defendant used ChatGPT, expert analysis of one recovered deleted file suggested that the structure and syntax of additional text in the document had been produced using software that would not have been available in 2007, when the documents were alleged to have been written.

Meanwhile, in the recent case of R (Ayinde) v The London Borough of Haringey, the defendant made an application for a wasted costs order against the claimant's barrister and Haringey Law Centre on the grounds that they had cited five fake cases. The judge made a wasted costs order and condemned the claimant's lawyers for their 'appalling professional misbehaviour'.

The judge, in particular, criticised the fact that the claimant's solicitor had dismissed the fictitious cases as 'cosmetic errors'. Although the defendant's counsel suggested that AI had been used, the judge said that he was not in a position to determine this, as the claimant's barrister had not been sworn or cross-examined. Despite this, he said that, on the balance of probabilities, it would have been negligent if she had used AI without verifying the authenticity of the authorities before citing the cases in the pleadings.

Subsequently, the Ayinde case was referred to the Divisional Court along with another case in which lawyers were suspected of having used generative AI tools. The claimant's barrister continued to maintain that she had not used AI in producing the list of cases. While the court said that the threshold for instigating contempt of court proceedings against the barrister had been reached, proceedings were not initiated against her for several reasons, including that she was very junior. Nevertheless, the court felt it was necessary to refer her to her professional regulator.

The judge in this case issued a stark reminder of the 'serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused'. She urged that those with leadership responsibilities, such as heads of chambers, managing partners and regulators, must now implement policies and measures to ensure that every individual currently providing legal services understands how to use AI in compliance with their ethical and professional obligations and duties to the court. Although Ayinde appears to be an example of reckless rather than fraudulent use of AI, it shows how easily AI can be used to generate untrue and misleading information, including citations to fictitious cases.

In March 2025, Penningtons Manches Cooper hosted a panel discussion at the firm's London office, addressing the challenges posed by deepfake evidence in court proceedings. You can read more about this event and watch our post-event panel video here.

AI-assisted cheating in education

A survey by the Higher Education Policy Institute (HEPI) found that more than half of students use AI to help with assessments, and 5% copy and paste AI-generated content directly into their assignments without modification, which most institutions would classify as cheating. AI is here to stay, but it is important that students are taught how to use it ethically.

While plagiarism and collusion have always existed in education, the emergence of AI in this arena has substituted one set of challenges for another. Before the rise of AI, students could buy professionally written papers from 'essay mill' companies which could go undetected by examiners and plagiarism detection software. However, now that LLMs like ChatGPT are widely and freely available, students have the technology at their fingertips to produce similar or better-quality assignments and are less likely to seek out the above services.

The opportunities for students to use AI to cheat could be heightened where they are assessed through unsupervised coursework, a mode of assessment that has become more common since Covid-19.

Last year, researchers at the University of Reading conducted a study in which they submitted 33 unedited AI-written assignments across five psychology undergraduate modules. They found that 94% of the submissions went undetected and that the grades received on those assignments were half a grade higher than those achieved by actual students.

Unlike conventional plagiarism, where students were likely to copy and paste or paraphrase large sections of original sources, it is difficult for plagiarism software to detect cheating when AI-generated content integrates information seamlessly from multiple sources. This has led educational institutions to adopt generative AI detection tools, such as Turnitin's AI writing detection software, to distinguish between human-written and AI-generated text, as well as to revert to examination-based assessments and move away from coursework.

Colleges should be aware of the potential copyright issues. The platform terms state that the person who submits a paper grants the software provider a licence to make copies or store the papers on their database for the purposes of plagiarism and/or AI detection. When students upload their own work to plagiarism or AI detection platforms like Turnitin, they are able to grant such a licence as they are the owners of the copyright in their papers.

However, issues may arise when college or university staff upload student papers. Staff members are not the owners of the copyright in the paper and may not have suitable permissions to upload it on behalf of the student. If they have not been granted permission or a licence by the student to upload the paper to the platform, this may constitute copyright infringement, and the platform will be operating on the basis of an invalid licence.

AI clearly has the potential to undermine academic integrity, devalue qualifications and deprive students of crucial critical thinking and evaluation skills. To help students use AI ethically, schools and universities are now implementing clear policies and guidelines on the use of AI in assessments.

This article was co-written with Georgia Morris, trainee solicitor in the employment team.

