Posted: 26/11/2025
This article is based on an original contribution to the Family Law Journal.
It was co-authored by Dr Tristan Jenkinson, head of forensics and investigations at Sky Discovery. He is an expert witness with nearly 20 years of experience working in digital forensics and eDiscovery, specialising in complex litigation and investigation-based matters.
The familiar idiom “I’ll believe it when I see it” can no longer be applied uncritically to evidence used in legal proceedings. Advances in technology have made it possible to fabricate and manipulate images, audio, and video with unprecedented sophistication. This development presents significant challenges for the legal profession, requiring practitioners to exercise heightened vigilance in assessing the authenticity of evidence while upholding professional standards and safeguarding clients’ interests.
The issue of falsified evidence is not new; it is as old as the law itself. Long before the advent of modern technology, courts grappled with forged signatures and fabricated documents. Traditional techniques such as handwriting analysis, rigorous cross-examination, and expert testimony have long been employed to uncover such deception. While the tools for fabrication have evolved dramatically, so too must the methods and skillsets required to investigate and challenge them.
A particularly striking recent example underscores the continuing relevance of this issue. In Crypto Open Patent Alliance v Wright; Wright and other companies v BTC Core and others [2024] EWHC 1198 (Ch), the court found that Craig Wright – who had repeatedly and falsely claimed to be Satoshi Nakamoto, the inventor of Bitcoin – had engaged in an extensive campaign of dishonesty and forgery. Wright falsified hundreds of documents filed with the court in an attempt to substantiate his claim. The judgment provides a detailed and illuminating account of the multiple avenues available to interrogate electronically generated evidence and is well worth reading for those interested in the intersection of technology and evidential integrity.
We are entering an era in which falsified evidence can be created by virtually anyone, using readily available technology on almost any device. Whether advising clients in court proceedings or within non-court dispute resolution (NCDR) processes, practitioners must understand the tools – often free – that can generate or manipulate evidence, recognise how such material may manifest in cases, and develop awareness of key indicators of falsification. As these technologies become increasingly accessible, the likelihood of encountering fabricated material – whether provided by our own clients or presented by opposing parties – will rise. For some, the motivation may be financial, as in the case of Mr Wright; for many family law clients, the motivation is even more compelling and emotionally charged, particularly where issues such as child arrangements are at stake. Family lawyers rely on evidence to persuade judges to make decisions with profound consequences for clients, their families, and sometimes their commercial interests. We must do so with confidence, vigilance, and a clear commitment to our duties as officers of the court.
Deepfake – A term derived from “deep learning” (a subset of artificial intelligence) and “fake.” It refers to falsified media generated or altered using AI technologies such as machine learning or generative AI. Deepfakes most commonly involve audio or video content, where technology is used to map an individual’s voice or likeness onto existing media. Although now primarily associated with audio and video, the term originally emerged in the context of images and face-swapping technology.
Shallowfake – Sometimes used to describe falsified or manipulated evidence created without the use of AI. Examples include altered PDFs, emails, or other documents produced through conventional editing techniques.
Cheapfake – A term sometimes applied to low-quality falsifications, often generated using basic online tools. These typically lack sophistication and fail under even minimal scrutiny.
Falsified Evidence – A broad term encompassing any evidence that has been fabricated or manipulated, regardless of the method or technology employed.
Much of the commentary surrounding the impact of artificial intelligence focuses on its potential to disrupt traditional legal tasks such as case research and contract drafting. Recent high-profile cases have underscored the dangers of relying on unverified sources when constructing legal arguments. While these examples may not directly concern falsified evidence, they highlight a fundamental principle: the responsibility for accuracy rests squarely with lawyers, as officers of the court, when advancing a case. This duty is not diminished by technological innovation; if anything, it becomes more critical in an environment where the authenticity of information can no longer be assumed.
Evaluating the volume of evidential material in family law matters is already a complex and time-consuming task. Occasionally, a single “smoking gun” piece of evidence may determine the outcome of a case. More often, however, the factual matrix is wider and comprises multiple strands – messages, documents and/or images. In either scenario, practitioners must remain alert to the risk that some of this material may have been fabricated or manipulated, presenting a fictitious account under the guise of fact.
In addition, we must factor in the psychological phenomenon of confirmation bias, which is deeply embedded in human behaviour. In essence, this bias manifests as a tendency to seek out and assign greater weight to evidence that supports one’s preferred view, while disregarding or minimising evidence that contradicts it. Ambiguous material is often interpreted in a way that aligns with the preferred opinion or version of events. Against this backdrop, one of the fundamental principles of legal practice remains clear: a lawyer must not mislead the court by presenting false evidence. Professional obligations require that lawyers do not improperly prioritise the client’s interests over their duties to the court. These duties include avoiding any conduct that knowingly or recklessly misleads the court, whether by presenting false information or by being complicit in its presentation (see SRA Guidance: Balancing Duties in Litigation).
While the rules governing falsified evidence are well established, the law relating to the use of artificial intelligence remains in its infancy. Some jurisdictions are beginning to introduce specific provisions addressing both issues. For example, the state of Louisiana in the United States recently enacted Act 250 (2025), which imposes a “reasonable diligence” standard on lawyers in relation to digital evidence.
Personal litigation, including family proceedings, is particularly vulnerable to the use of altered or fabricated evidence, given the high stakes involved. Such practices persist despite the fact that deliberately or recklessly misleading the court can result in severe sanctions, including fines or imprisonment for contempt of court. These consequences apply equally to litigants in person. In one case, our client’s husband falsified bank statements and was exposed only when a review revealed an obvious error: a reference to “31 September.” The result was a criminal charge and a custodial sentence.
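Impossible calendar dates of this kind are exactly the sort of error a trivial automated check can surface before a human reviewer ever spots them. As a purely illustrative sketch (the function name is our own, not part of any forensic toolkit):

```python
from datetime import datetime

def is_valid_date(date_str: str, fmt: str = "%d %B %Y") -> bool:
    """Return True only if the string parses as a real calendar date."""
    try:
        datetime.strptime(date_str, fmt)
        return True
    except ValueError:
        # strptime rejects impossible dates such as "31 September"
        return False

print(is_valid_date("30 September 2024"))  # True
print(is_valid_date("31 September 2024"))  # False
```

Passing such a check proves nothing about authenticity, of course; it is only a first filter for the kind of careless error that exposed the falsified statements here.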
What about matters which are resolved outside of court? While NCDR is welcome for many reasons, it introduces additional challenges in relation to evidential integrity. The absence of judicial scrutiny or expert involvement places greater responsibility on lawyers and mediators to assess the reliability of evidence. Where parties enter into agreements based on falsified material, even if presented during a voluntary process, those agreements, and any resulting consent orders, may be set aside if the falsification is later uncovered. This risk underscores the need for rigorous diligence in evaluating evidence, regardless of the forum in which the dispute is resolved.
When reviewing evidence, whether from clients or third parties, each item must be assessed on its own merits and in the context of the case as a whole to determine its veracity. This obligation applies equally to material produced by our client and by others. Documents, emails, chat logs, messages, audio recordings, images, and videos cannot be assumed to be reliable without careful scrutiny.
While AI-generated deepfakes present a relatively new challenge, many of the long-standing principles of digital forensic analysis remain applicable when evaluating digital evidence.
When receiving data from the other party/parties, be sure to preserve an untouched copy of the original material. Ideally, this should be done using forensic preservation methods, although this may not be proportionate in every case. Crucially, do not open the original file directly – always work from a duplicate you have created. Simply opening a file can alter its internal metadata, potentially erasing information that could be critical in identifying signs of falsification.
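A common way to demonstrate that a working copy is untouched is to record a cryptographic hash of the original and verify the duplicate against it. A minimal sketch, assuming Python and SHA-256 (the function names are our own, and this is no substitute for a full forensic preservation process):

```python
import hashlib
import shutil
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it without modification."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def preserve_copy(original: Path, working_dir: Path) -> Path:
    """Duplicate the original into a working directory and verify the copy
    is byte-identical, so all review happens on the duplicate."""
    working_dir.mkdir(parents=True, exist_ok=True)
    duplicate = working_dir / original.name
    shutil.copy2(original, duplicate)  # copy2 also preserves timestamps where possible
    if sha256_of(original) != sha256_of(duplicate):
        raise RuntimeError("duplicate does not match original")
    return duplicate
```

Recording the hash at the point of receipt also makes it possible to demonstrate later that the material reviewed is the material that was disclosed.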
When assessing evidence, certain steps can help uncover indications of falsification, although the appropriate approach will vary depending on the file type.
Care must be taken when checking for inconsistencies in files, particularly before making any allegation of wrongdoing. If preliminary checks raise concerns, it may be appropriate to engage a forensic expert. Electronic evidence presents unique pitfalls, and expert involvement can help avoid misinterpretation. The precise methods available will depend on the file type under investigation, but much of the process involves examining underlying metadata beyond what is visible to typical users.
For example, the “Last Printed” date in a Word document does not necessarily reflect the most recent print action; if the document was printed but not subsequently saved, this will not be recorded. Similarly, due to quirks in file system architecture, it is possible for a “File Created” date to legitimately post-date the “File Modified” date. These nuances illustrate why expert analysis is often essential to ensure accurate interpretation.
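As an illustration of the kind of internal metadata involved, a .docx file is in fact a zip archive, and its docProps/core.xml part records fields such as the author, the last modification date, and the “Last Printed” date discussed above. A rough sketch of pulling those fields out (the function name is hypothetical, and a production tool would use a proper XML parser rather than regular expressions):

```python
import re
import zipfile

def docx_core_properties(path: str) -> dict:
    """Extract selected core-property fields from a .docx file,
    which is a zip archive containing XML metadata parts."""
    with zipfile.ZipFile(path) as z:
        xml = z.read("docProps/core.xml").decode("utf-8")
    props = {}
    for tag in ("creator", "lastModifiedBy", "created", "modified", "lastPrinted"):
        # crude namespace-agnostic match, e.g. <dc:creator>...</dc:creator>
        m = re.search(rf"<[^>]*:{tag}[^>]*>([^<]*)<", xml)
        if m:
            props[tag] = m.group(1)
    return props
```

Even this simple extraction shows why interpretation needs care: as noted above, an absent or stale lastPrinted value does not mean the document was never printed, only that no save followed the print.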
It is not always possible to determine the legitimacy of a file from initial checks alone. While much attention is given to the use of generative AI in creating falsified images, video, or audio, it can also be deployed to manipulate other types of files. For example, we have encountered emails that were altered using generative AI to change critical details such as dates, sender information, and the body text.
Digital forensic experts possess specialist skills in examining a wide range of files and documents to uncover evidence of falsification, and certain types of information can be particularly revealing.
Issues of proportionality must always be considered, as in any case, alongside the time and financial cost of any forensic investigation. These factors should be weighed against the potential impact on the case if the evidence in question is ultimately proven to be falsified. Where concerns arise, it may be appropriate to instruct a digital forensic expert to conduct a limited review of specific material to identify any red flags or obvious signs of falsification. The key is to remain vigilant: these risks are real, and they are likely to increase and evolve as technology advances.