Seeing is not believing: how can family lawyers manage risks and protect evidence in the AI age?

Posted: 26/11/2025


This article is based on an original contribution to the Family Law Journal. 

It was co-authored by Dr Tristan Jenkinson, head of forensics and investigations at Sky Discovery. He is an expert witness with nearly 20 years of experience working in digital forensics and eDiscovery, specialising in complex litigation and investigation-based matters. 

The evolution of evidence manipulation 

The familiar idiom “I’ll believe it when I see it” can no longer be applied uncritically to evidence used in legal proceedings. Advances in technology have made it possible to fabricate and manipulate images, audio, and video with unprecedented sophistication. This development presents significant challenges for the legal profession, requiring practitioners to exercise heightened vigilance in assessing the authenticity of evidence while upholding professional standards and safeguarding clients’ interests.

The issue of falsified evidence is not new; it is as old as the law itself. Long before the advent of modern technology, courts grappled with forged signatures and fabricated documents. Traditional techniques such as handwriting analysis, rigorous cross-examination, and expert testimony have long been employed to uncover such deception. While the tools for fabrication have evolved dramatically, so too must the methods and skillsets required to investigate and challenge them.

A particularly striking recent example underscores the continuing relevance of this issue. In Crypto Open Patent Alliance v Wright; Wright and other companies v BTC Core and others [2024] EWHC 1198 (Ch), the court found that Craig Wright – who had repeatedly and falsely claimed to be Satoshi Nakamoto, the inventor of Bitcoin – had engaged in an extensive campaign of dishonesty and forgery. Wright falsified hundreds of documents filed with the court in an attempt to substantiate his claim. The judgment provides a detailed and illuminating account of the multiple avenues available to interrogate electronically generated evidence and is well worth reading for those interested in the intersection of technology and evidential integrity.

The challenge of falsified evidence in family law 

We are entering an era in which falsified evidence can be created by virtually anyone, using readily available technology on almost any device. Whether advising clients in court proceedings or within non-court dispute resolution (NCDR) processes, practitioners must understand the tools – often free – that can generate or manipulate evidence, recognise how such material may manifest in cases, and develop awareness of key indicators of falsification. As these technologies become increasingly accessible, the likelihood of encountering fabricated material – whether provided by our own clients or presented by opposing parties – will rise. For some, the motivation may be financial, as in the case of Mr Wright; for many family law clients, the motivation is even more compelling and emotionally charged, particularly where issues such as child arrangements are at stake. Family lawyers rely on evidence to persuade judges to make decisions with profound consequences for clients, their families, and sometimes their commercial interests. We must do so with confidence, vigilance, and a clear commitment to our duties as officers of the court.

Key definitions

Deepfake – A term reportedly derived from “deep learning” (a subset of artificial intelligence) and “fake.” It refers to falsified media generated or altered using AI technologies such as machine learning or generative AI. Deepfakes most commonly involve audio or video content, where technology is used to map an individual’s voice or likeness onto existing media. Although now primarily associated with audio and video, the term originally emerged in the context of images and face-swapping technology.

Shallowfake – Sometimes used to describe falsified or manipulated evidence created without the use of AI. Examples include altered PDFs, emails, or other documents produced through conventional editing techniques.

Cheapfake – A term sometimes applied to low-quality falsifications, often generated using basic online tools. These typically lack sophistication and fail under even minimal scrutiny.

Falsified Evidence – A broad term encompassing any evidence that has been fabricated or manipulated, regardless of the method or technology employed.

The intersection of forgery and lawyers' duties to the court

Much of the commentary surrounding the impact of artificial intelligence focuses on its potential to disrupt traditional legal tasks such as case research and contract drafting. Recent high-profile cases have underscored the dangers of relying on unverified sources when constructing legal arguments. While these examples may not directly concern falsified evidence, they highlight a fundamental principle: the responsibility for accuracy rests squarely with lawyers, as officers of the court, when advancing a case. This duty is not diminished by technological innovation; if anything, it becomes more critical in an environment where the authenticity of information can no longer be assumed.

Evaluating the volume of evidential material in family law matters is already a complex and time-consuming task. Occasionally, a single “smoking gun” piece of evidence may determine the outcome of a case. More often, however, the factual matrix is wider and comprises multiple strands – messages, documents and/or images. In either scenario, practitioners must remain alert to the risk that some of this material may have been fabricated or manipulated, presenting a fictitious account under the guise of fact.

In addition, we must factor in the psychological phenomenon of confirmation bias, which is deeply embedded in human behaviour. In essence, this bias manifests as a tendency to seek out and assign greater weight to evidence that supports one’s preferred view, while disregarding or minimising evidence that contradicts it. Ambiguous material is often interpreted in a way that aligns with the preferred opinion or version of events. Against this backdrop, one of the fundamental principles of legal practice remains clear: a lawyer must not mislead the court by presenting false evidence. Professional obligations require that lawyers do not improperly prioritise the client’s interests over their duties to the court. These duties include avoiding any conduct that knowingly or recklessly misleads the court, whether by presenting false information or by being complicit in its presentation (see SRA Guidance: Balancing Duties in Litigation).

While the rules governing falsified evidence are well established, the law relating to the use of artificial intelligence remains in its infancy. Some jurisdictions are beginning to introduce specific provisions addressing both issues. For example, the state of Louisiana in the United States recently enacted Act 250 (2025), which imposes a “reasonable diligence” standard on lawyers in relation to digital evidence.

Personal litigation, including family proceedings, is particularly vulnerable to the use of altered or fabricated evidence, given the high stakes involved. Such practices persist despite the fact that deliberately or recklessly misleading the court can result in severe sanctions, including fines or imprisonment for contempt of court. These consequences apply equally to litigants in person. In one case, our client’s husband falsified bank statements and was exposed only when a review revealed an obvious error: a reference to “31 September.” The result was a criminal charge and a custodial sentence.

What about matters which are resolved outside of court? While NCDR is welcome for many reasons, it introduces additional challenges in relation to evidential integrity. The absence of judicial scrutiny or expert involvement places greater responsibility on lawyers and mediators to assess the reliability of evidence. Where parties enter into agreements based on falsified material, even if presented during a voluntary process, those agreements, and any resulting consent orders, may be set aside if the falsification is later uncovered. This risk underscores the need for rigorous diligence in evaluating evidence, regardless of the forum in which the dispute is resolved.

Practical approaches to detecting falsified evidence

When reviewing evidence, whether from clients or third parties, each item must be assessed on its own merits and in the context of the case as a whole to determine its veracity. This obligation applies equally to material produced by our client and by others. Documents, emails, chat logs, messages, audio recordings, images, and videos cannot be assumed to be reliable without careful scrutiny.

While AI-generated deepfakes present a relatively new challenge, many of the principles of digital forensic analysis remain applicable. When evaluating digital evidence, consider the following questions:

  • Has the evidence been filed at the eleventh hour? Falsified material is often introduced late in proceedings to minimise the time available for scrutiny and to reduce the likelihood of inconsistencies being detected.
  • Does the evidence significantly alter the trajectory of the case? Falsified evidence is frequently designed to serve as the “smoking gun”: a decisive piece of evidence upon which the case may hinge. If a questionable file appears to have little or no material impact, the incentive to falsify it (and assume the associated risk) is considerably lower.
  • Has the evidence been referenced previously? If material is produced without any prior mention, this may indicate that it is not genuine. Crucial information that could determine the outcome of a case would not typically exist in isolation. For example, if an email existed that conclusively supported a party’s position, one would expect its existence to have been flagged earlier in the proceedings. The expectation is that parties will identify the potential existence of compelling evidence at an early stage. Be cautious if pivotal material is first mentioned or produced only when the case is well advanced.
  • Is the evidence in a non-native format, such as a PDF, scan, or printed copy?
    Non-native versions are often provided to conceal the underlying metadata of the original file, which could reveal signs of falsification. Metadata can be a critical source of information about the creation, modification, and authenticity of a document.
  • Is there a convoluted explanation for the absence of original versions?
    Avoiding production of native files without a clear and credible reason may indicate deliberate obfuscation. If the explanation for missing originals is vague or implausible, this should raise further suspicion.

When receiving data from the other party or parties, be sure to preserve an untouched copy of the original material. Ideally, this should be done using forensic preservation methods, although this may not be proportionate in every case. Crucially, do not open the original file directly – always work from a duplicate you have created. Simply opening a file can alter its internal metadata, potentially erasing information that could be critical in identifying signs of falsification.
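One simple way to demonstrate that a preserved original has not been touched is to record a cryptographic hash of the file on receipt. The sketch below shows the general idea in Python; the file path is purely illustrative, and in a contested matter a forensic expert would use dedicated imaging tools rather than an ad hoc script.

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 hash of a file, reading it in chunks.

    Two files with the same SHA-256 hash can be treated as identical,
    so re-hashing the preserved original later confirms it is unchanged.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical workflow: hash the original on receipt, then work only
# on a duplicate. Any later re-hash of the original should match.
# original_hash = sha256_of_file("evidence/original_statement.pdf")
```

Recording the hash (ideally in correspondence or an exhibit log) makes it straightforward to show later that the working copy, not the original, was the file that was opened and reviewed.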

When assessing evidence, certain steps can help uncover indications of falsification. While the approach will vary depending on the file type, some high-level considerations include:

  • Are there obvious inconsistencies within the document? These may include internal contradictions or discrepancies when compared with other evidence in the case, or inconsistencies in the document itself.
  • If the document is historic, do the details align with expectations? For example, does it use the correct headers, footers, company logos, or formatting consistent with the relevant time period?
  • Are there notable differences between the suspicious file and legitimate files from the same period? Compare the questioned document with other contemporaneous materials – such as emails, invoices, or statements – to identify discrepancies in style, structure, or content.
  • Are there questionable dates or anomalies in time references?
    Watch for impossible dates (e.g. “31 September”) or incorrect day-of-week references. Similarly, note any entries or actions recorded as occurring on weekends or public holidays where such activity would be unexpected.
  • Is the formatting of the document suspicious?
    Inconsistencies in fonts, spacing, alignment, or layout, particularly when compared with authentic documents, may indicate manipulation.
  • Consider reviewing the file’s metadata, which can often be accessed through built-in tools. For example, “Document Properties” in Adobe Acrobat or “File > Info” in most Microsoft Office applications. Examine whether the metadata appears consistent and credible. Pay particular attention to dates such as “Created” and “Modified”: do they align with the timeline of the case and the narrative provided? Discrepancies in these fields can be a strong indicator of manipulation.
  • For audio, video, or image files, it can be helpful to ask the person providing the material to explain how and where it was created, and on what device. This information can then be cross-checked for consistency:
    • Location: Does the stated location align with what appears in the video or image? While care should be taken (locations can change over time), obvious discrepancies may raise concerns.
    • Device details: Verify whether technical attributes such as file format, aspect ratio, resolution, and pixel count are consistent with the claimed device. Inconsistencies may indicate manipulation or fabrication.

Care must be taken when checking for inconsistencies in files, particularly before making any allegation of wrongdoing. If preliminary checks raise concerns, it may be appropriate to engage a forensic expert. Electronic evidence presents unique pitfalls, and expert involvement can help avoid misinterpretation. The precise methods available will depend on the file type under investigation, but much of the process involves examining underlying metadata beyond what is visible to typical users.

For example, the “Last Printed” date in a Word document does not necessarily reflect the most recent print action; if the document was printed but not subsequently saved, this will not be recorded. Similarly, due to quirks in file system architecture, it is possible for a “File Created” date to legitimately post-date the “File Modified” date. These nuances illustrate why expert analysis is often essential to ensure accurate interpretation.
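As an illustration of metadata that sits beyond the file system, Office documents (.docx, .xlsx and similar) are zip archives containing a docProps/core.xml part that records its own created and modified timestamps, independent of the dates the file system shows. The sketch below builds a minimal stand-in for such a file and reads those embedded dates; the timestamps are invented for illustration, and a real examination would be carried out by a forensic expert with specialist tooling.

```python
import io
import zipfile
import xml.etree.ElementTree as ET

# Namespace used for dates in Office core properties (docProps/core.xml)
DCTERMS = "{http://purl.org/dc/terms/}"

def office_core_dates(docx_bytes: bytes) -> dict:
    """Read the created/modified timestamps embedded in an Office file."""
    with zipfile.ZipFile(io.BytesIO(docx_bytes)) as zf:
        root = ET.fromstring(zf.read("docProps/core.xml"))
    return {
        "created": root.findtext(f"{DCTERMS}created"),
        "modified": root.findtext(f"{DCTERMS}modified"),
    }

# Build a minimal stand-in for a .docx so the example is self-contained.
# The dates are deliberately inconsistent: "modified" pre-dates "created".
core_xml = (
    '<cp:coreProperties '
    'xmlns:cp="http://schemas.openxmlformats.org/package/2006/'
    'metadata/core-properties" '
    'xmlns:dcterms="http://purl.org/dc/terms/">'
    '<dcterms:created>2021-03-01T09:00:00Z</dcterms:created>'
    '<dcterms:modified>2019-06-15T14:30:00Z</dcterms:modified>'
    '</cp:coreProperties>'
)
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("docProps/core.xml", core_xml)

dates = office_core_dates(buf.getvalue())
print(dates)
```

As the article notes, a modified date earlier than a created date can arise legitimately from file system quirks, so an anomaly like the one above is a prompt for expert analysis, not proof of falsification in itself.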

It is not always possible to determine the legitimacy of a file from initial checks alone. While much attention is given to the use of generative AI in creating falsified images, video, or audio, it can also be deployed to manipulate other types of files. For example, we have encountered emails that were altered using generative AI to change critical details such as dates, sender information, and the body text.

Expert analysis: establishing whether evidence has been falsified

Digital forensic experts possess specialist skills in examining a wide range of files and documents to uncover evidence of falsification. Certain types of information can be particularly revealing. For example:

  • Embedded timestamps – These are often stored in obscure formats that are not easily interpreted by a layperson. Because falsified documents are typically created retrospectively, identifying the true creation date of a file can be critical in proving manipulation.
  • Structural rules and file specifications – Experts often examine whether the file conforms to expected structural standards, including software markers, location indicators, and embedded usernames of creators.
  • Device-level analysis – Requesting access to the device on which the evidence was allegedly created or recorded can be helpful. This allows an expert to verify whether the file is present on the device, whether its metadata is consistent with expectations, whether it resides in the correct directory for the tool used to create it, and whether system logs corroborate its presence since the claimed creation date.
  • Fonts and malware traces – In some cases, anomalies such as unexpected fonts or remnants of viruses have been used to demonstrate that a file is not authentic.

As in any matter, issues of proportionality must always be considered, alongside the time and financial cost of any forensic investigation. These factors should be weighed against the potential impact on the case if the evidence in question is ultimately proven to be falsified. Where concerns arise, it may be appropriate to instruct a digital forensic expert to conduct a limited review of specific material to identify any red flags or obvious signs of falsification. The key is to remain vigilant: these risks are real, and they are likely to increase and evolve as technology advances.



Penningtons Manches Cooper LLP

Penningtons Manches Cooper LLP is a limited liability partnership registered in England and Wales with registered number OC311575 and is authorised and regulated by the Solicitors Regulation Authority under number 419867.