When AI Hallucinates in Court: 60 Cases and What Construction Must Learn

By Peter Adeleye | Last updated: 30 Mar 2026 | 4 min read

Sixty. That is the number of suspected or confirmed cases of AI-generated hallucinations now recorded in UK courts and tribunals, according to the latest tracking by legal commentator Damien Charlotin.[1] The figure has more than doubled since the start of the year. For an industry that increasingly relies on AI for everything from contract drafting to cost modelling, the implications deserve serious attention.

The Problem Is Accelerating

In March alone, new cases emerged in the Family Court and the High Court. In Re A, B, C, D [2026] EWFC 71, a litigant in person used a publicly available AI tool to prepare a skeleton argument. Four of her legal citations turned out to be entirely fabricated. The judge noted that she held herself out as a lawyer to the public, yet failed to verify a single reference before submitting it to the court. In a separate case, Brightwaters Energy Limited v Eroton [2026] EWHC 296, similar issues arose in commercial litigation before the High Court.

Scottish courts have now joined the trend. In Your Home Partners, a sheriff explicitly extended the duty of verification established in the landmark Ayinde decision, ruling that it applies equally to lay persons, not just legal professionals.[1] The direction of travel is clear: courts are treating unverified AI output as a matter of case management and fairness, and in some circumstances as grounds for sanctions.

Why Construction Cannot Afford to Ignore This

Construction disputes are among the most document-heavy and technically complex in the UK legal system. Adjudications, arbitrations, and Technology and Construction Court proceedings routinely involve thousands of pages of contract analysis, variations, delay claims, and quantum assessments. AI tools are already being used to draft witness statements, review contract clauses, and generate legal research in these contexts.

The risk is not hypothetical. A quantity surveyor preparing a final account submission that relies on AI-summarised contract terms without verification could introduce errors that undermine an entire claim. A contractor using AI to draft a response to a contractual notice might cite obligations that do not exist in the actual contract. The consequences range from weakened negotiating positions to formal sanctions from adjudicators or courts.

The Ayinde principle now applies broadly: anyone presenting AI-assisted material to a court or tribunal has a duty to verify that it is accurate and genuine. That duty extends to expert witnesses, party representatives, and litigants in person alike. For quantity surveyors acting as expert witnesses, this creates an additional layer of professional responsibility on top of existing obligations under the RICS Rules of Conduct.

The Governance Gap

What makes this particularly concerning is the gap between AI adoption rates and AI governance maturity in the built environment. A recent survey of fifty senior AEC leaders found universal enthusiasm for AI, with two-thirds planning to increase their AI budgets.[2] Yet the focus remains overwhelmingly on optimising existing workflows rather than building the verification and governance structures needed to use AI responsibly.

ISO 42001, the first international standard for AI management systems, provides a structured framework for managing exactly these risks.[1] It requires organisations to assess AI-specific risks throughout the lifecycle, implement human oversight for high-risk decisions, and establish clear accountability for AI outputs. The EU AI Act, with its August 2026 enforcement deadline for high-risk systems, will make this type of governance mandatory for many applications.[2]
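To make that kind of accountability concrete, a firm might keep an internal register recording who generated each AI-assisted output and who verified it. The Python sketch below shows one possible shape for such a record; the field names are illustrative assumptions for this article, not terminology taken from ISO 42001 itself.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIOutputRecord:
    """One entry in an internal register of AI-assisted outputs.
    Field names are illustrative, not prescribed by ISO 42001."""
    tool: str                      # the AI product and version used
    purpose: str                   # what the output was used for
    author: str                    # who generated the output
    verifier: str | None = None    # who checked it against primary sources
    verified: bool = False         # only True after human verification
    notes: str = ""
    created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Example: a clause summary drafted with AI, still awaiting verification.
record = AIOutputRecord(
    tool="general-purpose LLM",
    purpose="summary of variation clauses for a final account",
    author="J. Smith",
)
assert not record.verified  # it must not leave the firm in this state
```

The point of the final assertion is the discipline it encodes: an unverified record should never be the basis of anything that leaves the firm.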

For construction firms, the practical question is not whether to use AI, but how to use it without exposing themselves to the kind of risks now playing out in UK courtrooms week after week.

Three Steps Firms Should Take Now

First, establish a verification protocol. Any AI-generated output used in formal submissions, expert reports, contractual correspondence, or legal proceedings must be checked against primary sources before it leaves the firm. This is not optional; it is a professional duty.
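Even simple tooling can support that protocol. The sketch below, a minimal illustration rather than a verification tool, pulls candidate UK neutral citations out of a draft so a person can check each one against a primary source such as BAILII or the National Archives case law service; the file name and the citation pattern are assumptions for the example.

```python
import re
from pathlib import Path

# Rough pattern for UK neutral citations, e.g. "[2026] EWHC 296" or
# "[2025] EWCA Civ 1414". Illustrative only: it will not catch every
# citation format, and matching a citation says nothing about whether
# the case actually exists.
NEUTRAL_CITATION = re.compile(
    r"\[\d{4}\]\s+[A-Z]{2,7}(?:\s+(?:Civ|Crim|Comm|TCC|Admin|Fam|Ch|KB|QB))?\s+\d+"
)

def citation_checklist(draft_path: str) -> list[str]:
    """Extract candidate citations from a draft so a human can verify
    each one against a primary source."""
    text = Path(draft_path).read_text(encoding="utf-8")
    return sorted(set(NEUTRAL_CITATION.findall(text)))

if __name__ == "__main__":
    for citation in citation_checklist("draft_submission.txt"):
        # Every line still needs a human tick against a primary source.
        print(f"[ ] verify against primary source: {citation}")
```

Note that the script only produces a checklist. Confirming that each cited case exists, and says what the draft claims it says, remains a human task.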

Second, train your teams. The Family Court case revealed that even someone who held herself out as a lawyer did not understand the limitations of the AI tool she was using. Construction professionals need practical training on what AI can and cannot reliably do, particularly when it comes to interpreting contract terms and generating references.

Third, adopt a governance framework. ISO 42001 offers a proportionate, internationally recognised structure. Firms do not need to pursue full certification immediately, but understanding the framework and beginning to align internal processes is a sensible first step, particularly for those providing expert witness services or working on high-value disputes.

The Bigger Picture

The sixty cases tracked so far represent only those that have come to judicial attention. The true volume of AI hallucinations affecting professional work across the built environment is almost certainly far higher. Construction firms that treat AI governance as a compliance afterthought are taking a risk that grows with every judgment.

Those that build robust verification processes and invest in responsible AI practices will find themselves better positioned, both in the courtroom and in the market.
