New reported decision sets out legal professionals' duties to fully check authorities and supervise work
The Upper Tribunal (Immigration and Asylum Chamber) yesterday issued a further reported Hamid decision with a stern warning to legal representatives about the misuse of artificial intelligence (AI) and the citation of false authorities in immigration proceedings.
The decision is R (on the application of Munir) v Secretary of State for the Home Department (AI hallucinations; supervision; Hamid) [2026] UKUT 00081 (IAC) – available here for EIN members and here for non-members.
Proceedings concerned two judicial review applications in which grounds and supporting material contained false and non-existent case citations. These authorities were presented as supporting established propositions of immigration law but could not be located because they had been generated using non-specialist AI tools. The Tribunal found that the inaccurate authorities had not been properly verified before filing and that supervisory safeguards had failed. In one instance, responsibility was attributed to a junior fee-earner who had relied on AI-assisted drafting without adequate checking.
The Tribunal was particularly concerned by supervisors' failure to detect the errors before signing statements of truth. Both cases, however, concerned pleadings settled before the recent amendment to the Upper Tribunal judicial review claim form, which requires representatives expressly to certify that any authority cited exists, is locatable, and supports the proposition advanced. Although there was evidence of subsequent candour and remedial action in at least one case, the conduct amounted to a serious breach of professional duties, wasted time and resources, and risked misleading the Tribunal.
The panel of Judges Lindsley, Blundell and Keith reiterated that: "Legal professionals are obliged to ensure that legal arguments which are presented to the First-tier Tribunal or Upper Tribunal are factually and legally accurate. Those who cite false cases fail to comply with that professional obligation and waste the time of the Tribunal."
Addressing supervision, the Tribunal stated that a solicitor who delegates work "remains responsible for the supervision of their work and for ensuring its accuracy" and that failures to check drafting are "likely to result in a referral to the Solicitors Regulation Authority or other regulatory body." It added that "a supervisor who fails to ensure that the work of a more junior fee-earner does not contain false cases or citations is likely to be more culpable" than one who fails to check their own work.
The Tribunal emphasised that generative AI tools can produce confident but fictitious authorities and warned that the immigration jurisdiction cannot afford to have "its limited resources absorbed by representatives who place false information before the Tribunal." It described incorrect citations as sending judges on a "fool's errand".
Procedurally, the judicial review claim form has now been amended to require representatives to confirm by statement of truth that any authority cited "exists; may be located using the citation provided; and supports the proposition of law for which it is cited." Those signing inaccurate statements "should ordinarily expect to be referred to their regulatory body."
The Tribunal also warned that uploading client material to "open-source" AI tools such as ChatGPT places information "in the public domain, and thus… breach[es] client confidentiality and waives legal privilege," potentially requiring referral to regulators and the Information Commissioner's Office.
In one case, no further referral was made because the practitioner had self-reported. In another, the supervising solicitor was referred to the Solicitors Regulation Authority.
The decision's full headnote summary states:
1. Legal professionals are obliged to ensure that legal arguments which are presented to the First-tier Tribunal or Upper Tribunal are factually and legally accurate. Those who cite false cases fail to comply with that professional obligation and waste the time of the Tribunal.
2. A solicitor or other legal professional who delegates their work to another fee-earner remains responsible for the supervision of their work and for ensuring its accuracy. Such supervisors must ensure that fee-earners under their supervision are aware of the dangers of using non-specialist Artificial Intelligence (AI) for legal research and drafting. Failures to do so, or to undertake appropriate checks on the drafting of fee-earners is likely to result in a referral to the Solicitors Regulation Authority (SRA) or other professional body. A supervisor who fails to ensure that the work of a more junior fee-earner does not contain false cases or citations is likely to be more culpable than a lawyer who fails to ensure that his own work is free from such "hallucinations".
3. The claim form by which judicial review is sought in the Upper Tribunal has now been amended so as to require a legal representative to confirm by a statement of truth that any authority cited within the form or in any documents appended to it (a) exists; (b) may be located using the citation provided; and (c) supports the proposition of law for which it is cited. Other forms and directions are to be similarly amended. A legal representative who signs such a statement in a case in which false authorities are cited should ordinarily expect to be referred to their regulatory body.
4. Uploading confidential documents into an open-source AI tool, such as ChatGPT, is to place this information on the internet in the public domain, and thus to breach client confidentiality and waive legal privilege, and any such conduct might itself warrant referral to the SRA and should, in any event, be referred to the Information Commissioner's Office.