Open Rights Group publishes opinion by Cloisters Chambers' Robin Allen KC and Dee Masters, and Doughty Street Chambers' Joshua Jackson
A detailed legal opinion published yesterday has concluded that aspects of the Home Office's use of artificial intelligence (AI) in asylum decision-making are likely to be unlawful, particularly where applicants are not informed that such tools are being used.
You can download the 84-page legal opinion here.
The opinion was written by Cloisters Chambers barristers Robin Allen KC and Dee Masters, together with Joshua Jackson of Doughty Street Chambers. It was commissioned and published by the non-profit digital rights group Open Rights Group.
According to the opinion, the Home Office's use of generative AI tools in the asylum process may breach legal obligations, including duties relating to procedural fairness, data protection and equality law.
The authors say the findings could enable asylum applicants to challenge decisions where AI tools have been used in assessing their claims. The Open Rights Group says the opinion "opens the way to legal challenges by asylum applicants who believe that AI has been used in their assessments that determine whether or not they can be granted protection in the UK".
According to the opinion, the Home Office is using two generative AI tools in the asylum process. The Asylum Case Summarisation (ACS) tool summarises information provided by applicants during interviews, while the Asylum Policy Search (APS) tool searches country-of-origin information for caseworkers.
The opinion states: "The important point is that both AI tools create new text for the Decision-Maker to consider rather than simply indexing or organising the existing source information. In this way, they funnel, filter and regurgitate important facts which are material to the Decision-Maker's legal obligations when determining refugee status. They may 'filter out' crucial information. The output of the APC and APS is not shared with the asylum-seeker. In fact, we understand that they are not even informed that AI is going to be used for their application."
The opinion raises questions about the accuracy of the tools, noting that a pilot found the ACS tool produced inaccurate summaries 9% of the time and that 5% of APS users were not confident in its accuracy. It also highlights a lack of publicly available information about how the accuracy of the tools has been measured or evaluated, and whether adequate safeguards are in place to prevent errors affecting asylum decisions.
The legal analysis argues that the Home Office may be under a heightened duty to investigate the performance and impact of the AI tools before deploying them in asylum determinations. In particular, the opinion states that the department could risk breaching its Tameside duty of inquiry if it fails to properly assess the accuracy of the tools, their effect on the quality of asylum decisions, the risk of bias or discrimination, and whether non-AI alternatives could achieve the same efficiency gains.
Further concerns relate to how the tools might affect the reasoning process of decision-makers. Under established public law principles, caseworkers determining asylum claims must consider all relevant evidence, including applicants' testimony and country-of-origin material. The opinion states that if decision-makers rely on AI-generated summaries rather than reviewing the underlying evidence in full, there is "a significant risk" that relevant considerations may be overlooked. In such circumstances, the decision-maker may have failed to properly take into account material evidence when determining the claim.
Another risk identified is that inaccurate AI summaries could lead to decisions being based on "material errors of fact". This concern is heightened, the opinion suggests, by the absence of safeguards requiring caseworkers to cross-check AI outputs against the original source material and by the fact that applicants are not given access to the summaries to correct possible mistakes.
Given the stakes involved in asylum decisions, the opinion argues that transparency is also required as a matter of procedural fairness. It states that asylum seekers should be informed when AI is used and should be given access to the AI-generated material.
The opinion states: "[G]iven the gravity of the consequences for asylum-seekers if their claims are determined on the basis of inaccurate information and the nature of the interests at stake, we consider that – as a matter of procedural fairness – asylum-seekers have a common law right to be informed that AI is being used in the determination of their claims, how it is being used, and to be provided with the output of the AI-generated summaries. That conclusion applies with greatest force to the ACS given that it summarises sensitive information that the asylum-seeker has provided, and which the asylum-seeker is well-placed to correct. … In our view, the fact that asylum-seekers appear not to be so informed is likely to be unlawful."
The authors also highlight data protection and equality risks arising from the use of AI in asylum decision-making. By summarising applicants' interviews, the ACS processes sensitive personal data, including information about race, religion, political beliefs, and sexual orientation, engaging obligations under the UK GDPR relating to transparency, accuracy, and access.
Similarly, the absence of a published Equality Impact Assessment means the Home Office cannot demonstrate that the Public Sector Equality Duty has been satisfied or that potential discriminatory effects of the tools have been assessed and monitored. Further, the opinion notes that civil society and regulators, such as the Independent Chief Inspector of Borders and Immigration, currently have limited oversight of how these tools are used, reducing accountability and public scrutiny.
Two of the opinion's authors, Robin Allen KC and Dee Masters of Cloisters Chambers, commented: "If AI tools are influencing asylum decisions, there must be full transparency about how those systems operate and how their outputs are used. Without that transparency, it becomes extremely difficult to ensure that decisions affecting fundamental rights are lawful and fair."