Immigration AI in 2025: What’s Actually Changed Since the Streaming Algorithm?

Written by
Amit Kapadia, ARK Law Limited

Five years after the landmark legal victory, how does UK immigration AI work now, and what do immigration practitioners need to know?

The Victory That Was Supposed to Change Everything

In August 2020, immigration rights campaigners achieved what seemed like a decisive victory. The Home Office agreed to scrap its controversial visa streaming algorithm after legal challenges by the Joint Council for the Welfare of Immigrants (JCWI) and Foxglove exposed how the system used nationality as a primary factor in processing applications.

The victory was hailed as the UK's first successful legal challenge to government algorithmic decision-making. The Home Office committed to conducting equality impact assessments, promised that nationality would not be used in replacement systems, and pledged greater transparency in automated decision-making. Critics had described the original system as creating "speedy boarding for white people."

Five years later, as we survey the immigration technology landscape in 2025, a crucial question emerges: what has actually changed?

The New Reality: More AI, Less Transparency

The uncomfortable truth is that algorithmic decision-making in UK immigration has not retreated since 2020—it has expanded dramatically, often in ways that are less visible but potentially more consequential than the original streaming algorithm.

What Is IPIC? The New Immigration AI System You Need to Know About

Today's UK immigration AI landscape is dominated by systems that would have seemed impossible in 2020. The IPIC (Identify and Prioritise Immigration Cases) system, according to a Guardian report published in November 2024, processes cases for about 41,000 people subject to removal action. Unlike the relatively simple streaming algorithm, IPIC represents a fundamental shift in how immigration AI works in practice.

IPIC ingests vast arrays of personal data, including biometric information, ethnicity, health markers, criminal history, and even electronic monitoring data from GPS tracking devices. This represents a significant evolution in how UK immigration technology operates in 2025 compared with earlier systems.

The Atlas caseworking system, now the digital backbone of UK immigration processing, handles applications across multiple visa categories and has more than 30,000 users, according to 2022 government assessments. But Atlas has suffered significant technical problems: a Guardian article from March 2024 revealed that over 76,000 people's records contained incorrect information due to "merged identities"—database errors where multiple people's biographical and biometric details became incorrectly linked.
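The "merged identities" failure described above is characteristic of record linkage built on weak matching keys. The sketch below is entirely hypothetical (Atlas's actual matching logic is not public) and illustrates only the general failure mode: when two different people share the fields used to match records, their data can silently collapse into one record.

```python
# Hypothetical sketch of how weak record-linkage keys can merge two
# different people into a single record. This is NOT Atlas's logic,
# which has not been published; it illustrates the general failure mode.

def weak_match_key(record: dict) -> tuple:
    # Matching on name and date of birth alone cannot distinguish
    # two people who happen to share both.
    return (record["name"].lower(), record["dob"])

person_a = {"name": "A. Example", "dob": "1990-01-01", "visa": "student"}
person_b = {"name": "a. example", "dob": "1990-01-01", "visa": "none"}

index: dict[tuple, dict] = {}
for rec in (person_a, person_b):
    # The second write lands on the same key and overwrites fields:
    # two identities are now blended into one record.
    index.setdefault(weak_match_key(rec), {}).update(rec)

print(len(index))  # -> 1: two people share a single merged record
```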

What Is AI Facial Age Estimation and Why Is It Controversial?

Most controversially, the Home Office announced in 2025 that it will begin trialling AI facial age estimation technology to determine whether asylum seekers are children or adults. According to a parliamentary statement by Minister Dame Angela Eagle, trials will begin in 2025 with integration planned by 2026, representing the most significant expansion of immigration AI since the streaming algorithm era.

How Does Immigration AI Work Now? The Sophistication Problem

Understanding how UK immigration AI works in 2025 reveals a troubling evolution. The streaming algorithm, for all its problems, was relatively straightforward: it categorised applications using nationality and other factors to assign red, amber, or green risk ratings. Officials could understand, if not fully see, how it worked.
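The full criteria of the streaming algorithm were never published, but its reported traffic-light logic can be illustrated with a minimal sketch. Everything below is an invented placeholder: the nationality list, the weights, and the thresholds are assumptions used only to show how a heavily weighted nationality input produces the "speedy boarding" effect critics described.

```python
# Minimal hypothetical sketch of a red/amber/green streaming rule.
# Nationality lists, weights, and thresholds are invented placeholders;
# the actual algorithm's criteria were never fully disclosed.

HIGH_RISK_NATIONALITIES = {"Examplestan"}  # placeholder, not real data

def stream_application(application: dict) -> str:
    """Assign a traffic-light risk rating from weighted factors."""
    score = 0
    if application.get("nationality") in HIGH_RISK_NATIONALITIES:
        score += 3  # nationality alone can dominate the outcome
    if application.get("prior_refusal"):
        score += 2
    if score >= 3:
        return "red"    # intensive scrutiny, slower processing
    if score >= 2:
        return "amber"
    return "green"      # light-touch review, faster processing

print(stream_application({"nationality": "Examplestan"}))  # -> red
print(stream_application({"nationality": "Elsewhere"}))    # -> green
```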

Today's immigration AI systems are far more sophisticated and correspondingly more opaque. IPIC doesn't just categorise—it actively recommends specific enforcement actions based on complex algorithmic analysis of multiple data streams. According to the Guardian's reporting on leaked training documents obtained by Privacy International through Freedom of Information requests, the system creates a built-in bias toward accepting its recommendations. Officials who want to reject IPIC's proposed decisions must provide written explanations and tick boxes relating to their reasons, but to accept the computer's recommendations, "no explanation is required and the official clicks one button marked 'accept'."
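The interface asymmetry the Guardian describes is easy to see in miniature. The sketch below is hypothetical (the field names and types are invented) and mirrors only the reported behaviour: accepting a recommendation costs one click, while rejecting it demands structured justification.

```python
# Hypothetical sketch of the reported accept/reject asymmetry.
# The Recommendation type and field names are invented for illustration.

from dataclasses import dataclass

@dataclass
class Recommendation:
    case_id: str
    proposed_action: str  # an enforcement step suggested by the system

def accept(rec: Recommendation) -> dict:
    # One click: no reasons recorded.
    return {"case_id": rec.case_id, "decision": "accepted"}

def reject(rec: Recommendation, explanation: str, reason_codes: list[str]) -> dict:
    # Rejection requires free-text reasons and ticked categories.
    if not explanation.strip() or not reason_codes:
        raise ValueError("Rejection requires an explanation and reason codes.")
    return {
        "case_id": rec.case_id,
        "decision": "rejected",
        "explanation": explanation,
        "reasons": reason_codes,
    }

rec = Recommendation(case_id="C-001", proposed_action="proceed to removal")
print(accept(rec))  # one click, nothing documented
# reject(rec, "", []) would raise: disagreement must be justified
```

Under caseload pressure, the path of least resistance is acceptance, which is how procedural design alone can tilt outcomes toward the machine's recommendation.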

The AI facial age estimation system represents another leap in complexity for UK immigration technology. Rather than processing applications, this immigration AI attempts to make biological determinations about human development, judging whether someone is a child by analysing their facial features. Human Rights Watch has condemned facial age estimation as "cruel and unconscionable" in a July 2025 article, noting that the technology cannot account for children who have aged prematurely due to trauma, violence, or malnutrition, precisely the circumstances that often drive people to seek asylum.
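The Home Office has not published how its trial will work, but a generic sketch of threshold-based age estimation shows why the 18-year boundary is so fraught. The error margin and decision logic below are assumptions about how such systems typically behave, not a description of the planned trial: once model error is accounted for, a point estimate near 18 cannot safely separate a child from an adult.

```python
# Generic illustration of threshold-based facial age estimation.
# The margin of error and decision logic are assumptions about how
# such models typically work, not the Home Office's actual system.

def classify_adult(estimated_age: float, margin_of_error: float = 3.0) -> str:
    """A point estimate near 18 is ambiguous once model error is considered."""
    if estimated_age - margin_of_error >= 18:
        return "adult"
    if estimated_age + margin_of_error < 18:
        return "child"
    return "indeterminate"  # the estimate cannot rule out either outcome

# An estimate of 19 with a +/-3-year error could belong to a 16-year-old:
print(classify_adult(19.0))  # -> "indeterminate"
```

And a fixed error margin is itself an optimistic assumption: trauma, violence, and malnutrition can shift apparent age in ways no single margin captures, which is the heart of the Human Rights Watch objection.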

The Transparency Paradox: More Systems, Less Oversight

One of the most striking changes since 2020 is how the lesson of transparency appears to have been learned in reverse. The streaming algorithm's exposure led not to greater openness, but to more sophisticated methods of concealment.

The Guardian article revealed that IPIC has been in widespread operation since 2019-20, yet "people whose cases are being processed by the algorithm are not specifically told that AI is involved." This represents a fundamental shift from even the limited accountability that existed before 2020.

Privacy International's Jonah Mendelsohn told the Guardian that the tool "could affect the lives of hundreds of thousands of people" and warned that "anyone going through the migration system currently has no way of knowing how the tool has been used in their case." This opacity extends beyond individual cases to systemic accountability—without transparency about how these systems work, it becomes impossible to assess whether they operate fairly or discriminate against particular groups.

The Home Office continues to refuse Freedom of Information requests about AI systems, typically citing concerns that disclosure might allow people to "game" the system, as evidenced in multiple FOI responses documented by Privacy International and other organisations.

What Hasn't Changed: The Bias Problem

Despite the 2020 victory's emphasis on eliminating nationality-based discrimination, current systems continue to incorporate protected characteristics in their decision-making processes. According to Privacy International's research based on Freedom of Information requests, IPIC processes ethnicity data alongside other personal information in making its recommendations.

Fizza Qureshi, chief executive of the Migrants' Rights Network, raised concerns about racial bias when speaking to the Guardian, noting that "there is a huge amount of data that is input into IPIC that will mean increased data-sharing with other government departments to gather health information", and suggesting that the tool "will also be surveilling and monitoring migrants".

The fundamental problem identified in 2020—that algorithmic systems can encode and amplify existing biases—remains unaddressed. If anything, the sophistication of current systems makes bias harder to detect and challenge, not easier.

The Regulatory Void: Still No Comprehensive Framework

Perhaps the most significant continuity from 2020 is what hasn't been built: a comprehensive legal framework for algorithmic accountability in immigration. While the EU has developed the AI Act and other jurisdictions have implemented algorithmic accountability measures, the UK still lacks immigration-specific legislation addressing automated decision-making.

This regulatory gap means that applicants still have no statutory right to understand how AI influenced their case and only limited pathways for redress when algorithms make mistakes, while oversight of how personal data is used in automated systems remains minimal. Meanwhile, the government's 2025 AI Opportunities Action Plan explicitly links immigration policy to AI development: according to policy documents published in January 2025, it promises to use immigration routes to attract global AI talent while simultaneously using AI to enhance border security and immigration enforcement.

Can You Challenge AI Immigration Decisions? The Growing Accountability Gap

For immigration lawyers and advisers, the changes since 2020 have made AI-influenced decisions unprecedentedly difficult to challenge. The streaming algorithm, whatever its flaws, was at least a known quantity after its exposure. Today's professionals must navigate multiple immigration AI systems whose operations remain largely secret and whose influence on individual cases is often undetectable.

Understanding how these UK immigration AI technologies operate provides important context for the environment in which immigration cases are processed, though specific legal strategies require specialist advice on individual circumstances. The deeper challenge is that traditional immigration law assumes decisions can be understood and challenged through established procedural safeguards; when algorithmic logic influences outcomes in ways that are not disclosed, it is unclear how effectively lawyers can challenge those decisions.

Looking Forward: Lessons Unlearned?

The trajectory from 2020 to 2025 suggests that the streaming algorithm victory, while symbolically important, failed to address the underlying drivers of algorithmic expansion in immigration control. Rather than leading to systematic reform, it appears to have encouraged the development of more sophisticated and less visible systems.

The 2020 legal victory, achieved through challenges brought by JCWI and Foxglove, demonstrated that problematic AI systems can be successfully challenged through appropriate legal channels. However, it also showed how quickly new systems can emerge under different names to fulfil similar functions.

As we move deeper into 2025, the central question for UK immigration technology is whether the UK will develop the regulatory frameworks and oversight mechanisms necessary to ensure that immigration AI serves justice rather than simply making existing problems more efficient and harder to challenge. The current trajectory suggests that, without significant intervention, the next five years may see even more sophisticated AI systems operating with even less accountability than those we have today.

The streaming algorithm case was supposed to be a turning point for algorithmic decision-making in UK immigration. Instead, it may have been merely the beginning of a much larger transformation—one that is still unfolding largely beyond public view and professional understanding.