The Home Office’s AI tool for processing immigration cases involving adults and children faces a backlash from campaigners who say it risks allowing life-changing decisions to be “rubber stamped”, according to the Guardian.
Critics have described the tool as a “robo caseworker” and fear it could “encode injustice” into algorithmic decisions, including deportations.
The government has defended the tool as a way to improve efficiency, with officials saying a human ultimately reviews each decision. The AI is being used to process the cases of roughly 41,000 people facing deportation.
Campaigners are calling on the Home Office to abolish the system, condemning it as a “technology used to make brutality and harm more efficient”.
A year-long freedom of information battle has shed light on how the system operates. Documents released to Privacy International show that people whose cases are processed by the tool are not explicitly told that an algorithm is involved.
The AI-powered tool, known as IPIC (Identify and Prioritize Immigration Cases), draws on data such as biometric information, ethnicity, health markers and criminal history to help prioritise and streamline immigration casework.
The Home Office describes IPIC as a “rules-based workflow tool” intended to recommend next steps to caseworkers, and officials say every recommendation is considered on a case-by-case basis. The system is also used on cases of EU nationals under the EU Settlement Scheme.
Jonah Mendelsohn, a lawyer at Privacy International, warned that the tool could affect hundreds of thousands of lives and that those affected may never know how AI shaped decisions about them. He stressed the need for transparency and accountability to avoid “encoding inequities” into the immigration system.
Fiza Qureshi, CEO of the Immigrant Rights Network, expressed concern that the tool risks entrenching racial bias and expanding surveillance of migrants through widespread data sharing between government departments, and called for the system to be withdrawn.
The system has been in widespread use since 2019-20, but earlier requests for disclosure were refused by the Home Office on the grounds that greater transparency could undermine immigration enforcement.
Madeleine Sumption, director of the Migration Observatory, acknowledged AI’s potential to improve decision-making but called for transparency, noting that without insight into how the system works it is impossible to tell whether it helps reduce unnecessary detention.
A new data bill introduced in the UK parliament last month would permit automated decision-making, provided individuals can object, request human intervention and challenge such decisions.