Migrants’ rights campaigners have called on the Government to end its use of an artificial intelligence (AI) tool introduced to produce ‘efficiencies’ in asylum-seeker cases, with critics suggesting it creates a risk of racial bias in immigration decision-making.
The Identify & Prioritise Immigration Cases (IPIC) system automatically identifies and recommends migrants for particular immigration decisions or enforcement action by the Home Office.
The campaign group Privacy International obtained information about the system via a Freedom of Information request and has expressed concern that IPIC will lead to the ‘rubberstamping’ of the algorithm’s recommendations ‘because it’s so much easier … than to look critically at a recommendation and reject it.’
Privacy International criticised the creation of a ‘Kafkaesque reality where anyone going through the immigration system may not know that their information has been processed by an algorithm, let alone that it led to incorrect action being taken in their case.’
As reported in The Guardian, officials have claimed that the system is designed to improve the country’s public services and to make immigration enforcement processes easier and more effective. IPIC is said to provide an analysis and a recommendation on a case-by-case basis after considering all ‘relevant’ facts about a person, including their ethnicity, health markers and criminal convictions.
Fizza Qureshi, the chief executive of the Migrants’ Rights Network, has expressed concern over the potential for racial bias in decision-making, as well as unease over the collection and use of such data and the loss of privacy it involves: ‘There is a huge amount of data that is input into IPIC that will mean increased data-sharing with other government departments to gather health information, and suggests this tool will also be surveilling and monitoring migrants, further invading their privacy.’
The Secretary of State for Science, Innovation and Technology, Peter Kyle, has announced a programme of investment designed to improve the ‘safe development and deployment’ of AI, and claimed there were considerable benefits to be had from it: ‘AI has incredible potential to improve our public services, boost productivity and rebuild our economy but, in order to take full advantage, we need to build trust in these systems which are increasingly part of our day to day lives.’