This op-ed was originally published at Undark

For people seeking asylum, the process usually involves registration, an interview, and review by government caseworkers. Decisions on granting asylum have traditionally rested with human officials, who may draw on forms, guidance documents, and other assessment frameworks, but those tools have remained part of a human-led process.

Now, the systems that assess need, verify identity, and decide what counts as urgent are beginning to use artificial intelligence. In the United Kingdom's asylum system, for example, AI tools under trial summarize interview transcripts and surface relevant policy information for caseworkers. Even in that limited role, they become part of how claims are read and processed before human caseworkers make their final decisions.

Debating a distant future in which AI controls us may miss the point. What matters more is the quieter present, in which AI and other data-driven systems are gradually being folded into routine procedures. These tools are usually presented as ways to improve efficiency, and sometimes they do. In refugee systems, though, speed is only part of the story. The larger concerns are how institutions define vulnerability, register urgency, recognize the particular needs, constraints, and personal circumstances of individual refugees, and determine which cases are treated as priorities.

The U.K. pilot programs testing these AI tools offer an early glimpse of that change. The tools were introduced to help staff review interview material and retrieve policy information more quickly, not to replace human officers, and the U.K. Home Office found time savings in both tasks. The same evaluation also pointed to limits, however: some summaries were inaccurate or incomplete, which can affect how human caseworkers read and handle a claim. Wider or deeper effects on decision-making may not yet be visible, since the findings come from small-scale trials rather than sustained use. And a recent legal opinion, commissioned by the U.K. nonprofit Open Rights Group, found that some aspects of the government's use of AI in these refugee cases are likely unlawful.

To understand the kind of system AI is entering, it helps to look at refugee assistance in Jordan. For years, decisions on how to distribute aid there have relied in part on a tool called the Vulnerability Assessment Framework, or VAF, which helps agencies assess and compare how vulnerable a household is based on factors such as income, debt, health, housing, and caregiving burdens. The VAF is not an AI tool. But it is the kind of structured decision system into which newer AI tools are being incorporated, one that organizes humanitarian priorities through formal categories of need….
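
To make the idea of a structured vulnerability framework concrete, here is a minimal, hypothetical sketch of a weighted household scoring scheme in Python. The factors, weights, and thresholds below are invented for illustration and do not reflect the actual VAF methodology.

```python
# Hypothetical sketch of a structured vulnerability score.
# Factor names, weights, and thresholds are invented for illustration;
# they do NOT reflect the actual VAF methodology used in Jordan.

from dataclasses import dataclass

@dataclass
class Household:
    monthly_income: float       # USD per capita (assumed unit)
    debt_ratio: float           # total debt / monthly income
    chronic_illness: bool
    adequate_housing: bool
    dependents_per_adult: float

def vulnerability_score(h: Household) -> float:
    """Combine normalized risk factors into one 0-1 score (higher = more vulnerable)."""
    income_risk = max(0.0, 1.0 - h.monthly_income / 100.0)  # assumed $100 reference line
    debt_risk = min(1.0, h.debt_ratio / 3.0)                # caps at 3x monthly income
    health_risk = 1.0 if h.chronic_illness else 0.0
    housing_risk = 0.0 if h.adequate_housing else 1.0
    care_risk = min(1.0, h.dependents_per_adult / 4.0)

    # Weights encode a policy judgment about which needs matter most.
    return (0.30 * income_risk + 0.20 * debt_risk + 0.20 * health_risk
            + 0.15 * housing_risk + 0.15 * care_risk)

# Example: rank two households for aid prioritization.
a = Household(monthly_income=40, debt_ratio=2.5, chronic_illness=True,
              adequate_housing=False, dependents_per_adult=3)
b = Household(monthly_income=90, debt_ratio=0.5, chronic_illness=False,
              adequate_housing=True, dependents_per_adult=1)
ranked = sorted([("A", vulnerability_score(a)), ("B", vulnerability_score(b))],
                key=lambda t: -t[1])
print(ranked)  # household A scores higher, so it would be prioritized
```

The point of the sketch is that every weight and cutoff encodes a judgment about what counts as need. When AI tools are layered on top of a framework like this, they inherit those judgments.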

Read the full op-ed at Undark
