In 2019, border guards in Greece, Hungary and Latvia began testing an artificial intelligence–driven lie detector. The system, called iBorderCtrl, analyzed facial movements in an attempt to spot signs that a person was lying to a frontier agent. The trial was backed by nearly $5 million in EU research funding and nearly 20 years of research at Manchester Metropolitan University in the United Kingdom.
The trial sparked controversy. Polygraphs and other technologies built to detect lies from physical signals have been widely declared unreliable by psychologists. Flaws in iBorderCtrl were soon reported as well. Media reports indicated that its lie-prediction algorithm did not work, and the project's own website acknowledged that the technology "may pose risks to fundamental human rights."
This month, Silent Talker, the Manchester Met spinout company behind the technology used in iBorderCtrl, was dissolved. But that is not the end of the story. Lawyers, activists and lawmakers are pushing for an EU law to regulate artificial intelligence, one that would ban systems that claim to detect human deception in migration settings, citing iBorderCtrl as an example of what could go wrong. Former Silent Talker executives could not be reached for comment.
A ban on AI lie detectors at borders is one of thousands of amendments to the AI Act being considered by officials from EU nations and members of the European Parliament. The legislation aims to protect EU citizens' fundamental rights, such as the right to live free from discrimination and to claim asylum. It designates some uses of artificial intelligence as "high-risk," some as "low-risk," and bans others outright. Those lobbying to change the AI Act include human rights groups, unions, and companies like Google and Microsoft, which want the law to distinguish between those who build general-purpose AI systems and those who deploy them for specific uses.
Last month, advocacy groups including European Digital Rights and the Platform for International Cooperation on Undocumented Migrants called for the law to ban the use of AI polygraphs that measure things like eye movement, tone of voice, or facial expression at borders. Statewatch, a civil liberties nonprofit, published an analysis warning that the AI Act as written would allow the use of systems like iBorderCtrl, adding to Europe's existing "publicly funded border AI ecosystem." The analysis estimated that over the past two decades, roughly half of the 341 million euros ($356 million) in funding for uses of artificial intelligence at the border, such as profiling migrants, went to private companies.
The use of AI lie detectors at borders effectively creates new immigration policy through technology, says Petra Molnar, associate director of the nonprofit Refugee Law Lab, by casting everyone as suspect. "You have to prove that you are a refugee, and you're assumed to be a liar unless proven otherwise," she says. "That logic underpins everything. It underpins AI lie detectors, and it underpins more surveillance and pushback at borders."
Molnar, an immigration lawyer, says people often avoid eye contact with border or migration officials for innocuous reasons, such as culture, religion, or trauma, but doing so is sometimes misread as a signal that a person is hiding something. Humans often struggle with cross-cultural communication or with talking to people who have experienced trauma, she says, so why would people believe a machine can do better?