
In 2019, border guards in Greece, Hungary, and Latvia began testing an artificial intelligence-powered lie detector. The system, called iBorderCtrl, analyzed facial movements to try to spot signs that a person was lying to border agents. The trial was backed by nearly $5 million in EU research funding and nearly 20 years of research at Manchester Metropolitan University in the UK.
The trial sparked controversy. Polygraphs and other techniques that try to detect lies from physical attributes have been widely declared unreliable by psychologists. Soon, errors were reported from iBorderCtrl as well. Media reports said its lie-detection algorithm didn’t work, and the project’s own website acknowledged that the technology “could pose risks to basic human rights.”
This month, Silent Talker, the Manchester Metropolitan spin-off that developed the technology underlying iBorderCtrl, was dissolved. But that’s not the end of the story. Lawyers, activists, and lawmakers are pushing for an EU law regulating artificial intelligence that would ban systems claiming to detect human deception in migration, citing iBorderCtrl as an example of what can go wrong. Former Silent Talker executives could not be reached for comment.
Banning artificial intelligence polygraphs at borders is one of thousands of amendments to the artificial intelligence bill being considered by officials from EU member states and members of the European Parliament. The legislation aims to protect the fundamental rights of EU citizens, such as the right to live free from discrimination and the right to claim asylum. It labels some use cases of AI “high risk,” some “low risk,” and bans others outright. Those lobbying for changes to the bill, including human rights groups, labor unions, and companies such as Google and Microsoft, want it to distinguish between those who build general-purpose AI systems and those who deploy them for specific purposes.
Last month, advocacy groups including European Digital Rights and the Platform for International Cooperation on Undocumented Migrants called for the bill to ban the use of artificial intelligence polygraphs that measure things like eye movement, tone of voice, or facial expressions at the border. Civil liberties nonprofit Statewatch released an analysis warning that the AI bill as written would allow systems such as iBorderCtrl, adding to Europe’s existing “publicly funded border AI ecosystem.” The analysis calculated that over the past two decades, about half of the 341 million euros ($356 million) spent on the use of artificial intelligence at the border, such as profiling migrants, went to private companies.
Petra Molnar, deputy director of the nonprofit Refugee Law Lab, said the use of artificial intelligence polygraphs at the border effectively creates new immigration policy through technology and treats everyone as suspect. “You have to prove you’re a refugee, and you’re considered a liar unless you prove otherwise,” she said. “That logic underpins everything. It underpins AI lie detectors, and it underpins more surveillance and pushback at borders.”
An immigration lawyer, Molnar said people often avoid eye contact with border or immigration officials for innocuous reasons such as culture, religion, or trauma, but doing so can sometimes be misread as a sign that a person is hiding something. Humans often struggle to communicate across cultures or to speak with people who have experienced trauma, she said, so why do people believe machines can do better?