TBILISI: A lie detector driven by artificial intelligence and trialled at European Union borders is the focus of a lawsuit that seeks to bring more transparency over the bloc’s funding of “ethically questionable” technology, the lawmaker behind the case said.
Patrick Breyer, a European lawmaker, is requesting the release of documents from the EU’s Research Executive Agency (REA) evaluating the 4.5 million euro ($5.4 million) trial of artificial intelligence (AI) lie detectors intended to ramp up EU border security.
“I want to create a precedent to make sure that the public … can access information on EU-funded research,” said Breyer, of Germany’s Pirate Party, who has described the technology as a “pseudo-scientific security hocus pocus”.
The European Union’s top court started hearing the case on Friday.
The iBorderCtrl trial, which ended in 2019, is one of several projects seeking to automate the EU’s increasingly busy borders and counter irregular migration and terrorism.
The project, launched in 2016, was tested in Greece, Latvia and Hungary, drawing criticism from human rights groups that questioned the technology’s ability to accurately assess people’s intentions and warned of its potential for discrimination.
The European Commission, which manages the REA, said the project aimed to test new ideas and technologies.
“iBorderCtrl was not expected to deliver ready-made technologies or products. Not all research projects lead to the development of technologies with real-world applications,” a Commission spokesman said in emailed comments.
Under iBorderCtrl, people planning to travel were asked to answer questions from a computer-animated border guard, via webcam. Their micro-gestures were analysed to see if they were lying, according to the European Commission website.
Then at the border, low-risk travellers went through, while higher-risk passengers were sent for further checks, it said.
Ella Jakubowska of digital rights group EDRi expressed concern over the effectiveness of AI in making such decisions.
“Human expressions are varied, diverse (especially for people with certain disabilities) and often culturally-contingent,” she said in emailed comments.
“(IBorderCtrl) is by no means the only dystopian technological experiment being funded by the EU,” she added.
IBorderCtrl acknowledged the ethical concerns on its website, adding that the project had helped initiate a public debate over the technology’s use.
“Novel technologies can have a significant impact on improving the efficacy, accuracy, speed, while reducing the cost of border control,” it said.
“However, they may imply risks for fundamental human rights, which need to be further researched and mitigated before a concept goes live.”
When Breyer asked the REA for the project’s results, ethics report and legal assessment in 2019, the REA said disclosure would undermine commercial interests of the iBorderCtrl consortium – a decision that Breyer is now challenging in court.
Breyer said he hopes the case will lead to greater transparency over the EU’s funding of “ethically questionable” technology.
While the technology was unlikely to be used at EU borders again, there was a risk it could make its way into the private sector, for example to screen insurance claims or job applicants, he said.
As EU governments increasingly turn to algorithms and AI to make important decisions about people’s lives, more transparency was needed, said Merel Koning, senior policy officer at Amnesty International.
“The (European Commission) must subject all research being conducted on AI systems to the full light of public scrutiny and only fund research that respects, protects and promotes human rights,” Koning told the Thomson Reuters Foundation.
The Commission said that all EU-funded research proposals undergo a specific evaluation that verifies their compliance with ethical rules and standards.
“The Commission always encourages projects to publicise as much as possible their results,” it said.