
The impact of this algorithm on the Serbian Roma community has been enormous. Ahmetović said his sister’s benefits have also been cut since the system was introduced, as have those of several of his neighbors. “Almost everyone living in Roma settlements in some cities has lost their benefits,” said Danilo Ćurčić, program coordinator at A11, a Serbian nonprofit that provides legal aid. A11 is working to help the Ahmetović family and more than 100 other Roma families regain their benefits.
But first, Ćurčić needed to know how the system worked. So far, he said, the government has rejected his requests to share the source code on intellectual property grounds, claiming it would violate the contracts it has with the companies that actually built the system. According to Ćurčić and a government contract, a Serbian company called Saga, which specializes in automation, was involved in building the social card system. Neither Saga nor Serbia’s Ministry of Social Affairs responded to WIRED’s request for comment.
As the government technology industry has grown, so has the number of companies selling fraud detection systems. Not all of them are local startups like Saga. Accenture, Ireland’s largest publicly traded company with more than half a million employees worldwide, has worked on fraud detection systems across Europe. In 2017, Accenture helped the city of Rotterdam in the Netherlands develop a system that calculates a risk score for each welfare recipient. A company document describing the original project, obtained by Lighthouse Reports and WIRED, mentions a machine learning system Accenture built that combed through data on thousands of people to determine how likely each of them was to commit benefits fraud. “The city could then rank benefits recipients in order of risk of illegitimacy, so that individuals at highest risk are investigated first,” the document said.
Officials in Rotterdam said Accenture’s system was in use until 2018, when a team from Rotterdam’s research and business intelligence unit took over development of the algorithm. When Lighthouse Reports and WIRED analyzed the 2021 version of Rotterdam’s fraud algorithm, it became clear that the system discriminates on the basis of race and gender. About 70 percent of the variables in the 2021 system (categories of information such as gender, spoken language, and mental health history that the algorithm uses to calculate a person’s likelihood of committing benefits fraud) appear to be the same as those in Accenture’s version.
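To make the mechanics of such a system concrete, here is a minimal, hypothetical sketch in Python using scikit-learn. It is not Rotterdam’s or Accenture’s code; the feature names, synthetic data, labels, and choice of model are assumptions for illustration only. It shows how demographic variables like those listed above can feed directly into a single risk score that is then used to rank recipients for investigation.

```python
# Hypothetical illustration only: a risk-scoring model of the kind described
# above, which turns per-person variables into a fraud-risk score and ranks
# recipients so that the highest-scoring are investigated first.
# All feature names, labels, and data below are invented, not Rotterdam's.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 1_000

# Invented stand-ins for the kinds of variables mentioned in the reporting
# (gender, spoken language, mental health history), encoded numerically.
recipients = pd.DataFrame({
    "gender": rng.integers(0, 2, n),
    "speaks_dutch": rng.integers(0, 2, n),        # language proxy
    "mental_health_history": rng.integers(0, 2, n),
    "months_on_benefits": rng.integers(1, 120, n),
})

# Synthetic labels standing in for outcomes of past fraud investigations.
past_fraud_finding = rng.integers(0, 2, n)

# Train a classifier on historical cases (here, the same synthetic data).
model = GradientBoostingClassifier()
model.fit(recipients, past_fraud_finding)

# Score every current recipient and rank them by estimated risk.
recipients["risk_score"] = model.predict_proba(recipients)[:, 1]
ranked = recipients.sort_values("risk_score", ascending=False)
print(ranked.head(10))  # the top of this list would be investigated first
```

The sketch also makes the bias concern visible: because attributes such as gender or language are inputs to the model, they directly shape who ends up at the top of the investigation list.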
Asked about the similarities, Accenture spokesman Chinedu Udezue said the company’s “start-up model” was transferred to the city in 2018, when the contract ended. Rotterdam stopped using the algorithm in 2021, after auditors found that the data it used risked producing biased results.
Consultants typically implement predictive analytics models and then walk away after six or eight months, says Sheils, Accenture’s European public services director. He says his team helps governments avoid what he calls an industry curse: “false positives,” Sheils’ term for life-ruining incidents in which an algorithm mistakenly flags an innocent person for investigation. “It seems like a very clinical way of looking at it, but technically, that’s all there is to it.” Sheils claims Accenture mitigates this by encouraging clients to use artificial intelligence or machine learning to improve, rather than replace, human decision-makers. “This means ensuring that citizens do not experience serious adverse consequences based solely on an AI decision.”
However, social workers who are asked to investigate people flagged by these systems before making a final decision aren’t necessarily exercising independent judgment, says Eva Blum-Dumontet, a technology policy consultant who researched algorithms in the UK welfare system for the campaign group Privacy International. “The person will still be influenced by the AI’s decision,” she says. “Having a person in the loop doesn’t mean that person has the time, the training, or the ability to challenge the decision.”