Artificial Intelligence systems used by local councils – targeting and punishing the vulnerable
By TruePublica: A UN human rights expert has expressed serious concerns about the emergence of the “digital welfare state”, saying that all too often the real motives behind such artificial intelligence programs are to slash welfare spending, set up intrusive government surveillance systems and generate profits for private corporate interests, while effectively targeting and punishing the vulnerable.
The Special Rapporteur on extreme poverty and human rights, Philip Alston, says in a report presented to the UN General Assembly that the world is stumbling zombie-like into a digital welfare dystopia.
The first paragraph of that report reads like something out of a futuristic George Orwell novel.
“The digital welfare state is either already a reality or is emerging in many countries across the globe. In these states, systems of social protection and assistance are increasingly driven by digital data and technologies that are used to automate, predict, identify, surveil, detect, target and punish. This report acknowledges the irresistible attractions for governments to move in this direction but warns that there is a grave risk of stumbling zombie-like into a digital welfare dystopia. It argues that Big Tech operates in an almost human rights free-zone – and that this is especially problematic when the private sector is taking a leading role in designing, constructing, and even operating significant parts of the digital welfare state. The report recommends that instead of obsessing about fraud, cost savings, sanctions, and market-driven definitions of efficiency, the starting point should be on how welfare budgets could be transformed through technology to ensure a higher standard of living for the vulnerable and disadvantaged.”
But don’t think that Alston is referring to places like China or other authoritarian states. To see this statement come alive, one need look no further than our own doorstep in the UK.
Councils failing the vulnerable with AI
One in three councils is now using computer algorithms to help make decisions about benefit claims and other welfare issues, despite emerging evidence that some of the systems are completely unreliable. It is quite extraordinary how far Britain is going in applying this type of technology to such a complicated and sensitive environment.
Companies including the US credit-rating businesses Experian and TransUnion, as well as the outsourcing specialist Capita and Palantir, a data-mining firm co-founded by the Trump-supporting billionaire Peter Thiel, are selling machine-learning packages to local authorities that are under pressure to save money. Palantir was also implicated in the Cambridge Analytica/Facebook scandal surrounding the 2016 EU referendum.
A recent investigation established that about one-third of the UK’s 408 councils have now invested in these software contracts, some of which run into millions of pounds.
Obvious concerns have been raised about privacy and data security and, of course, about the ability of council officials to understand how some of these systems even work or how particular decisions have been reached.
A spokesperson for the Local Government Association, which represents councils, said: “Good use of data can be hugely beneficial in helping councils make services more targeted and effective … But it is important to note that data is only ever used to inform decisions and not make decisions for councils.”
Gwilym Morris, a management consultant who works with IT providers to the public sector, said the complexity of these systems meant the leadership of local authorities “don’t really understand what is going on.” It is fair to ask, then, how private and sensitive data is being gathered, used and interpreted.
Just as the UN rapporteur said, many councils have ended up making the wrong decisions and failing to pay valid claims for financial assistance, leading to serious delays. A report commissioned by councils found that in many cases the error rate, or the reasons for errors, “could not be established”.
These systems are being used to make decisions not just about welfare payments but also about children expelled from school, domestic violence and other highly sensitive issues where only human judgement should apply. Some councils are even considering lie-detection systems as a further way to digitise decision-making.
David Spiegelhalter, a former president of the Royal Statistical Society, said: “There is too much hype and mystery surrounding machine learning and algorithms. I feel that councils should demand trustworthy and transparent explanations of how any system works, why it comes to specific conclusions about individuals, whether it is fair, and whether it will actually help in practice.”
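Spiegelhalter’s distinction between a score and an explanation is easy to make concrete. The sketch below is entirely hypothetical: the feature names, weights and logistic form are invented for illustration and are not drawn from any real council system, whose internals remain proprietary. It shows the minimum a transparent scoring tool should be able to do: report not just a risk score for a claim, but which factors drove it.

```python
# Hypothetical sketch of a transparent, per-decision explanation for a
# simple welfare-claim risk score. All feature names and weights are
# invented for illustration; real vendor systems are not public.
import math

# Invented model: a logistic scoring rule over a handful of claim features.
WEIGHTS = {
    "months_at_address": -0.02,   # longer tenure slightly lowers the score
    "previous_claims":    0.40,   # each prior claim raises it
    "missing_documents":  0.90,   # incomplete paperwork raises it sharply
}
BIAS = -1.5

def risk_score(claim):
    """Return a probability-like score between 0 and 1."""
    z = BIAS + sum(WEIGHTS[name] * claim[name] for name in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def explain(claim):
    """Per-feature contributions to the score, largest first: the kind of
    'why did it flag me?' answer a transparent system should produce."""
    contributions = [(name, WEIGHTS[name] * claim[name]) for name in WEIGHTS]
    return sorted(contributions, key=lambda item: abs(item[1]), reverse=True)

claim = {"months_at_address": 6, "previous_claims": 2, "missing_documents": 1}
print(f"score = {risk_score(claim):.2f}")
for name, contribution in explain(claim):
    print(f"  {name:>18}: {contribution:+.2f}")
```

Even a toy model like this can answer the question “why was this claim flagged?” by listing which factors pushed the score up or down. The black-box packages councils are reported to be buying often cannot, and that is precisely the transparency gap Spiegelhalter describes.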
The human rights community has thus far done a very poor job of persuading industry, government, or seemingly society at large, of the fact that a technologically-driven future will be disastrous if it is not guided by respect for human rights and grounded in hard law.
There is no shortage of analyses warning of the dangers to human rights posed by various manifestations of digital technology, and especially artificial intelligence. “But none has adequately captured the full array of threats represented by the emergence of the digital welfare state,” the UN expert said.