Algorithms and the politicisation of government automated decision-making

11th March 2020 / United Kingdom

Charlotte Threipland and Oscar Rickett: It can be hard in the era of big data and mass surveillance to remember that once upon a time, technology was meant to set us free.

Those twentieth-century titans of British intellectual life, Bertrand Russell and John Maynard Keynes, both saw a future in which it would reduce working hours to a minimum, freeing human beings up and allowing them to live richer, more fulfilling lives. Before that, Karl Marx proposed that in the end machinery would establish the conditions of mankind’s emancipation.

Visions of utopia have often included technology but so too have visions of dystopia. Science fiction is full of predictive systems of control, all-seeing eyes and malign robots. It is that portrait of technology that seems more familiar in the 21st century. Today, a small group of all-powerful companies in Silicon Valley are misusing our data and a dizzying array of software is deployed across public life, from the border to the police station to the jobcentre.

The UK is no exception. In keeping with the often unseen nature of artificial intelligence, automated decision-making happens behind closed doors. It is a world the government keeps hidden. But we know that machines are being used to make decisions in the public sphere. We know too that this is having a profound impact on democracy and the rule of law, not to mention on the people on the receiving end of what can be biased or poor-quality decisions.

Algorithms are being deployed to administer crucial public services at every level of government. Research shows that at least 53 UK local authorities are using algorithms for predictive analytics. The real number is likely to be higher as the research was based on requests made under the Freedom of Information Act, more than 100 of which weren’t responded to. We also know that about a quarter of police authorities in the UK are now using algorithms for prediction, risk assessment and assistance in decision-making.

The Department for Work and Pensions (DWP), the largest UK government department, is developing “welfare robots” – artificial intelligence – to help deliver welfare and pension payments. These systems are being put to use as part of the government’s rollout of Universal Credit, with claimants already reporting that their benefits had been incorrectly withheld following errors made by the technology and that civil servants seemed unwilling or unable to contradict their machine overlords.

We know, too, that the Home Office is using algorithms as part of its settled status scheme and in sorting through visa applications. A July 2017 government report by David Bolt, Independent Chief Inspector of Borders and Immigration, disclosed that since 2015, UK Visas and Immigration “has been developing and rolling out a ‘streaming tool’ that assesses the perceived risks attached to an application” and which “streams applications ‘Green’ (low risk), ‘Amber’ (medium risk) or ‘Red’ (high risk)”.
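
The Home Office has not published how the streaming tool actually works, so any reconstruction is guesswork. Purely to illustrate the kind of mechanism being described – a handful of generalised indicators scored and bucketed into three queues – here is a minimal, hypothetical sketch in Python. Every field name, weight and threshold below is invented for illustration and is not drawn from the real tool.

```python
# Hypothetical sketch only: the Home Office has not disclosed how its real
# streaming tool works. All field names, weights and thresholds are invented.

RED, AMBER, GREEN = "Red", "Amber", "Green"

# An invented lookup attaching "risk" to nationality bands -- the kind of
# generalised, detached indicator the inspector warns about below.
NATIONALITY_RISK = {"listed_high_risk": 50, "default": 10}

def stream_application(app: dict) -> str:
    """Bucket a visa application into Green, Amber or Red."""
    score = NATIONALITY_RISK.get(app.get("nationality_band"), NATIONALITY_RISK["default"])
    if not app.get("previous_travel_history"):
        score += 15
    if not app.get("evidence_of_funds"):
        score += 20
    if score >= 60:
        return RED
    if score >= 30:
        return AMBER
    return GREEN

# Two otherwise identical applications diverge purely on the nationality band.
base = {"previous_travel_history": True, "evidence_of_funds": True}
print(stream_application({**base, "nationality_band": "default"}))           # Green
print(stream_application({**base, "nationality_band": "listed_high_risk"}))  # Amber
```

Even in this toy version, the nationality weighting alone is enough to move an applicant from one queue to another before anyone has looked at the merits of the case – which is precisely the concern raised below.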

Bolt told an inquiry by the All-Party Parliamentary Group (APPG) for Africa that he was concerned that overreliance on the algorithmic “streaming tool” could mean that decisions were not being made on the merits of the individual case but on a set of generalised and detached indicators. The Labour MP Chi Onwurah, the chair of the parliamentary group, was more blunt: “The Home Office is broken. We know that it is unable to fulfil its basic visa-processing duties in a timely or consistent manner. If we add to that a powerful and unregulated new technology, Brexit and bias, we have a recipe for disaster.”

Speedy boarding for white people

Although the Home Office has refused to disclose the full details of the streaming algorithm, it has admitted that the risk level applied to a visa application will depend partly on the applicant’s nationality. This categorising of nations was already in use, openJustice understands, before it was automated.

The Joint Council for the Welfare of Immigrants, supported by the not-for-profit organisation Foxglove, has begun a ground-breaking legal case to establish how this Home Office system works.

“What we call it, in our slightly glib way, is ‘speedy boarding for white people’,” says Cori Crider, director of Foxglove. The algorithm they believe is allocating these decisions is, she says, “shaking out in a patently biased and frankly racist way”. Across a number of areas of government, citizens are being deprived of the chance to take up their case with a human being. More than that, existing inequalities can become baked in, with algorithms or artificial intelligence taking the worst human biases and enshrining them in code.

The obvious advantage of using machines instead of humans, the government claims, is the speed at which they can process information. The DWP cleared a backlog of 30,000 pension claims within two weeks of deploying its welfare robots. Increased efficiency means a faster service and financial savings which (in the right hands) could be reinvested for the benefit of service users, for example in a more generous welfare state.

“Lots of the concern about this is basically nostalgia, but I can’t think why people are nostalgic for the benefits system, every incarnation has been riddled with problems”, a former DWP official told openJustice.

It is clear that efficiency is the backdrop to the deployment of such methods, but we are unlikely to see the benefit. Since the austerity agenda began under David Cameron’s government in 2010, departments have been scrambling for solutions to their squeezed budgets.

They are being told to do more with less in a way that seems fresh and contemporary. This is one reason behind the rise of software like Experian’s Mosaic, “a cross-channel consumer classification system which segments the population into 15 groups and 66 types that helps you to understand an individual’s likely customer behaviour”, which is now being used across government.

This software was designed for brands looking to target customers but now, “Mosaic is used everywhere”, says Silkie Carlo, director of the non-profit civil liberties organisation Big Brother Watch. “By local authorities, by political parties – it was even used by the fire service. The sales pitches are, I think, very effective at a local government level because using something like Mosaic can be branded as innovative and modern”.

We are all on Mosaic, broken down and packaged up into one of the software’s 66 types. Those types are not uncontroversial. The first, or A01, is “World Class Wealth”. These gilded figures are, according to the Experian tool, “global high flyers and families of privilege living luxurious lifestyles in London’s most exclusive boroughs”.

Further down the consumer food chain, we find distinctly loaded language relating to race and class. Type K45 is named “Crowded Kaleidoscope”, referring to “multi-cultural households with children renting social flats in over-crowded conditions”. Type N59 is simply “Asian Heritage”.

openJustice understands that local councils and local police forces have – with the help of Mosaic – used data points relating to income and racial background to predict everything from council tax avoidance to the chance of an offender re-offending to, extraordinarily, the likelihood that a family will include a member who is sexually abusive.

Policing algorithms create a feedback loop in which communities subjected to the most intrusive policing continue to be targeted because the algorithm reinforces biases relating to class and race. This has played out in predictive policing methods in the US. Statistics show that black and white people there sell and use drugs at similar rates, but black people are 2.7 times more likely to be arrested for drug-related offences. Using a tool to predict future crimes based on past policing actions such as arrests creates a feedback loop and an even more biased police force.
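
To make the feedback loop concrete, here is a small, hypothetical simulation in Python. The allocation rule and every number in it are invented; the point is the dynamic. Underlying offending is set to be identical in two areas, yet because recorded arrests track patrol presence, and patrols are then moved towards wherever arrests were recorded, the initial disparity grows round after round.

```python
# Hypothetical simulation of a predictive-policing feedback loop.
# True offending is identical in areas A and B; recorded arrests differ only
# because arrests scale with how many patrols are present to record them.
# The allocation rule and all numbers are invented for illustration.

def simulate(rounds: int = 6, shift: float = 5.0) -> None:
    patrols = {"A": 60.0, "B": 40.0}   # an already unequal starting deployment
    offending_rate = 1.0               # the same in both areas
    for r in range(1, rounds + 1):
        # Recorded arrests reflect patrol presence, not underlying offending.
        arrests = {area: p * offending_rate for area, p in patrols.items()}
        # "Predictive" rule: move patrols towards the area with more recorded arrests.
        hot = max(arrests, key=arrests.get)
        cold = "B" if hot == "A" else "A"
        moved = min(shift, patrols[cold])
        patrols[hot] += moved
        patrols[cold] -= moved
        print(f"round {r}: patrols A={patrols['A']:.0f} B={patrols['B']:.0f}, "
              f"recorded arrests A={arrests['A']:.0f} B={arrests['B']:.0f}")

simulate()
```

Under these invented dynamics the gap widens every round even though nothing about actual offending has changed – the same self-reinforcing pattern the US arrest statistics point to.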

Nigerian? Syrian? Computer says ‘no’

All of this is done in the name of saving time and money. The Home Office has had £1.9 billion (15%) cut from its day-to-day budget since 2010-11. In this environment, employees are under pressure to meet stringent targets. According to Onwurah, investigations by the APPG for Africa found “this impacted on the quality and fairness of decision-making”.

The all-party group’s investigations also found that African applicants are refused UK visas at twice the rate of those from any other part of the world.

Speaking to openJustice, the Nigerian musician Villy Odili said that his band Villy and the Xtreme Volumes was asked to headline a stage at Glastonbury Festival. “I was excited,” he remembers. “I had always wanted to play in Britain and to be asked to play at such a big festival was even better.” He soon found, though, that his application to get a visa was time-consuming and expensive. “They ask for so many documents. It takes up all your focus.”

For one thing, Odili was asked for evidence that he owned property. Luckily enough, he had in fact just bought somewhere, but the paperwork had not yet been completed, so he went to great lengths to obtain it in time. He was asked about his family and told that if he had a wife and children he needed to provide evidence that this was, in fact, the case. He had to send in three months’ worth of bank statements.

“I thought surely after all this, they will let me in,” he says. But they didn’t. His dream was smashed. He was, he says, rejected simply because he was “Nigerian and a musician”. If there was a human being involved in the decision, that human being was simply following the kind of checklist that we now know the Home Office’s algorithms run on.

What is a problem for African applicants is also one for Syrians. Abdullah Karam is from Hama, a city in the west of Syria that has been bound up with struggles against the authoritarianism of successive Assad governments.

Karam’s eighteenth birthday came in 2014, three years after the beginning of the Syrian uprising, with his country in the midst of a civil war that continues to this day. “Suddenly, we turned eighteen and we were expected to join the military and fight against our own people,” he tells openJustice. “My parents made the decision that I should flee the country. I wasn’t raised to kill someone or be killed.”

“I will just say that it was hard,” Karam says of his journey out of Syria, through Turkey, where he lived for a time, and into Europe. Arriving in Austria, he felt accepted and began working for a computer games developer. It was here that he helped create Path Out, a game about a young man who has to leave Syria because of the war. He was also, of course, the game’s protagonist.

Path Out won a number of awards and in 2018 it was a finalist in the Indie Prize, with the ceremony in London. His application for a visa – which he had to pay for twice, following a computer error – was processed in a Home Office hub in the Polish capital of Warsaw, and it is highly likely that the automated system based on the long-established streaming method was put to use. “I applied with all the papers that were needed. They needed fingerprints, a photo and a video saying why I wanted to visit Britain,” says Karam.

His application was denied, with the rejection letter he was sent revealing some curious details. The young, childless Syrian was told that he had two children and that he should have applied for visas for them as well. He was told it was assumed he would use the visa to then relocate to Britain.

His boss at the time described the rest of the letter as “a weird mix of bureaucratic bullying and a sense of overestimating the attractiveness of their own nation on behalf of the British authorities”.

“I felt like I had a stamp on my head”, says Karam. “Living your life as a Syrian you know you are going to get treated horribly. I don’t think anyone even looked at my application. I think whatever or whoever processed it just went: ‘Another Syrian? Rejected!’”. He points out that “if Europeans were treated the way Syrians are treated they would rebel”.

Having dreamt of going to London, Karam stayed at home in Austria. A couple of games sites picked up his story and then, with its public relations to think about, the Home Office sent him an email apologising for the whole affair.

Neither Karam nor Odili knows for sure what role automated decision-making played in their visa rejections, but in both cases, computerised errors played a part and a decision seemed to be made primarily based on nationality.

This level of service from the Home Office doesn’t come cheap. A standard visitor visa application costs £95 and the department even monetises its communications, charging £5.48 per email from an overseas email address and £1.37 per minute of a phone call.

To mitigate the obvious risk of placing too much trust in machines, the EU’s General Data Protection Regulation – currently the only regulation on this area in force in the UK – provides that algorithmic decisions with significant effects must still be checked by people.

But as Cori Crider told us, “Imagine you are a Home Office worker in Croydon. Let’s say you are working the green queue, you have a massive load of targets to get through every single day. You are not going to be doing a really sharp meaningful review of those applications in the green queue… The idea that it is not going to make a material difference to the outcome which of these queues you get in, it just doesn’t wash.”

Crider is referring to the phenomenon of ‘automation bias’, in which people tend to defer to decisions made by machines. When we use spell-checking programmes, for example, many of us tend to assume that a suggestion will be correct or that, where no errors have been highlighted, there are none.

The challenge for civil society

This phenomenon raises a larger question about the extent to which humans place trust in the apparent neutrality of algorithms. But algorithms are only as neutral as the humans behind them.

“Software engineers… tend to come from a very narrow demographic – few are women, from ethnic minorities or working class. The design will necessarily reflect the limits of their backgrounds, unless a significant effort is made for it not to,” suggests Chi Onwurah.

In the UK, there are fears that an algorithm currently in use by the Home Office to categorise prisoners will result in a feedback loop similar to those found in policing. But because the government has refused to provide enough information about the data points the algorithm uses, no meaningful challenge can yet be made against it. This raises what is perhaps the most significant problem with automated decision-making – a lack of transparency.

The government has not even disclosed exactly where it is using automated decision-making tools, let alone what data points the tools are using in reaching their outcome. This is in stark contrast to when a person makes a decision. The grounds on which they are obliged to make that decision (for example, statutory requirements and guidance) are known. This makes the decision far easier to scrutinise.

Two years ago the House of Commons Science and Technology Committee recommended that “government should produce, publish, and maintain a list of where algorithms with significant impacts are being used within Central Government, along with projects underway or planned for public service algorithms”. This has not happened.

Even when we do have details about an algorithm and its data points, in some cases it may still be incomprehensible. The information can sit inside a ‘black box’ so complex that it cannot be meaningfully comprehended by lawyers, judges or the public at large. In a world in which trade secrets are jealously guarded, there can also be an element of intentional opacity.

Without this basic information about how a public decision is being made, our rights to scrutinise and challenge decisions are completely undermined. With that, a mockery is made of the rule of law.

Hovering over all of this is a spectre of disempowerment. “Through lack of understanding and access to relevant information, the power of the public to criticise and control the systems which are put in place to undertake vital activities in both the private and the public sphere is eroded. Democratic control of law and the public sphere is being lost,” argues Supreme Court Justice Lord Sales.

Contrary to the appearance of tech neutrality which the government promotes in its public messaging, all of this shows that the use of algorithms is highly political. Not only do their inner workings reflect the political biases of their human creators, but by handing power over to technology we are giving it to an elite of unelected software engineers.

Organisations like the Joint Council for the Welfare of Immigrants, which, despite the evidential hurdles, are bringing legal challenges against the use of algorithms, are performing a vital role. There are few legal challenges of this kind, but they are on the rise. In a groundbreaking ruling, a Dutch court recently found that an automated system used to predict the likelihood of an individual committing benefit or tax fraud, or violating labour laws, was unlawful. The court found that it disproportionately targeted poorer citizens.

If, in the words of Philip Alston, the UN’s Special Rapporteur on extreme poverty and human rights, we want to avert the “grave risk of stumbling zombie-like into a digital welfare dystopia”, civil society must urgently respond to these challenges. Our democracy and rule of law depend on it.

 

Charlotte Threipland is a lawyer, researcher and campaigner, and the editor of openJustice. Oscar Rickett is a journalist who has written for the Guardian, Vice and Middle East Eye.
