A Closer Look at Experian Big Data and Artificial Intelligence in Durham Police

13th April 2018 / United Kingdom

By Big Brother Watch: The recent revelations about the improper use of big data have brought into clear focus the extent to which personal data can be aggregated and used to manipulate individuals through severe abuses of privacy.

Big Brother Watch has been investigating the controversial use of big data in policing. Our investigation into the use of commercial consumer behaviour data in the public sector has uncovered a deeply concerning case of profiling data informing AI-assisted custody decisions.

(Durham Police’s Register of Contracts)

 

At the moment, the police’s use of automated decision making is ‘advisory’. However, the new Data Protection Bill currently going through Parliament opens the door to purely automated decisions being made – with significant effects on people’s rights – in policing and across the public and private sectors. We are calling on Parliament to urgently amend the Bill to ensure that, in a world of advancing AI, human rights are always protected by human decisions.

 

Low, Moderate or High Risk Suspect?

Durham Police has developed a machine learning algorithm called the Harm Assessment Risk Tool (HART) to evaluate the recidivism risk of offenders. The algorithm scores offenders to place them in one of three categories – Low, Moderate or High Risk – with those deemed to be at moderate risk of reoffending being offered the chance to enter a rehabilitation programme called Checkpoint as an “alternative to prosecution.”[1]
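To make the mechanics concrete, here is a minimal sketch of a three-band risk classifier of the kind described above, assuming a random forest (which is how HART has been publicly described). The library, data and parameters below are purely illustrative and are not Durham Constabulary’s actual model.

```python
# Minimal sketch of a three-band recidivism risk classifier, assuming a
# random forest (as HART has been publicly described). All data, feature
# values and parameters here are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

RISK_BANDS = ["Low", "Moderate", "High"]

# Hypothetical historical custody records: 34 predictors per offender
# (e.g. counts of prior offences, age, gender code, coded postcode bands).
rng = np.random.default_rng(0)
X_train = rng.random((500, 34))
y_train = rng.integers(0, 3, 500)   # observed outcome band: 0=Low, 1=Moderate, 2=High

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

def advise(custody_record):
    """Return an advisory risk band and the action it triggers."""
    band = RISK_BANDS[int(model.predict([custody_record])[0])]
    # Only 'Moderate' risk offenders are offered the Checkpoint programme
    # as an alternative to prosecution.
    action = "offer Checkpoint" if band == "Moderate" else "no Checkpoint offer"
    return band, action

print(advise(rng.random(34)))
```

Whatever the underlying model class, the point is that the band it outputs feeds directly into whether a person is offered a diversion from prosecution.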

 

Perpetuating bias

The AI tool uses 34 data categories, including the offender’s criminal history, combined with their age, gender and two types of residential postcode. The use of postcode data is problematic in predictive software of this kind, as it risks perpetuating bias against areas marked by community deprivation. This danger was identified in a draft article titled “Algorithmic risk assessment policing models: Lessons from the Durham HART model and ‘Experimental’ proportionality”, which notes:

“one could argue that this variable risks a kind of feedback loop that may perpetuate or amplify existing patterns of offending. If the police respond to forecasts by targeting their efforts on the highest-risk postcode areas, then more people from these areas will come to police attention and be arrested than those living in lower-risk, untargeted neighbourhoods. These arrests then become outcomes that are used to generate later iterations of the same model, leading to an ever-deepening cycle of increased police attention.”[2]
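A toy simulation can make this loop concrete. Nothing below reflects Durham’s actual model or data: the two areas, patrol counts and offending rates are invented purely to show how recorded arrests feeding back into a refreshed model can inflate one area’s risk score even when true offending rates are identical.

```python
# Toy simulation of the feedback loop described in the quotation above:
# the area the model rates as riskier gets more police attention, which
# produces more recorded arrests, which in turn raises that area's risk
# score when the model is refreshed. All figures are invented.
import random

random.seed(0)
TRUE_RATE = 0.05                     # true offending rate, identical everywhere
risk_score = {"A": 0.06, "B": 0.05}  # the model starts with a marginal difference

for iteration in range(5):
    targeted = max(risk_score, key=risk_score.get)   # police target the 'riskier' area
    for area in risk_score:
        patrols = 300 if area == targeted else 100   # extra attention for the target
        arrests = sum(random.random() < TRUE_RATE for _ in range(patrols))
        # The refreshed model learns from recorded arrests, not true offending,
        # so the targeted area's score is inflated by its extra patrols.
        risk_score[area] = 0.5 * risk_score[area] + 0.5 * (arrests / 100)
    print(iteration, {a: round(s, 3) for a, s in risk_score.items()})

# Despite identical true offending rates, area A's score tends to climb with
# each refresh, so it keeps being targeted: the 'ever-deepening cycle'.
```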

 

(Experian Mosaic’s “Asian Heritage” category)

 

The paper then goes on to state that HART is now being “refreshed with more recent data, and with an aim of removing one of the two postcode predictors.”[3] The article does not specify which of the two postcode predictors is set to be removed. However, our investigation has revealed that the source of one of them is data from the geodemographic segmentation tool Mosaic, supplied by Experian, a company best known for its credit referencing services.

 

Mosaic, marketing and policing

Mosaic was developed by Richard Webber, who worked in the 1970s on ways of “better understanding patterns of urban deprivation”[4] before leaving academia to pursue his research in marketing. He ran the micromarketing division of CACI, where he developed ACORN – the first geodemographic classification system in the UK – and later moved to Experian, where he developed Mosaic, released in 1986.

 

 

A recent book by Richard Webber and Roger Burrows, ‘The Predictive Postcode: The Geodemographic Classification of British Society’, explains the utility of such precise segmentation by arguing that traditional measures of social behaviour – “criteria such as class, race and ethnicity, gender and sexuality”[5] – do not possess the homogeneity they are assumed to have, and are best operationalised by matching them to location, since “members of each category of these criteria behave very differently according to precisely where they live.”[6]

 

(Experian’s Mosaic presentation, ‘Under The Bonnet’.)

 

Systems like Mosaic therefore not only classify groups according to highly personal categories, but do so by overlaying this information onto specific locations at an unprecedented level of detail, uncovering granular data about consumer behaviour. Mosaic offers an astoundingly detailed profile of neighbourhoods, households and now even individuals. It is constructed from an expansive collection of data sources, including “GCSE results, Gas & Electricity consumption, DWP child benefits, tax credits, income support by Census LSOA areas”,[7] to name just a few.

 

(Experian Mosaic’s name stereotypes)

 

Law, ethics and rights

While Experian holds that all the data it gathers is privacy compliant, there are serious questions about whether the aggregation of such enormous quantities of personal data is ethical and truly respects the right to privacy. As we confront eye-opening revelations about the degree to which abuses of data can affect individuals, it is imperative that we develop ways of conceptualising data privacy that go beyond the particulars of how individual pieces of data are collected. We need to scrutinise the ways in which data is used and contextualised in order to make a fair assessment of whether certain practices are truly “privacy compliant” and provide services that guarantee the fair and equal treatment of individuals.

It is this understanding that we urgently need to apply to the use of Mosaic in predictive policing, as well as to its other public sector uses. The fact that Mosaic – essentially a marketing tool capable of producing a very intimate view of households and individuals, down to their online activity – is informing custody decisions risks introducing serious prejudice into our justice system.

 

A pin-sharp picture?

Beyond the obvious danger of focusing disproportionate effort on high-risk areas, there are further questions to be asked about the use of socio-demographic data to inform policing activities.

 

 

The Mosaic Public Sector brochure declares that it “gives you a pin-sharp picture of the people you need to reach…[using]…over 850 million pieces of information across 450 different data points”.[8] Even a brief look at Mosaic’s segmentation portal[9] illustrates this clearly: each group has a dedicated section on its online activities, detailing highly specific information not only about the websites visited, but also about how frequently various communication technologies and social media platforms are used.

 

 

Black-box bias

What needs to be considered, therefore, is whether, even if such data is collected through ‘lawful’ means, it is right to use it to make inferences about individuals and their ‘risk’ to society. This is another question that needs careful consideration when constructing artificial intelligence tools to aid decision making. The black-box nature of such tools, combined with the sheer volume of data going into their construction, makes it very likely that the algorithm will draw associations and reach conclusions that are simply not understandable by the people whose decision making it is meant to support.

During the Science and Technology Committee’s inquiry into algorithms in decision making in December 2017,[10] in a discussion about whether AI tools should be designed to model the human mind, Durham Constabulary’s Sheena Urwin defended the obscure nature of the algorithm by alluding to the black-box nature of the human mind, arguing that there is no way of knowing what goes on in the mind of the police officer in charge of a custody decision. Two things are important here. Firstly, it is true that human minds are just as changeable and open to bias, which is precisely why AI tools need to be carefully examined to ensure they do not replicate that bias. Secondly, humans are accountable and have agency, which is what distinguishes them from algorithms: police officers are expected to explain their own decision making and carry the responsibility of being aware of the factors influencing it. The aim of any regulation of automated decision making must therefore be to create the conditions in which algorithms can be held to the same level of scrutiny that human minds are subject to, rather than to model algorithms on the unpredictability of human minds.

 

Over-estimating ‘risk’

Another issue within HART is that it is designed to produce as few false negatives as possible; that is, to avoid wrongly classifying a suspect as low risk when in fact they are high risk. It would indeed be a greatly detrimental outcome if the algorithm failed to identify an individual who went on to commit serious crimes. What this also means, however, is that to compensate for this possibility the algorithm labels offenders as high risk quite liberally. This is reflected in the levels of agreement between the police and the algorithm reported in Urwin’s thesis. Agreement is generally low (at 56%), and the gap is most apparent in the high risk category, where police officers tend to be more cautious: here, the police agreed with the algorithm’s high risk assessment in only 10% of cases.[11]

Therefore, as confirmed by Urwin and her colleagues, the

 

“HART model represents a real example of a value-judgement built into an algorithm, so requiring a ‘trade-off’ to be made between false positives and false negatives in order to avoid errors that are thought to be the most dangerous: in this context, offenders who are predicted to be relatively safe, but then go on to commit a serious violent offence (high risk false negatives). As a consequence, high risk false positives have been deliberately made more likely to result.”[12]
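In code, this kind of value judgement typically appears as an asymmetric cost. The sketch below shows one common way of expressing it, via an illustrative class weighting and a lowered decision threshold in a generic random forest; the weights and threshold are invented and are not Durham’s actual settings.

```python
# Sketch of how the trade-off described above can be built into a model:
# errors on the 'High' class are weighted far more heavily than others, so
# the classifier errs towards over-predicting 'High' (more high-risk false
# positives, fewer high-risk false negatives). All values are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 34))
y = rng.integers(0, 3, 500)              # 0=Low, 1=Moderate, 2=High

cautious_model = RandomForestClassifier(
    n_estimators=100,
    class_weight={0: 1, 1: 1, 2: 10},    # illustrative: a missed 'High' costs 10x
    random_state=0,
)
cautious_model.fit(X, y)

# An equivalent, simpler expression of the same judgement is a lowered
# decision threshold: call someone 'High' whenever the predicted probability
# of 'High' exceeds, say, 0.2, rather than requiring it to be the top class.
probs = cautious_model.predict_proba(X[:5])
bands = np.where(probs[:, 2] >= 0.2, 2, probs[:, :2].argmax(axis=1))
print(bands)
```

Either way, the asymmetry is a deliberate design choice built into the tool rather than a property of the data – which is exactly the point the quoted passage makes.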

 

The obvious consequence of this trade-off is a strong tendency in the model to wrongly assess suspects as high risk, which is one of the most significant concerns about this type of predictive policing. The counter-argument is that HART is only a decision-making “aid”. However, given that the algorithm has been designed to catch the cases that the police might miss, or might be reluctant to deem high risk, is it really feasible to expect that police officers would consistently make judgments against the AI result? To function as a decision-making aid, it needs to alert the police to potential offenders they might not otherwise have considered, so it is questionable whether police would comfortably ignore its suggestions.

 

 

A postcode lottery

It is clear that HART is still in an experimental phase. The programme has not been running long enough to assess whether the suspects predicted to be low, moderate or high risk will prove to be so. Nevertheless, this does not make the issue any less urgent or serious. Durham’s Harm Assessment Risk Tool is the first predictive policing tool to be used in decision making about individuals, and the precedent it sets is therefore extremely significant.

Going forward, it is our obligation to ensure that the accident of a person’s birth into a certain neighbourhood, or a person’s choices about how they conduct their private life, are not leveraged against them, or used to subject them to life-altering decisions they cannot challenge. Worse still, such AI decisions may be impossible to challenge at all, because it may not even be possible to explain how their conclusions were reached.

Whilst Durham Constabulary’s efforts to move offenders away from a life of crime by offering alternatives to prosecution are laudable, the trade-off should not be yet another immense invasion of privacy, in the form of our private lives being micro-analysed to rate our behaviour as citizens.

 

 
