Lifting The Lid On The Met’s Notting Hill Facial Recognition Operation

4th September 2017 / United Kingdom

Silkie Carlo from Liberty goes behind the scenes of the Met’s worryingly inaccurate and painfully crude facial recognition operation where the rules are devised on the spot.  

 

The spectre of Grenfell Tower loomed large at Notting Hill Carnival. It was a palpable haunting – a community of people who feel ignored.

But something else was going on amid the costumes, music and searing heat at this year’s event. The Metropolitan Police were trialling real-time facial recognition on carnival-goers.

Civil liberties and race equality groups including Liberty, the Institute of Race Relations and Black Lives Matter – none of whom were consulted or forewarned – had written to the Met, urging them to rethink. We were ignored.

We’d asked for a meeting with the Commissioner to discuss the technology and the force’s plans. Again, we were ignored.

We heard that, following bad press, the Met was engaging with a few politicians about facial recognition, and some had been invited to observe the police operation.

So, ignored by the Commissioner, we asked the Met’s project leads on facial recognition if we, too, could see the technology in action.

 

We were reassured when they agreed – but not by what we witnessed. This ‘trial’ showed all the hallmarks of the very basic pitfalls technologists have warned of for years – policing led by low-quality data and low-quality algorithms.

 

Behind the camera

On Monday, in the golden heat and ecstatic cacophony of the whistles and drums, we were led to a banal van parked by a tree, comically concealed by corrugated iron. As we approached, we saw two cameras protruding towards a main route into the Carnival.

Meeting the cameras eye to lens, you feel an intrusion quite unlike the ubiquitous CCTV we are usually subjected to – you know you are being measured, assessed, identified, invaded.

The project leads explained they had constructed a “bespoke dataset” for the weekend – more than 500 images of people they were concerned might attend. Some the police were seeking to arrest; others they were looking to apprehend if they had been banned from attending.

I asked what kind of crimes those on the ‘arrest’ watch list could be wanted for. We weren’t given details, but were told it could be anything from sexual assault to non-payment of fines.

I watched the facial recognition screen in action for less than 10 minutes. In that short time, I witnessed the algorithm produce two ‘matches’ – both immediately obvious, to the human eye, as false positives. In fact both alerts had matched innocent women with wanted men.

The software couldn’t even differentiate sex. I was astonished.

The officers dismissed the alerts without a hint of self-reflection – they said they make their own assessment before stopping or arresting anyone the system identifies anyway.

I wondered how much police time and taxpayers’ money this complex trial and the monitoring of false positives were taking – and for what benefit.

 

I asked how many false positives had been produced on Sunday – around 35, they told me. At least five of these they had pursued with interventions, stopping innocent members of the public who had, they discovered, been falsely identified.

 

There was no concern about this from the project leaders.

 

One more for the collection

There was a palpable dark absurdity as we watched the screen, aghast, red boxes bobbing over the faces of a Hare Krishna troupe relentlessly spreading peace and love as people wearing Caribbean flags danced to tambourines.

“It is a top-of-the-range algorithm,” the project lead told us, as the false positive match of a young woman with a balding man hovered in the corner of the screen.

The falsely matched images will be kept for “around three months, probably”, the project lead told us (you get the sense the rules are made up as they go along), in case the woman should exercise her right to access her photo.

How that could be possible, given the “strategic” concealment of the cameras and the fact she was not informed of the false match or that her photo was taken, is baffling.

 

Future tech, ancient data

The project leads told me, quite jubilantly, that the algorithm had produced one correct facial recognition match across the four days of its operation. An individual was identified entering Carnival who had an arrest warrant for a rioting offence.

They arrested that person – but their data was stale. Between the construction of their watch list and Carnival, that individual had already been arrested – and was no longer wanted. So they were sent on their way, after an unnecessary but seriously hi-tech arrest.

The project leads viewed this as a resounding success – not a failure.

Carson Arthur from StopWatch, who joined me on the observation, asked the officers: “What would success in this trial look like, to you?”

The project leader responded: “We have had success this weekend – we had a positive match!”

 

It didn’t seem to register – or maybe matter – that the arrest was erroneous, that it had come at the price of the biometric surveillance of two million carnival-goers and considerable police resource, or that innocent people had been wrongly identified.

 

Then again, none of our concerns about facial recognition have registered with the police so far. The lack of a legal basis. The lack of parliamentary or public consent. The lack of oversight. The fact that fundamental human rights are being breached.

 

The race question

Alarmingly, the question that met most resistance was whether the algorithm had been tested for accuracy biases.

Despite research showing that the FBI’s facial recognition software misidentified black faces more often than white ones, the project leads proudly told us they had no intention of independently testing for racial bias. They had not even asked the vendor whether the algorithm had been tested for bias. It wasn’t a concern.

Similarly, they were wilfully ignorant of the demographic data in their Carnival dataset. They didn’t know the ethnicities, ages or genders of those on their watch list – nor did they want to.
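
For what it’s worth, the check the project leads declined to run is not technically demanding. A minimal sketch in Python – with hypothetical field names and made-up example records, nothing drawn from the Met’s actual data – would simply compare how often alerts in each demographic group turn out to be false:

# A minimal, hypothetical sketch of the bias check described above: given a
# log of alerts and the demographic group of each person flagged, compare
# how often alerts in each group turned out to be false matches.
# All field names and example records are illustrative, not the Met's data.

from collections import defaultdict

match_log = [
    {"group": "black", "correct": False},
    {"group": "white", "correct": True},
    {"group": "black", "correct": False},
    {"group": "white", "correct": False},
]

def false_alert_rate_by_group(log):
    """Share of alerts in each group that were false matches."""
    alerts = defaultdict(int)
    false_alerts = defaultdict(int)
    for record in log:
        alerts[record["group"]] += 1
        if not record["correct"]:
            false_alerts[record["group"]] += 1
    return {group: false_alerts[group] / alerts[group] for group in alerts}

for group, rate in false_alert_rate_by_group(match_log).items():
    print(f"{group}: {rate:.0%} of alerts were false matches")

A vendor or an independent reviewer with the real match logs could run exactly this kind of tally and report whether false matches fall disproportionately on any one group.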

The ‘race-blind’, ‘data-blind’ fallacy holds an obvious temptation for the Met. If the same attitude were taken to data collection around stop and search, the fact that black people are six times more likely to be stopped than white people could be dismissed as an unsubstantiated myth, rather than the race equality crisis it is.

Technology and human rights analysts have long warned that implicit biases in datasets, and in the algorithms that process them, could perpetuate discrimination and inequality beyond observable view. The professed objectivity of algorithms – which the Met seem convinced of – and the veil of opacity they offer obstruct accountability and risk burying bad practice deeper.

There’s an argument that police should overcome this wilful ignorance and scrutinise and improve the algorithms they are deploying in biometric surveillance. But eliminating accuracy biases would only sharpen a blunt instrument.

 

Biometric checkpoints

There is a more fundamental question – what does real-time facial recognition mean for our rights? What are the risks? Does it have a place in a democracy at all?

 

The answer is no. It is the stuff of dystopian literature for a reason. In a society that has rejected ID cards, the prospect of biometric checkpoints overshadowing our public spaces is plainly unacceptable and frankly frightening.

 

If we tolerated facial recognition at Carnival, what would come next? Where would the next checkpoint be? How far would the next ‘watch list’ be expanded? How long would it be before facial recognition streams are correlated?

Like GPS surveillance, if facial recognition were rolled out across the country, the State would potentially have a biometric record of who goes where, when and with whom.

The technology isn’t there yet – as I observed, it’s offensively crude – but the risk to our freedom posed by this ‘trial’ is current and real.

 
