Get people talking about the gadgets they wish existed but don’t, and soon the idea of “glasses with a computer built in that tells you who you’re looking at” comes up. Such “augmented reality” devices seem so obviously desirable that someone usually says: “When will they start selling those, eh?”
In fact, two different companies have built working prototypes in the past six years. Facebook had an internal version in 2017, fed by the colossal number of profile photos on its site. The other is Clearview AI, a secretive American startup that first came to the attention of the New York Times journalist Kashmir Hill in November 2019.
Neither has put their device on sale; Facebook has carefully edged back from the idea. (So has Google, which despite being first with augmented reality glasses in February 2013, and having the computing power and data, told Hill that it will “not … make general purpose facial recognition commercially available while we work through the policy and technical issues at stake”.)
But Clearview AI did make its basic system, able to identify almost anyone whose photo and name have ever appeared on the internet, available to a few Silicon Valley venture capitalists in the hope of investment, and then – realising where the best arguments for beneficial use lay – rented it to American police departments to identify suspects. It may sound dystopian – but it’s to stop crime!
Ironically, it seems Clearview AI was very wary of being recognised by the media, putting Hill’s face (and, presumably, other journalists’) on a blacklist that would return zero results if queried. Hill documents how she tracked down the company’s founders: it’s a classic piece of the shoe leather journalism about internet privacy in which she specialises and excels.
In the past few years powerful “machine learning” and cloud computing, allied to the growth of smartphones, selfies and social media, have made a facial recognition system capable of identifying anyone as inevitable as the atomic bomb was after the splitting of the uranium atom in 1938. Just as that breakthrough led to a cascade with an obvious endpoint, so the preconditions for facial recognition – masses of pictures online and rapidly improving algorithms for determining what makes a face unique – have been there waiting for whoever was willing to ignore the socially controversial effects.
In fact, face-naming systems have been invented multiple times in the past decade, sometimes escaping on to the internet, where they are inevitably put to nefarious use – often by would-be stalkers. As Hill notes, in China the state’s use goes far beyond that, with individuals and entire ethnic groups such as the Uyghurs being surveilled and controlled. And a whole chapter describes the experience of an innocent black citizen in Detroit who was picked out as a possible match (the ninth out of 243) for a robber, and then wrongfully arrested.
Despite this, pressure groups tend to be conflicted: Hill points out that the American Civil Liberties Union (ACLU), which sued Clearview, might have argued in favour of facial recognition if it had been used to recognise police who weren’t wearing badges at protests.
Overall, the problem is that we can’t figure out if pervasive, immediate facial recognition is a good or bad thing. Might it find kidnapped children? Hit-and-run drivers? Burglars? Save us embarrassment at social occasions? Certainly. Would it be abused by people looking to harm and harass, and by governments and police in authoritarian or democratic states? Again, certainly. More importantly, can it be stopped? It’s hard to see how, and Hill – not unreasonably – doesn’t offer any suggestions. The bomb is out of the bay. The question now is where it lands.
• Charles Arthur is author of Social Warming: How Social Media Polarises Us All. Your Face Belongs to Us: The Secretive Startup Dismantling Your Privacy by Kashmir Hill is published by Simon & Schuster (£20). To support the Guardian and Observer order your copy at guardianbookshop.com. Delivery charges may apply.