In November 2019, I had just become a reporter at The New York Times when I got a tip that seemed too outrageous to be true: a mysterious company called Clearview AI claimed it could identify just about anyone based only on a snapshot of their face.
I was in a hotel room in Switzerland, six months pregnant, when I got the email. It was the end of a long day and I was tired, but the email gave me a jolt.
My source – who worked for the non-profit Open the Government – had unearthed a legal memo marked ‘Privileged & Confidential’, in which a lawyer for Clearview had said that the company had scraped billions of photos from the public web, including social media sites such as Facebook, Instagram and LinkedIn, to create a revolutionary app.
Give Clearview a photo of a random person on the street, and it would spit back all the places on the internet where it had spotted their face, potentially revealing not just their name, but other personal details about their life.
The company was selling this superpower to police departments around the country, but trying to keep its existence a secret. Clearview claimed to be different from other automated facial-recognition systems, touting a ‘98.6 per cent accuracy rate’ and an enormous collection of photos unlike anything the police had used before.
This is huge if true, I thought, as I read and reread the Clearview memo that had never been meant to be public. I had been covering privacy, and its steady erosion, for more than a decade. I often describe my beat as ‘the looming tech dystopia – and how we can try to avoid it’, but I’d never seen such an audacious attack on anonymity before.
I returned to New York with an impending birth as a deadline. I had three months to get to the bottom of this story, and the deeper I dug, the stranger it got.
At the time, the company’s online presence was limited to a simple blue website with a Pac-Man-esque logo – the C chomping down on the V – and the tagline ‘Artificial Intelligence for a better world’. There wasn’t much else there, just a form to ‘request access’ (which I filled out and sent to no avail) and an address in New York City.
So I decided to go door knocking. I mapped the address from the company’s website and discovered that it was in midtown Manhattan, just two blocks away from my workplace. When I arrived at the point on the sidewalk where Google Maps directed me, the mystery deepened. The building where Clearview was supposedly headquartered did not exist.
Further digging threw up a listing on the venture capital-tracking site PitchBook. It claimed that the company had two investors – one I had never heard of before and another who was shockingly recognisable: the polarising billionaire Peter Thiel, who had cofounded the payments platform PayPal, backed the social network Facebook early on, and created the data-mining juggernaut Palantir.
Thiel, like everyone else I reached out to with ties to the company, gave me the cold shoulder.
Then one day I logged in to Facebook to discover a message from a ‘friend’ named Keith. I didn’t remember him, but he mentioned that we had met a decade earlier at a gala for Italian Americans. Back then, I’d been more cavalier about my privacy and said yes to just about any ‘friend request’.
‘I understand you are looking to connect with Clearview,’ Keith wrote. ‘I know the company, they are great. How can I be of help?’ Keith worked at a real estate company in New York and had no obvious ties to the facial recognition start-up. I had many questions – foremost among them being how he knew I was looking into the company and whether he knew the identity of the technological mastermind who had supposedly built the futuristic app – so I asked him for his phone number. He didn’t respond. I asked again two days later. Silence.
As it became clear that the company wasn’t going to talk to me either, I tried a new approach: find out whether the tool was as effective as advertised. I recruited a detective based in Texas who was willing to help look into Clearview, as long as I didn’t reveal his name. He went to Clearview’s website and requested access. Unlike me, he got a response within half an hour with instructions on how to create an account for his free trial. All he needed was a police department email address.
He ran a few photos of criminal suspects whose identities he already knew, and Clearview nailed them all, linking to photos of the correct people on the web. Then he ran his own image through the app. He had purposely kept photos of himself off the internet for years, so he was shocked when he got a hit: a photo of him in uniform, his face tiny and out of focus. It was cropped from a larger photo, for which there was a link that took him to Twitter (now X). A year earlier, someone had tweeted a photo from a Pride festival. The Texas investigator had been on patrol at the event, and he appeared in the background of someone else’s photo. When he zoomed in, his name badge was legible.
He was shocked that a face-searching algorithm this powerful existed. It had, for example, potentially horrible implications for undercover officers if the technology became publicly available. I told him that I hadn’t been able to get a demo yet and that another officer had run my photo but had got no results. He ran my photo again and confirmed that there were no matches.
Minutes later, he said, his phone rang. It was a number he didn’t recognise, with a Virginia area code. He picked up. ‘Hello. This is Marko with Clearview AI tech support,’ said the voice on the other end of the call. ‘We have some questions. Why are you uploading a New York Times reporter’s photo?’
‘I did?’ my associate responded cagily.

‘Yes, you ran this Kashmir Hill lady from The New York Times,’ said Marko. ‘Do you know her?’

‘I’m in Texas,’ the police officer replied. ‘How would I know her?’
The company representative said it was ‘a policy violation’ to run photos of reporters in the app and deactivated the account. The detective helping me was taken aback, creeped out that his use of the app was being that closely monitored.
He called immediately to tell me what had happened. A chill ran through me. It was a shocking demonstration of just how much power this mysterious company wielded. The people who control a technology that becomes widely used hold great power over our society. But who were the people behind Clearview?
Hoan Ton-That is a Vietnamese-Australian, in his mid-30s, 6ft 1in tall, with long, silky black hair and androgynous good looks. He dresses in paisley-print shirts and suits in a rainbow of colours made bespoke in Vietnam, where, his father told him, his ancestors had once been royalty.
When he was 14 and growing up in Australia, he taught himself to code, relying on free teaching materials and video lectures that the Massachusetts Institute of Technology (MIT) posted online. Sometimes he would skip school, tape a ‘do not disturb’ sign on his bedroom door and spend the day with virtual programming professors.
At just 19, Ton-That dropped out of college and moved to San Francisco, drawn by the siren song of Silicon Valley. Desperate to make money and continue funding his stay in the US, he got to work cranking out Facebook quiz apps. More than six million Facebook users installed Ton-That’s creations, which he monetised with banner ads.
In early June 2017, after moving to New York City, Ton-That sent an email to Richard Schwartz – a former aide to Rudy Giuliani, and the man who would become Clearview’s co-founder – and the Right-wing activist Charles Johnson, with a link to ‘https://smartcheckr.com/face_search’. ‘Scraped 2.1 million faces from venmo+tinder. Try the search,’ he wrote.
Ton-That also emailed Thiel, whom he had met at the Republican National Convention the year before. He told Thiel that Smartcheckr could now search one billion faces in less than a second. ‘It means we can find somebody on the social networks with just a face,’ he added. The next month, July 2017, one of Thiel’s lieutenants emailed Ton-That to say that Thiel was interested in investing $200,000. Smartcheckr – soon to be rebranded as Clearview AI – had its first financial backer.
In mid-October 2018, Ton-That and Schwartz met Greg Besson, a security director at a property firm. He had once been in the police, and he completely changed the company’s trajectory. ‘You should talk to my old team,’ he said. ‘They do financial crimes.’
Years later, I would ask Ton-That which police department that was. ‘Somewhere around the greater New York area,’ he demurred. I pushed him, asking whether it was the New York City Police Department (NYPD), the country’s biggest local police force, whose 36,000 officers are tasked with fighting crime in a city of 8.5 million people. ‘It’s not,’ Ton-That replied. It was.
By 2019, Clearview had managed to find a few customers, all private businesses. It was advertising three product lines: Clearview AI Search, a background-checking tool; Clearview AI Camera, which could sound an alert when a criminal entered a hotel, bank, or store; and Clearview AI Check-In, a building-screening system that could verify people’s identities ‘while simultaneously assessing them according to risk factors such as criminality, violence and terrorism’.
Clearview cobbled together more than a million dollars from a motley collection of backers that included the venture capitalist David Scalzo, along with some New York-based lawyers and executives in retail and real estate. Business soon started going quite well.
Clearview’s success with law enforcement showed its investors that it had a real market, and it raised another $7 million. By the autumn of 2019, at an international conference for child crimes investigators held in The Hague in the Netherlands, Clearview AI was the hot topic.
Investigators called it ‘the biggest breakthrough in the last decade’, and many international agencies asked for free trials to solve crimes against children, including Interpol, the Australian Federal Police and the Royal Canadian Mounted Police. It was a powerful tool not just for identifying perpetrators but for finding their victims, because it was the first facial recognition database with millions of children’s faces.
Officers around the world started asking for access. The users during free trials included the National Crime Agency in the UK, the Queensland Police Service in Australia, the Federal Police of Brazil, the Swedish Police Authority, and the Toronto Police Service in Canada, as well as Ontario regional police services.
For more than a year, as thousands of police officers around the world deployed it, Ton-That and Schwartz kept the superpower hidden from everyone else. But not for much longer…
While I was contacting Clearview representatives in November 2019, Ton-That must have seen via Facebook that we shared a friend: Keith Dumanski, a guy he played soccer with, who worked in PR and had previously been in politics. Ton-That presumably hoped that I was working on a basic round-up of companies in the facial recognition space and not an exposé on Clearview. Dumanski told him that he barely knew me, but agreed to contact me to try to gather intel.
Two weeks before Christmas, Dumanski sent me that strange Facebook message: ‘I understand you are looking to connect with Clearview.’ I later discovered that when I had asked Dumanski for his number, he didn’t respond because he had no idea how he’d answer my questions.
In early January 2020, Ton-That agreed to meet. On a Friday morning, he and Lisa Linden, a public relations consultant Clearview had recruited, went to a WeWork office that they had booked for our meeting. Ton-That told me about himself, and the company, and its extraordinary tool. He declined to name anyone else involved beyond Schwartz, whom he said he had first met at a book event at the Manhattan Institute, a conservative think tank.
Ton-That said they had changed the name to Clearview not because of any concerns about Smartcheckr’s reputation, but because it was better branding. ‘When you’re doing something like face recognition, it’s important to have nice colours and not be creepy,’ he commented.
He also said he was proud of what the company had done and that he and Schwartz had made the ethical decision to only scrape public images that anyone could find: three billion faces at the time, 30 billion now.
I asked about the alert that had been set up specifically for me and why officers who searched for my face had got no results. ‘It must have been a bug,’ suggested Ton-That, who denied any further knowledge of it.
Then he demonstrated Clearview on me, running my face as well as that of the photographer, who had accompanied me. Clearview found many photos of the two of us. It worked even when I covered the bottom half of my face with my hands, surfacing a photo of me from almost 10 years earlier at a gala in San Francisco, with a person I had been talking to for a story at the time. I hadn’t seen the photo before or realised that it was online. It made me think about how much more careful I might need to be in the future about meeting sensitive sources in a public place.
You’ve created a tool that can find all the photos of me online, photos I didn’t know about, I told Ton-That. You’ve decided to limit it to police, but surely that won’t hold. This will open the door to copycats who might offer the tool more widely. Could this lead to universal face recognition, where anyone could be identified by face at any time, no matter how sensitive the situation?
‘I have to think about that,’ he said. ‘It’s a good question.’
The story came out a week after that interview. It was on the front page of The New York Times, with the headline: ‘The Secretive Company That Might End Privacy As We Know It’. It revealed Clearview’s existence, its database of billions of faces, its use by 600 law enforcement agencies, as well as a handful of private companies, and the efforts it had made to stay hidden from public view.
Ton-That’s photo featured prominently. He and Clearview were no longer anonymous.
He got hate mail. But he still felt proud of what he had built and the attention it had generated. ‘I’m walking clickbait,’ he said. His desire for fame was greater than his fear of infamy.
Other repercussions were more serious. The attorney general of New Jersey at the time, Gurbir Grewal, barred police in the state from using Clearview AI. That was ironic because Clearview had just put a promotional video up on its site about the role it had played in New Jersey’s Operation Open Door, a child predator sting. The video included footage of Grewal at a press conference talking about the arrest of 19 people. Grewal said he hadn’t known that Clearview had been used in the investigation and that he was ‘troubled’ that the company was ‘sharing information about ongoing criminal prosecutions’. Clearview took down the video.
Facebook, Venmo, Twitter, LinkedIn, and Google sent Clearview cease-and-desist letters demanding that the facial recognition company stop collecting data from them and delete any photos already in their possession. Lawsuits were filed against the company and Clearview was hammered in the press. Local journalists filed public records requests to see if police in their jurisdictions were using the app.
Clearview’s growing team of lawyers had their hands full. It had been making inroads globally, and that was now translating into legal enquiries from around the world. The international backlash caused the company to pull back. Ton-That started telling journalists that Clearview had decided to sell its product – ‘for now’ – only to law enforcement agencies in the US.
After investigations that lasted a year or more, privacy regulators around the world came to the same conclusion: Clearview had violated their data protection laws.
‘What Clearview does is mass surveillance, and it is illegal,’ said Canadian privacy commissioner Daniel Therrien, the first of the regulators to tell Clearview that it could not legally operate in its country.
Clearview was declared illegal in at least six countries and was subject to approximately $70 million in fines – more than the company had raised from investors and made in revenue. The regulators all declared that Clearview needed consent from their citizens to do what it was doing, and each country ordered Clearview AI to stop collecting its citizens’ faceprints and to delete the ones it had. Each time, Ton-That issued a public statement that mentioned how the rulings hurt his feelings.
What Clearview had done represented a turning point. We were on the cusp of a world in which our faces would be indelibly linked to our online dossiers, a link that would make it impossible to ever escape our pasts. That terrified people, so much so that the norm-violating company seemed on the brink of destruction, its onslaught on privacy too dramatic to ignore.
But Clearview’s programming was already being replicated. Others could build it – and they had – and they were offering it to anyone who wanted it.
Meanwhile, Clearview was continuing to develop and promote its technology. On a Friday in October 2021, I experienced a future that may await us all. I was meeting with Ton-That to try out a product he’d once told me that Clearview had no plans to release: chunky black augmented-reality glasses, priced at $999 and made by a New York-based company called Vuzix, that could connect to Clearview’s app to look at a stranger from as far away as 10ft and find out who they were.
Ton-That turned toward me and tapped the glasses. ‘Ooooh, 176 photos,’ he said. He was looking in my direction, but his eyes were focused on a small square of light on the right lens of the glasses. ‘Aspen Ideas Festival. Kashmir Hill,’ he read. In the glasses, a photo had appeared from a conference where I had spoken in 2018.
Ton-That said he had tried out other augmented-reality glasses, but these had performed best. ‘They’ve got a new version coming,’ he added. ‘They’ll look cooler, more hipster.’ Clearview would soon sign a contract with the US Air Force to develop a prototype of the glasses that could be used on bases. Ton-That eventually wanted to develop glasses that could identify people at a distance, so that a soldier, for example, could make a decision about whether someone was dangerous or not when they were still 50ft away.
He remained fiercely proud of his app. Ton-That was ready to fight it out with critics and lawsuits, to challenge governments that deemed it illegal, and to give as many interviews as needed to convince people that in the end their faces were not solely their own. He attributed the global resistance to ‘future shock’. ‘It’s time for the world to catch up,’ he said. ‘And they will.’
Abridged extract from Your Face Belongs to Us: The Secretive Startup Dismantling Your Privacy, by Kashmir Hill (Simon & Schuster)