Joris Lechêne, a model and artist from London, recently posted a TikTok video explaining that his passport photo was rejected because the checking software could not accurately recognise him.
“Don’t you love it when you train people to spot racist biases for a living and then it happens to you?” Lechêne, who goes by the username @joris_explains, began the video. “In the process of applying for a British passport, I had to upload a photo, so I followed every guideline to a T and submitted this melanated hotness.”
As he speaks, Lechêne, who is Black, shows viewers the photo in question, in which he is wearing a black shirt and standing in front of a grey wall.
“Lo and behold, that photo was rejected because the artificial intelligence software wasn’t designed with people of my phenotype in mind,” Lechêne continued. “It tends to get very confused by hairlines that don’t fall along the face and somehow mistake it for the background and it has a habit of thinking that people like me keep our mouths open.”
According to the screenshot of the rejection shared on the screen, Lechêne’s photo “doesn’t meet all the rules and is unlikely to be suitable for a new passport,” with the government website suggesting that his mouth “may be open” and that it was “difficult” to tell the image and the backdrop apart.
In the video, which has been viewed more than 156,000 times, Lechêne explained that he knew about the racial bias before being subjected to it because he uses similar examples in the prejudice training that he delivers.
According to Lechêne, his own experience is a reminder that current software is not without prejudice, with the model stating: “This is just a reminder that, if you believe that automation and artificial intelligence can help us build a society without biases, you are terribly mistaken.”
Rather, a more equitable society is only achievable through “political actions at every level of society,” Lechêne continued, adding: “because robots are just as racist as society is.”
This is not the first time the subject of racism in AI has been raised; the topic has come up frequently as more of the world is digitised and further examples come to light.
Previously, the Google Photos app was found to be labelling Black people as gorillas, according to The New York Times, while an Amazon face service had trouble “identifying the sex of female and darker-skinned faces”.
“The service mistook women for men 19 per cent of the time and misidentified darker-skinned women for men 31 per cent of the time. For lighter-skinned males, the error rate was zero,” The Times states.
The issue stems from biases in society that end up ingrained in algorithms and artificial intelligence through a lack of diverse training data.
“Lack of diversity in the data you work with, that’s exactly what we’re talking about,” Lechêne explained in a follow-up video. “Society is heavily skewed towards whiteness and that creates an unequal system. And that unequal system is carried through the algorithm.”
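The effect Lechêne describes can be illustrated with a toy sketch, entirely separate from the actual passport software: below, a simple classifier is fitted to synthetic one-dimensional “scores” in which one group is heavily under-represented (500 training examples versus 10). The threshold that minimises overall training error ends up serving the majority group well and the minority group badly, even though both groups are equally easy to model when data is balanced. The groups, numbers, and model here are all invented for illustration.

```python
# Toy illustration (synthetic data, hypothetical groups): how an
# imbalanced training set skews a classifier against the minority group.
import random

random.seed(42)

def gauss_samples(n, mu):
    # One-dimensional "feature scores" for a group centred at mu.
    return [random.gauss(mu, 1.0) for _ in range(n)]

# Skewed training set: 500 examples from group A, only 10 from group B.
train_a = gauss_samples(500, 0.0)
train_b = gauss_samples(10, 3.0)

def train_errors(t):
    # Rule: score < t is labelled "A", score >= t is labelled "B".
    return sum(x >= t for x in train_a) + sum(x < t for x in train_b)

# Pick the threshold that minimises TOTAL training error. Because group A
# dominates the data, the threshold drifts towards group B's scores.
threshold = min(sorted(train_a + train_b), key=train_errors)

def error_rate(samples, label):
    # Fraction of a group's test samples the rule gets wrong.
    if label == "A":
        return sum(x >= threshold for x in samples) / len(samples)
    return sum(x < threshold for x in samples) / len(samples)

test_a = gauss_samples(5000, 0.0)
test_b = gauss_samples(5000, 3.0)
print(f"group A error rate: {error_rate(test_a, 'A'):.1%}")
print(f"group B error rate: {error_rate(test_b, 'B'):.1%}")
```

The model never "intends" anything: minimising average error over unrepresentative data is enough to produce sharply unequal error rates, which is the pattern reported in the Amazon and Google cases above.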
Lechêne’s video prompted numerous dismayed comments from viewers, with one person writing: “We need more POC in STEM so they can write algorithms that aren’t biased, especially as our society automates processes like these more.”