Social media giant Twitter said it will investigate its image cropping feature after users complained that it prefers white faces over black ones.
Twitter’s mobile app automatically crops image previews that are too large to fit on the screen, choosing which part of the image to display and which to cut away.
Prompted by a graduate student who noticed the problem in a photo of a black colleague, a San Francisco-based programmer found that Twitter’s system would crop out images of former President Barack Obama when they were posted alongside Republican Senate leader Mitch McConnell.
“Twitter is just one example of racism manifested in machine learning algorithms,” the programmer wrote.
Twitter is one of the most popular social networks in the world with nearly 200 million users a day.
Other users shared similar experiments online that they said showed Twitter’s cropping system favoring white faces.
Twitter admitted the company still had work to do.
“Our team tested for bias before shipping the model and found no evidence of racial or gender bias in our tests. However, it is clear from these examples that we need to do more analysis. We’ll keep sharing what we’re learning, what action we’re taking, and making our analysis open source for others to review and replicate,” said a Twitter spokesman.
In a blog post from 2018, Twitter explained that the cropping system is based on a “neural network” that uses artificial intelligence to predict which part of a photo would be of interest to a user and then cuts out the rest.
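The idea behind such saliency-based cropping can be illustrated with a minimal sketch. This is a hypothetical toy implementation, not Twitter’s actual code: assume a model has already scored each pixel’s “interestingness” as a saliency map, and the crop is simply the fixed-size window with the highest total saliency, found here with an integral image.

```python
# Hypothetical sketch of saliency-based cropping (NOT Twitter's real system).
# Given a per-pixel saliency map, keep the crop window whose summed saliency
# is largest and discard the rest of the image.
import numpy as np

def best_crop(saliency: np.ndarray, crop_h: int, crop_w: int) -> tuple:
    """Return the (top, left) corner of the crop_h x crop_w window with
    the largest summed saliency, using a 2-D prefix sum (integral image)."""
    H, W = saliency.shape
    # integral[i, j] holds the sum of saliency[:i, :j]
    integral = np.zeros((H + 1, W + 1))
    integral[1:, 1:] = saliency.cumsum(0).cumsum(1)

    best, best_pos = -np.inf, (0, 0)
    for top in range(H - crop_h + 1):
        for left in range(W - crop_w + 1):
            bottom, right = top + crop_h, left + crop_w
            total = (integral[bottom, right] - integral[top, right]
                     - integral[bottom, left] + integral[top, left])
            if total > best:
                best, best_pos = total, (top, left)
    return best_pos

# Toy example: a 4x4 "saliency map" with a bright spot in the lower right.
sal = np.zeros((4, 4))
sal[2:, 2:] = 1.0
print(best_crop(sal, 2, 2))  # -> (2, 2): the crop locks onto the bright region
```

The bias question then reduces to what the saliency model scores highly: if it systematically assigns higher scores to white faces, the crop will systematically keep them.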
A Twitter representative also pointed to an experiment by a Carnegie Mellon University scientist who analyzed 92 images and found that the algorithm favored black faces 52 times.
However, Meredith Whittaker, co-founder of the AI Now Institute, which studies the social impact of artificial intelligence, said she wasn’t happy with Twitter’s response.
“Systems like Twitter’s image previews are implemented everywhere in the name of standardization and convenience,” she said.
“This is another example in a long and tired litany of examples showing automated systems that encode racism, misogyny, and histories of discrimination.”
A number of studies have found evidence of racial bias in facial recognition software, with white faces more likely to be correctly identified than black ones.