Twitter is trying to find out why its preview tool appears racist

Photo: Leon Neal (Getty Images)

The neural network that Twitter uses to create photo previews is a mysterious animal. When it debuted the smart cropping tool back in 2018, Twitter said the algorithm determines the “most salient” part of an image — i.e., what your eyes are drawn to first — to use as the preview, but what exactly that means has been the subject of much speculation.
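Twitter has not published the details of its saliency model, so as a purely hypothetical illustration of the general idea, here is a minimal stand-in that scores every candidate crop window of a 1-D brightness row by pixel variance (a crude proxy for visual contrast, one ingredient real saliency models consider) and keeps the highest-scoring window. The function name and scoring rule are my own assumptions, not Twitter's method.

```python
# Hypothetical sketch of saliency-style cropping. Twitter's real model is a
# neural network; this stand-in just treats "local contrast" as saliency.
from statistics import pvariance

def crop_most_salient(row, width):
    """Return the start index of the width-wide window of a 1-D brightness
    row whose pixel variance (a crude saliency proxy) is highest."""
    best_start, best_score = 0, -1.0
    for start in range(len(row) - width + 1):
        score = pvariance(row[start:start + width])
        if score > best_score:
            best_start, best_score = start, score
    return best_start

# A flat region on the left, a high-contrast "face" on the right:
row = [10] * 20 + [10, 200, 10, 200, 10] + [10] * 5
print(crop_most_salient(row, 5))  # the chosen window lands on the high-contrast region
```

The point of the sketch is only that the crop follows whatever the scoring function rewards — so any bias baked into the score (here, raw contrast, which interacts with lighting and skin tone) flows straight into which face gets shown.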

Faces are an obvious answer, of course, but what about smiling versus non-smiling faces? Dimly lit versus brightly lit ones? I’ve seen plenty of informal experiments on my timeline of people trying to figure out Twitter’s secret sauce. Some have even used the algorithm as an unwitting system for delivering punchlines. The latest viral experiment, however, exposes a very real problem: Twitter’s auto-crop tool appears to favor white faces over Black faces far too often.

Several Twitter users demonstrated this over the weekend with images containing both a white person’s face and a Black person’s face. The white faces showed up as previews far more often, even when the images were controlled for size, background color, and other variables that could potentially affect the algorithm. One especially viral Twitter thread used a picture of former President Barack Obama and Senator Mitch McConnell (who had already received plenty of bad press for his callous response to the death of Justice Ruth Bader Ginsburg) as an example. When the two were shown together in the same image, Twitter’s algorithm kept showing a preview of that turtle grin again and again, effectively declaring McConnell the “most salient” part of the picture.

(Click the embedded tweet below and click his face to see what I mean.)

The trend started on Friday after one user tried to tweet about a problem with Zoom’s face-detection algorithm. Zoom’s system failed to detect his Black colleague’s head, and when he uploaded screenshots of the problem to Twitter, he found that Twitter’s auto-cropping tool also defaulted to showing his own face in thumbnails rather than his colleague’s.

The problem was apparently news to Twitter as well. In response to the Zoom thread, chief design officer Dantley Davis ran some informal experiments of his own on Friday with mixed results, tweeting, “I’m as irritated about this as everyone else.” The platform’s chief technology officer, Parag Agrawal, also tweeted about the problem, adding that while Twitter’s algorithm had been tested, it still needed “continuous improvement” and that he was “eager to learn” from rigorous user testing.

“Our team tested for bias before shipping the model and did not find evidence of racial or gender bias in our testing. But it’s clear from these examples that we’ve got more analysis to do,” Twitter spokesperson Liz Kelley told Gizmodo. “We’ll be making our work open source so others can review and replicate it.”

Reached via email, Kelley was unable to comment on a timeline for Twitter’s planned review. On Sunday she also tweeted her thanks to the users who brought the issue to Twitter’s attention.

Vinay Prabhu, a chief scientist and Carnegie Mellon alum, also ran an independent analysis of Twitter’s auto-cropping tendencies and tweeted his findings on Sunday. You can read more about his methodology in his thread, but in short, he tested the theory by tweeting a series of image pairs from the Chicago Face Database, a public repository of standardized photos of male and female faces controlled for factors such as face position, lighting, and expression.

Surprisingly, the experiment showed the Twitter algorithm slightly favoring darker skin in its previews, cropping to the Black face in 52 of the 92 images posted. Of course, given the sheer volume of contrary evidence surfaced through more informal experimentation, Twitter clearly still has some tweaking to do with its auto-crop tool. Still, Prabhu’s findings should come in handy in helping the Twitter team narrow down the problem.
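For context on why a 52-of-92 split is only a "slight" preference, here is a quick sketch, using only the counts the article cites, of an exact two-sided binomial test against a 50/50 coin flip. The function is my own stdlib-only implementation, not Prabhu's code.

```python
# Hedged sketch: is cropping to the Black face in 52 of 92 paired images
# distinguishable from a 50/50 coin flip? Exact binomial test, stdlib only.
from math import comb

def binomial_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial p-value: total probability of every outcome
    at most as likely as observing k successes out of n trials."""
    def pmf(i):
        return comb(n, i) * p**i * (1 - p)**(n - i)
    pk = pmf(k)
    return sum(pmf(i) for i in range(n + 1) if pmf(i) <= pk)

p_value = binomial_two_sided_p(52, 92)
print(f"p = {p_value:.3f}")  # well above 0.05: 52/92 is consistent with chance
```

In other words, on this standardized image set alone the tilt toward darker faces is not statistically distinguishable from chance, which is consistent with the article's point that the informal viral examples and Prabhu's controlled run point in different directions.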

It should be noted that machine learning systems and predictive AI algorithms don’t need to be explicitly designed to be racist in order to be racist. Facial recognition technology has a long, frustrating history of unexpected racial bias, and commercial facial recognition software has repeatedly been shown to be less accurate on people with darker skin. That’s because no system exists in a vacuum. Intentionally or not, technology reflects the biases of the people who build it — so much so that experts have a term for the phenomenon: algorithmic bias.

That is precisely why facial recognition needs further scrutiny before institutions that deal with civil rights issues on a daily basis add it to their arsenals; mountains of evidence show that it disproportionately discriminates against people of color. Granted, Twitter’s biased auto-cropping is a comparatively harmless problem (one that should still be addressed quickly, don’t get me wrong). What rightly worries civil rights advocates is a police officer relying on an AI system to track down a suspect, or a hospital using an automated system to triage patients. In those cases, algorithmic bias could contribute to a life-or-death decision.
