The algorithm estimated whom a person would want to see first in a picture so the image could be cropped to a suitable size on Twitter. But it was ditched after users found it chose white faces over Black ones.

"Which will the Twitter algorithm pick: Mitch McConnell or Barack Obama?" pic.twitter.com/bR1GRyCkia — Tony "Abolish ICE" Arcieri (@bascule), September 19, 2020

Twitter sought to identify further potential harms in the model by launching the industry's first algorithmic bias bounty contest. The competition winners, announced on Monday, uncovered a plethora of further issues.

Twitter’s algorithmic biases

Bogdan Kulynych, who bagged the $3,500 first-place prize, showed that the algorithm can amplify real-world biases and social expectations of beauty. Kulynych, a grad student at Switzerland's EPFL technical university, investigated how the algorithm predicts which region of an image people will look at. He used a computer-vision model to generate realistic pictures of people with different physical features, then compared which of the images the cropping model preferred. Kulynych found the model favored "people that appear slim, young, of light or warm skin color and smooth skin texture, and with stereotypically feminine facial traits."

The other competition entrants exposed further potential harms. The runners-up, HALT AI, found the algorithm sometimes crops out people with grey hair, dark skin, or wheelchairs, while third-place winner Roya Pakzad showed the model favors Latin scripts over Arabic. The algorithm also shows a racial preference when analyzing emoji: software engineer Vincenzo di Cicco found that emoji with lighter skin tones are more likely to be kept in the crop.
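The comparison methodology can be sketched roughly as follows. This is a minimal, hypothetical harness, not Kulynych's actual code: the `saliency_score` function is a stand-in (the real experiment queried Twitter's trained saliency model), and the "images" are toy pixel lists rather than generated faces.

```python
# Sketch of the pairwise-comparison idea: score each generated variant
# with a saliency model, and see which one the cropper would favor.

def saliency_score(image):
    """Stand-in for the cropping model's peak saliency value.
    The real experiment used Twitter's saliency model; here we use
    a toy proxy (brightest pixel) purely so the harness runs."""
    return max(image)

def preferred_variant(variants):
    """Return the name of the variant the (stand-in) model favors,
    i.e. the one the crop would be centered on."""
    return max(variants, key=lambda pair: saliency_score(pair[1]))[0]

# Two toy "images" (flat pixel lists) standing in for generated faces
# that differ in a single physical feature.
variants = [
    ("variant_a", [0.2, 0.4, 0.3]),
    ("variant_b", [0.2, 0.9, 0.3]),
]
print(preferred_variant(variants))  # -> variant_b under the toy scorer
```

Repeating this comparison over many feature pairs is what surfaces a systematic preference, rather than any single image's score.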

Bounty hunting in AI

The array of potential algorithmic harms is concerning, but Twitter's approach to identifying them deserves credit. There's a community of AI researchers that can help mitigate algorithmic biases, but they're rarely incentivized in the same way as whitehat security hackers. "In fact, people have been doing this sort of work on their own for years, but haven't been rewarded or paid for it," Twitter's Rumman Chowdhury told TNW before the contest.

The bounty-hunting model could encourage more of them to investigate AI harms. It can also operate more quickly than traditional academic publishing. Contest winner Kulynych noted that this fast pace has both strengths and flaws:

"Even if some submissions only hinted at the possibility of the harm without rigorous proofs, the 'bug bounty' approach would enable to detect the harms early. If this evolves in the same way as security bug bounties, this would be a much better situation for everyone. The harmful software would not sit there for years until the rigorous proofs of harm are collected."

He added that there are also limitations in the approach. Notably, algorithmic harms are often a result of design rather than mistakes. An algorithm that spreads clickbait to maximize engagement, for instance, won't necessarily have a "bug" that a company wants to fix. "We should resist the urge of sweeping all societal and ethical concerns about algorithms into the category of bias, which is a narrow framing even if we talk about discriminatory effects," Kulynych tweeted.

Nonetheless, the contest showcased a promising method of mitigating algorithmic harms. It also invites a wider range of perspectives than a single company could (or would want to) bring to the issues.