
Two years ago, Twitter launched what may be the tech industry's most ambitious attempt at algorithmic transparency. Papers written by its researchers showed that the AI system Twitter used to crop images in tweets favored white faces and women, and that politically right-leaning posts received bigger algorithmic boosts than left-leaning ones in several countries, including the US, UK, and France.
By early October last year, when Elon Musk faced a court deadline to close his $44 billion acquisition of Twitter, the company's latest research was nearly ready. It showed that a machine-learning program had incorrectly downgraded tweets that mentioned any of 350 terms related to identity, politics, or sexuality, including "gay," "Muslim," and "deaf." Because the system was designed to limit the reach of tweets disparaging marginalized groups, it also suppressed posts celebrating those communities. The discovery, and some of the fixes Twitter developed, could help other social platforms make better use of AI to moderate content. But would anyone read the study?
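The failure mode the study describes can be pictured with a deliberately naive sketch. The Python below is hypothetical, not Twitter's actual code, and the term list is a stand-in for the roughly 350 terms in the study; it simply shows why a demotion rule keyed to the mere presence of identity terms cannot tell a slur from a celebration.

```python
# Illustrative only: a naive, term-list-based demotion heuristic.
# This is NOT Twitter's system; it shows how flagging tweets on the mere
# presence of identity terms penalizes celebratory posts and abuse alike.

IDENTITY_TERMS = {"gay", "muslim", "deaf"}  # stand-ins for the ~350 terms

def naive_demotion_multiplier(tweet: str) -> float:
    """Return a reach multiplier: values below 1.0 demote the tweet."""
    words = {w.strip(".,!?\"'").lower() for w in tweet.split()}
    # The bug in miniature: no notion of context, so ANY mention of a
    # listed term triggers the same demotion, regardless of intent.
    return 0.5 if words & IDENTITY_TERMS else 1.0

print(naive_demotion_multiplier("Proud to be deaf and thriving"))  # 0.5 (wrongly demoted)
print(naive_demotion_multiplier("Happy birthday!"))                # 1.0
```

A context-aware fix would need to score how a term is used, not just whether it appears, which is part of what made the researchers' findings useful to other platforms.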
A few months earlier, Musk had championed algorithmic transparency, saying he wanted to "open source" Twitter's content-recommendation code. At the same time, he said he would reinstate popular accounts that had been permanently banned for rule-violating tweets. He also mocked some of the communities Twitter's researchers were trying to protect, complaining about an ill-defined "woke mind virus." Also troubling: Musk's AI scientists at Tesla generally do not publish their research.
Twitter's AI ethics researchers ultimately decided their prospects under Musk's leadership were too bleak to wait for publication in an academic journal, or even to finish writing a company blog post. So less than three weeks before Musk finally took ownership on October 27, they rushed the moderation-bias study to the open-access service arXiv, where academics post research that has not yet been peer-reviewed.
"We were rightfully concerned about what this leadership change would entail," said Rumman Chowdhury, then director of engineering for Twitter's Machine Learning Ethics, Transparency, and Accountability group, known as META. "There's a lot of ideology and misconception about the kind of work that ethics teams do, as if it's part of a woke liberal agenda rather than real scientific work."
Concerns about Musk's incoming regime prompted researchers across Cortex, Twitter's machine-learning and research organization, to quietly release a series of studies earlier than originally planned, according to Chowdhury and five other former employees. The findings cover topics including misinformation and recommendation algorithms. The frantic push, and the papers it produced, have not been reported before.
The researchers hoped to preserve the knowledge uncovered at Twitter for anyone to use in improving other social networks. "I'm very passionate that companies should talk more openly about the problems they're facing and try to lead the way and show people that this is a workable thing," said Kyra Yee, lead author of the moderation-bias paper.
Twitter and Musk did not respond to detailed requests for comment emailed for this article.
The team behind another study worked overnight on final edits, then clicked "publish" on arXiv the day Musk took ownership of Twitter, fearing retaliation, one researcher said. "We knew the runway would be shut down when the Elon jumbo jet landed," the source said. "We knew we needed to get this done before the acquisition closed. We could plant a flag in the ground and say it exists."
The fear was not misplaced. Most of Twitter's researchers lost their jobs or resigned under Musk's leadership. On the META team, Musk fired all but one person on November 4, and the remaining member, co-founder and head of research Luca Belli, resigned later that month.