For Kaicheng Yang, August 5th was not a normal day. That day, a U.S. court released Elon Musk's arguments for why he no longer needed to buy Twitter. Yang, a doctoral student at Indiana University, was shocked to discover that his bot-detection software was at the center of a huge legal battle.
Twitter sued Musk in July after Tesla's chief executive tried to back out of his agreement to buy the platform for $44 billion. Musk, in turn, filed a countersuit alleging that the social network misrepresented the number of fake accounts on the platform. Twitter has long maintained that spam bots make up less than 5 percent of its total "monetizable" users (those who can be shown ads).
Yang's Botometer, a free tool that estimates how likely a Twitter account is to be a bot, played a key role in Musk's team's effort to show that the number was wrong, according to legal documents. "Contrary to Twitter's assertion that its business was minimally affected by fake or spam accounts, Musk's preliminary estimates from various parties suggest otherwise," Musk's counterclaim said.
But distinguishing humans from bots is harder than it sounds, and one researcher has accused Botometer of "pseudoscience" for making it look easy. Twitter was quick to point out that Musk relied on a tool with a history of making mistakes. In its legal filing, the platform reminded the court that earlier this year Botometer had classified Musk's own account as a likely bot.
Still, Botometer has become widely used, especially among university researchers, because tools that can distinguish bot accounts from human ones are in short supply. So it wasn't just Musk and Twitter facing trial in October, but the science behind bot detection itself.
Yang didn't start Botometer; he inherited it. The project was established about eight years ago, but as its founder graduated and moved on, responsibility for maintaining and updating the tool fell to Yang, who declined to confirm or deny whether he had any connection to Musk's team. Botometer is not his full-time job. It's more of a side project, he said, one he works on when he isn't doing research for his PhD. "At the moment, it's just me and my advisor," he said. "So I'm the one who actually does the coding."
Botometer is a supervised machine-learning tool, which means it has been trained on labeled examples to separate bots from humans. Yang said Botometer distinguishes bots from humans by looking at more than 1,000 features of a Twitter account, such as its name, profile picture, followers, and ratio of tweets to retweets, and then assigning it a score from 0 to 5. "A higher score means it's more likely to be a bot, and a lower score means it's more likely to be a human," Yang said. "If an account has a score of 4.5, that means it's likely to be a bot. But if it's 1.2, it's more likely to be a human."
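The recipe Yang describes can be sketched in a few lines. This is an illustrative toy only: Botometer's actual model, features, and training data are not reproduced here, and the three features and numbers below are invented for the example. It simply shows the general supervised-learning pattern the article describes: featurize accounts, train a classifier on labeled bot/human examples, and map the predicted bot probability onto a 0-to-5 score.

```python
# Hypothetical sketch of a supervised bot classifier (not Botometer's code).
from sklearn.ensemble import RandomForestClassifier

# Invented per-account features (Botometer uses 1,000+; we use 3):
# [followers, tweets_per_day, retweet_ratio]
X_train = [
    [10_000, 5, 0.20],   # human-like
    [50, 300, 0.95],     # bot-like
    [2_000, 12, 0.40],   # human-like
    [12, 450, 0.99],     # bot-like
    [800, 2, 0.10],      # human-like
    [5, 600, 0.90],      # bot-like
]
y_train = [0, 1, 0, 1, 0, 1]  # labels: 0 = human, 1 = bot

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)

def bot_score(account_features):
    """Scale the classifier's bot probability onto a Botometer-style 0-5 range."""
    p_bot = clf.predict_proba([account_features])[0][1]
    return round(5 * p_bot, 1)

print(bot_score([30, 500, 0.97]))   # high score: likely a bot
print(bot_score([9_000, 3, 0.15]))  # low score: likely a human
```

The scoring choice mirrors the article's description: the model itself outputs a probability, and multiplying by 5 yields the familiar 0-to-5 scale where high values suggest a bot and low values suggest a human.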