By its very nature, TikTok is more difficult to moderate than many other social media platforms, said Cameron Hickey, program director at the Algorithmic Transparency Institute. The brevity of the videos, and the fact that many combine audio, video, and text elements, make human discernment all the more necessary when judging whether something violates the platform's rules. Even advanced AI tools, such as speech-to-text systems that quickly flag problematic words, struggle "when you're dealing with audio that also has music behind it," Hickey said. "The default mode for people to create content on TikTok is also to embed music."
These challenges become even greater in languages other than English.
"What we generally know is that platforms do the best job of handling problematic content in the places where they are based or in the languages spoken by the people who created it," Hickey said. "There are more people making bad stuff than there are people at these companies trying to get rid of it."
Much of the disinformation Madung found was "synthetic content": produced videos made to look like old news broadcasts, or screenshots designed to appear as though they came from legitimate news outlets.
"Back in 2017, we noticed a trend emerging: the misappropriation of the identities of mainstream media brands," Madung said. "We're seeing rampant use of this tactic on the platform, and it seems to be doing really well."
Madung also spoke with former TikTok content moderator Gadear Ayed to gain a broader understanding of the company's moderation efforts. Although Ayed did not moderate TikToks from Kenya, she told Madung that she was often asked to moderate content in languages and contexts she was unfamiliar with, and had no way of knowing whether a piece of media had been manipulated.
"It's common to find moderators being asked to moderate videos in languages and contexts different from their own background," Ayed told Madung. "For example, I once had to moderate videos in Hebrew even though I didn't know the language or the context. All I could rely on was the visual imagery I could see, but I couldn't assess anything that was written."
A TikTok spokesperson told WIRED that the company prohibits election misinformation and the promotion of violence, and is "committed to protecting [its] platform and has a dedicated team protecting TikTok during Kenyan elections." The spokesperson also said the company works with fact-checking groups, including AFP in Kenya, and plans to roll out features to connect its "communities with authoritative information about Kenya's elections in our app."
But even if TikTok removes offending content, Hickey said, that may not be enough. "A person can remix, duet, or repost someone else's content," Hickey said. This means that even if the original video is removed, other versions can live on undetected. TikTok videos can also be downloaded and shared on other platforms like Facebook and Twitter, which is how Madung first encountered some of them.
Several of the videos flagged in the Mozilla Foundation report have since been removed, but TikTok did not respond to questions about whether it removed other videos or whether the videos themselves were part of a concerted effort.
But Madung suspects they might be. "Some of the most egregious hashtags are things I'll find when I research coordinated campaigns on Twitter, and then I think, what if I searched for this on TikTok?"