Disinfo and hate speech flood TikTok ahead of Kenya’s election

By its nature, TikTok is harder to moderate than many other social media platforms, according to Cameron Hickey, project director at the Algorithmic Transparency Institute. The brevity of the videos, and the fact that many combine audio, visual, and textual elements, make human judgment even more necessary when deciding whether something violates platform rules. Even advanced artificial intelligence tools, like using speech-to-text to quickly identify problematic words, are harder to apply “when the sound you’re dealing with also has music behind it,” Hickey says. “The default mode for people who create content on TikTok is also to embed music.”

It gets even harder in languages other than English.

“What we generally know is that platforms do best at addressing problematic content in the places where they are based, or in the languages that the people who created them speak,” says Hickey. “And there are more people doing bad things than there are people at these companies trying to get rid of the bad things.”

Many pieces of misinformation Madung found were “synthetic content”: videos created to look as if they came from an old newscast, or that use screenshots appearing to be from legitimate news media.

“Since 2017, we have noticed a budding tendency to appropriate the identity of mainstream media brands,” says Madung. “We’re seeing widespread use of this tactic on the platform, and it seems to be doing exceptionally well.”

Madung also spoke with former TikTok content moderator Gadear Ayed to get a better understanding of the company’s moderation efforts more broadly. Although Ayed did not moderate TikToks from Kenya, she told Madung that she was often asked to moderate content in languages or contexts she was unfamiliar with, and would not have had the context to tell whether a piece of media had been manipulated.

“It’s common to find moderators being asked to moderate videos in languages and contexts different from what they understood,” Ayed told Madung. “For example, at one point I had to moderate videos that were in Hebrew, even though I did not know the language or the context. The only thing I could trust was the visual image of what I could see, but everything written I could not moderate.”

A TikTok spokesman told WIRED that the company prohibits election misinformation and the promotion of violence, and is “committed to protecting the integrity of [its] platform and has a dedicated team working to protect TikTok during the Kenyan election.” The spokesman also said the company works with fact-checking organizations, including Agence France-Presse in Kenya, and plans to roll out features to connect its “community with authoritative information about the Kenyan election in our app.”

But even if TikTok removes offending content, Hickey says it may not be enough. “One person can remix, duet, share someone else’s content,” Hickey says. This means that even if the original video is removed, other versions can live on, undetected. TikTok videos can also be downloaded and shared on other platforms, such as Facebook and Twitter, which is how Madung first encountered some of them.

Several of the videos highlighted in the Mozilla Foundation report have since been removed, but TikTok did not respond to questions about whether it removed other videos or whether the videos themselves were part of a coordinated effort.

But Madung suspects they may be. “Some of the scariest hashtags were things I would find by researching coordinated campaigns on Twitter, and then I thought, what if I searched for this on TikTok?”
