“This is why I’m pissed the fuck off. We’re tired,” said Ziggi Tyler, a popular Black influencer, in a recent viral TikTok video. “Anything Black-related is inappropriate content,” he continued later in the video.

Tyler was expressing his frustration with TikTok about a discovery he made while editing his bio in the app’s Creator Marketplace, which connects popular account holders with brands that pay them to promote products or services. Tyler noticed that when he typed phrases about Black content into his Marketplace creator bio, such as “Black Lives Matter” or “Black success,” the app flagged his content as “inappropriate.” But when he typed phrases like “white supremacy” or “white success,” he received no such warning.

For Tyler and many of his followers, the incident seemed to fit within a larger pattern of how Black content is moderated on social media. They said it was evidence of what they believe is the app’s racial bias against Black people; some urged their followers to leave the app, while others tagged TikTok’s corporate account and demanded answers. Tyler’s original video about the incident has received over 1.2 million views and over 25,000 comments; his follow-up video has received nearly 1 million more views.

“I’m not going to sit here and let that happen,” Tyler, a 23-year-old recent college graduate from Chicago, told Recode. “Especially on a platform that makes all these pages saying things like, ‘We support you, it’s Black History Month in February.’”

A spokesperson for TikTok told Recode that the issue was an error in its hate speech detection systems, which the company is actively working to resolve, and that it is not indicative of racial bias. The spokesperson added that TikTok’s policies do not restrict posting about Black Lives Matter.

In this instance, TikTok told Recode that the app was mistakenly flagging phrases like “Black Lives Matter” because its hate speech detector is triggered by certain word combinations, here “Black” paired with “audience,” since the word “audience” contains “die” within it.

“Our TikTok Creator Marketplace protections, which flag phrases typically associated with hate speech, were erroneously set to flag phrases without respect to word order,” a company spokesperson said in a statement. “We recognize and apologize for how frustrating this was to experience, and our team is working quickly to fix this significant error. To be clear, Black Lives Matter does not violate our policies and currently has over 27B views on our platform.” TikTok says it has reached out to Tyler directly, and that he hasn’t responded.
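To make that failure mode concrete, here is a minimal Python sketch of a keyword filter that, like the system TikTok describes, matches blocklisted term pairs as substrings without respect to word order or word boundaries. The term pair and the matching logic are hypothetical, chosen only to illustrate the bug; this is not TikTok’s actual blocklist or code.

```python
# A minimal sketch of a word-order-insensitive, substring-based keyword
# filter that reproduces the failure mode TikTok described. The term
# sets and matching logic are hypothetical illustrations, not TikTok's
# actual blocklist or code.

# Hypothetical blocklist: a bio is flagged if every term in any one set
# appears somewhere in the text, in any order, with no word boundaries.
FLAGGED_TERM_SETS = [
    {"black", "die"},
]

def is_flagged(bio: str) -> bool:
    """Return True if any blocklisted term set fully matches as substrings."""
    text = bio.lower()
    return any(
        all(term in text for term in term_set)
        for term_set in FLAGGED_TERM_SETS
    )

# "audience" contains the substring "die", so a harmless bio that pairs
# it with "Black" trips the filter:
print(is_flagged("Black Lives Matter! Love my audience"))  # True
print(is_flagged("white supremacy"))                       # False
```

Matching raw substrings in any order is what lets an innocuous word like “audience” stand in for “die,” while a phrase like “white supremacy” passes untouched because no blocklisted pair happens to match it.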

But Tyler said he didn’t find TikTok’s explanation to Recode to be adequate, and that he felt the company should have identified an issue in its hate speech detection system sooner.

“Regardless of what the algorithm is and how it picked up, somebody had to program that algorithm,” Tyler told Recode. “And if [the problem] is the algorithm, and the marketplace has been available since [2020], why wasn’t this a conversation you had with your team, knowing there have been racial controversies?” he asked.

Tyler isn’t alone in his frustration; he’s just one of many Black creators who have been protesting TikTok recently because they say they are unrecognized and underserved. Many of these Black TikTokers are participating in what they’re calling the “#BlackTikTok Strike,” refusing to choreograph original dances to a hit song because they are angry that Black artists on the app are not properly credited for the viral dances they create first and that other creators imitate.

These issues also connect to another criticism that’s been leveled at TikTok, Instagram, YouTube, and other social media platforms over the years: that their algorithms, which recommend and filter the posts everyone sees, often have inherent racial and gender biases.

In 2019, for example, a study showed that leading AI models for detecting hate speech were 1.5 times more likely to flag tweets written by African Americans as “offensive” than tweets written by others.

Findings like those have fostered an ongoing debate about the merits and potential harms of relying on algorithms, particularly AI models that are still being developed, to automatically detect and moderate social media posts.

Major social media companies like TikTok, Google, Facebook, and Twitter — though they acknowledge that these algorithmic models can be flawed — are still making them a key part of their rapidly expanding hate speech detection systems. They say they need a less labor-intensive way to keep up with the ever-expanding volume of content on the internet.

Tyler’s TikTok video also shows the tensions surrounding these apps’ lack of transparency about how they police content. In June 2020, during Black Lives Matter protests across the US, some activists accused TikTok of censoring certain popular #BlackLivesMatter posts, which for a time the app showed as having zero views even when they actually had billions. TikTok denied this and said it was a technical glitch that affected other hashtags as well. And in late 2019, TikTok executives were reportedly discussing tamping down political discussion on the app to avoid controversy, according to Forbes.

A spokesperson for TikTok acknowledged larger frustrations about Black representation on TikTok and said that earlier this month, the company launched an official @BlackTikTok account to help foster the Black TikTok community on the platform — and that overall, its teams are committed to developing recommendation systems that reflect inclusivity and diversity.

But for Tyler, the company has a lot more work to do. “This instance is just the tip of the iceberg and underneath the water level you have all of these issues,” said Tyler.
