
Twitter is leaning too much on machine-based content moderation. Here’s why this is problematic


Twitter has taken a new approach to content moderation that might not prove all that fruitful. While the platform has dealt with child pornography and paedophilia adequately, the way it is going about it may be a short-term solution at best.


Twitter is relying heavily on machine-based moderation and “trusted” users to police content. Elon Musk needs to realise that content moderation on any social media platform needs humans to function properly. Image Credit: AFP

Twitter’s new Vice President of Trust and Safety Ella Irwin recently revealed that Elon Musk’s Twitter is leaning heavily on automation to moderate content, doing away with certain manual reviews and favouring restrictions on distribution rather than removing certain speech outright. While this is in line with what Musk had said about free speech when he took over, the way the social media platform is going about its content moderation is a stop-gap solution at best.

Twitter is also more aggressively restricting abuse-prone hashtags and search results in areas including child exploitation, regardless of potential impacts on “benign uses” of those terms, said Irwin.

The company has faced harsh questions about its ability and willingness to moderate harmful and hateful content since Musk slashed half of Twitter’s staff and issued an ultimatum to work long hours that resulted in the loss of hundreds more employees. Content moderation was hit even harder when Musk’s team terminated more than 4,000 contract content moderators along with several other content moderation teams on Twitter’s payroll. Two sources familiar with the cuts said that more than 50 per cent of the Health engineering unit, Twitter’s term for its content moderation engineering team, was laid off.

On Friday, Musk vowed “significant reinforcement of content moderation and protection of freedom of speech” in a meeting with President Emmanuel Macron.

One approach, captured in the industry mantra “freedom of speech, not freedom of reach,” entails leaving up certain tweets that violate the company’s policies but barring them from appearing in places like the home timeline and search.

Twitter has long deployed such “visibility filtering” tools around misinformation and had already incorporated them into its official hateful conduct policy before the Musk acquisition. The approach allows for more freewheeling speech while cutting down on the potential harms associated with viral abusive content.
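The “freedom of speech, not freedom of reach” idea can be summed up in a short sketch. This is purely illustrative, using hypothetical names (`Tweet`, `visible_on_profile`, `eligible_for_amplification`) rather than anything from Twitter’s actual codebase:

```python
from dataclasses import dataclass


@dataclass
class Tweet:
    text: str
    violates_policy: bool  # assumed to be set by an upstream classifier


def visible_on_profile(tweet: Tweet) -> bool:
    # Visibility filtering leaves the tweet up on the author's profile...
    return True


def eligible_for_amplification(tweet: Tweet) -> bool:
    # ...but excludes policy-violating tweets from the home timeline,
    # search and recommendations, limiting their viral reach.
    return not tweet.violates_policy


t = Tweet("some borderline post", violates_policy=True)
print(visible_on_profile(t))          # True  -- the speech stays up
print(eligible_for_amplification(t))  # False -- the reach is cut
```

The design trade-off is exactly the one the article describes: the speech remains accessible to anyone who seeks it out, but the platform withholds the distribution that makes abusive content go viral.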

Tweets containing derogatory words for African-Americans, Asians and Jews were triple the number seen in the month before Musk took over, while tweets containing a slur for homosexuals were up 31 per cent. All in all, usage of racial slurs went up by 500 per cent the day Elon Musk took over.

Irwin says Musk was focused on using automation more, arguing that the company had in the past erred on the side of using time and labour-intensive human reviews of harmful content. On child safety, for instance, Irwin said Twitter had shifted toward automatically taking down tweets reported by trusted figures with a track record of accurately flagging harmful posts.
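The trusted-flagger mechanism Irwin describes might look something like the following sketch. The threshold value and function names are assumptions for illustration, not details Twitter has disclosed:

```python
# Hypothetical sketch of trusted-flagger auto-removal: reports from users
# with a strong accuracy track record trigger automatic takedown, while
# other reports go to a (now much smaller) human review queue.
REPUTATION_THRESHOLD = 0.9  # assumed cutoff, not a real Twitter value


def handle_report(reporter_accuracy: float) -> str:
    """Route a report based on the reporter's historical accuracy."""
    if reporter_accuracy >= REPUTATION_THRESHOLD:
        return "auto-remove"
    return "queue-for-review"


print(handle_report(0.95))  # auto-remove
print(handle_report(0.40))  # queue-for-review
```

This makes the speed advantage obvious, and also the dependency: the system only works when reporters with a good track record are actually online and flagging.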

Twitter is also restricting hashtags and search results frequently associated with abuse, like those aimed at looking up “teen” pornography. Irwin said that past concerns about the impact of such restrictions on permitted uses of those terms no longer apply.

The problem with this model is that it outsources content moderation to users, who may not always be available to flag problematic content. It also opens up an entirely different Pandora’s box over who counts as a “trusted figure.” In the case of child pornography and paedophilia this is fairly easy to determine, but things get a lot murkier with narratives tinted by political ideology.

Another reason this is problematic is that trolls on Twitter adapt, perhaps faster than Twitter’s developers can detect. If Twitter starts using a system that automatically detects certain hashtags or keywords used by predators and trolls and blocks their tweets or content from mass distribution, these actors only need to come up with a different set of characters that look like those words. Elon Musk has to realise that even though human reviewers are fallible and will not be successful 100 per cent of the time, content moderation on social media requires human beings.
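The lookalike-character evasion is easy to demonstrate. In this sketch the blocklist and sample term are invented for illustration; Unicode NFKC normalisation catches some substitutions (such as fullwidth letters) but not others (such as Cyrillic lookalikes), which is exactly why purely automated filters lag behind adaptive trolls:

```python
import unicodedata

# Illustrative blocklist -- a stand-in term, not a real moderation list.
BLOCKLIST = {"banned"}


def naive_match(text: str) -> bool:
    # Plain substring check, as a simple automated filter might do.
    return any(term in text.lower() for term in BLOCKLIST)


def normalized_match(text: str) -> bool:
    # NFKC folds many lookalike characters (e.g. fullwidth letters)
    # back to their ASCII forms, recovering some evasions.
    folded = unicodedata.normalize("NFKC", text).lower()
    return any(term in folded for term in BLOCKLIST)


fullwidth = "bａｎｎｅｄ"   # fullwidth Latin lookalikes
cyrillic = "bаnned"        # second letter is Cyrillic U+0430, not Latin 'a'

print(naive_match(fullwidth))       # False -- naive filter is bypassed
print(normalized_match(fullwidth))  # True  -- NFKC recovers this one
print(normalized_match(cyrillic))   # False -- Cyrillic survives NFKC
```

Each round of hardening invites a new round of evasion, which is why keyword automation alone keeps falling behind without human reviewers in the loop.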

