Social Media Players Compromised Safety in Algorithm Race


Whistleblowers have alleged that social media giants knowingly allowed harmful content to circulate on their platforms. Internal research reportedly showed that outrage-driven posts generated higher user engagement.

More than a dozen insiders said the companies made trade-offs between user safety and content engagement as short-form video reshaped the social media landscape. A former Meta engineer claimed management instructed teams to allow more borderline-harmful material, including misogyny and conspiracy theories, into Instagram and Facebook users’ feeds as the company attempted to compete with TikTok’s rapid growth. The engineer said staff were told the move was linked to financial pressure, adding that the decision was made because the stock price was down.

Matt Motyl, a senior university researcher specialising in Meta’s business, said that Instagram Reels, launched in 2020 to rival TikTok, went live without adequate safeguards. Research reportedly suggested that comments on Reels showed higher levels of harmful behaviour, including bullying, harassment, and hate speech, than Instagram’s main feed. Motyl added that the company was aware of the risks tied to its recommendation systems, stating that the platform’s algorithms created a path that maximised profits at the expense of its audience’s well-being.

Separately, a member of TikTok’s trust and safety team said that moderation priorities sometimes favoured political complaints over cases involving harmful content featuring children. The employee claimed cases were handled to “maintain a strong relationship” with political figures and avoid potential regulatory action rather than to prioritise user safety.

The whistleblower also warned that the volume of moderation cases had become difficult to manage, adding that material linked to trafficking, violence, terrorism, and sexual abuse appeared to be increasing. Former TikTok machine learning engineer Ruofan Ding described the company’s recommendation system as a “black box”, noting engineers had limited visibility over how deep learning algorithms promote content.

Both companies have issued statements rejecting the claims. Meta said that any suggestion it has deliberately amplified harmful content for financial gain is wrong. TikTok labelled the allegations fabricated, stating that it invests in technology designed to prevent harmful content.