Whistleblowers have alleged that Meta and TikTok prioritized engagement over the safety of their users, amplifying harmful content in order to protect their business interests.
In a BBC documentary titled Inside the Rage Machine, multiple former employees described the pressures that led both companies to compromise on safety, including on issues of violence, sexual harassment, terrorism, cyberbullying, and extremist content.
A former Meta engineer, referred to as Tim, said that senior management directed teams to include more 'borderline' content in user feeds in response to competition from TikTok.
"They sort of told us that it’s because the stock price is down," Tim recounted.
He noted that the change marked a significant shift from earlier efforts to cut harmful but legal content, such as misogyny and conspiracy theories, as Meta sought to reclaim ground in the competitive social media landscape.
Tim suggested that the directive to ease restrictions originated from a senior vice-president at Meta who he believed reported directly to CEO Mark Zuckerberg.
"You’re losing to TikTok and therefore your stock price must suffer. People started becoming paranoid and reactive and they were like, let’s just do whatever we can to catch up. Where can we get like 2%, 3% revenue for the next quarter?" Tim shared.
These claims were backed by Matt Motyl, a former senior researcher at Meta, who stated that Instagram Reels was launched in 2020 without adequate safety measures, despite internal evidence indicating the product had elevated risks.
Motyl explained that between 2019 and 2023, he conducted large-scale experiments involving the platform's vast user base, many of whom were unaware that they were part of these tests.
"Meta’s products reach more than three billion users, and the more time users spend on the platform, the more ads are sold, generating more revenue. However, it is crucial that they get it right, as missteps lead to severe consequences," he asserted.
Documents reviewed by the BBC revealed that comments on Reels had significantly higher occurrences of harmful content compared to the rest of Instagram, encompassing bullying, harassment, hate speech, and calls for violence.
Motyl argued there was an ongoing conflict between protecting users from harmful content and the pursuit of engagement, with safety teams often overruled by product teams primarily focused on growth.
He also described a 'power imbalance' within the company: user safety features required approval from Reels product teams before they could be introduced.
"The Reels staff had incentives to not let those products launch because toxic content often generates more engagement than safer alternatives," Motyl noted.
Brandon Silverman, another former Meta employee, echoed these concerns, asserting that company leadership, particularly Zuckerberg, was intensely focused on competitive pressures.
"When potential competition emerges, no amount of financial investment is too great," Silverman remarked, noting the struggle safety teams faced in securing approval for even minor staffing increases compared to the resources allocated to Reels.
In addition, internal documents showed that Meta understood its algorithms often rewarded contentious and divisive content with greater engagement, fostering a marketplace that prioritized profit over user welfare. One study concluded that creators were offered a pathway to maximize profits that ultimately compromised the well-being of their audience.
Silverman said Meta had initially been introspective about the repercussions of toxic content, but that this mindset had since shifted.
"Nobody’s saying you’re responsible for all polarization. We’re just saying you contribute to it and possibly in ways that can be mitigated," he said.
At TikTok, a whistleblower identified as Nick raised similar concerns, stating that corporate decisions were frequently influenced by the need to maintain favorable political relationships rather than prioritizing user safety.
Nick disclosed that internal dashboards showed his trust and safety team had to prioritize political complaints over reports concerning the safety of children and teenagers.
"If you’re feeling guilty on a daily basis because of what you’re instructed to do, at some point, you can decide, should I say something?" he questioned.
Due to the high volume of cases that moderators faced, especially concerning minors, Nick argued that staff reductions and structural changes had hampered the company's ability to ensure user safety.
"Material linked to terrorism, sexual violence, physical violence, and abuse appears to be on the rise," he warned.
He cited instances in which complaints about a teenager facing cyberbullying were deprioritized in favour of minor complaints involving political figures. When his team asked management to prioritize cases involving minors over political complaints, he said, it consistently refused.
According to Nick, this policy reflects a stronger emphasis on maintaining relationships with political entities than ensuring the safety of children or the enforcement of regulations against harmful content.
Furthermore, Ruofan Ding, a former machine-learning engineer at TikTok, described the platform's recommendation algorithm as difficult to understand and control, explaining that engineers viewed content primarily as data points.
"For us, all the content is just an ID, a different number," Ding explained, stating reliance on safety units to filter out harmful materials.
However, as TikTok frequently updated its system, Ding observed an increase in borderline and problematic content being presented to users.
Teenagers have reported that the tools provided to indicate disinterest in certain content are ineffective, with many still receiving disturbing recommendations.
One 19-year-old, Calum, shared that from the age of 14, he was influenced by algorithmic content that was both racist and misogynistic, claiming, "The videos motivated me, but not in a positive way. They instilled a lot of anger in me."
UK counter-terrorism police have noted a rise in the 'normalization' of extremist content on social media platforms in recent months.
In reaction to these allegations, Meta has denied claims of knowingly promoting harmful content for profit. A representative stated, "Any assertion that we intentionally amplify harmful content for financial gain is incorrect."
The company emphasized its commitment to strict policies designed to safeguard users on its platforms, highlighting significant investments in safety and security over the past decade.
Additionally, Meta pointed out that improvements have been made to enhance teen safety online.
TikTok also refuted the allegations, labeling them as false, and insisted that claims of prioritizing political content over the safety of young users misrepresent the functionality of their moderation systems.
The company affirmed that specialized workflows do not lead to neglecting child safety cases, which are managed by dedicated teams utilizing parallel review processes.
A TikTok spokesperson remarked that the platform enables millions to explore new interests and communities while sustaining a lively creator economy. They assured that teenage accounts benefit from over 50 pre-set safety features designed to protect users from harmful experiences.
