Meta apologized on Thursday, saying it had fixed an “error” that caused some Instagram users to see excessive amounts of violent and graphic content in their Reels recommendations.
“We have fixed an error that caused some users to see content in their Instagram Reels feed that should not have been recommended. We apologize for the mistake,” a Meta spokesperson said in a statement shared with CNBC.
The statement follows complaints from Instagram users on social media about a surge in violent and “not safe for work” content recommendations.
Some users reported seeing such content even with Instagram’s “Sensitive Content Control” set to its highest moderation level.
According to Meta’s policy, the company aims to protect users from disturbing imagery and removes particularly violent or graphic content.
Meta prohibits content such as “videos showing dismemberment, exposed innards, or charred bodies,” along with “sadistic remarks about images depicting human or animal suffering.”
However, Meta said it permits some graphic content when it is shared to raise awareness of or condemn issues such as human rights abuses, armed conflicts, or terrorism, though often with restrictions such as warning labels.
According to Meta’s website, the company uses internal technology and a team of more than 15,000 reviewers to identify disturbing imagery. That technology, which includes artificial intelligence and machine learning tools, helps prioritize posts and removes “the vast majority of violating content” before users report it.