Meta announced on Thursday that Instagram will begin testing a feature that blurs images containing nudity in direct messages, part of its effort to protect teenagers, deter potential scammers from contacting them, and address broader concerns about harmful content on its platforms.
The tech giant faces mounting scrutiny in the United States and Europe over allegations that its apps are addictive and fuel mental health problems among young users, and it has been rolling out additional safety measures in response.
The new protection feature for Instagram’s direct messages will utilize on-device machine learning to analyze images sent through the service for nudity. This feature will be enabled by default for users under 18, with Meta notifying adult users to encourage them to activate it.
Because the analysis happens on the device, Meta said the nudity protection will work even in end-to-end encrypted chats, where the company has no access to the images unless they are reported to it.
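For illustration only, the sketch below shows the general shape of such a client-side check: a hypothetical on-device classifier scores an incoming image and the app decides whether to blur it before display, with protection on by default for minors. Every name, the threshold, and the scoring stub are assumptions for the sake of the example; Meta has not published implementation details.

```python
from dataclasses import dataclass

# Hypothetical threshold; Meta has not disclosed how its model scores images.
NUDITY_THRESHOLD = 0.8


@dataclass
class IncomingImage:
    sender_id: str
    pixels: bytes  # decrypted only on the recipient's device


def estimate_nudity_score(pixels: bytes) -> float:
    """Placeholder for an on-device classifier.

    A real implementation would run a local machine-learning model over the
    decoded image; this stub performs no analysis and always returns 0.0.
    """
    return 0.0


def should_blur(image: IncomingImage, user_age: int, opted_in: bool) -> bool:
    """Decide on the device whether to blur an image before showing it.

    Because the check runs entirely on the device, it can apply even in
    end-to-end encrypted chats: the server never sees the image. Protection
    is on by default for users under 18 and opt-in for adults.
    """
    protection_enabled = user_age < 18 or opted_in
    if not protection_enabled:
        return False
    return estimate_nudity_score(image.pixels) >= NUDITY_THRESHOLD


if __name__ == "__main__":
    img = IncomingImage(sender_id="unknown_account", pixels=b"")
    print(should_blur(img, user_age=16, opted_in=False))
```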
Instagram direct messages are not yet end-to-end encrypted, unlike those on Messenger and WhatsApp, though Meta has said it plans to bring encryption to Instagram in the future.
In addition to the nudity protection feature, Meta is developing technology to detect accounts potentially involved in sextortion scams. The company is also testing new pop-up messages for users who may have interacted with such accounts.
In January, Meta said it would hide more sensitive content from teenage users on Facebook and Instagram, including posts related to suicide, self-harm, and eating disorders.
Meta also faces legal pressure: attorneys general from 33 U.S. states, including California and New York, have sued the company, alleging that it misled the public about the risks of its platforms. In Europe, the European Commission has asked Meta what measures it takes to protect children from illegal and harmful content.