The United Kingdom's data regulator is contacting Meta after a “concerning” report revealed that outsourced workers could access sensitive footage captured by the company’s AI smart glasses.
Meta acknowledged that subcontractors may occasionally review content, including videos and images recorded by the glasses, to help improve user experience, BBC News reports.
According to an investigation by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten, some videos reviewed by a Kenya-based subcontractor included glasses-wearers using the toilet or engaging in sexual activity.
“We see everything – from living rooms to naked bodies,” one worker reportedly said.
Meta stated it takes the protection of users’ data very seriously and is continually improving its tools and measures to safeguard it.
“Ray-Ban Meta glasses help you use AI, hands free, to answer questions about the world around you,” the tech giant stated.
Meta said it may blur faces in images as part of its filtering process, but sources speaking to SvD and GP reported that this sometimes fails, leaving people’s faces visible.
While users must activate recording manually or via voice command, they might not realize that some videos and images are occasionally reviewed by humans, as outlined in Meta’s detailed privacy policies and terms of service.
According to the report, Meta shared a link to its Supplemental Platforms Terms of Service but did not specify which sections addressed human review of user content.
However, the UK’s data regulator, the Information Commissioner’s Office, said that “devices processing personal data, including smart glasses, should give users control and ensure appropriate transparency.”
“Service providers must clearly explain what data is collected and how it is used,” it said in a statement.
“The claims in this article are concerning. We will be writing to Meta to request information on how it is meeting its obligations under UK data protection law.”