In the coming months, Meta will begin identifying and labeling images produced by other companies' artificial intelligence services.
It will do so using a set of invisible markers embedded in the image files, the company's top policy executive said on Tuesday. In a blog post, Nick Clegg, Meta's president of global affairs, wrote that the company will apply labels to any content carrying the markers that is posted to Facebook, Instagram, or Threads, in an effort to tell users that the images, which often resemble real photographs, are in fact digital creations.
Meta already labels any content generated with its own AI tools. Once the new system is in place, it will apply the same labels to images created on services run by OpenAI, Microsoft, Adobe, Midjourney, Shutterstock, and Alphabet's Google.
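Clegg's post, as summarized here, does not spell out the marker format. As a rough illustration only, one open convention for flagging such content is the IPTC "DigitalSourceType" metadata value "trainedAlgorithmicMedia," which tools embed as plain text inside an image's XMP block. The sketch below assumes that convention; it is not Meta's actual detection pipeline, and the file name is hypothetical.

```python
# A minimal sketch, not Meta's announced system: detect one kind of
# embedded provenance marker by scanning a file's raw bytes for the
# IPTC "DigitalSourceType" term for AI-generated media. XMP metadata
# is stored as plain text inside JPEG and PNG files, so a byte search
# suffices for this marker. Pixel-level invisible watermarks, which
# such systems may also use, cannot be found this way.
AI_MARKER = b"trainedAlgorithmicMedia"  # IPTC term for generative-AI media

def has_ai_metadata(path: str) -> bool:
    """Return True if the file carries the assumed AI-provenance tag."""
    with open(path, "rb") as f:
        return AI_MARKER in f.read()

if __name__ == "__main__":
    print(has_ai_metadata("example.jpg"))  # hypothetical file name
```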
The announcement offers an early glimpse of an emerging set of standards that technology companies are developing to mitigate the potential harms of generative AI, which can produce fake but realistic-looking content in response to simple prompts.
The approach builds on a template that some of the same companies developed a decade ago to coordinate the removal of content banned across multiple platforms, such as images depicting mass violence and child exploitation.