Meta reported on Wednesday that it had discovered “likely AI-generated” content being used deceptively on its Facebook and Instagram platforms.
This included comments praising Israel’s handling of the Gaza conflict, posted beneath content from global news organizations and U.S. lawmakers.
According to Meta’s quarterly security report, the accounts responsible for this content posed as Jewish students, African Americans, and other concerned citizens, specifically targeting audiences in the United States and Canada. Meta attributed the campaign to Tel Aviv-based political marketing firm STOIC.
While Meta has encountered AI-generated profile photos in influence operations since 2019, this report marks its first disclosure of text-based generative AI being used in such campaigns since the technology emerged in late 2022.
Researchers have expressed concerns that generative AI, capable of quickly and inexpensively creating human-like text, images, and audio, could facilitate more effective disinformation campaigns and potentially influence elections.
In a press call, Meta security executives said they had dismantled the Israeli campaign early and did not believe novel AI technologies had hindered their ability to disrupt influence networks.
They also noted they had not encountered AI-generated imagery of politicians realistic enough to be mistaken for authentic photos.
“There are several examples across these networks of how they use likely generative AI tooling to create content. Perhaps it gives them the ability to do that quicker or to do that with more volume. But it hasn’t really impacted our ability to detect them,” said Mike Dvilyanski, Meta’s head of threat investigations.
The report highlighted six covert influence operations disrupted by Meta in the first quarter of the year. Besides the STOIC network, Meta also dismantled an Iran-based network focused on the Israel-Hamas conflict, though it did not identify any use of generative AI in that campaign.
Meta, along with other tech giants, continues to grapple with the potential misuse of new AI technologies, particularly concerning elections. Researchers have found that image generators from companies like OpenAI and Microsoft have produced photos containing voting-related disinformation, despite company policies against such content.
These companies have emphasized digital labeling systems that mark AI-generated content at the point of creation, but such tools do not work on text, and researchers have questioned their effectiveness.
Meta’s defenses will be tested by the upcoming elections in the European Union in early June and in the United States in November.