Alex Omenye
A University of Pennsylvania student, Olga Loiek, sought an online audience—but not the one she found. Shortly after launching her YouTube channel in November last year, the 21-year-old from Ukraine discovered that her image had been misappropriated and manipulated by artificial intelligence to create alter egos on Chinese social media platforms.
Her digital doppelgangers, such as “Natasha,” were depicted as Russian women fluent in Chinese, praising China for its support of Russia and selling products like Russian candies.
These fake accounts amassed hundreds of thousands of followers in China, far surpassing Loiek’s following.
“This is literally my face speaking Mandarin with the Kremlin and Moscow in the background, saying things I would never say,” Loiek told Reuters. “It was really creepy.”
Loiek’s experience is part of a growing trend on Chinese social media: seemingly Russian women who profess affection for China and support for Russia’s war effort while selling Russian products.
However, these personas do not exist—they are AI-generated using clips of real women found online, often without their consent. Experts say these videos are used to pitch products to single Chinese men.
The accounts using Loiek’s image have sold tens of thousands of dollars’ worth of products, including candies. Some posts even include disclaimers that the content may have been created using AI.
These AI avatars play on the Russia-China “no limits” partnership, declared in 2022 when Russian President Vladimir Putin visited Beijing just days before Russia invaded Ukraine.
Jim Chai, CEO of XMOV, a company that develops advanced AI technology, explained how accessible this technology has become. “To produce my 2D digital human, I just need to shoot a 30-minute video of myself, re-work the video, and adjust the lip-sync if I change the language,” said Chai, emphasizing how realistic the results can be.
Loiek’s case highlights the risks associated with AI’s potentially illegal or unethical applications. As powerful tools for creating and disseminating content become more widespread, concerns about AI contributing to misinformation, fake news, and copyright violations have intensified. The rise of generative AI systems like ChatGPT has only fueled these worries.
In response, China issued draft guidelines in January to standardize the AI industry, proposing over 50 national and industry-wide standards by 2026.
The European Union’s AI Act, which imposes strict transparency obligations on high-risk AI systems, also came into force this month, setting a potential global benchmark.
However, Xin Dai, an associate professor at Peking University Law School, noted that regulation struggles to keep pace with AI development. “We can only predict that increasingly powerful tools for creating and disseminating content will continue to emerge,” said Dai. “The volume is simply too large, not only in China but across the global internet.”