"I don't want anyone to think that I ever said these horrible things in my life. Using a Ukrainian girl for a face promoting Russia. It's crazy."
Olga Loiek has seen her face appear in various videos on Chinese social media - a result of easy-to-use generative AI tools available online.
"I could see my face and hear my voice. But it was all very creepy, because I saw myself saying things that I never said," says the 21-year-old, a student at the University of Pennsylvania.
The accounts featuring her likeness had dozens of different names like Sofia, Natasha, April, and Stacy. These "girls" were speaking in Mandarin - a language Olga had never learned. They were apparently from Russia, and talked about China-Russia friendship or advertised Russian products.
"I saw like 90% of the videos were talking about China and Russia, China-Russia friendship, that we have to be strong allies, as well as advertisements for food."
One of the biggest accounts was "Natasha imported food", with a following of more than 300,000 users. "Natasha" would say things like "Russia is the best country. It's sad that other countries are turning away from Russia, and Russian women want to come to China", before starting to promote products like Russian candies.
This personally enraged Olga, whose family is still in Ukraine.
But on a wider level, her case has drawn attention to the dangers of a technology that is developing so quickly that regulating it and protecting people has become a real challenge.
From YouTube to Xiaohongshu
Olga's Mandarin-speaking AI lookalikes began emerging in 2023 - soon after she started a YouTube channel, which she updates only occasionally.
About a month later, she started getting messages from people who claimed they saw her speak in Mandarin on Chinese social media platforms.
Intrigued, she started looking for herself, and found AI likenesses of her on Xiaohongshu - a platform like Instagram - and Bilibili, which is a video site similar to YouTube.
"There were a lot of them [accounts]. Some had things like Russian flags in the bio," said Olga, who has found about 35 accounts using her likeness so far.
After her fiancé tweeted about these accounts, HeyGen, a firm that she claims developed the tool used to create the AI likenesses, responded.
They revealed that more than 4,900 videos had been generated using her face, and said they had blocked her image from being used again.
A company spokesperson told the BBC that their system was hacked to create what they called "unauthorised content", and added that they immediately updated their security and verification protocols to prevent further abuse of their platform.
But Angela Zhang, of the University of Hong Kong, says what happened to Olga is "very common in China".
The country is "home to a vast underground economy specialising in counterfeiting, misappropriating personal data, and producing deepfakes", she said.
This is despite China being one of the first countries to attempt to regulate AI and what it can be used for. It has even modified its civil code to protect likeness rights from digital fabrication.
Statistics disclosed by the public security department in 2023 show authorities arrested 515 individuals for "AI face swap" activities. Chinese courts have also handled cases in this area.
How, then, did so many videos of Olga make it online?
One reason could be that they promoted the idea of friendship between China and Russia.
Beijing and Moscow have grown significantly closer in recent years. Chinese leader Xi Jinping and Russian President Vladimir Putin have said the friendship between the two countries has "no limits". The two are due to meet in China this week.
Chinese state media have been repeating Russian narratives justifying its invasion of Ukraine and social media has been censoring discussion of the war.
"It is unclear whether these accounts were coordinating under a collective purpose, but promoting a message that is in line with the government's propaganda definitely benefits them," said Emmie Hine, a law and technology researcher from the University of Bologna and KU Leuven.
"Even if these accounts aren't explicitly linked to the CCP [Chinese Communist Party], promoting an aligned message may make it less likely that their posts will get taken down."
But this means that ordinary people like Olga remain vulnerable and are at risk of falling foul of Chinese law, experts warn.
Kayla Blomquist, a technology and geopolitics researcher at Oxford University, warns there is a risk of individuals "being framed with artificially generated, politically sensitive content" and facing "rapid punishments enacted without due process".
She adds that Beijing's focus in relation to AI and online privacy policy has been to build out consumer rights against predatory private actors, but stresses that "citizen rights in relation to the government remain extremely weak".
Ms Hine explains that the "fundamental goal of China's AI regulations is to balance maintaining social stability with promoting innovation and economic development".
"While the regulations on the books seem strict, there's evidence of selective enforcement, particularly of the generative AI licensing rule, that may be intended to create a more innovation-friendly environment, with the tacit understanding that the law provides a basis for cracking down if necessary," she said.
'Not the last victim'
But the ramifications of Olga's case stretch far beyond China - it demonstrates the difficulty of trying to regulate an industry that seems to be evolving at break-neck speed, and where regulators are constantly playing catch-up. But that doesn't mean they're not trying.
In March, the European Parliament approved the AI Act, the world's first comprehensive framework for constraining the risks of the technology. And last October, US President Joe Biden announced an executive order requiring AI developers to share data with the government.
While regulations at the national and international levels are progressing slowly compared with the rapid pace of AI growth, we need "a clearer understanding of and stronger consensus around the most dangerous threats and how to mitigate them", says Ms Blomquist.
âHowever, disagreements within and among countries are hindering tangible action. The US and China are the key players, but building consensus and coordinating necessary joint action will be challenging,â she adds.
Meanwhile, on the individual level, there seems to be little people can do short of not posting anything online.
"The only thing to do is to not give them any material to work with: to not upload photos, videos, or audio of ourselves to public social media," Ms Hine says. "However, bad actors will always have motives to imitate others, and so even if governments crack down, I expect we'll see consistent growth amidst the regulatory whack-a-mole."
Olga is "100% sure" that she will not be the last victim of generative AI. But she is determined not to let it chase her off the internet.
She has shared her experiences on her YouTube channel, and says some Chinese online users have been helping her by commenting under the videos using her likeness and pointing out they are fake.
She adds that a lot of these videos have now been taken down.
"I wanted to share my story, I wanted to make sure that people will understand that not everything that you're seeing online is real," she says. "I love sharing my ideas with the world, and none of these fraudsters can stop me from doing that."