BEIJING, Nov. 22 (Xinhua) -- Recently, internet users noticed a Chinese actress seemingly promoting different products simultaneously across three livestreams, each showing her in a different outfit -- all of them artificial-intelligence (AI)-generated deepfakes of her likeness. The incident has made headlines in China this month.
In a video interview, actress Wen Zhengrong said she was shocked to discover such deepfakes months ago, prompting her to confront an impersonating account by asking, "If you are Wen Zhengrong, then who am I?" She was swiftly blocked.
Wen's experience illustrates how readily accessible AI tools can be misused, including to create convincing imitations of public figures for fake endorsements.
Earlier this year, an AI-generated likeness of TV host Li Zimeng appeared in a promotion for what was advertised as "deep-sea fish oil," which was later revealed to be ordinary candy. Beijing's market regulators responded with the city's first penalty for "AI false advertising."
Experts interviewed by Xinhua described these incidents as tests of how China can maintain an open and inclusive attitude toward AI while upholding firm red lines against misuse, thereby striking a balance between innovation and oversight.
ENFORCEMENT GAP
While China implemented the "Artificial Intelligence Generated Content Labeling Measures" on Sept. 1, requiring synthetic videos, voices and images to carry clear labels, the surge in AI impersonation highlights the challenges of enforcement.
"The labeling requirement was not a cure-all," said Zhao Jingwu, an associate professor in the School of Law at Beihang University in Beijing. Domestic AI service providers generally comply by embedding identifiers, he said, but many problematic videos are created using foreign tools or private generators that leave few traces.
"What the system can manage is the traceable source," Zhao said. "The challenge lies in those that cannot be traced." He cautioned that "without oversight across the entire chain, relying solely on explicit or implicit identifiers can hardly build a true barrier of trust."
The low cost of abuse -- sometimes just minutes of AI content generation -- incentivizes unscrupulous merchants to fabricate celebrity endorsements for quick profits. Violators, experts say, routinely re-edit, fragment or rotate videos to evade detection.
"Enforcement is chasing something that evolves daily," said Li Min, a senior partner at Shanghai-based Hansheng Law Office. Yet most platforms remain reactive, waiting for user complaints and algorithm matches before taking content down, the lawyer noted. "This process is fundamentally outpaced by the speed of AI-generated deception."
Moreover, for victims, gathering evidence and proving that the content is AI-generated is technically complex and time-consuming. Wen's team, according to media reports, once reported 50 impersonating accounts in a single day -- only to find that some had resurfaced quickly in new forms.
Livestreaming and short-video platforms are under pressure to do more. Companies have deployed detection algorithms and watermark-scanning tools, but the scale and sophistication of abuse continue to grow.
"Our strategy is to fight AI with AI -- the goal is not just to remove violating videos but to build technical defenses that evolve faster than methods of deception," said an employee at a major short-video platform, who requested anonymity to speak openly.
Her platform has removed more than 100,000 impersonation videos and dealt with over 1,400 accounts this year.
Still, as platforms pledge to sharpen detection tools, they are expected to play a more active role, not only responding to flagged content but also proactively identifying high-risk material, according to experts.
The platform employee said her platform is exploring stronger disclosure labels and on-screen prompts to help viewers recognize potentially synthetic content -- an increasingly necessary step in an environment where "seeing" is no longer synonymous with "believing."
BROADER CULTURAL ADJUSTMENT
Beyond law and technology, the deepfake surge also points to the need for a broader cultural adjustment. The public is encountering synthetic content faster than it can learn how the technology works or how easily it is misused. Some experts call for building "AI literacy" -- the ability to identify AI-generated content and to distinguish legitimate use from misuse -- which they say could be strengthened through education.
Ethical concerns are also mounting. Zhang Linghan, head of the institute of AI law at China University of Political Science and Law in Beijing, said ethical norms must act as a "guiding constraint" while formal rules catch up. In cases such as the "AI resurrection" of the deceased, she said, explicit consent from close relatives should be the minimum standard. Reviving someone without this consent touches on human dignity, she added, and "technical possibility cannot stand in for ethical legitimacy."
Days after Wen's story made headlines, the Cyberspace Administration of China (CAC) released a statement on Nov. 14 saying authorities had recently "severely dealt with" a batch of online accounts that used AI to mimic celebrities in promoting products in livestreams and short videos, thus misleading users.
The CAC said cyberspace authorities would maintain a "high-pressure" stance, continuing to hold platforms accountable and to "dispose of and expose" malicious marketing accounts.
As China moves to strengthen AI oversight, its regulatory framework will have to evolve -- and become flexible enough to support AI innovations while limiting abuses. As Zhang put it: "The goal of law is not to stop innovation, but to ensure innovation moves forward steadily and securely."
Analysts foresee an AI governance network where law sets direction, platforms enforce accountability, technology safeguards operations, and a digitally literate public participates in oversight. ■