Today, I’d like to recommend an article by Professor Zhang Linghan (张凌寒) of the China University of Political Science and Law analyzing China’s recently released rules on labeling AI-generated content. Linghan is a close friend of mine and one of the leading scholars in China’s AI governance field; she was named to the TIME100 AI 2024 list. As an expert deeply engaged in cyberspace law and algorithm governance, she has actively contributed to China’s AI regulatory consultations while also demonstrating a global perspective that is rare among Chinese scholars.
She serves as an expert on the UN High-Level Advisory Body on AI, actively participating in global AI governance discussions and effectively conveying China’s governance principles and practices. The Chinese AI Law (Scholars’ Proposal), which she spearheaded, has become a key reference for understanding China’s AI governance trajectory and legislative philosophy.
I also want to highlight an important initiative recently launched under Professor Zhang’s leadership: the AI Governance Compass (https://aigovernancecompass.com/sy). AI Governance Compass is a leading international platform dedicated to sharing insights into China’s AI governance. Its mission is to create a comprehensive and authoritative resource that provides global visitors with in-depth information on China’s AI governance practices, policies, regulations, and evolving trends. With bilingual content available in both Chinese and English, AI Governance Compass is designed to be accessible to a diverse audience, from experts to the general public.
For those looking to gain a deeper understanding of China’s AI governance practices, this is a platform you definitely shouldn’t miss.
A full translation of Linghan’s article is available below:
Leading international practices in labeling AI-generated content to create a clean and healthy cyberspace
Since 2022, generative AI large models in China and abroad have iterated and developed rapidly, transforming traditional paradigms of content production and information dissemination. At the same time, the potential risks posed by generative AI – such as deepfakes and false information – have entered the regulatory spotlight of the international community, prompting countries around the world to take measures in response.
The "Measures for Labeling AI-Generated Synthetic Content" (hereinafter referred to as the "Labeling Measures") further clarify the specific requirements and responsibilities for fulfilling labeling obligations throughout the entire lifecycle of AI-generated synthetic content, thereby improving China’s governance framework for generative AI.
I. The Institutional Value of Labeling AI-Generated Content Has Become an International Consensus
First, the labeling system for AI-generated content has a unique institutional value. Labeling can effectively distinguish AI-generated synthetic information and prevent the spread and misuse of false information; it helps users quickly understand the attributes or parameters of generative AI products or services; and it assists regulatory authorities in evaluating and tracing AI-generated synthetic content, thereby promoting the legal and compliant development of such content.
Second, the unique institutional value of a labeling system has already become an international consensus. Recently, countries and international organizations around the world have taken measures to promote the establishment of a labeling system for generative AI. In March 2024, the UN General Assembly adopted the resolution "Seizing the Opportunities of Safe, Secure and Trustworthy Artificial Intelligence Systems for Sustainable Development," which encourages the development and deployment of effective, accessible, and adaptable mechanisms for content authentication and source identification with international interoperability. In April 2024, the US National Institute of Standards and Technology (NIST) released "Reducing Risks Posed by Synthetic Content," which lists methods for detecting, verifying, and labeling synthetic content, including digital watermarking and metadata recording. In May 2024, Singapore issued the "Model AI Governance Framework for Generative AI," proposing two technical solutions for labeling – digital watermarking and cryptographic provenance. On August 1, 2024, the EU's Artificial Intelligence Act entered into force, stipulating that limited-risk AI systems generating synthetic audio, image, video, or text content must fulfill transparency and clear labeling obligations. In March 2025, the UK's Artificial Intelligence (Regulation) Bill passed its first reading; it would require anyone offering AI-related products or services to provide customers with "clear and unambiguous health warnings and labels."
In summary, through both international organizations and national policies, the institutional value of labeling AI-generated content has been recognized and corresponding standards have been proposed. At the level of values, the aim is to promote authenticity in order to curb false information and deepfakes; at the technical level, most jurisdictions rely on watermarking, content labels, and similar technologies for regulation.
II. Generative AI Is Transforming the Paradigm of Content Production and Information Dissemination
First, generative AI is revolutionizing production methods. Unlike the traditional models of professionally produced content and user-generated content, generative AI can rapidly accumulate and update content through extensive dialogue and feedback with users via autoregressive generation models, and can produce realistic images, audio, and video content via generative adversarial networks.
Second, generative AI is empowering industry development. The 2025 Government Work Report emphasized continuing to promote the "AI Plus" initiative, which seeks to better combine digital technologies with China's manufacturing and market strengths and to support the wide application of large models. Generative AI has already injected new quality productive forces into various industries.
In the field of scientific research, generative AI can quickly review and summarize reference materials, propose research ideas, conduct statistical analysis, and even generate academic papers; a survey of 1,600 researchers found that 25% of them had used AI-assisted tools to write papers. In the manufacturing sector, the World Economic Forum’s “Lighthouse Factory” awards in October 2024 showed that applying generative AI increased factory labor productivity by an average of 50%. In the audiovisual field, online audio platforms are accelerating the integration of online audio-visual services into in-car entertainment systems, smart homes, wearables, and other terminals through collaborations with automobile manufacturers on connected vehicles and with home appliance companies on smart home IoT solutions.
Third, generative AI also brings potential risks. While it drives technological innovation across industries, it also introduces the risks of deepfakes and false information. In January 2024, a multinational company based in Hong Kong fell victim to a “face-swapping” scam: fraudsters used publicly available media materials and deepfake technology to synthesize the images and voices of the company’s executives and stage a fake video conference with multiple participants, defrauding the company of up to HK$200 million. In one lawsuit filed by a passenger against an airline, the plaintiff’s lawyer submitted a brief citing six cases generated by AI; the judge found that those cases, along with their outcomes and citations, were entirely fabricated. Not only were the factual details incorrect, even the cited authorities were invented.
Fourth, the risks associated with generative AI urgently need to be addressed through regulation. Every technological innovation is a double-edged sword. While it is crucial to harness the positive functions of generative AI, it is equally important to acknowledge its potential risks and take appropriate regulatory and preventive measures. The "Provisions on the Administration of Deep Synthesis of Internet Information Services," issued by the Cyberspace Administration of China in 2022, were the first departmental regulation worldwide to explicitly impose labeling obligations for AI-generated synthetic content, requiring deep synthesis service providers to fulfill labeling obligations. The 2023 "Interim Measures for the Administration of Generative AI Services" carry this requirement forward.
III. The Labeling Measures Further Refine and Advance the Management of AI-Generated Content
The Labeling Measures set forth requirements for labeling methods for AI-generated content. They call for explicit labels to clearly indicate “which content is generated,” while implicit labels indicate “who generated it” and “by whom it is disseminated.” This clarifies the compliance responsibilities of various actors (referred to as "service providers") in the full lifecycle of AI-generated synthetic content, thereby reducing AI-related security risks and promoting the healthy development of the industry. This represents a further refinement and standardization of the labeling system.
First, during the production stage of AI-generated synthetic content, service providers should add labels to the content. Service providers are required to add explicit labels to content generated in typical application scenarios – including text, images, audio, video, and virtual environments – and also add implicit labels in the file metadata to clarify content attributes and service provider information.
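To make the implicit-label requirement concrete, here is a minimal sketch of how a service provider might embed content attributes and provider information into a generated image's file metadata. The "AIGC" field name and JSON schema are my illustrative assumptions, not formats mandated by the Labeling Measures; the sketch uses the Pillow library's PNG text chunks as one possible metadata carrier.

```python
import json

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def add_implicit_label(src_path: str, dst_path: str, provider: str, content_id: str) -> None:
    """Embed an implicit AIGC label into a PNG file's metadata (illustrative only)."""
    label = {
        "generated_by_ai": True,       # content attribute: AI-generated
        "service_provider": provider,  # "who generated it"
        "content_id": content_id,      # identifier to support tracing
    }
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("AIGC", json.dumps(label))  # hypothetical key name
    image.save(dst_path, pnginfo=metadata)

# Example: label an image produced by a hypothetical service.
add_implicit_label("output.png", "labeled.png", "ExampleAIService", "c-20250314-0001")
```

In practice, robust implicit labels would more likely rely on standardized provenance metadata or watermarks that survive re-encoding; plain metadata fields are easy to strip, which is one reason the Measures also ask dissemination platforms to verify labels.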
Second, during the dissemination stage, different actors should verify the compliance of the labels. When providing features such as downloading, copying, or exporting AI-generated content, service providers must ensure that the files contain the required explicit labels. Online content dissemination platforms should check whether the file metadata includes the implicit labels, and take appropriate measures to add prominent visual cues around the published content to clearly inform users. Internet application distribution platforms should verify the relevant labeling materials during app review or when the app goes live.
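A companion sketch of the dissemination-stage check just described: before publishing an upload, a platform could read the file metadata and, if the hypothetical "AIGC" field from the previous sketch is present, attach a prominent visual cue to the post. Again, the field name and schema are illustrative assumptions, not the platforms' actual verification pipelines.

```python
import json
from typing import Optional

from PIL import Image

def read_implicit_label(path: str) -> Optional[dict]:
    """Return the parsed AIGC label from a PNG's text chunks, or None if absent."""
    image = Image.open(path)
    raw = image.text.get("AIGC")  # Pillow exposes PNG text chunks via .text
    return json.loads(raw) if raw else None

label = read_implicit_label("labeled.png")
if label:
    # The platform would add a visible notice around the published content.
    print(f"AI-generated content (provider: {label['service_provider']}); display a prominent cue.")
else:
    # No implicit label found; prompt the uploader to declare per platform rules.
    print("No implicit label detected; ask the user to declare whether the content is AI-generated.")
```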
Third, during the usage stage of AI-generated content, users also have labeling obligations. Service providers must clearly explain the methods, styles, and normative requirements for labeling AI-generated content in the user service agreements and prompt users to read and understand the related labeling management requirements. When users upload AI-generated content to online dissemination platforms, they should proactively declare and use the platform’s labeling functions.
At present, the labeling system for AI-generated synthetic content primarily focuses on determining whether the content is machine-generated. As labeling technologies advance, the labeling system will gradually shift from a mere formal assessment to a quality evaluation of “sufficient reliability,” further promoting the healthy development of the industry.