Recently, the Cyberspace Administration of China, the Ministry of Industry and Information Technology, the Ministry of Public Security, and the National Radio and Television Administration jointly released a new rule requiring tech companies to label AI-generated content. The rule, titled the "Labeling Measures for Artificial Intelligence-Generated Contents," will take effect on September 1, 2025.
An official from CAC stated that the Measures aim to “promote the healthy development of AI, regulate the labeling of AI-generated and synthesized content, protect the legitimate rights and interests of citizens, legal entities, and other organizations, and safeguard public interests”.
Alongside the Labeling Measures, the mandatory national standard "Cybersecurity Technology – Method for Labeling AI-Generated Synthetic Content" (hereinafter referred to as the "Labeling Standard") has been officially approved and issued by the State Administration for Market Regulation and the Standardization Administration of China, and will be implemented simultaneously with the Labeling Measures.
The coordinated release of the "Labeling Measures" and "Labeling Standard" has resolved the longstanding issue of misalignment between management regulations and technical standards in past governance efforts. It has created a governance closed-loop in which institutional directives drive technology implementation, while technological advancements, in turn, reinforce institutional requirements. This marks a shift in China's AI security governance from "fragmented management" to "systematic coordination," introducing a completely new perspective on AI safety governance.
According to an interpretive article by Zhang Zhen, a Senior Engineer at the National Computer Network Emergency Response Technical Coordination Center, the "Labeling Measures" contribute a Chinese solution to international AI governance. They demonstrate China’s responsibility and commitment in global AI development, reflect the nation’s active philosophy of "people-oriented and benevolent AI," and pave a safe, trustworthy, and sustainable path for building AI technology that is auditable, supervised, and traceable.
In another interpretive article, Jin Bo, Deputy Director of the Third Research Institute of the Ministry of Public Security, emphasized that labeling AI-generated synthetic content has become an international norm. Whether it is the EU’s already implemented Artificial Intelligence Act, Australia’s officially released "Security and Responsible AI Consultation: Australia’s Interim Response," or proposals under development in the United States—such as the "Integrity Act for Edited and Deepfake Content Source Protection" and the "Digital Content Traceability Labeling Act"—or the UK's "Artificial Intelligence (Regulation) Act," all require some form of watermarking and labeling obligation. Although different countries have varying specific requirements for labeling technologies—such as digital watermarks, digital fingerprints, and encrypted metadata—and the intensity of labeling obligations imposed on platforms may differ, there is a consistent philosophy across the board: shifting from post-hoc content review to embedding risk controls at the generation stage and enhancing the technical capacity for AI governance.
Zhang Linghan, professor at the Institute of Data Law and Governance of China University of Political Science and Law and a member of the Chinese side of the United Nations High-Level Advisory Group on Artificial Intelligence, stated that the Labeling Measures set forth specific provisions for both implicit and explicit labeling along the entire chain of production, dissemination, and use of AI-generated synthetic content. Their purpose is to eliminate the spread of false and harmful information as much as possible without affecting the user experience and to create conditions for further tracing of the content.
The dissemination of AI-generated rumors has repeatedly attracted public attention. In March 2025, the news that “a top-tier male celebrity in Macau gambled and lost between 1 and 2 billion RMB” spread widely online, sparking numerous speculations and heated discussions, with many netizens pointing the finger at Jay Chou. In response, on March 11, Jay Chou’s management company, “JVR Music,” issued a refutation statement, clearly stating that the rumor was unrelated to Jay Chou and urging the public not to spread misinformation.
On March 14, the Cybersecurity Bureau of the Ministry of Public Security released its investigation results. It found that at 10:00 a.m. on March 10, 2025, a netizen named Xu had used the AI smart-generation feature of an app called “Mou Shu” to create a rumor titled “Top Celebrity Exposed for Losing All His Wealth in Overseas Gambling, Triggering a Media Tsunami.” Xu then disseminated the information through various online platforms, causing the rumor to spread rapidly and widely and severely disrupting public order. As a result, Xu was given eight days of administrative detention.
In January 2025, during the earthquake in Shigatse, Tibet, an image of a child trapped under rubble was widely circulated. However, the image was actually AI-generated rather than a real disaster scene.
During the National Day holiday in 2024, a large number of AI-generated “voice replacement” videos featuring Xiaomi CEO Lei Jun were widely disseminated. The content involved complaints about traffic jams, holiday adjustments, gaming, and even included vulgar insults. These videos, due to their highly realistic simulation of Lei Jun’s voice, attracted widespread attention, with over 120 million views.
During the 2025 National People’s Congress sessions, Lei Jun stated that the abuse of AI face-swapping and voice-imitation technologies had become a major area of illegal infringement, and that he himself had been “criticized for eight days” and had to defend his rights.
The Labeling Standard categorizes labels into explicit and implicit labels.
For explicit labeling, the Labeling Measures require that service providers add a clear, visible label to AI-generated synthetic content in text, audio, images, videos, virtual scenes, etc. When offering features such as downloading, copying, or exporting such content, the file must include the required explicit label.
Specifically, for text, the service provider should add a textual or symbolic prompt at the beginning, end, or an appropriate place in the middle; for audio, a voice or audio cue should be added at the beginning, end, or an appropriate place in the middle; for images, a prominent label should be added at an appropriate location; for video, a prominent label should be added on the opening screen, along the edges of the playback area, or at an appropriate point in the middle; and for virtual scenes, a prominent label should be added either on the opening screen or at an appropriate place during continuous service.
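To make the explicit-label requirement concrete, here is a minimal sketch, assuming the Pillow library is installed, of stamping a visible notice onto a generated image. The notice text, position, and styling are illustrative assumptions, not the specific styles prescribed by the Labeling Standard.

```python
# Illustrative sketch: overlay a visible "AI-generated" notice on an image.
# The notice wording and placement are hypothetical examples only.
from PIL import Image, ImageDraw

def add_explicit_image_label(in_path: str, out_path: str,
                             notice: str = "AI-generated") -> None:
    """Save a copy of the image with a visible notice near the bottom-left corner."""
    img = Image.open(in_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    draw.text((10, img.height - 20), notice, fill=(255, 255, 255))  # default font
    img.save(out_path)

# Hypothetical usage:
# add_explicit_image_label("generated.png", "generated_labeled.png")
```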
In addition, AI service providers are required to embed implicit labels in the file metadata of generated synthetic content, including information such as content attributes, the service provider’s name or code, content identification numbers, and other production element information. The use of digital watermarks and similar implicit labeling forms is encouraged.
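As an illustration of how such metadata-based implicit labels might look in practice, the sketch below, assuming PNG files and the Pillow library, writes and reads back a small JSON record in a PNG text chunk. The key "AIGC-Label" and the field names and values are hypothetical; the mandatory Labeling Standard defines the actual metadata structure.

```python
# Illustrative sketch: embed and read back an implicit label in PNG metadata.
# Field names and values below are hypothetical, not the official format.
import json
from typing import Optional
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def add_implicit_metadata_label(in_path: str, out_path: str) -> None:
    label = {
        "content_attribute": "AI-generated",   # content attribute information
        "service_provider": "ExampleAI",       # provider name or code (hypothetical)
        "content_id": "20250901-000001",       # content number (hypothetical)
    }
    img = Image.open(in_path)
    meta = PngInfo()
    meta.add_text("AIGC-Label", json.dumps(label, ensure_ascii=False))
    img.save(out_path, pnginfo=meta)

def read_implicit_metadata_label(path: str) -> Optional[dict]:
    text_chunks = Image.open(path).text  # PNG text chunks parsed by Pillow
    raw = text_chunks.get("AIGC-Label")
    return json.loads(raw) if raw else None
```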
The Labeling Measures stipulate that user service agreements must clearly explain the methods, styles, and other specifications for labeling AI-generated synthetic content, and must prompt users to carefully read and understand the relevant labeling management requirements.
Zhang Linghan revealed that, during the formulation process of the Labeling Standard, considerable attention was paid to how to set the obligations of dissemination platforms for content uploaded by producers that requires labeling but lacks proper labels.
Article 6 of the Labeling Measures stipulates that dissemination platforms shall verify whether the file metadata of uploaded content contains an implicit label. Where the metadata explicitly indicates that the content is AI-generated; where the metadata contains no implicit label but the user declares the content to be AI-generated; or where the metadata contains no implicit label and the user makes no declaration, yet the platform detects an explicit label or other traces of AI generation, the platform shall add a prominent label around the published content in an appropriate manner, informing the public that the content is, may be, or is suspected to be AI-generated.
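These three situations amount to a simple decision flow for dissemination platforms. Below is a minimal sketch of that logic, assuming the metadata parsing and AI-trace detection are handled elsewhere; the function and field names are hypothetical.

```python
# Illustrative sketch of the three-branch check for dissemination platforms.
from typing import Optional

def classify_uploaded_content(implicit_label: Optional[dict],
                              user_declared_ai: bool,
                              detected_ai_traces: bool) -> Optional[str]:
    """Return the public notice a platform would attach to the content, if any."""
    if implicit_label and implicit_label.get("content_attribute") == "AI-generated":
        return "This content is AI-generated."              # metadata confirms it
    if user_declared_ai:
        return "This content may be AI-generated."          # user declaration only
    if detected_ai_traces:
        return "This content is suspected to be AI-generated."  # platform detection
    return None  # none of the three labeling situations applies
```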
The Labeling Measures also mandate that no organization or individual shall maliciously delete, tamper with, forge, or conceal the labels of AI-generated synthetic content, provide tools or services to others for carrying out such malicious acts, or use improper labeling methods to harm the legitimate rights and interests of others.
An official from the Cyberspace Administration of China stated during a press briefing that the Labeling Measures focus on resolving issues such as “what content is generated,” “who generated it,” and “where it is generated from,” and aim to promote full-chain security management from generation to dissemination, with the goal of building a trustworthy artificial intelligence technology system.
Considering that enterprises need time to fully understand the relevant regulations and standards, and to develop their capabilities and functions accordingly, a transitional period of approximately six months has been set for the implementation of the Labeling Measures and the accompanying mandatory national standard, with both coming into effect on September 1, 2025.
Zhang Linghan indicated that the Labeling Standard is designed to clarify the specific methods for labeling AI-generated synthetic content, providing further continuity with previously issued regulations such as the “Administrative Measures for Algorithmic Recommendations of Internet Information Services,” the “Administrative Measures for Deep Synthesis of Internet Information Services,” and the “Interim Measures for the Administration of Generative Artificial Intelligence Services.”
Dr. Wu Shenkuo, Ph.D. supervisor at Beijing Normal University Law School and deputy director of the Research Center of the China Internet Association, stated that the Labeling Standard mandates a compulsory labeling obligation for AI-generated synthetic content. Such mandatory labeling can help audiences quickly identify the authenticity of information and enhance their ability to discern truth from falsehood. At the same time, the Labeling Standard assigns responsibilities along the entire chain—from developers to users to platforms—thus establishing a closed-loop accountability system for the labeling framework. In addition, a dynamic regulatory mechanism has been constructed, particularly requiring the filing of algorithm models and generation rules, thereby establishing a traceability system. With the introduction of a national monitoring platform, dynamic technical safeguards can be implemented.
Wu Shenkuo believes that on one hand, the Labeling Standard will help curb the generation, dissemination, and spread of AI-generated false information from the source; on the other hand, by designing a mechanism where content is verifiable, it will enhance the credibility and authenticity of information in cyberspace.
An anonymous lawyer noted that, since technologies such as metadata embedding and watermarks are still under development, factors like their effectiveness, interoperability, and tamper-resistance—as well as whether users, content distribution platforms, and regulators have reliable tools to identify synthetic content—may be key factors influencing the implementation of the Labeling Measures and the allocation of responsibilities among parties.
Circular on Issuing the "Measures for Labeling Artificial Intelligence-Generated Contents"
关于印发《人工智能生成合成内容标识办法》的通知
Cyberspace Administration of China (CAC) [2025] No. 2
国信办 [2025]第2号
To the cyberspace administrations, communications administrations, public security departments (bureaus), and radio and television bureaus of all provinces, autonomous regions, and municipalities directly under the central government, and to the cyberspace administration, bureau of industry and information technology, public security bureau, and bureau of culture, sports, radio, television and tourism of the Xinjiang Production and Construction Corps:
各省、自治区、直辖市互联网信息办公室、通信管理局、公安厅(局)、广播电视局,新疆生产建设兵团互联网信息办公室、工业和信息化局、公安局、文化体育广电和旅游局:
To promote the healthy development of AI, regulate the labeling of AI-generated and synthesized content, protect the legitimate rights and interests of citizens, legal entities, and other organizations, and safeguard public interests, the Cyberspace Administration of China, the Ministry of Industry and Information Technology, the Ministry of Public Security, and the National Radio and Television Administration have formulated the "Measures for Labeling Artificial Intelligence-Generated Contents," which are hereby issued to you for conscientious implementation.
为了促进人工智能健康发展,规范人工智能生成合成内容标识,保护公民、法人和其他组织合法权益,维护社会公共利益,国家互联网信息办公室、工业和信息化部、公安部、国家广播电视总局制定了《人工智能生成合成内容标识办法》,现印发给你们,请认真遵照执行。
Cyberspace Administration of China
Ministry of Industry and Information Technology
Ministry of Public Security
National Radio and Television Administration
March 7, 2025
国家互联网信息办公室、工业和信息化部、公安部、国家广播电视总局,2025年3月7日
Measures for Labeling Artificial Intelligence-Generated Contents
人工智能生成合成内容标识办法
Article 1 These Measures are formulated to promote the healthy development of AI, regulate the labeling of AI-generated and synthesized content, protect the legitimate rights and interests of citizens, legal persons, and other organizations, and safeguard public interests, in accordance with the Cybersecurity Law of the People’s Republic of China, the Provisions on the Administration of Algorithm-generated Recommendations for Internet Information Services, the Provisions on the Administration of Deep Synthesis of Internet-Based Information Services, the Interim Measures for the Management of Generative AI Services, and other laws, administrative regulations, and departmental rules.
第一条 为了促进人工智能健康发展,规范人工智能生成合成内容标识,保护公民、法人和其他组织合法权益,维护社会公共利益,根据《中华人民共和国网络安全法》、《互联网信息服务算法推荐管理规定》、《互联网信息服务深度合成管理规定》、《生成式人工智能服务管理暂行办法》等法律、行政法规和部门规章,制定本办法。
Article 2 These Measures apply to network information service providers (hereinafter referred to as "service providers") that fall under the circumstances specified in the Provisions on the Administration of Algorithm-generated Recommendations for Internet Information Services, the Provisions on the Administration of Deep Synthesis of Internet-Based Information Services, and the Interim Measures for the Management of Generative AI Services and that conduct labeling activities for AI-generated and synthesized content.
第二条 符合《互联网信息服务算法推荐管理规定》、《互联网信息服务深度合成管理规定》、《生成式人工智能服务管理暂行办法》规定情形的网络信息服务提供者(以下简称“服务提供者”)开展人工智能生成合成内容标识活动,适用本办法。
Article 3 AI-generated and synthesized content refers to text, images, audio, video, virtual scenes, and other information generated or synthesized using AI technology. Labels for AI-generated and synthesized content include explicit labels and implicit labels. An explicit label is a label added to generated or synthesized content or to an interactive scene interface, presented in text, sound, graphics, or other forms that users can clearly perceive. An implicit label is a label added to the file data of generated or synthesized content through technical measures, which is not easily perceived by users.
第三条 人工智能生成合成内容是指利用人工智能技术生成、合成的文本、图片、音频、视频、虚拟场景等信息。人工智能生成合成内容标识包括显式标识和隐式标识。隐式标识是指采取技术措施在生成合成内容文件数据中添加的,不易被用户明显感知到的标识。
Article 4 Where the generative synthesis services provided by service providers fall under the circumstances specified in Paragraph 1 of Article 17 of the Provisions on the Administration of Deep Synthesis of Internet-Based Information Services, explicit labels shall be added to the generated synthetic content in accordance with the following requirements:
(a) For text: Add textual prompts or common symbolic indicators at the beginning, end, or appropriate middle positions of the text, or place prominent labels on interactive scenario interfaces or adjacent to the text;
(b) For audio: Add voice prompts or audio rhythm cues at the beginning, end, or appropriate middle positions of the audio, or include prominent labels on interactive scenario interfaces;
(c) For images: Add prominent labels at appropriate positions within the image;
(d) For videos: Add prominent labels at appropriate positions on the opening frame of the video and around the video playback area. Labels may also be added at appropriate positions at the end or in the middle of the video;
(e) For virtual scenarios: Add prominent labels at appropriate positions in the initial scene. Labels may also be added at appropriate positions during the ongoing service of the virtual scenario;
(f) For other generative synthesis scenarios: Prominent labels shall be added based on their application-specific characteristics.
When service providers offer functions such as downloading, copying, or exporting generative synthetic content, they shall ensure that the files contain explicit labels that meet the requirements.
第四条 服务提供者提供的生成合成服务属于《互联网信息服务深度合成管理规定》第十七条第一款情形的,应当按照下列要求对生成合成内容添加显式标识:
(一)在文本的起始、末尾或者中间适当位置添加文字提示或者通用符号提示等标识,或者在交互场景界面、文字周边添加显著的提示标识;
(二)在音频的起始、末尾或者中间适当位置添加语音提示或者音频节奏提示等标识,或者在交互场景界面中添加显著的提示标识;
(三)在图片的适当位置添加显著的提示标识;
(四)在视频起始画面和视频播放周边的适当位置添加显著的提示标识,可以在视频末尾和中间适当位置添加显著的提示标识;
(五)呈现虚拟场景时,在起始画面的适当位置添加显著的提示标识,可以在虚拟场景持续服务过程中的适当位置添加显著的提示标识;
(六)其他生成合成服务场景根据自身应用特点添加显著的提示标识。
服务提供者提供生成合成内容下载、复制、导出等功能时,应当确保文件中含有满足要求的显式标识。
Article 5 Service providers shall, in accordance with Article 16 of the Provisions on the Administration of Deep Synthesis of Internet-Based Information Services, add implicit labels to the file metadata of generated or synthesized content. These implicit labels shall include production element information such as the attributes of the generated or synthesized content, the service provider's name or code, and the content number.
Service providers are encouraged to add implicit labels in forms such as digital watermarks to generated or synthesized content.
File metadata refers to descriptive information embedded in the file header in accordance with a specific encoding format, used to record information such as the file's source, attributes, and usage.
第五条 服务提供者应当按照《互联网信息服务深度合成管理规定》第十六条的规定,在生成合成内容的文件元数据中添加隐式标识,隐式标识包含生成合成内容属性信息、服务提供者名称或者编码、内容编号等制作要素信息。
鼓励服务提供者在生成合成内容中添加数字水印等形式的隐式标识。
文件元数据是指按照特定编码格式嵌入到文件头部的描述性信息,用于记录文件来源、属性、用途等信息内容。
Article 6 Service providers offering internet information content dissemination services shall adopt the following measures to regulate the dissemination of generative synthetic content:
(a) Verify whether implicit labels exist in the file metadata. If the file metadata explicitly identifies the content as generative synthetic, prominently label the published content in an appropriate manner to clearly inform the public that the content is generative synthetic;
(b) If no implicit labels are detected in the file metadata, but the user declares the content as generative synthetic, prominently label the published content in an appropriate manner to alert the public that the content may be generative synthetic;
(c) If no implicit labels are detected in the file metadata, the user does not declare the content as generative synthetic, but the service provider detects explicit labels or other traces of synthesis, identify the content as suspected generative synthetic. Prominently label the published content in an appropriate manner to alert the public that the content is suspected to be generative synthetic;
(d) Provide necessary labeling functionalities and remind users to actively declare whether their published content includes generative synthetic material.
For the circumstances outlined in items (a) to (c) of the preceding paragraph, dissemination elements such as generative synthetic content attributes, platform names or codes, and content identifiers shall be added to the file metadata.
第六条 提供网络信息内容传播服务的服务提供者应当采取下列措施,规范生成合成内容传播活动:
(一)核验文件元数据中是否含有隐式标识,文件元数据明确标明为生成合成内容的,采取适当方式在发布内容周边添加显著的提示标识,明确提醒公众该内容属于生成合成内容;
(二)文件元数据中未核验到隐式标识,但用户声明为生成合成内容的,采取适当方式在发布内容周边添加显著的提示标识,提醒公众该内容可能为生成合成内容;
(三)文件元数据中未核验到隐式标识,用户也未声明为生成合成内容,但提供网络信息内容传播服务的服务提供者检测到显式标识或者其他生成合成痕迹的,识别为疑似生成合成内容,采取适当方式在发布内容周边添加显著的提示标识,提醒公众该内容疑似生成合成内容;
(四)提供必要的标识功能,并提醒用户主动声明发布内容中是否包含生成合成内容。
有前款第一项至第三项情形的,应当在文件元数据中添加生成合成内容属性信息、传播平台名称或者编码、内容编号等传播要素信息。
Article 7 Application distribution platforms shall require internet application service providers to disclose whether they provide artificial intelligence-generated synthetic services during the review process for app listing or launch. If an internet application service provider offers artificial intelligence-generated synthetic services, the application distribution platform shall verify the relevant materials related to generative synthetic content labeling.
第七条 互联网应用程序分发平台在应用程序上架或者上线审核时,应当要求互联网应用程序服务提供者说明是否提供人工智能生成合成服务。互联网应用程序服务提供者提供人工智能生成合成服务的,互联网应用程序分发平台应当核验其生成合成内容标识相关材料。
Article 8 Service providers shall clearly specify in their user service agreements the methods, formats, and other specifications for labeling generative synthetic content, and shall prompt users to carefully read and understand the relevant labeling management requirements.
第八条 服务提供者应当在用户服务协议中明确说明生成合成内容标识的方法、样式等规范内容,并提示用户仔细阅读并理解相关的标识管理要求。
Article 9 If a user requests a service provider to provide generative synthetic content without explicit labels, the service provider may furnish such content without explicit labels after clarifying the user’s labeling obligations and usage responsibilities through a user agreement. The service provider shall lawfully retain relevant logs, including recipient information, for no less than six months.
第九条 用户申请服务提供者提供没有添加显式标识的生成合成内容的,服务提供者可以在通过用户协议明确用户的标识义务和使用责任后,提供不含显式标识的生成合成内容,并依法留存提供对象信息等相关日志不少于六个月。
Article 10 Users employing internet information content dissemination services to publish generative synthetic content shall actively declare such content and utilize the labeling functions provided by the service provider to mark it.
No organization or individual shall maliciously delete, alter, forge, or conceal generative synthetic content labels as required by these regulations. They shall not provide tools or services to others for such malicious acts, nor use improper labeling methods to infringe upon the legitimate rights and interests of others.
第十条 用户使用网络信息内容传播服务发布生成合成内容的,应当主动声明并使用服务提供者提供的标识功能进行标识。
任何组织和个人不得恶意删除、篡改、伪造、隐匿本办法规定的生成合成内容标识,不得为他人实施上述恶意行为提供工具或者服务,不得通过不正当标识手段损害他人合法权益。
Article 11 Service providers conducting labeling activities shall also comply with relevant laws, administrative regulations, departmental rules, and mandatory national standards.
第十一条 服务提供者开展标识活动的,还应当符合相关法律、行政法规、部门规章和强制性国家标准的要求。
Article 12 When fulfilling procedures such as algorithm filing and security assessments, service providers shall submit materials related to generative synthetic content labeling in accordance with these regulations. They shall enhance the sharing of labeling information to support efforts to prevent and combat illegal and criminal activities.
第十二条 服务提供者在履行算法备案、安全评估等手续时,应当按照本办法提供生成合成内容标识相关材料,并加强标识信息共享,为防范打击相关违法犯罪活动提供支持和帮助。
Article 13 Violations of these regulations shall be addressed by the cyberspace administration, telecommunications, public security, radio and television authorities, and other relevant departments in accordance with their duties and pursuant to applicable laws, administrative regulations, and departmental rules.
第十三条 违反本办法规定的,由网信、电信、公安和广播电视等有关主管部门依据职责,按照有关法律、行政法规、部门规章的规定予以处理。
Article 14 These regulations shall take effect on September 1, 2025.
第十四条 本办法自2025年9月1日起施行。
“Measures for Labeling Artificial Intelligence-Generated Contents”: Answering Questions from Reporters
《人工智能生成合成内容标识办法》答记者问
Recently, the Cyberspace Administration of China, the Ministry of Industry and Information Technology, the Ministry of Public Security, and the National Radio and Television Administration jointly released the Measures for Labeling Artificial Intelligence-Generated Contents (hereinafter referred to as the Labeling Measures), which will take effect on September 1, 2025. An official from the Cyberspace Administration of China recently answered reporters’ questions regarding the Labeling Measures.
近日,国家互联网信息办公室、工业和信息化部、公安部、国家广播电视总局联合发布《人工智能生成合成内容标识办法》(以下简称《标识办法》),自2025年9月1日起施行。日前,国家互联网信息办公室有关负责人就《标识办法》有关问题回答了记者提问。
Q: What is the background for the introduction of the Labeling Measures?
A: In recent years, the rapid development of AI technology has provided convenient tools for generating and synthesizing text, images, audio, video, and other content. While this has facilitated the fast production and dissemination of massive amounts of information online, it has also led to issues such as misuse of generative technologies and the accelerated spread of false information, raising widespread societal concerns. Following extensive research, public consultation, and multi-round technical trials, the Labeling Measures were formulated to further regulate labeling activities for AI-generated synthetic content.
The Labeling Measures focus on the critical aspect of “labeling AI-generated synthetic content” to help users identify false information, clarify labeling obligations for service providers, standardize labeling practices across content creation and dissemination processes, enhance security at reasonable costs, promote AI adoption in scenarios like text dialogue, content creation, and design assistance, mitigate risks of AI misuse, and ensure healthy, orderly AI development.
一、问:请介绍一下《标识办法》的出台背景?
答:近年来,人工智能技术快速发展,为生成合成文字、图片、音频、视频等信息提供了便利工具,海量信息得以快速生成合成并在网络平台传播,在促进经济社会发展的同时,也产生了生成合成技术滥用、虚假信息传播扩散加剧等问题,引发社会各界的关注关切。经深入开展调研、广泛征求意见、多轮技术论证试点,国家互联网信息办公室联合工业和信息化部、公安部、国家广播电视总局制定了《标识办法》,进一步规范人工智能生成合成内容标识活动。
《标识办法》聚焦人工智能“生成合成内容标识”关键点,通过标识提醒用户辨别虚假信息,明确相关服务主体的标识责任义务,规范内容制作、传播各环节标识行为,以合理成本提高安全性,促进人工智能在文本对话、内容制作、辅助设计等各应用场景加快落地,同时减轻人工智能生成合成技术滥用危害,防范利用人工智能技术制作传播虚假信息等风险行为,推动人工智能健康有序发展。
Q: What is the overarching rationale behind the Labeling Measures?
A: The framework includes four key principles:
Refining existing regulations: Building on labeling requirements outlined in the Provisions on the Administration of Algorithm-generated Recommendations for Internet Information Services, Provisions on the Administration of Deep Synthesis of Internet-Based Information Services, and the Interim Measures for the Management of Generative AI Services, the Labeling Measures provide detailed implementation rules.
Addressing critical issues: The measures aim to resolve questions like “what is AI-generated,” “who generated it,” and “where it originated,” ensuring end-to-end security management from generation to dissemination to foster trustworthy AI.
Balancing development and security: To accommodate technical and cost challenges (e.g., embedding implicit labels in text or digital watermarks in multimedia files), the measures avoid mandatory requirements for such methods. Instead, they propose low-cost solutions like text symbols, audio rhythm cues, and file metadata indicators.
Integrating management and technical standards: To ensure effective implementation, the mandatory national standard Cybersecurity Technology—Measures for Labeling Artificial Intelligence-Generated Contents (GB/T XXXX-2025) was released in tandem, providing operational guidance for compliance.
二、问:请问制定《标识办法》的总体思路是什么?
答:一是细化已有规定。《互联网信息服务算法推荐管理规定》、《互联网信息服务深度合成管理规定》、《生成式人工智能服务管理暂行办法》中提出了标识有关要求,《标识办法》作为规范性文件,进一步细化标识的具体实施规范。二是解决关键问题。《标识办法》重点解决“哪些是生成的”“谁生成的”“从哪里生成的”等问题,推动由生成到传播各环节的全流程安全管理,力争打造可信赖的人工智能技术。三是统筹发展和安全。考虑人工智能技术发展需要,针对在文本内容中添加隐式标识,在多媒体文件中添加数字水印,仍是技术难点或可能增加企业成本,不作强制要求。为降低平台企业标识成本,提升落地实施的可操作、可执行性,创新提出文本符号标识、音频节奏标识、文件元数据标识等低成本实施的可行方法。四是管理要求与技术标准一体化考虑。为推动《标识办法》落地实施,强制性国家标准《网络安全技术 人工智能生成合成内容标识方法》同步发布,更好地指导相关主体规范开展标识活动。
Q: What is the scope of the Labeling Measures?
A: The Labeling Measures stipulate that network information service providers that fall under the circumstances specified in the Provisions on the Administration of Algorithm-generated Recommendations for Internet Information Services, the Provisions on the Administration of Deep Synthesis of Internet-Based Information Services, and the Interim Measures for the Management of Generative AI Services and that conduct labeling activities for AI-generated synthetic content shall be governed by these Measures.
Additionally, Article 2 of the Provisions on the Administration of Deep Synthesis of Internet-Based Information Services clarifies that "where other laws or administrative regulations have separate provisions, those provisions shall prevail," while Article 2(2) of the Interim Measures for the Management of Generative AI Services states that "if the state has separate provisions for the use of generative AI services in activities such as news publishing, film and television production, and artistic creation, such provisions shall apply." The Labeling Measures adhere to these applicability clauses, specifying that if other regulations impose separate labeling requirements for specific activities, those requirements shall take precedence.
三、问:《标识办法》适用范围是什么?
答:《标识办法》规定,符合《互联网信息服务算法推荐管理规定》、《互联网信息服务深度合成管理规定》、《生成式人工智能服务管理暂行办法》规定情形的网络信息服务提供者开展人工智能生成合成内容标识活动,适用本办法。
同时,《互联网信息服务深度合成管理规定》第二条明确“法律、行政法规另有规定的,依照其规定”,《生成式人工智能服务管理暂行办法》第二条第二款明确“国家对利用生成式人工智能服务从事新闻出版、影视制作、文艺创作等活动另有规定的,从其规定”。《标识办法》同样遵照上述适用条款,开展特定活动对内容标识另有规定的,从其规定。
Q: How do the Labeling Measures relate to the mandatory national standard Cybersecurity Technology—Measures for Labeling Artificial Intelligence-Generated Contents?
A: The "Labeling Measures" primarily establish management requirements at the legislative level, clarifying the responsibilities and obligations of all entities involved in the creation and dissemination of generated/synthesized content. To promote innovation and development in AI technology, the Measures do not impose specific operational requirements for implementation.
The "Cybersecurity Technology—Measures for Labeling Artificial Intelligence-Generated Contents," formulated and implemented as a mandatory national standard, specifies the specific implementation methods and operational procedures for mandatory labeling requirements. Both documents were released simultaneously and will take effect on September 1, 2025, to better guide relevant entities in standardizing labeling activities.
四、问:请说明一下《标识办法》与强制性国家标准《网络安全技术 人工智能生成合成内容标识方法》的关系?
答:《标识办法》主要从立法层面提出管理要求,明确生成合成内容制作传播各主体的责任义务,为促进人工智能技术创新发展,对具体实施操作不做要求。《网络安全技术 人工智能生成合成内容标识方法》以强制性国家标准形式制定实施,主要提出强制执行部分的标识具体实施方式和操作方法,两者同步推出,于2025年9月1日同步实施,以更好地指导相关主体规范开展标识活动。
Q: What are the main contents of the mandatory national standard "Cybersecurity Technology—Measures for Labeling Artificial Intelligence-Generated Contents"?
A: The standard supports the Labeling Measures by setting out specific requirements for content labeling methods applicable to AI-generated and synthesized content service providers and network information dissemination service providers. First, it defines methods for AI-generated content service providers to add explicit labels, such as text, corner marks, voice prompts, and rhythm cues, to text, images, audio, video, and virtual scenes, providing a scheme for prominently alerting the public and preventing confusion or misidentification at the content generation and synthesis stage. Second, it specifies methods for service providers to embed implicit labels in file metadata, offering dissemination service providers a convenient way to identify generated and synthesized content and a basis for fulfilling their responsibility to alert the public. Third, it sets reserved fields within the metadata implicit label for recording security information such as label integrity and content consistency, leaving room for innovation in labeling technology and for protecting label security.
五、问:强制性国家标准《网络安全技术 人工智能生成合成内容标识方法》的主要内容?
答:标准支撑《标识办法》,对人工智能生成合成内容服务提供者与网络信息传播服务提供者提出了内容标识方法的具体要求。一是明确人工智能生成合成内容服务提供者对文本、图片、音频、视频、虚拟场景等内容,添加文字、角标、语音、节奏等显式标识的方法,在内容生成合成环节提出了显著提示公众、防范混淆误认的方案。二是明确服务提供者在文件中添加元数据隐式标识的方法,为内容传播服务提供者有效识别生成合成内容提供便捷方案,也为内容传播服务提供者履行向公众提醒提示主体责任提供了依据。三是在元数据隐式标识设置了预留字段,可用于记录标识完整性、内容一致性等安全防护信息,为促进标识技术创新发展和保护标识安全性预留了空间。
Q: What specific requirements does the "Labeling Measures" clarify for service providers?
A: The Labeling Measures specify that service providers shall add explicit labels to generated and synthesized content such as text, audio, images, video, and virtual scenes, and shall ensure that files contain compliant explicit labels when providing download, copy, or export functions for such content; shall embed implicit labels in the file metadata of generated and synthesized content, including production element information such as content attributes, the service provider's name or code, and the content number; and shall clearly state in user service agreements the methods, styles, and other specifications for labeling generated and synthesized content, and prompt users to carefully read and understand the relevant labeling management requirements.
六、问:《标识办法》明确了服务提供者哪些具体要求?
答:《标识办法》明确服务提供者应当对文本、音频、图片、视频、虚拟场景等生成合成内容添加显式标识,在提供生成合成内容下载、复制、导出等功能时,应当确保文件中含有满足要求的显式标识;应当在生成合成内容的文件元数据中添加隐式标识,隐式标识包含生成合成内容属性信息、服务提供者名称或者编码、内容编号等制作要素信息;应当在用户服务协议中明确说明生成合成内容标识的方法、样式等规范内容,并提示用户仔细阅读并理解相关的标识管理要求。
Q: What measures does the Labeling Measures require internet application distribution platforms to adopt in regulating AI-generated and synthesized services?
A: Article 7 of the Labeling Measures stipulates that internet application distribution platforms must, during app listing or launch reviews, require application service providers to disclose whether they offer AI-generated and synthesized services and verify the relevant materials related to content labeling.
七、问:《标识办法》明确互联网应用程序分发平台应当采取哪些措施规范人工智能生成合成服务?
答:《标识办法》第七条明确,互联网应用程序分发平台在应用程序上架或者上线审核时,应当要求互联网应用程序服务提供者说明是否提供人工智能生成合成服务,并核验其生成合成内容标识相关材料。
Q: How to compliantly obtain generated/synthesized content without explicit identifiers?
A: Previously, Article 17 of the Provisions on the Administration of Deep Synthesis of Internet-Based Information Services issued by the Cyberspace Administration of China mandated prominent identifiers for deep synthesis services that "may cause public confusion or mislabeling." Building on this, Article 4 of the Labeling Measures further specifies explicit identifier requirements for text, audio, images, video, virtual simulations, and other scenarios, ensuring that generated/synthesized content carries compliant explicit identifiers when presented to the public.
Additionally, to address practical applications of generated/synthesized content and respond to societal and industry needs, Article 9 of the Labeling Measures stipulates that online platforms, while complying with relevant laws and regulations, may provide users with content lacking explicit identifiers after clarifying user responsibilities in agreements and retaining relevant log information as required by law. However, users must subsequently comply with Article 10 and other provisions by actively declaring the synthetic nature of the content and adding explicit identifiers before publicly releasing or disseminating such content.
八、问:如何合规获得没有添加显式标识的生成合成内容?
答:前期,国家互联网信息办公室出台的《互联网信息服务深度合成管理规定》第十七条针对“可能导致公众混淆或者误认的”深度合成服务情形提出显著标识要求。在此基础上,《标识办法》第四条进一步明确了针对文本、音频、图片、视频、虚拟拟真等具体场景的显式标识要求,确保生成合成内容在面向公众时具有满足要求的显式标识。
此外,充分考虑生成合成内容在实际场景中的落地应用,积极回应社会关切和产业需要,《标识办法》第九条提出,在用户主动要求提供未添加显式标识内容时,网站平台在不违反相关法律法规要求前提下,可通过在用户协议中明确责任义务并依法留存相关日志信息后,面向用户予以提供。同时,用户在后续使用过程中,需遵守《标识办法》第十条等相关要求,主动声明生成合成情况并添加显式标识后,方可面向公众发布和传播。
Q: What specific requirements do the Labeling Measures clarify for standardizing labeling practices?
A: Article 10 of the Labeling Measures explicitly stipulates that no organization or individual shall maliciously delete, alter, forge, or conceal the labels of AI-generated synthetic content required by these Measures, provide tools or services to others to carry out such malicious acts, or use improper labeling methods to harm the legitimate rights and interests of others.
九、问:《标识办法》对规范开展标识行为明确了哪些具体要求?
答:《标识办法》第十条明确任何组织和个人不得恶意删除、篡改、伪造、隐匿本办法规定的生成合成内容标识,不得为他人实施上述恶意行为提供工具或者服务,不得通过不正当标识手段损害他人合法权益。
Q: What considerations underpin the official implementation timeline for the Labeling Measures and its supporting mandatory national standards?
A: Adhering to the principle of gradual governance, the implementation timeline accounts for the time enterprises need to fully understand the regulations and standards, carry out targeted capacity building, and develop the necessary functions. Based on the technical complexity of implementing labeling and the practical experience gained from pilot programs, a transition period of about six months has been set before the Labeling Measures and the supporting mandatory national standard take effect.
十、问:关于《标识办法》和配套强制性国家标准的正式施行时间的考虑?
答:坚持循序渐进的治理原则,考虑到企业需要时间充分理解相关规定和标准规范,针对性地开展能力建设和功能研发,基于标识技术实施的复杂程度、试点试行的实践经验,设定《标识办法》和配套强制性国家标准6个月左右的施行过渡期。