Following the release of TC260-003 "Basic Requirements for the Security of Generative Artificial Intelligence Services" (TC260 doc) by China’s National Cybersecurity Standardization Technical Committee (TC260) on March 4th, the committee has now issued another draft national standard titled "Cybersecurity Technology - Basic Requirements for the Security of Generative Artificial Intelligence Services." This new standard is open for public comments until July 22nd.
The draft national standard specifies requirements for the security of pretraining data, model security, and security measures for generative AI services, and provides reference points for security evaluation. According to the committee, the standard “applies to organizations or individuals providing generative AI services through interactive interfaces or programmable interfaces, and it can guide service providers in conducting security evaluation and serve as a reference for relevant authorities.”
1. What is the relationship between this national standard and the TC260 doc?
Simply put, it is a national standard, while the TC260 doc is a technical document.
China's national standards system includes mandatory standards (强制性标准) and recommended standards (推荐性标准), both issued by the Standardization Administration of China (SAC). Mandatory standards usually start with "GB," while recommended standards use "GB/T." "GB" stands for "Guo Biao," the pinyin abbreviation for "national standard" (Guo Jia Biao Zhun). Both represent national-level technical norms and standards applicable nationwide.
In contrast, the TC260 technical documents are issued by the National Cybersecurity Standardization Technical Committee. They often serve as preparatory materials for national standards or as interim "quasi-standards" in the absence of a national standard, providing industry guidance. They do not use the "GB/T" prefix but are labelled with "TC260" or other related identifiers to distinguish them from national standards.
The draft national standard released yesterday is a recommended standard, meaning its application is not mandatory but advisory, allowing companies to choose whether to comply based on their needs.
According to the National Cybersecurity Standardization Technical Committee, this national standard is based on the TC260 doc, which "has achieved a good consensus among various regulators and companies, has formed relevant security practices, and has been widely practised in the tech community, providing a solid foundation for the standard's industrial application."
2. What are the significant changes in the draft national standard compared to the TC260 doc?
The national standard largely inherits most of the content from the TC260 doc, reorganizes it, and standardizes certain terms. For example, "corpus" has been uniformly changed to "training data."
The most significant change is in the "Model Security Requirements" section. The TC260 doc stated, "If service providers want to develop their GAI service based on third-party foundational models, they need to make sure the foundational models their service is based on have been registered with relevant authorities." The draft national standard drops this requirement, indicating a clear policy shift: providers are now free to use open-source models such as Llama 3 and fine-tune them to offer services. The shift suggests that, after substantial questioning of the previous restriction from China's AI industry and many experts, Chinese regulators recognised the issue and responded to the industry by discarding it.
Interestingly, while China is getting more open to allowing domestic AI companies to use American open-source models, the U.S. Congress is pushing forward legislation that may restrict or even ban Chinese AI startups from using American open-source models. The U.S. Department of Commerce is also soliciting public opinion on national security risks associated with open-source "dual-use foundational models" and is considering a new regulatory push to restrict the export of proprietary or closed-source AI models, whose software and the data it is trained on are kept under wraps.
The draft national standard also relaxes some requirements in the “security measures” section. The TC260 doc required GAI service providers to suspend services if users input illegal or harmful information three times in a row or five times in a day, or if they induce the generation of such information. The draft standard only requires providers to set and publicize such a rule, and it leaves the specific thresholds vague, allowing more flexibility. Specifically, it states that if users “repeatedly input illegal or harmful information or reach a certain number of such inputs in a day, AI service providers should take measures such as suspending services.”
3. Why is this national standard needed?
In July 2023, China’s Cyberspace Administration and six other departments released the "Interim Rules for Generative Artificial Intelligence Services," marking the country’s first legal regulation of AI. However, the rules are generally broad in their stipulations, and the industry requires more detailed operational guidance. In this context, the Chinese government decided to formulate the national standard to elaborate on the security requirements outlined in the interim rules. This includes detailing network security, data security, and personal information protection during the development of generative AI services and addressing security risks in application scenarios, hardware and software environments, generated content, and rights protection during the service process.
The National Cybersecurity Standardization Technical Committee disclosed that this national standard, along with two other ongoing national standards, "Cybersecurity Technology - Security Specifications for Pre-training and Optimized Training Data of GAI" and "Cybersecurity Technology - Security Specifications for Data Annotation of GAI," are all supporting documents for the "Interim Rules for GAI Services."
The drafting of this national standard began in June 2023. The Chinese government initiated the work by establishing a drafting group to research the technological development, industrial application, and security needs of generative AI both domestically and internationally. This included multiple internal discussions and extensive solicitation of opinions, resulting in a preliminary draft. In August and September 2023, reports were made to the Big Data Security Standards Special Working Group and the 2023 Second Batch of National Cybersecurity Standards Project Review Experts Meeting. On May 11, 2024, the drafting group reported to the New Technology Security Standards Special Working Group, which discussed and agreed to open the draft for public comments. The drafting group made final modifications to the draft based on feedback received by May 14, 2024.