In mid-February, I attended several side events at the Paris AI Action Summit. I have been quite busy since returning, however, so I am only now catching up on my thoughts and reflections.
From the Chinese side, the most notable aspect of this summit was undoubtedly the high-level delegation led by Vice Premier Zhang Guoqing, who attended as President Xi Jinping’s Special Envoy. Despite its high-level composition, the Chinese government delegation remained quite low-key. Another major point of interest was the announcement of a new organization, the China AI Safety & Development Association (CnAISDA), jointly established by several Chinese AI safety research institutions. This organization appears to be China’s answer to the AI safety institutes in the U.S. and the U.K.
Currently, countries such as the United States, the United Kingdom, Japan, Singapore, and France have established AI safety institutes. The U.S. and U.K. institutes have even jointly conducted AI model evaluations of OpenAI’s GPT-4o and Anthropic’s Claude 3.5. In recent months, there has been much speculation in the West about whether China would establish a similar institution, with some studies even analyzing which Chinese organization might be suitable for such a role.
Judging from its name, CnAISDA appears to reflect China’s long-standing AI governance philosophy of “balancing AI safety and development.” In reality, however, it may simply highlight the lack of unified leadership on AI safety among Chinese research institutions, and suggest that the Chinese government has not yet fully decided on its approach, or perhaps prefers to maintain the status quo for now.
Although some media outlets have referred to CnAISDA as “China’s AI Safety Institute,” it is fundamentally different from the existing AI safety institutes in the U.S. and U.K. In fact, it is not even a concrete organization, but rather a loose network without a secretariat or executive body. On paper, it brings together almost all of China’s top AI research institutions, but if it merely lists them under the banner of a “network” without a clear agenda, concrete actions, or the necessary authority and resources, its actual significance remains questionable.
In my view, the network’s main function for now will likely be to represent China in dialogues and collaborations with international AI safety institutions. Whether it will eventually conduct model evaluations like the U.S. and U.K. AI safety institutes is a different question, one that depends on China’s complex AI regulatory landscape. In the short term, such a role seems unlikely.
Several Chinese AI companies also made an appearance at the summit. Following Zhipu AI’s signing of the Frontier AI Safety Commitments at the Seoul Summit last year, two other Chinese AI startups, MiniMax and 01.AI (零一万物), signed the commitments this time. However, the summit organizers took an unusually low-profile approach, quietly updating the official website’s signatory list just two days before the summit without making any public announcement. NVIDIA and Magic, a San Francisco-based AI startup, also signed the pledge.
In addition, Baidu and Lenovo joined France’s newly launched “Sustainable AI Alliance” as founding members, alongside 35 other tech companies from the U.S. and Europe. The initiative aims to steer AI development onto a more environmentally sustainable path.
Some Chinese attendees—both publicly and in private conversations—shared their experiences and impressions of the summit. Overall, many felt that the tone of the Paris Summit had shifted from a strong focus on AI safety to a greater emphasis on AI development and innovation. Notably, after a period of self-reflection, the EU now seems ready to fully enter the AI development race. As a result, global competition for AI computing power, talent, and funding is expected to intensify further.
Some believe that the AI safety debate might enter a cooling-off period: scientists will continue researching AI safety risks, but to persuade the public and policymakers to act, they will need stronger empirical evidence that the “existential risks” posed by AI are imminent or already materializing.
One thing that shocked some Chinese attendees was U.S. Vice President J.D. Vance’s criticism of EU-style tech regulation at the summit. Many had assumed that the U.S. and the EU were aligned on AI safety governance, yet it now seems that a rift has emerged within the transatlantic alliance. This was especially evident in the fact that neither the U.S. nor the U.K. signed the final declaration issued at the summit.
Some speculate that this division could open opportunities for stronger China-EU cooperation in AI, but most remain skeptical—particularly given the deep ideological and policy differences between China and the EU.
From the perspective of the Chinese delegation, the U.S. has long sought to isolate China from global AI governance efforts, particularly by bringing Europe into its camp. Under the Biden administration, the network of AI safety institutes was essentially an effort to standardize AI model evaluations and red teaming within the West, effectively establishing a de facto regulatory framework that excludes China.
It remains unclear how the Trump administration will approach the U.S. AI Safety Institute and the broader network of AI safety institutes, but the overall direction is unlikely to change significantly. Some Chinese experts are concerned about what this exclusionary approach means for global AI governance.
A key worry is that China may feel compelled to create its own AI governance system, setting up its own standards and frameworks in opposition to the West. If that happens, global AI governance may become increasingly fragmented—similar to what happened with internet and data governance. This bloc-based, fragmented approach would pose significant challenges for technology, industry, and globalization. It is not an ideal outcome for anyone.