China’s approach to AI governance, after a wave of policy enthusiasm a few years ago, has entered a calmer phase. Academia and key opinion leaders (KOLs) continue to debate AI governance intensely, but the government has become much more cautious, and many of those academic discussions seem to have limited influence on actual policymaking. The Cyberspace Administration of China (CAC), after rolling out rules on generative AI, has not rushed to issue new binding legislation. But this does not mean CAC is stepping back or falling behind. On the contrary, by leveraging the technically strong TC260 (the National Information Security Standardization Technical Committee), it keeps its influence alive through annual updates of the AI Safety Governance Framework.
On the international front, with the creation of the World AI Cooperation Organization (WAIO), CAC will likely stick to its domestic regulatory system while letting WAIO represent China and its allies on the global stage, including at the UN. In effect, WAIO is set to replace the recently announced “China AI Safety and Development Network” (CnAISDA) as China’s main vehicle for AI safety governance, mirroring the UK’s AISI, France’s AISI, and the U.S.’s CAISI.
On September 9, 2024, during the annual National Cybersecurity Awareness Week, CAC orchestrated the release of the first version of the AI Safety Governance Framework (1.0).
Drafting was led by TC260, which gathered input from across the AI community in a consensus-driven process designed to ensure practicality and broad acceptance among industry stakeholders. The framework was not a new hard law but a governance “toolbox”: it objectively described AI safety risks, laid out principles and general countermeasures, and included guidelines for governance and risk management. Its main value was in laying the groundwork for more detailed rules and regulations to come.
Specifically, version 1.0 grouped risks into three broad categories:
Inherent risks of AI technology: such as lack of explainability, bias in training data leading to discriminatory outputs, vulnerability to adversarial attacks, or leakage of personal data.
Application risks: such as flaws in chips or software, deepfake content that misleads or scams, failures when AI is deployed in critical infrastructure, or misuse that could help malicious actors acquire knowledge about nuclear or biochemical weapons.
Derivative risks: the indirect social and environmental effects, like job displacement, massive electricity use from AI data centers, students losing creativity by over-relying on AI, or—in extreme scenarios—AI developing self-awareness beyond human control. At that stage, the framework functioned more like a “declaration,” spelling out the goals and principles of governance.
Yet in just a year, AI technologies advanced faster than expected: large models evolved into intelligent agents, lightweight open-source models lowered entry barriers, and breakthroughs such as brain-computer interfaces drew closer to practical reality.
These developments brought new risks, which in turn prompted the release of Framework 2.0 on September 15, 2025, again during the National Cybersecurity Awareness Week. Compared with 1.0, version 2.0 focuses on keeping pace with technological change. Put simply, version 1.0 was a “declaration”, setting out why and in which directions AI should be governed; version 2.0 is an “instruction manual”, detailing how to govern AI, at what stages, and how to respond when problems arise. It reflects the evolution of China’s approach to AI governance: from principles to mechanisms, from frameworks to implementation, and from a domestic focus to an international presence.
To be more specific, version 2.0 made several significant upgrades.
First, its risk categories are more granular. It breaks down risks into inherent, application, and derivative types with more specific examples: hallucinated outputs, deepfake proliferation, the spread of nuclear/biochemical weapon knowledge, energy consumption and carbon emissions, and even AI self-awareness.
Second, 2.0 goes beyond principles to include specific technical and governance measures. Technical measures include explainability requirements, adversarial training, watermarking, and kill switches; governance measures include legal frameworks, ethical reviews, open-source and supply-chain security, AIGC traceability, sector-specific classification and grading, evaluation systems, and shared risk databases.
Third, its safety guidelines are much more detailed. Instead of general advice to developers, providers, and users, 2.0 sets out instructions for each stage of the lifecycle (R&D, deployment, operation, and use), adding up to over 30 measures. Examples include version rollback mechanisms during R&D, third-party tool vulnerability checks during deployment, logging and watermarking during operation (see the sketch after this list), and user guidance on avoiding sensitive data input or overuse.
In governance, version 1.0 emphasized open cooperation and sharing best practices. Version 2.0 goes further, calling for advancing global AI governance rules through the UN, APEC, G20, BRICS, and other multilateral platforms. It also stresses raising public AI safety awareness, creating complaint channels, and strengthening industry self-regulation, bringing not just government and industry but also the general public into the governance chain.
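The Framework itself contains no code, but to make the operation-stage measures above concrete, here is a minimal illustrative sketch of what “logging and watermarking” might look like for a generative AI provider: it attaches an explicit AIGC label to output and writes an append-only provenance log. Every name here (the notice text, the label_and_log helper, the log fields) is hypothetical, only loosely inspired by China’s AIGC labeling requirements for explicit and implicit labels.

```python
import hashlib
import json
import time

# Explicit label shown to end users (wording is illustrative, not the mandated text).
AIGC_NOTICE = "本内容由人工智能生成 (AI-generated content)"

def label_and_log(text: str, provider: str, model: str,
                  log_path: str = "aigc_log.jsonl") -> str:
    """Attach an explicit AIGC notice and record an implicit provenance entry."""
    labeled = f"{text}\n\n[{AIGC_NOTICE}]"
    record = {
        # Hash of the labeled content enables later traceability checks.
        "content_sha256": hashlib.sha256(labeled.encode("utf-8")).hexdigest(),
        "provider": provider,            # who operates the service
        "model": model,                  # which model produced the content
        "generated_at": int(time.time()),
    }
    # Append-only log supports audits and incident response during operation.
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return labeled

if __name__ == "__main__":
    print(label_and_log("Example model output.", provider="ExampleCo", model="demo-llm"))
```

A real deployment would embed the implicit label in file metadata or a robust watermark rather than in plain text, and would follow the exact formats specified in the applicable national standards.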
Here is the official announcement from the CAC:
Release of the AI Safety Governance Framework (Version 2.0)
On September 15, at the main forum of the 2025 National Cybersecurity Awareness Week, the AI Safety Governance Framework (Version 2.0) (hereinafter referred to as Framework 2.0) was officially released.
To implement the Global AI Governance Initiative, the AI Safety Governance Framework (Version 1.0) (hereinafter referred to as Framework) was issued in September 2024 and attracted wide attention at home and abroad. Over the past year, AI technologies and applications have advanced rapidly. To address new opportunities and challenges, under the guidance of the Cyberspace Administration of China, the National Computer Network Emergency Response Technical Team/Coordination Center of China organized professional AI institutions, research institutes, and industry enterprises to jointly formulate Framework 2.0. As a technical document of the National Information Security Standardization Technical Committee, Framework 2.0 builds upon the 2024 version, incorporating the latest developments in AI technology and applications, continuously tracking risk evolution, refining and optimizing risk categories, exploring risk grading methods, and dynamically updating preventive and governance measures.
An official from the National Computer Network Emergency Response Technical Team/Coordination Center of China noted that the release of Framework 2.0 aligns with global trends in AI development, balances technological innovation with governance practices, and deepens consensus on AI safety, ethics, and governance. It aims to foster a safe, trustworthy, and controllable AI development ecosystem and build a collaborative governance model that transcends borders, sectors, and industries. At the same time, it will help advance cooperation on AI safety governance under multilateral mechanisms, promote the inclusive sharing of technological achievements worldwide, and ensure that all of humanity benefits from the dividends of AI development.
The bilingual Chinese–English text of the AI Safety Governance Framework 2.0 is attached as an appendix at the end of CAC’s statement and is free to download and read. https://www.cac.gov.cn/2025-09/15/c_1759653448369123.htm
For your comparison and reference, I have also included CAC’s statement on version 1.0 (with the full English text of the framework likewise attached at the end). https://www.cac.gov.cn/2024-09/09/c_1727567886199789.htm
Finally, here is an analysis I wrote on September 9, 2024, the day CAC released version 1.0, linked again for reference.
China Unveils AI Safety Governance Framework to Lead Global Standards
Today, on the occasion of the annual Cybersecurity Awareness Week (网络安全宣传周), China released its first “AI Safety Governance Framework” (人工智能安全治理框架).