This was a "dialogue for the sake of dialogue." Beyond agreeing that AI presents both opportunities and risks for humanity and requires global governance, the two sides reached no consensus and produced no concrete results. This was expected. However, given that the two countries are home to the world's leading AI companies and that their bilateral relations are currently tense, the fact that they could sit down and calmly discuss AI governance and regulation is itself a success.
The discussions were quite positive. The US described the dialogue as "candid and constructive," and the Chinese side also noted that the discussions were "in-depth, professional, and constructive." Both parties agreed on the need to continue the dialogue, with the US stating that "maintaining open communication channels on AI risk and safety is an important part of responsibly managing competition." The Chinese side reiterated their commitment to implementing the important consensus reached by the two heads of state in San Francisco. This indicates that there will likely be a second Sino-US intergovernmental dialogue on AI.
Despite the positive atmosphere, there were still clashes and tensions. For example, the US raised concerns about the "misuse of AI," particularly by China, while the Chinese side expressed a firm stance against US restrictions on and suppression of China's AI sector. China is expected to raise issues such as export controls on AI chips, suggesting that it may link AI governance to the broader context of US suppression of its AI industry. The US's mention of China's "misuse of AI" likely refers to surveillance, disinformation, and human rights, issues the US government frequently raises when voicing concerns about China. Raising these points matters to the US side, especially with elections approaching, as officials need to address their domestic audience.
The perspectives on the AI dialogue differ between the two sides. The Chinese delegation was led by the Department of North American and Oceanian Affairs of the Ministry of Foreign Affairs rather than the Department of Arms Control, which is responsible for global AI governance negotiations. The delegation included representatives from the Ministry of Science and Technology, the National Development and Reform Commission, the Cyberspace Administration of China, the Ministry of Industry and Information Technology, and the Foreign Affairs Office of the CCP. This suggests that China views this dialogue primarily through the lens of China-US relations, similar to climate change discussions, with AI safety and governance as a secondary focus. The US delegation, by contrast, was co-led by the White House National Security Council and the Department of State and included some of the highest-level officials in technology and national security, such as Seth Center, the Acting Special Envoy for Critical and Emerging Technology at the Department of State. This indicates that the US treats this dialogue as a professional exchange focused on AI safety risks and governance.
Regarding views on global AI governance, China upheld the core principles of the "Global Initiative on AI Governance," emphasizing the role of the United Nations as the main channel and the need for global AI governance frameworks and standards to be based on broad consensus (not dominated by small circles or the US and its allies). The US did not directly respond to whether they support discussing global AI governance rules under the UN framework and seemed reluctant to use definitive terms like "rules" or "standards," only acknowledging that building "global consensus" is important.
Over the past few months, I have participated in several discussions on global AI governance, both domestically and internationally. One prominent observation is that even at the expert and think tank level, the two sides have differences in understanding AI safety issues and global governance pathways.
American experts seem to be most concerned not about traditional AI safety risks such as ethics, misinformation, copyright, and data privacy violations, but rather about whether China is truly taking the "existential risks" of advanced AI models seriously (existential risks here refer to the potential for AI to be used in controlling and manufacturing nuclear, chemical, and biological weapons, posing a catastrophic threat to humanity). They question whether China will adopt the same standards as the US to control these risks, prevent proliferation, and mitigate threats. American companies are likely most interested in whether China can reach a consensus with the US on preventing AI's existential risks and adopt similar governance models, and in what tightened US regulation would mean for their international competitiveness if China remains lax.
On the other hand, most Chinese experts are more concerned about China's right to development in the AI field, such as the boundaries of US export controls and sanctions. Some experts believe China has already surpassed the US in AI regulation, with many core regulatory measures such as watermarking, pre-training data legality, and safety evaluation being long-standing compliance obligations for Chinese companies. However, the US currently lacks congressional legislation to regulate AI, and the White House's executive order is uncertain due to the ambiguous electoral landscape, leaving AI governance reliant on voluntary commitments from the private sector. In this context, both the Chinese government and enterprises have little confidence in the US's regulatory intent and capability.
Regarding the future mechanisms for global AI governance, there are also some differing views between the two sides. Some American experts have asked their Chinese counterparts whether China's "Global AI Governance Initiative," which supports discussions under the United Nations framework to establish an international AI governance body, means that China will only recognize organizations established by the UN, or whether it is also open to other multilateral mechanisms. They have also asked: if the US were to initiate global AI governance negotiations based on the UK-led Bletchley Process, would China participate?
Chinese experts tend to prefer discussing global AI governance under the UN framework, emphasizing the UN's universality and broad representativeness, particularly considering developing countries' voices and representations. They believe that the UN framework ensures inclusivity and fairness, providing a platform where the interests and concerns of all nations, especially those of developing countries, can be adequately addressed.
As a former international law practitioner, I must say that I am somewhat disappointed with the global governance over the past two decades. This year marks the 55th anniversary of the internet, yet the United Nations has not reached any international treaties on cybersecurity and data governance. Even negotiations on universally supported issues like combating transnational cybercrime have progressed slowly. The fragmented state of international rules on cyberspace and data security has, in reality, hindered globalization in the digital age. Geopolitical considerations and excessive politicization are significantly to blame, trapping sovereign states in a vicious cycle of mutual suspicion. States often suspect that any international proposals from others hide ulterior motives intended to harm their own interests, or believe that others are using certain values to denigrate and isolate them. Alternatively, they label each other with various accusations and criticize legitimate domestic regulatory measures. This makes it difficult for countries to focus on genuine global governance issues and humanity's urgent threats.
Constructing international governance rules for AI will be a challenging task. When considering their positions, sovereign nations will inevitably consider their national and industrial interests. Additionally, since AI technology and applications are still rapidly evolving, establishing global rules may take a long time. Especially if major powers do not reduce their "battle mindset" and biases based on values and social governance models, reaching any consensus will be quite difficult. While the UN is undoubtedly the preferred platform, the reality is that achieving consensus on complex and significant issues within the UN is becoming increasingly difficult. Countries can support the UN's primary role in global AI governance but should also remain open to other channels. After all, it is not unprecedented for the international community to negotiate and establish new international treaties outside the UN framework, such as through diplomatic conferences.
When governing AI safety risks, it is essential to distinguish between "existential risks" and "general safety risks." Governance of AI weapons and AI used in nuclear, chemical, and biological weapons can be discussed within existing international arms control mechanisms or through new processes and should generally apply existing international law. If countries come to a strong enough consensus on these "existential risks" in the future, or if extreme risk events akin to "Chernobyl" occur, gathering the political consensus necessary to establish stringent governance norms will likely be possible. However, at present, we may need concrete research to prove the existence of such risks.
Ethical issues and other "general safety risks" associated with AI are more closely tied to industrial interests. Due to the different legislative paths of various countries and the varied practices of enterprises, it may not be easy to establish sufficient state practice and “opinio juris” to create opportunities for international law formulation quickly. It seems unlikely that global governance mechanisms for nuclear weapons or climate change could be easily applied to AI. This is because AI governance is far from being as mature as nuclear energy or climate change governance, and its effective implementation relies heavily on the cooperation of businesses and the private sector.
This reminds me of the success of the "UN Guiding Principles on Business and Human Rights" (UNGP). Business and human rights issues are politically sensitive, similar to AI, and require the cooperation of businesses and the private sector. However, thanks to the excellent work of the UN Secretary-General's Special Representative John Ruggie, the UN unanimously adopted the UNGP, establishing the three pillars of business and human rights: “state duty to protect human rights”, “corporate responsibility to respect human rights”, and “access to remedy for victims”. Although the UNGP is a non-binding soft law document, it has gained widespread acceptance among UN member states. Its core principles have been incorporated into domestic laws in many countries or adopted into human rights policies and supply chain cooperation agreements by multinational corporations. While it has not yet achieved perfection in preventing and addressing human rights violations by transnational corporations, it has had a real impact and laid a solid foundation for any future international law formation. The international governance of AI could potentially follow a similar path, starting with soft law, progressing steadily, balancing safety governance with industrial interests, and eventually finding a solution and pathway widely accepted by the international community.
Excellent piece. Very important distinction on the way both sides are approaching the AI dialogue. Neither side has a clear AI "interagency process." Different government bodies have different authorities and agendas. No one body is focused on AI governance for advanced large language and multimodal models and AGI. Very interesting to see which organizations in China are clearly part of an emerging Chinese AI interagency. In the US, AI policy has been primarily run out of the White House. This is not sustainable...