U.S. and China Decline to Join Military AI Responsibility Declaration
The Third Summit on the Responsible Use of Artificial Intelligence in the Military Domain was held from the 4th to the 5th in A Coruña, Spain. The meeting discussed how military artificial intelligence can be used to strengthen international peace and security, and how to avoid risks arising from irresponsible use or system failures. Li Chijiang, Deputy Director-General of the Department of Arms Control of China’s Ministry of Foreign Affairs, led the Chinese delegation.
According to a readout released by the Department of Arms Control of the Chinese Ministry of Foreign Affairs, Li elaborated on China’s concept of “human-centered military artificial intelligence” during the summit.
All countries should uphold a bottom line of prudence and responsibility, abandon the obsession with absolute military advantage, and earnestly safeguard strategic balance and stability; adhere to a people-centered approach, comply with international humanitarian law, and ensure that relevant weapon systems always remain under human control; follow the principle of “AI for good,” and promote military applications of artificial intelligence that serve the maintenance of peace and security; implement the principle of agile governance by balancing security controls, technological development, and peaceful use through tiered and categorized management; and uphold multilateralism, support the United Nations in playing its due role, and promote the establishment of governance frameworks and standards based on broad consensus.
Military intelligentization is a major trend in the development of armed forces worldwide. How to responsibly use artificial intelligence in the military domain concerns the shared future of all humanity and constitutes a common challenge of our time. China maintains that the international community should embrace a global governance philosophy of extensive consultation, joint contribution, and shared benefits, as well as a security concept that is common, comprehensive, cooperative, and sustainable, and work together to build effective governance mechanisms to ensure that artificial intelligence always develops in a direction conducive to the progress of human civilization.
As a major artificial intelligence power, China has consistently attached great importance to risk prevention and security governance in the military application of AI, and has adhered to the concept of “human-centered military artificial intelligence.” When applying relevant technologies, countries should uphold a bottom line of prudence and responsibility, maintain a people-centered approach, follow the principle of AI for good, implement agile governance, adhere to multilateralism, and promote military AI applications that serve the maintenance of peace and security.
China has incorporated “strengthening AI governance” into the recommendations for its 15th Five-Year Plan. In accordance with global trends in military transformation, objective national security needs, and the practical requirements of building national defense and armed forces, China will advance military intelligentization, refrain from engaging in an AI arms race with other countries, and maintain policy transparency.
China is also actively leveraging the advantages of military AI in peacekeeping and humanitarian assistance, with positive results. Li stressed that China will continue to uphold the principles of openness, inclusiveness, and mutual learning, strengthen communication and exchanges with all countries, deepen practical cooperation, and jointly contribute to global governance of military artificial intelligence.
Beijing Daily reporter Jin Liang wrote that 85 countries participated in the summit. After two days of talks, a joint declaration on regulating the deployment of AI technologies in warfare was put forward, but only about one-third of the participating countries — 35 in all — ultimately signed it; neither China nor the United States was among them.
According to Jin, the United States’ refusal to sign binding international rules on military AI stems from multiple strategic considerations. At the core is Washington’s concern that international rules could constrain rapid technological iteration and military deployment flexibility, thereby eroding the first-mover advantage and technological gap it has already established in the field of military AI.
At the same time, the United States is highly sensitive about rule-making authority and is unwilling to co-shape standards with non-“like-minded” countries such as China within multilateral frameworks like the United Nations. Instead, it prefers to build exclusive “small-circle” governance rules within its alliance system to preserve its technological and discursive dominance.
In addition, by avoiding explicit international obligations, the United States retains strategic ambiguity in sensitive areas such as autonomous weapons and battlefield AI decision-making, avoids transparency constraints, and preserves room for “gray-zone” military operations.
Finally, at the level of public discourse, the United States avoids substantive constraints while portraying itself as a “responsible user of AI,” shifting risks and pressure onto competitors and using the banner of rules to serve the practical objective of containing rivals and consolidating strategic advantages.
As for why China did not sign the declaration, Jin argues that the main reasons lie in doubts about the vague formulation of principles such as “responsible use” and the lack of mechanisms to balance the technological advantages of early-mover states. China believes that the existing framework could entrench Western-dominated technological hegemony and constrain the technological autonomy and security space of developing countries.
China has long advocated multilateral governance, emphasizing that international rules should balance security and development, and opposing the politicization of technology or bloc confrontation. Certain provisions of the declaration are seen as potentially embedding ideological bias and failing to reflect fairness.
At the same time, China is rapidly catching up in the field of military AI and needs to preserve strategic space for independent research, development, and deployment, avoid being constrained by unreasonable rules, and ensure that national sovereignty and security are not subject to external interference.
Jin further notes that two shared structural obstacles prevented both China and the United States from signing the declaration.
First, military AI involves highly sensitive defense secrets and operational capabilities, making verification and enforcement of any international rules extremely difficult. While countries seek to mitigate risks through rules, they also worry that adversaries might “free-ride” or violate commitments, creating a classic prisoner’s dilemma.
Second, the pace of AI technological iteration far exceeds the cycle of rule-making. The declaration’s largely principled provisions are insufficient to address concrete risks such as autonomous weapons, algorithmic bias, and battlefield misjudgment. Both China and the United States therefore view the declaration as “incomplete,” lacking practical binding force, and of limited value to sign.



Really strong breakdown of the structural dynamics at play. The prisoner's dilemma point around verification is critical: states can't bind themselves when they fundamentally distrust opponents to follow through. In defense tech policy circles, this same dynamic plays out repeatedly, where soft norms fail because nobody wants to be the first mover on transparency.