China’s Ministry of Industry and Information Technology (MIIT) today released the Administrative Measures for the Ethical Management of Artificial Intelligence Science and Technology (Trial) (Draft for Public Comment).
The measures were jointly drafted by MIIT together with the Cyberspace Administration of China, the National Development and Reform Commission, the Ministry of Science and Technology, the Ministry of Agriculture and Rural Affairs, the Ministry of Culture and Tourism, the National Health Commission, the People’s Bank of China, the National Administration of Financial Regulation, the Chinese Academy of Sciences, and the China Association for Science and Technology.
The stated aim is to “implement the requirements of the Global AI Governance Initiative and the Global AI Governance Action Plan, regulate AI science and technology ethics governance, balance high-quality development with high-level security, and promote the high-quality development of the AI industry.”
China began placing emphasis on science and technology ethics governance three years ago. In March 2022, the General Office of the CPC Central Committee and the General Office of the State Council issued the Opinions on Strengthening the Governance of Science and Technology Ethics, the country’s first programmatic document for systematically deploying ethics governance in science and technology.
Subsequently, in September 2023, the Ministry of Science and Technology released the Administrative Measures for Science and Technology Ethics Review (Trial), which came into effect on December 1, 2023, marking the first unified regulatory framework for ethics review in scientific research activities.
These measures stipulate that Chinese research institutions, universities, and enterprises engaged in high-risk scientific and technological activities involving life and health, the ecological environment, and public safety must establish ethics review committees and carry out reviews in accordance with laws and regulations.
According to MIIT, the Administrative Measures for the Ethical Management of AI Science and Technology (Trial) serve as a detailed application of the 2022 Opinions on Strengthening the Governance of Science and Technology Ethics and the 2023 Administrative Measures for Science and Technology Ethics Review (Trial) specifically in the AI domain.
During its drafting, the Chinese government sought extensive input from enterprises, universities, and research institutions, took into account AI development trends and governance needs, and drew on domestic and international experience in AI ethics governance—aiming to align with international best practices while ensuring compatibility with China’s national conditions.
The draft measures can be understood as a set of “ethical safety rules” established by the Chinese government for AI development. The core idea is that no matter how rapidly AI innovates, principles of fairness, safety, responsibility, and transparency must come first, ensuring that technological progress does not create risks to human health, social order, or environmental sustainability.
The measures stipulate that AI research, development, and application activities carried out domestically must go through an “ethics review” process if they involve potential ethical risks. Who is responsible? Each university, research institute, medical institution, or enterprise engaged in AI activities. Where necessary, these entities must set up independent ethics committees; if they cannot conduct reviews themselves, they may entrust local or third-party “ethics service centers” to assist.
The review process falls into four categories—general, simplified, expert re-examination, and emergency. Complex projects require formal meetings, low-risk projects may go through simplified procedures, and in special cases experts must be involved or urgent responses arranged.
What exactly do the reviews examine? Whether algorithms are fair and unbiased; whether systems are controllable and trustworthy, and can be interrupted or directed by humans in case of problems; whether operations are transparent and explainable, with clear disclosure of uses and risks; whether data and decision processes are traceable, so responsibility can be identified when things go wrong; and whether the research as a whole has genuine social value with a reasonable balance of risks and benefits.
At the same time, the measures stress the need to establish standards, build service systems, encourage technological innovation and public education, promote the training of AI ethics governance talent in universities, enterprises, and research institutions, develop risk-monitoring and auditing tools, and promote AI products that comply with ethical requirements. Government departments are tasked with overall supervision, requiring all relevant entities to register and report their activities regularly on a national platform, and imposing penalties in accordance with law on those who violate the measures.
An English translation of the draft measures follows (Chapters IV through VI and the annex are summarized at the article level):
Administrative Measures for the Ethical Management of Artificial Intelligence Science and Technology (Trial)
(Draft for Public Comment)
Chapter I General Provisions
Article 1 [Purpose and Basis]
In order to regulate the ethical governance of artificial intelligence (AI) science and technology activities, promote fairness, justice, harmony, safety, and responsible innovation, and facilitate the healthy development of the AI industry, these Measures are formulated in accordance with the Law of the People’s Republic of China on Progress of Science and Technology, the Opinions on Strengthening the Governance of Science and Technology Ethics, the Administrative Measures for Science and Technology Ethics Review (Trial) (hereinafter the “Ethics Measures”), and other relevant laws, regulations, and provisions.
Article 2 [Scope of Application]
These Measures apply to AI science and technology activities carried out within the territory of the People’s Republic of China that may pose ethical risks and challenges in areas such as life and health, human dignity, ecological environment, public order, and sustainable development, as well as other AI-related science and technology activities that are subject to ethics review under laws, administrative regulations, and relevant national provisions.
Article 3 [Ethical Principles]
AI science and technology activities shall integrate ethical requirements throughout the entire process, adhering to the principles of enhancing human well-being, respecting life and rights, respecting intellectual property, upholding fairness and justice, reasonably controlling risks, maintaining openness and transparency, ensuring controllability and trustworthiness, and strengthening responsibility. They must comply with the Constitution, laws, regulations, and relevant provisions of China.
Chapter II Ethical Support and Promotion
Article 4 [Standards Development]
Establish and improve an AI ethics standards system, promote the formulation of international, national, industry, and group standards for AI ethics, and support the establishment of international platforms for standards cooperation. Encourage universities, research institutes, medical and health institutions, enterprises, and scientific and technological associations to participate in the development, validation, and promotion of AI ethics standards.
Article 5 [Service System and Mechanisms]
Promote the construction of an AI ethics service system, strengthen monitoring, early warning, assessment, certification, and consultation services for AI ethics risks, enhance enterprises’ capacity for risk prevention, increase support for micro, small, and medium-sized enterprises, and promote international exchanges and cooperation on AI ethics.
Article 6 [Encouraging Innovation]
Encourage universities, research institutes, medical and health institutions, enterprises, and relevant associations to engage in frontier innovation and cross-disciplinary research on AI ethics, support technical innovation to mitigate risks, promote orderly open access to ethical technologies, IP, and high-quality datasets, strengthen the development of risk management and auditing tools, explore scenario-based risk evaluation, and promote ethical AI products and services.
Article 7 [Publicity and Education]
Guide relevant organizations to strengthen education and publicity on AI ethics, leverage the role of associations, encourage public participation, and promote demonstration practices. Encourage mass media to conduct targeted communication on AI ethics.
Article 8 [Talent Development]
Support universities, research institutes, medical and health institutions, enterprises, and associations in developing education and training on AI ethics, advancing professional and curricular systems, training talent through diverse approaches, and promoting talent exchanges.
Chapter III Implementing Entities
Article 9 [Responsible Entities]
Universities, research institutes, medical and health institutions, and enterprises engaging in AI science and technology activities are responsible for AI ethics management within their organizations. Qualified entities should establish an AI Ethics Committee. Such committees shall be provided with staff, facilities, funding, and measures to ensure independence. Qualified entities are encouraged to pursue certification of their ethics management systems.
Article 10 [Committees and Responsibilities]
The charter, composition, and responsibilities of ethics committees shall be consistent with Articles 5–8 of the Ethics Measures. Committees shall include experts with backgrounds in AI technology, application, ethics, and law.
Article 11 [Ethics Service Centers]
Local authorities and relevant competent departments may, as appropriate, establish professional AI ethics service centers. These centers may provide ethics review, training, and consultation services; they shall establish proper systems and procedures, be staffed with qualified personnel, and operate under the supervision of the relevant authorities.
Chapter IV Work Procedures
Section 1: Application and Acceptance
Articles 12–13 outline the application process, required submission materials (e.g., research plan, risk assessments, compliance commitments), acceptance criteria, and selection of procedures (general, simplified, or emergency), with requirements for notification and supplementary documentation.
Section 2: General and Simplified Procedures
Articles 14–20 specify how ethics reviews are conducted under general and simplified procedures, including committee composition, consultation with independent experts, and review focus areas (fairness, controllability, transparency, traceability, qualifications, and proportionality of risks/benefits). Decisions must be made within 30 days of acceptance, with provisions for appeals, follow-up reviews, and recognition of results across institutions.
Section 3: Expert Re-examination
Articles 21–26 establish a “Review List” for high-risk AI activities requiring expert re-examination, procedures for application, review by expert groups, application of results, follow-up, and alignment with other regulatory measures.
Section 4: Emergency Procedures
Articles 27–28 require committees or service centers to establish emergency review procedures for public emergencies, generally within 72 hours, with expedited preliminary steps for projects requiring expert re-examination.
Chapter V Supervision and Management
Articles 29–32 assign responsibilities to national and local authorities, require reporting of ethics committee activities and projects to the national science and technology management information platform, ensure information-sharing, establish complaint mechanisms, and stipulate penalties for violations.
Chapter VI Supplementary Provisions
Articles 33–37 provide definitions, authorize local authorities and associations to issue implementing rules, clarify the relationship with other regulations, assign interpretive authority to MIIT and relevant departments, and specify the effective date.
Annex: List of AI Activities Requiring Expert Re-examination, including:
Development of human–machine integration systems strongly affecting human behavior, psychology, emotions, or health;
Development of algorithmic models, applications, and systems with public opinion mobilization or social consciousness-shaping capabilities;
Development of highly autonomous decision-making systems for scenarios involving safety or personal health risks.