China Issues New Rules on AI Ethics Review and Support
On April 3, 2026, China’s Ministry of Industry and Information Technology, together with nine other government agencies, jointly issued the Administrative Measures for the Ethical Review and Services of Artificial Intelligence Science and Technology (Trial).
Since the Opinions on Strengthening the Governance of Science and Technology Ethics established the top-level design in 2022, China has gradually elevated artificial intelligence, alongside the life sciences, to a key domain of governance. The Measures for the Ethical Review of Science and Technology (Trial) released in 2023, together with subsequent regulations on generative AI, established a core mechanism centered on “primary responsibility for self-review by institutions, supplemented by expert review for high-risk cases,” while tightly linking ethical compliance with algorithm filing and security assessments. In particular, AI technologies with capabilities for public opinion mobilization, highly autonomous decision-making, or deep human–machine integration have been explicitly included in the “high-risk science and technology activities list,” requiring mandatory review by third-party experts.
The introduction of the Administrative Measures for the Ethical Review and Services of Artificial Intelligence Science and Technology (Trial) marks a new stage in China’s AI ethics governance, characterized by a dual emphasis on professionalization and service provision.
This system not only requires companies to submit proof of ethical review during algorithm filing, but also introduces, for the first time, a dual-track access model combining “algorithm filing + ethical evaluation.” At the same time, regulatory attention is expanding beyond content security to encompass broader societal and labor protections. For example, “algorithm auditing” mechanisms targeting platform-based sectors such as ride-hailing and food delivery now mandate that algorithmic systems incorporate human override functions to prevent “algorithmic exploitation” of workers. China is thus transforming AI governance from a focus on “content and security regulation” into a more institutionalized, operational, and auditable “ethical compliance system,” embedded within the broader framework of national technology governance and industrial policy.
In terms of institutional design, the system rests on a three-tier structure: internal ethics committees within organizations, external service centers, and government-led expert review. All universities, research institutions, and companies engaged in AI development are required to establish ethics committees and assume primary responsibility. Where internal capacity is insufficient, organizations may delegate to external “ethics review service centers.” For high-risk projects—such as those affecting public opinion, psychological behavior, or involving highly automated decision-making—mandatory entry into a government-led expert review process is required. In essence, this design embeds AI ethics governance within organizations while preserving the state’s ultimate supervisory authority.
At the operational level, the Measures establish a relatively comprehensive quasi-administrative approval process. Before a project can commence, applicants must submit detailed materials, including technical plans, data sources, algorithmic mechanisms, application scenarios, and, most critically, ethical risk assessments and contingency plans. Review bodies are required to issue decisions within 30 days and may request revisions or reject applications outright. Approval is not a one-off process: projects are subject to ongoing monitoring and review during operation, and may be suspended or terminated if risk conditions change. This implies that AI projects in China will be subject to a form of “dynamic regulation” similar to that applied to pharmaceuticals or medical research.
From the perspective of review criteria, the Measures effectively establish an operationalized indicator system for AI governance in China. Six key dimensions are emphasized: whether the technology genuinely promotes social well-being; whether it entails algorithmic discrimination; whether the system is controllable and reliable; whether it is transparent and explainable; whether accountability is traceable; and whether privacy is adequately protected. While these standards appear broadly aligned with Western AI governance frameworks—such as the OECD principles and the EU AI Act—their implementation places greater emphasis on controllability and risk prevention, reflecting a more engineering-oriented approach to governance.
The most distinctive feature of this framework is that it is not a traditional regulatory model focused solely on review without support; rather, it embeds a systematic service provision mechanism alongside oversight. On the one hand, it establishes compliance boundaries through tools such as ethical review and expert reassessment; on the other, it provides enterprises with risk identification and compliance capabilities through ethics review service centers and standardized evaluation tools. In essence, it transforms AI ethics from a mere compliance threshold into a capability that can be provided and outsourced, reflecting a governance approach that both manages risk and promotes development.
Full translation of the Measures (Unofficial):
Administrative Measures for the Ethical Review and Services of Artificial Intelligence Science and Technology (Trial)
Chapter I General Provisions
Article 1
In order to regulate the ethical governance of artificial intelligence science and technology activities, promote fairness, justice, harmony, safety, and responsible innovation, and facilitate the healthy development of the artificial intelligence industry, these Measures are formulated in accordance with the Law of the People’s Republic of China on Scientific and Technological Progress, the Opinions on Strengthening the Governance of Science and Technology Ethics, the Measures for the Ethical Review of Science and Technology (Trial) (hereinafter referred to as the “Ethics Measures”), and other relevant laws, regulations, and provisions.
Article 2
The artificial intelligence science and technology activities to which these Measures apply refer to artificial intelligence scientific research, technological development, and other activities conducted within the territory of the People’s Republic of China that may pose ethical risks or challenges in terms of human dignity, public order, life and health, ecological environment, and sustainable development, as well as other science and technology activities that are required to undergo artificial intelligence science and technology ethics review in accordance with laws, administrative regulations, and relevant national provisions.
Article 3
Entities conducting artificial intelligence science and technology activities shall integrate ethical requirements throughout the entire process, adhere to the principles of promoting human well-being, respecting life and rights, ensuring fairness and justice, reasonably controlling risks, maintaining openness and transparency, protecting privacy and security, and ensuring controllable and trustworthy artificial intelligence, and shall comply with the Constitution, laws and regulations of China, and relevant provisions.
Chapter II Services and Promotion
Article 4
A system of artificial intelligence science and technology ethics standards shall be established and improved, and efforts shall be made to promote the formulation of relevant international standards, national standards, industry standards, and group standards, and to support the establishment of platforms for international standardization exchanges and cooperation.
Higher education institutions, research institutions, medical and health institutions, enterprises, and scientific and technological social organizations are encouraged to participate in the formulation, validation, and promotion of artificial intelligence science and technology ethics standards.
Article 5
The construction of an artificial intelligence science and technology ethics service system shall be advanced, strengthening the supply of services such as artificial intelligence science and technology ethics risk monitoring and early warning, testing and evaluation, certification, and consulting, improving enterprises’ capabilities in technological research and development and artificial intelligence science and technology ethics risk prevention, increasing support and service efforts for small, medium, and micro enterprises in artificial intelligence science and technology ethics review, and promoting international exchange and cooperation in artificial intelligence science and technology ethics.
Article 6
Higher education institutions, research institutions, medical and health institutions, enterprises, and scientific and technological social organizations are encouraged to carry out research on artificial intelligence science and technology ethics review, support technological innovation in artificial intelligence science and technology ethics review, strengthen the use of technical means to prevent artificial intelligence science and technology ethics risks; promote the orderly open sharing of high-quality datasets for artificial intelligence science and technology ethics review, strengthen the research and development of general risk management, evaluation, and auditing tools, explore science and technology ethics risk assessment and evaluation based on application scenarios; promote artificial intelligence products and services that comply with science and technology ethics, and protect intellectual property rights related to artificial intelligence science and technology ethics review technologies.
Article 7
Publicity and education on artificial intelligence science and technology ethics shall be carried out, the role of scientific and technological social organizations in artificial intelligence science and technology ethics publicity and education shall be brought into play, public participation shall be encouraged, practical demonstrations shall be promoted, and public awareness and literacy in ethics shall be enhanced. Mass media shall be guided to conduct targeted publicity and education on artificial intelligence science and technology ethics.
Article 8
Support shall be provided to higher education institutions, research institutions, medical and health institutions, enterprises, and scientific and technological social organizations to carry out education and training related to artificial intelligence science and technology ethics, promote the development of professional systems and curriculum systems, cultivate artificial intelligence science and technology ethics talents through multiple approaches, and promote talent exchange.
Chapter III Implementing Entities
Article 9
Higher education institutions, research institutions, medical and health institutions, enterprises, and other entities engaged in artificial intelligence science and technology activities shall be the responsible entities for the management of artificial intelligence science and technology ethics review within their organizations, and shall establish artificial intelligence science and technology ethics committees (hereinafter referred to as the “Committee”) in accordance with the relevant requirements of Article 4 of the Ethics Measures.
The Committee shall be equipped with necessary personnel, office premises, funding, and other conditions, and effective measures shall be taken to ensure that the Committee can independently carry out its work. Qualified relevant entities are encouraged to carry out certification related to artificial intelligence science and technology ethics management systems.
Article 10
The charter, composition, and the responsibilities and obligations of Committee members shall comply with Articles 5 to 8 of the Ethics Measures. The composition of the Committee shall include experts with corresponding professional backgrounds in artificial intelligence technology, applications, ethics, law, and other fields.
Article 11
Local authorities and relevant competent departments may, in light of actual circumstances, rely on relevant entities to establish specialized artificial intelligence science and technology ethics review and service centers (hereinafter referred to as “Service Centers”).
Service Centers may accept commissions from other entities to provide services such as artificial intelligence science and technology ethics review, re-examination, training, and consulting. A Service Center shall not simultaneously provide both review and re-examination services for the same artificial intelligence science and technology activity.
Service Centers shall establish standardized management systems and procedures, be equipped with full-time personnel with the capability to conduct artificial intelligence science and technology ethics review and services, and shall be subject to supervision by local or relevant competent departments.
Chapter IV Working Procedures
Section I Application and Acceptance
Article 12
For artificial intelligence science and technology activities falling within the scope specified in Article 2 of these Measures, the person in charge of the artificial intelligence science and technology activity shall apply to the Committee of their entity. Where the entity has not established a Committee or the Committee is unable to meet the requirements for conducting science and technology ethics review, an application shall be submitted to the Service Center entrusted by the entity to conduct the ethics review; where there is no affiliated entity, a qualified Service Center shall be entrusted to conduct the ethics review.
The person in charge of the artificial intelligence science and technology activity shall, in accordance with the provisions, submit application materials to the Committee or the Service Center. The application materials shall mainly include:
(1) the artificial intelligence science and technology activity plan, including research background, objectives and plans, legal qualification materials of the relevant institutions involved, personnel information, sources of funding, the algorithm mechanisms and principles to be adopted, data sources and methods of acquisition, testing and evaluation methods, the software and hardware products to be formed, expected application fields and applicable groups, etc.;
(2) the ethical risk assessment of the artificial intelligence science and technology activity, as well as prevention, control, and emergency response plans, including assessment of potential ethical risks arising from the expected application of artificial intelligence technology, monitoring and early warning measures for ethical risks, and prevention and control plans for potential ethical risks;
(3) a letter of commitment to comply with artificial intelligence science and technology ethics and research integrity requirements.
Article 13
The Committee or the Service Center shall determine whether to accept the application based on the submitted materials and notify the applicant. Where the application is accepted, the applicable procedure—general, simplified, or emergency—shall be determined based on factors such as the likelihood and severity of ethical risks and any emergency circumstances. Ethics review shall be conducted offline or online as required by the applicable procedure. Where the materials are incomplete, the applicant shall be notified, in a single comprehensive notice, of all materials that need to be supplemented.
Section II General Procedures and Simplified Procedures
Article 14
Meetings for artificial intelligence science and technology ethics review shall be chaired by the Chairperson of the Committee or a Vice-Chairperson designated by the Chairperson. No fewer than five members shall be present, and members from different categories as specified in Article 10 of these Measures shall be included. Service Centers may organize and implement their work with reference to the Committee’s provisions.
Based on review needs, experts or consultants in relevant fields who do not have a direct interest in the matter may be invited to provide advisory opinions. Advisory experts shall not participate in voting.
Article 15
When conducting artificial intelligence science and technology ethics review, the Committee or the Service Center shall focus on the following aspects:
(1) In terms of human well-being, whether the artificial intelligence science and technology activity has scientific and social value; whether the research objectives contribute positively to enhancing human well-being and achieving sustainable social development; and whether the risks of the activity are reasonably balanced against its benefits.
(2) In terms of fairness and justice, whether the standards for selecting training data and the design of algorithms, models, and systems are reasonable; whether measures have been taken to prevent bias, discrimination, and algorithmic exploitation, and to ensure objectivity and inclusiveness in resource allocation, access to opportunities, and decision-making processes.
(3) In terms of controllability and trustworthiness, whether the robustness of models and systems can be ensured to cope with open environments, extreme situations, and interfering factors; whether users are able to control, guide, and intervene in the basic operation of models and systems; and whether continuous monitoring plans and emergency response plans have been formulated.
(4) In terms of transparency and explainability, whether information such as the purpose, operational logic, interaction methods, and potential risks of algorithms, models, and systems is reasonably disclosed; and whether effective technical means are adopted to enhance explainability.
(5) In terms of accountability and traceability, whether measures such as log management are in place to clearly record sufficient information on data, algorithms, models, and systems at each stage, ensuring full-chain traceability and management; and whether the qualifications of scientific and technical personnel meet relevant requirements.
(6) In terms of privacy protection, whether sufficient measures are taken to ensure the effective protection of privacy data in activities such as data collection, storage, processing, and use, as well as in the research and development of new data technologies.
Article 16
The Committee or the Service Center shall, within 30 days from the date of acceptance of the application, make a decision of approval, approval after modification, or disapproval. In complex cases or where supplementary or corrective materials are required, the time limit may be appropriately extended, and the extended period shall be specified.
Where a decision of approval after modification or disapproval is made, the Committee or the Service Center shall provide suggestions for modification or state the reasons for disapproval. Where the applicant objects, any appeal shall be filed with the Committee or the Service Center within three working days of receiving the decision. Where the grounds for appeal are sufficient, the Committee or the Service Center shall make a new decision within seven working days.
Article 17
The person in charge of the artificial intelligence science and technology activity shall promptly identify changes in ethical risks and report such changes to the Committee or the Service Center.
The Committee or the Service Center shall, in accordance with Article 19 of the Ethics Measures, conduct follow-up reviews of approved artificial intelligence science and technology activities, promptly grasp changes in ethical risks, and may, where necessary, make decisions such as suspending or terminating relevant activities. The interval for follow-up reviews shall generally not exceed 12 months.
Article 18
Where multiple entities jointly carry out artificial intelligence science and technology activities, mutual recognition of ethics review results among entities may be conducted based on actual circumstances.
Article 19
A simplified procedure may be applied under any of the following circumstances:
(1) the likelihood and degree of ethical risks of the artificial intelligence science and technology activity are not higher than the routine risks encountered in daily life;
(2) minor modifications are made to an already approved activity plan without increasing the risk-benefit ratio;
(3) follow-up reviews of activities without major adjustments in earlier stages.
Article 20
The Committee or the Service Center shall formulate working procedures and tracking frequency for reviews under the simplified procedure. Simplified reviews shall be conducted by two or more members designated by the Chairperson of the Committee. Service Centers may organize implementation with reference to the Committee’s provisions.
Where, during the simplified review process, a negative opinion arises, doubts exist regarding the review content, or members’ opinions are inconsistent, the case shall be transferred to the general procedure.
Section III Expert Re-examination Procedures
Article 21
The Ministry of Industry and Information Technology and the Ministry of Science and Technology, together with relevant departments, shall formulate and publish a “List of Artificial Intelligence Science and Technology Activities Requiring Expert Re-examination” (hereinafter referred to as the “Re-examination List”), and dynamically adjust it as needed.
Article 22
For artificial intelligence science and technology activities included in the Re-examination List, after passing the preliminary review by the Committee or the Service Center, an application for expert re-examination shall be submitted by the entity. Where multiple entities are involved, the leading entity shall be responsible for the application.
Central enterprises, as well as higher education institutions, research institutions, and medical and health institutions directly under central and state organs, shall directly submit applications to the relevant competent departments for organizing expert re-examination. Other entities shall submit applications to local authorities for organizing expert re-examination.
Article 23
The entity undertaking the artificial intelligence science and technology activity shall submit materials for expert re-examination in accordance with Article 27 of the Ethics Measures.
Local or relevant competent departments shall, in accordance with Articles 28 to 30 of the Ethics Measures, organize the establishment of expert review groups to review the compliance and rationality of the preliminary review opinions, and shall provide feedback on the re-examination opinions to the applying entity within 30 days of receiving the application.
Local or relevant competent departments may entrust Service Centers to carry out specific re-examination work.
Article 24
The Committee or the Service Center shall make a final ethics review decision based on the expert re-examination opinions.
Article 25
The Committee or the Service Center shall strengthen follow-up reviews of artificial intelligence science and technology activities included in the Re-examination List, with intervals generally not exceeding six months.
Where there are significant changes in ethical risks, a new ethics review shall be conducted in accordance with Article 20 of the Ethics Measures and an application for expert re-examination shall be submitted.
Article 26
Where artificial intelligence science and technology activities are subject to regulatory measures such as registration, filing, or administrative approval in areas including deep synthesis, algorithmic recommendation, and generative artificial intelligence service management, and where compliance with ethical requirements is incorporated as a condition for approval or regulatory content, expert re-examination may no longer be required.
Section IV Emergency Procedures
Article 27
The Committee or the Service Center shall establish emergency review systems for artificial intelligence science and technology ethics, specifying emergency review processes and standard operating procedures under urgent circumstances such as public emergencies. Emergency reviews shall generally be completed within 72 hours. For activities subject to expert re-examination procedures, the review prior to expert re-examination shall generally be completed within 36 hours.
Article 28
The Committee or the Service Center shall ensure the quality and timeliness of emergency ethics reviews, strengthen follow-up work and process supervision, and, where necessary, may invite advisory experts in relevant fields to attend meetings and provide opinions.
Chapter V Supervision and Administration
Article 29
The Ministry of Science and Technology shall be responsible for overall coordination and guidance of national science and technology ethics supervision. The Ministry of Industry and Information Technology, together with relevant departments, shall be responsible for artificial intelligence science and technology ethics governance and strengthen coordination and guidance of emergency ethics reviews.
Relevant departments shall, within the scope of their respective responsibilities, supervise and administer artificial intelligence science and technology ethics reviews within their industries and systems. Local authorities shall, within the scope of their responsibilities, supervise and administer artificial intelligence science and technology ethics reviews within their jurisdictions.
Article 30
Entities shall, in accordance with Articles 43 to 45 of the Ethics Measures, register relevant information of Committees and artificial intelligence science and technology activities included in the Re-examination List through the National Science and Technology Ethics Management Information Registration Platform, and submit annual reports on Committee work and implementation reports of activities included in the Re-examination List, among other materials.
Service Centers shall register and submit annual work reports in accordance with the above provisions.
The Ministry of Science and Technology and relevant competent departments shall share information related to artificial intelligence science and technology ethics registration.
Article 31
Local authorities, relevant competent departments, and entities engaged in artificial intelligence science and technology activities shall, based on the actual conditions of their industries, systems, and entities, establish smooth channels for reporting violations of artificial intelligence science and technology ethics, and handle such matters in accordance with relevant provisions.
Article 32
Where violations of these Measures occur in the course of artificial intelligence science and technology activities or related ethical work, they shall be investigated and handled, and corresponding penalties imposed, in accordance with the Cybersecurity Law of the People’s Republic of China, the Data Security Law of the People’s Republic of China, the Personal Information Protection Law of the People’s Republic of China, the Law of the People’s Republic of China on Scientific and Technological Progress, and other relevant laws, regulations, and provisions.
Chapter VI Supplementary Provisions
Article 33
For time limits stipulated in these Measures, where not specified as working days, they shall be counted as calendar days.
The term “local” as used in these Measures refers to provincial-level administrative departments designated by provincial people’s governments to be responsible for the ethics review and management of artificial intelligence science and technology. The term “relevant competent departments” refers to relevant departments under the State Council.
Article 34
Local authorities and relevant competent departments may, in accordance with these Measures and based on actual circumstances, formulate or revise rules and detailed measures for artificial intelligence science and technology ethics review and services within their respective regions, industries, or systems. Scientific and technological social organizations may formulate specific norms and guidelines for ethics review and services in their respective fields.
Article 35
Where relevant competent departments have special provisions for artificial intelligence science and technology ethics review and services within their industries or systems that are consistent with the spirit of these Measures, such provisions shall prevail. Matters not provided for in these Measures shall be governed by the Ethics Measures and relevant laws and regulations.
Article 36
These Measures shall be interpreted by the Ministry of Industry and Information Technology in conjunction with relevant departments.
Article 37
These Measures shall come into force on the date of issuance.
Annex: List of Artificial Intelligence Science and Technology Activities Requiring Expert Re-examination
(1) the development of human–machine integration systems that have a significant impact on human behavior, psychological emotions, and life and health;
(2) the development of algorithm models, application programs, and systems that possess the capability to influence public opinion, social mobilization, and social consciousness;
(3) the development of highly autonomous automated decision-making systems for scenarios involving safety risks and risks to personal health.
This list shall be dynamically adjusted as required.