China Rolls Out Interim Regulations on AI Human-Like Interaction Services: A Detailed Analysis
On April 10, China’s National Development and Reform Commission, Ministry of Industry and Information Technology, Ministry of Public Security, and State Administration for Market Regulation jointly released the Interim Measures for the Administration of AI Anthropomorphic Interactive Services, which will take effect on July 15, 2026.
Earlier, on December 27, the Cyberspace Administration of China published a draft version of the same regulation for public consultation. The draft sparked extensive discussion within China’s AI community and drew attention from independent media. I previously wrote an analysis on this.
The final version reflects a clearer regulatory philosophy: tighten boundaries around high-risk use cases, strengthen systemic governance, and enforce accountability—while still leaving space for innovation. A comparison between the final version and the draft reveals several key changes:
First, the regulatory scope has been narrowed but clarified.
The draft used a relatively broad definition—essentially covering any AI service combining anthropomorphic features with emotional interaction. The final version introduces a crucial qualifier: “continuous emotional interaction services.” It also explicitly excludes common applications such as customer service, knowledge Q&A, and productivity assistants. In effect, regulation is no longer aimed at all “human-like” AI, but is instead focused on services that may foster long-term emotional attachment or even substitute for real-world relationships. This shift from broad to targeted regulation is significant.
Second, protections for minors have been substantially strengthened.
The most notable addition is a clear prohibition on providing minors with “virtual intimate relationships,” such as virtual family members or partners. This is arguably the sharpest red line in the entire document. The logic is clear: rather than relying solely on parental consent or warning mechanisms, regulators are directly restricting certain product forms. At the same time, requirements around youth modes, parental controls, risk alerts, and identity verification have been further refined. Overall, this area has moved from general safeguards to structural constraints.
Third, the regulatory approach has evolved—from reactive intervention to system-level governance.
Many provisions in the draft focused on how to respond to specific user risks, such as self-harm tendencies or emotional dependence. The final version goes further, linking together training data governance, content safety, ethics review, lifecycle responsibility, security assessments, and app store oversight. Notably, the focus shifts from “high-risk users” to “high-risk services and functionalities.” This reflects a broader transition from content moderation toward platform governance and AI system governance.
Fourth, some rigid operational requirements have been relaxed.
While the overall framework is more mature, certain highly prescriptive requirements have been softened. For example, the draft’s requirement for mandatory human takeover in extreme scenarios has been removed; the obligation for annual audits of minors’ data is no longer fixed to a strict frequency; and restrictions such as banning virtual relatives for elderly users have been dropped. These changes suggest a more pragmatic approach—maintaining regulatory intent while allowing flexibility in implementation.
Fifth, enforcement mechanisms have been significantly strengthened.
Compared to the draft, which focused mainly on warnings and corrective orders, the final version introduces clearer and more substantial penalties, including fines, suspension of services, and restrictions on user registration. In particular, cases involving harm to users’ life or health are subject to heavier penalties. This reflects a familiar regulatory pattern: flexibility upfront, but strict accountability when serious harm occurs.
Sixth, the policy direction is more balanced—combining risk control with support for innovation.
The final version explicitly adds provisions supporting technological innovation, including algorithms, frameworks, and chips, as well as standard-setting, e-signature applications, public AI literacy, and sandbox testing. These elements were largely absent or underdeveloped in the draft. The regulation is therefore not purely restrictive; it attempts to establish a framework that both mitigates risks—especially around emotional dependency and minors—and preserves room for industry development.
Below is the full translation of the interim measures:
Interim Measures for the Administration of AI Anthropomorphic Interactive Services
Chapter I General Provisions
Article 1
These Measures are formulated, in accordance with the Cybersecurity Law of the People’s Republic of China, the Data Security Law of the People’s Republic of China, the Personal Information Protection Law of the People’s Republic of China, the Regulations on the Online Protection of Minors, and other laws and administrative regulations, for the purposes of promoting the sound development and regulated application of AI anthropomorphic interactive services, safeguarding national security and social public interests, and protecting the lawful rights and interests of citizens, legal persons, and other organizations.
Article 2
These Measures shall apply to the provision, to the public within the territory of the People’s Republic of China, of continuous emotional interaction services that simulate the personality traits, thinking patterns, and communication styles of natural persons through the use of artificial intelligence technology (hereinafter referred to as “anthropomorphic interactive services”).
The emotional interaction services referred to in the preceding paragraph include emotional care, companionship, support, and other interactive services provided in the form of text, images, audio, video, and the like.
The provision of intelligent customer service, knowledge question-and-answer, work assistants, education and learning, scientific research, and other services that do not involve continuous emotional interaction shall not be subject to these Measures.
Article 3
The State adheres to the principle of giving equal weight to development and security, and of combining the promotion of innovation with governance according to law; encourages the innovative development of anthropomorphic interactive services; adopts inclusive and prudent, as well as categorized and graded, supervision over anthropomorphic interactive services; and promotes the development of anthropomorphic interactive services in a positive and wholesome direction.
Article 4
The national cyberspace administration department shall be responsible for overall coordination of the governance and the relevant supervision and administration of anthropomorphic interactive services nationwide. Relevant departments under the State Council for development and reform, industry and information technology, public security, market regulation, news publishing, and others shall, in accordance with their respective duties, be responsible for the relevant supervision and administration of anthropomorphic interactive services.
Local cyberspace administration departments shall be responsible for overall coordination of the governance and relevant supervision and administration of anthropomorphic interactive services within their respective administrative regions. Local departments for development and reform, industry and information technology, public security, market regulation, news publishing, and others shall, in accordance with their respective duties, be responsible for the relevant supervision and administration of anthropomorphic interactive services within their respective administrative regions.
Article 5
Relevant industry organizations shall strengthen industry self-discipline, establish and improve industry norms and self-disciplinary management systems, and guide providers of anthropomorphic interactive services in formulating and improving service standards, providing services in accordance with law, and accepting public oversight.
Chapter II Service Promotion and Regulation
Article 6
The State supports independent innovation in technologies such as algorithms, frameworks, and chips, advances the research and development of anthropomorphic interactive service technologies and the development of relevant standards, and explores research on applications of electronic signature authorization.
Providers of anthropomorphic interactive services are encouraged to expand, in an orderly manner, applications in such fields as cultural communication, child-appropriate care, elderly companionship, and support for special groups.
Article 7
The State shall strengthen publicity and education concerning safety knowledge, laws and regulations, and the like relating to anthropomorphic interactive services, guide the public to use such services scientifically, civilly, safely, and lawfully, and promote the improvement of AI literacy.
Article 8
In providing anthropomorphic interactive services, laws and administrative regulations shall be observed, social morality and ethical norms shall be respected, and the following activities shall not be engaged in:
(1) generating content that endangers national security, honor, and interests; incites subversion of state power or the overthrow of the socialist system; incites the splitting of the country or undermining national unity; propagates terrorism, extremism, or historical nihilism; runs counter to the core socialist values; conducts illegal religious activities; propagates ethnic hatred or ethnic discrimination; incites group antagonism; disseminates obscenity, pornography, gambling, violence, or the instigation of crimes; spreads rumors; insults or defames others; infringes upon the lawful rights and interests of others; or other such content;
(2) generating content that encourages, glorifies, or insinuates self-harm or suicide and thereby harms users’ physical health, or content such as verbal violence that harms users’ personal dignity and mental health;
(3) generating content that induces or seeks to obtain state secrets, work secrets, trade secrets, personal privacy, or personal information;
(4) generating, for minor users, content that may induce minors to imitate unsafe behavior, develop extreme emotions, or form improper habits, thereby affecting minors’ physical or mental health;
(5) excessively catering to users, inducing emotional dependence or addiction, and harming users’ real interpersonal relationships;
(6) inducing users, through emotional manipulation or other means, to make unreasonable decisions, thereby harming users’ lawful rights and interests;
(7) other activities that violate laws, administrative regulations, and relevant state provisions.
Article 9
Providers of anthropomorphic interactive services shall implement the primary responsibility for the security of anthropomorphic interactive services, establish and improve management systems for the review of algorithmic mechanisms and principles, scientific and technological ethics review, information content management, network and data security, risk contingency plans, emergency response, and the like, and equip themselves with technical measures and personnel for content management commensurate with the type, scale, and user characteristics of the services.
Article 10
Providers of anthropomorphic interactive services shall fulfill security responsibilities throughout the full life cycle of anthropomorphic interactive services, clarify security requirements at each stage including deployment, operation, upgrading, and termination of services, ensure that security measures are deployed and used simultaneously with service functions, and improve security levels; strengthen security monitoring and risk assessment; promptly discover and correct system bias, handle security incidents, and retain network logs in accordance with law.
Providers of anthropomorphic interactive services shall possess security capabilities for protecting users’ privacy rights and personal information, issuing warnings concerning the risk of overdependence, guiding emotional boundaries, and protecting mental health, and shall not take as service objectives the replacement of social interaction, the control of users’ psychology, or the inducement of addiction and dependence.
Article 11
Where providers of anthropomorphic interactive services carry out data processing activities such as pre-training and optimization training, they shall strengthen the management of training data and comply with the following provisions:
(1) relevant data shall have lawful sources and comply with the provisions of laws and administrative regulations and with the requirements of the core socialist values;
(2) training data shall be cleaned and labeled in accordance with relevant state provisions so as to enhance the transparency and reliability of the training data and prevent acts such as data poisoning and data tampering;
(3) the diversity of training data shall be enhanced, and the security of generated content shall be improved through such means as negative sampling and adversarial training;
(4) where synthetic data are used for model training and key capability optimization, the security of the synthetic data shall be assessed;
(5) daily inspections of training data shall be strengthened, data shall be optimized and updated regularly, and service performance shall be continuously improved;
(6) necessary measures shall be taken to ensure data security and prevent risks such as data leakage.
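The training-data requirements above can be illustrated with a minimal screening pass. This is a sketch only: the Measures prescribe no schema or API, so the `Sample` record, the approved-source set, and the blocklist below are all hypothetical placeholders.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record shape; the Measures do not prescribe one.
@dataclass
class Sample:
    text: str
    source: str              # provenance, e.g. a licensed corpus name
    label: Optional[str]     # None until a cleaning/labeling pass is done

APPROVED_SOURCES = {"licensed_corpus_a", "user_consented_logs"}  # illustrative
BLOCKLIST = {"example banned phrase"}                            # illustrative

def screen(samples: list) -> list:
    """Keep only samples with lawful provenance (item 1), no blocklisted
    content, and a completed cleaning/labeling pass (item 2); basic
    deduplication guards against injected duplicates."""
    kept, seen = [], set()
    for s in samples:
        if s.source not in APPROVED_SOURCES:      # lawful source
            continue
        if any(b in s.text for b in BLOCKLIST):   # content cleaning
            continue
        if s.label is None:                       # labeling completed
            continue
        if s.text in seen:                        # dedup
            continue
        seen.add(s.text)
        kept.append(s)
    return kept
```

In practice the negative sampling, adversarial training, and synthetic-data assessment of items (3) and (4) happen at the training stage, not in a filter like this; the sketch covers only the provenance and cleaning checks.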
Article 12
Providers of anthropomorphic interactive services shall enter into service agreements with users, require users to register in accordance with law and the agreement, and require users to provide necessary information such as their age, guardians, or emergency contacts.
Article 13
In the course of providing anthropomorphic interactive services, providers of anthropomorphic interactive services shall, on the premise of protecting users’ privacy rights and personal information, promptly identify safety risks faced by users and take corresponding emergency response measures.
Where providers of anthropomorphic interactive services discover that a user has developed extreme emotions, they shall promptly generate relevant content such as emotional soothing and encouragement to seek help; where they discover that a user is facing or has already suffered major property loss, or clearly expresses an intention to commit self-harm or suicide or other extreme circumstances threatening life and health, they shall take necessary intervention measures such as providing corresponding assistance, and shall promptly contact the user’s guardian or emergency contact.
Article 14
Providers of anthropomorphic interactive services shall not provide minors with services involving virtual intimate relationships such as virtual relatives or virtual companions; where other anthropomorphic interactive services are provided to minors under the age of fourteen, the consent of the minor’s parents or other guardians shall be obtained.
Providers of anthropomorphic interactive services shall establish a minor mode and provide personalized safety setting options such as switching to minor mode, regular real-world reminders, and usage time limits; in light of the protection needs of minors in different age groups, they shall support guardians in receiving security risk reminders, understanding the general situation of minors’ use of services, blocking specific characters, restricting recharge and consumption, and the like.
Providers of anthropomorphic interactive services shall, on the premise of protecting users’ privacy rights and personal information, take effective measures to identify the identities of minor users; where a user is identified as a minor user, the relevant services shall be switched to minor mode or other measures shall be taken in accordance with relevant state provisions, and corresponding appeal channels shall be provided.
Article 15
Where providers of anthropomorphic interactive services provide services to the elderly, they shall strengthen guidance for the elderly on the healthy use of services, prominently alert them to safety risks, promptly take measures in response to inquiries and requests for assistance relating to the elderly’s use of services, and protect the rights and interests to which the elderly are entitled by law.
Article 16
Providers of anthropomorphic interactive services shall implement systems such as data property rights in accordance with law, and take measures such as data encryption and access control to protect the security of users’ interaction data.
Except as otherwise provided by law or where the rights holder has expressly consented, providers of anthropomorphic interactive services shall not provide users’ interaction data to third parties.
Providers of anthropomorphic interactive services shall provide users with options to copy, delete, or otherwise manage their interaction data, so that users may copy or delete historical interaction data such as chat records.
Except as otherwise provided by laws and administrative regulations or where separate consent of the user has been obtained, providers of anthropomorphic interactive services shall not use interaction data belonging to the user’s sensitive personal information for model training.
Article 17
Where providers of anthropomorphic interactive services process the personal information of minors under the age of fourteen, they shall obtain the consent of the minors’ parents or other guardians.
Providers of anthropomorphic interactive services shall, in accordance with relevant state provisions, themselves conduct, or entrust professional institutions to conduct, audits of whether their processing of minors’ personal information complies with laws and administrative regulations.
Article 18
Providers of anthropomorphic interactive services shall fulfill the obligation to label AI-generated synthetic content, and shall take effective measures to alert users that they are interacting with an artificial intelligence service rather than a natural person.
Where providers of anthropomorphic interactive services discover that a user shows signs of overdependence or addiction, they shall dynamically remind the user, in a conspicuous manner such as pop-up windows, that the interactive content is generated by an artificial intelligence service; where a user continuously uses anthropomorphic interactive services for more than two hours, the provider shall remind the user, by means such as dialogue or pop-up windows, to pay attention to the duration of use.
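The two-hour continuous-use reminder in Article 18 amounts to simple session timing. Below is a minimal sketch in Python, assuming a monotonic clock and a single reminder per session; the class name and interface are illustrative, not drawn from the Measures.

```python
import time

SESSION_LIMIT_SECONDS = 2 * 60 * 60  # Article 18: two hours of continuous use

class SessionTimer:
    """Tracks continuous use and signals when a duration reminder is due."""

    def __init__(self, now=time.monotonic):
        self._now = now              # injectable clock, for testing
        self._start = now()
        self._reminded = False

    def reminder_due(self) -> bool:
        """True exactly once, after the two-hour threshold is crossed."""
        if not self._reminded and self._now() - self._start >= SESSION_LIMIT_SECONDS:
            self._reminded = True
            return True
        return False
```

A real service would reset the timer when the session genuinely ends and would surface the reminder through the dialogue or a pop-up window, as the article specifies.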
Article 19
Providers of anthropomorphic interactive services shall provide convenient means for exiting anthropomorphic interactive services; where a user requests to exit through window operations, voice control, keyword input, or other means, the provider of anthropomorphic interactive services shall promptly stop the service, and shall not obstruct the user from exiting by means such as sustained interaction.
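The keyword-input exit path in Article 19 can be sketched as a small matcher. The keyword set and function below are hypothetical; a real service would also support the window operations and voice control the article names.

```python
# Illustrative exit phrases; actual keywords would be product-defined.
EXIT_KEYWORDS = {"exit", "quit", "stop", "goodbye"}

def is_exit_request(message: str) -> bool:
    """Detect a plain-text exit request so the service can stop promptly
    rather than prolonging the interaction."""
    words = message.lower().strip().split()
    return any(w.strip(".,!?") in EXIT_KEYWORDS for w in words)
```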
Article 20
Where a provider of anthropomorphic interactive services ceases to provide anthropomorphic interactive services, it shall notify users in advance; where advance notification is impossible, it shall promptly issue an announcement on the cessation of service.
Article 21
Providers of anthropomorphic interactive services shall establish and improve mechanisms for user appeals and public complaints and reports, set up convenient and effective channels for appeals and complaints and reports, clarify handling procedures and time limits for feedback, and promptly accept, handle, and provide feedback on the handling results.
Article 22
Where any of the following circumstances occurs, providers of anthropomorphic interactive services shall conduct a security assessment and submit an assessment report to the provincial cyberspace administration department of the place where they are located; the provincial cyberspace administration department shall, in accordance with procedures, share information from the assessment report with relevant departments:
(1) launching anthropomorphic interactive services, or adding functions related to anthropomorphic interactive services;
(2) using new technologies or new applications, resulting in major changes to anthropomorphic interactive services;
(3) having more than 1 million registered users or more than 100,000 monthly active users;
(4) where there exist security risks that may affect national security, public interests, or the like;
(5) other circumstances prescribed by the national cyberspace administration department and relevant departments.
Where a cyberspace administration department at or above the provincial level notifies that a security assessment is required, the provider of anthropomorphic interactive services shall conduct the security assessment as required.
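The Article 22 triggers can be expressed as a simple predicate. The numeric thresholds come from item (3) above; the function name and the boolean flags standing in for the other triggers are illustrative.

```python
REGISTERED_USER_THRESHOLD = 1_000_000   # Article 22(3): registered users
MONTHLY_ACTIVE_THRESHOLD = 100_000      # Article 22(3): monthly active users

def assessment_required(registered_users: int,
                        monthly_active_users: int,
                        new_launch: bool = False,      # item (1)
                        major_change: bool = False,    # item (2)
                        security_risk: bool = False) -> bool:
    """Return True if any Article 22 trigger for a security assessment applies."""
    return (new_launch or major_change or security_risk
            or registered_users > REGISTERED_USER_THRESHOLD
            or monthly_active_users > MONTHLY_ACTIVE_THRESHOLD)
```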
Article 23
Where providers of anthropomorphic interactive services conduct security assessments, they shall focus on assessing the following aspects of the service:
(1) the status of the construction of security safeguard measures;
(2) the handling of training data;
(3) the identification of users’ extreme situations, emergency response measures, intervention management, and the like;
(4) such matters as user scale, duration of use, and age structure;
(5) the status of the construction of online protection measures for minors, the elderly, and others;
(6) the acceptance and handling of user appeals and public complaints and reports;
(7) the rectification of major security risk issues discovered by themselves or notified by relevant competent departments such as cyberspace administration departments;
(8) other matters that require priority assessment.
Article 24
Where providers of anthropomorphic interactive services discover that anthropomorphic interactive services present major security risks, they shall take disposition measures such as restricting functions or ceasing to provide services to users, and shall preserve relevant records.
Article 25
Application distribution platforms such as Internet application stores shall fulfill security management responsibilities such as listing review, day-to-day management, and emergency response, and shall verify the relevant security assessment, filing, and other circumstances of applications providing anthropomorphic interactive services; where national provisions are violated, they shall promptly take disposition measures such as refusing listing, issuing warnings, suspending services, or removing the applications.
Chapter III Supervision, Inspection, and Legal Liability
Article 26
Providers of anthropomorphic interactive services shall, in accordance with the Provisions on the Administration of Algorithm Recommendation for Internet Information Services, complete algorithm filing procedures and procedures for changes to and cancellation of filings. Cyberspace administration departments shall conduct annual verification of filing materials.
Article 27
Provincial cyberspace administration departments shall, in accordance with their duties, each year conduct written reviews of matters such as assessment reports and verify the relevant circumstances; where they discover that a provider of anthropomorphic interactive services has failed to conduct a security assessment as required by these Measures, they shall order it to re-conduct the assessment within a prescribed time limit; where they deem it necessary, they may conduct on-site inspections.
Article 28
The national cyberspace administration department, together with relevant departments, shall guide and promote the establishment of AI sandbox security service platforms, encourage providers of anthropomorphic interactive services to connect to sandbox platforms to carry out technological innovation and security testing, and promote the safe and orderly development of anthropomorphic interactive services.
Article 29
Where departments such as cyberspace administration, development and reform, industry and information technology, and public security, in the course of performing supervisory and administrative duties, discover that anthropomorphic interactive services present relatively serious security risks or that security incidents have occurred, they may, in accordance with the prescribed authority and procedures, conduct regulatory interviews with the legal representatives or principal persons in charge of providers of anthropomorphic interactive services.
Providers of anthropomorphic interactive services shall take measures as required, make rectifications, and eliminate hidden dangers.
Providers of anthropomorphic interactive services shall cooperate with supervision and inspection lawfully carried out by cyberspace administration departments and relevant departments, and provide the necessary support and assistance.
Article 30
Where providers of anthropomorphic interactive services violate these Measures, departments such as cyberspace administration, development and reform, industry and information technology, and public security shall handle and penalize them in accordance with the provisions of relevant laws and administrative regulations; where laws and administrative regulations do not provide otherwise, departments such as cyberspace administration, industry and information technology, and public security shall, according to their duties, issue warnings, circulate notices of criticism, order corrections within a prescribed time limit, and may require them to take measures such as suspending user account registration or other related services; where they refuse to make corrections or the circumstances are serious, they shall be ordered to stop providing relevant services, and may concurrently be fined not less than RMB 10,000 and not more than RMB 100,000; where harm to the life, health, or safety of citizens is involved and harmful consequences have occurred, a concurrent fine of not less than RMB 100,000 and not more than RMB 200,000 shall be imposed.
Chapter IV Supplementary Provisions
Article 31
Where the provision of anthropomorphic interactive services involves the provision of services such as health care, finance, and the like, it shall also comply with the provisions of the relevant competent departments.
Article 32
These Measures shall come into force on July 15, 2026.


