CAC Tech Bureau Head on China’s AI-Generated Content Labeling Rules
On February 2, 2026, Yu Yonghe, head of the Network Management Technology Bureau at China’s Cyberspace Administration, published a signed article explaining how China built its labeling system for AI-generated content.
Yu argues that as generative AI has rapidly moved into the mainstream, China has used content labeling to turn long-standing governance questions—what content is AI-generated, who generated it, and where it came from—into rules that are identifiable, traceable, and workable in practice.
Through the introduction of the AI-Generated and Synthesized Content Labeling Measures, a mandatory national standard, and a set of practical implementation guidelines, China has gradually put together a labeling framework that covers the entire lifecycle of content generation, distribution, and use. In terms of design, the system builds on earlier regulatory approaches to algorithmic recommendation and deep-synthesis technologies, while steadily tightening technical and operational requirements. It also adopts lighter-touch labeling methods—such as corner labels for text and rhythm-based cues for audio—and takes into account differences in platforms’ technical capabilities and upgrade costs, leaving companies room to choose solutions that fit their circumstances.
Yu points out that the AI labeling system was developed through close collaboration between experts and platform companies. Legal, technical, and standards specialists contributed throughout the process, while major internet platforms participated in pilot testing and validation, helping ensure the system would work in practice.
He also notes that the technical design reflects the realities of most platforms’ capabilities and costs. Rather than relying solely on complex digital watermarking, the system adopts a mix of explicit labels and metadata-based implicit labels, allowing platforms to choose solutions that fit their technical capacity. To keep costs manageable, implicit labels only retain information on the most recent dissemination platform, and detection of AI-generated content is not made mandatory, instead combining user disclosure with platform-level identification.
According to Yu, since the system came into force, China’s major platforms have moved quickly to comply, labeling has scaled up rapidly, the space for disinformation has been noticeably reduced, and public awareness of AI-generated content has improved—pointing to an emerging Chinese model of AI governance with growing international relevance.
The Network Management Technology Bureau is a technical regulatory department within the Cyberspace Administration of China and the Office of the Central Cyberspace Affairs Commission. Its core role is to translate policy goals on cybersecurity and data security into enforceable technical rules and system-level requirements. The bureau coordinates work on network operations security, protection of critical information infrastructure, technical compliance for data processing and cross-border data flows, and technical assessments in cybersecurity reviews. In China’s internet and data governance system, it plays a key role in turning high-level principles into concrete technical and engineering solutions.
What follows is a full translation of Yu’s article. All errors are mine.
In 1950, British scientist Alan Mathison Turing proposed the “Turing Test” thought experiment to explore how to distinguish whether one is communicating with a human or a machine. More than 70 years later, with the rapid development of generative artificial intelligence, some AI applications are already capable of interactive communication with humans, and generated and synthesized speech, images, and video have become increasingly realistic. At present, the abuse of AI technologies to fabricate and disseminate false information has become a common global challenge. In response, using content labeling to identify generated and synthesized content has become a consensus among many countries.
The widespread application of AI technologies brings both new opportunities and new challenges to socioeconomic development. The CPC Central Committee, with Comrade Xi Jinping at its core, attaches great importance to the healthy development and secure governance of artificial intelligence. On November 28, 2025, during the 23rd collective study session of the Political Bureau of the CPC Central Committee, General Secretary Xi Jinping emphasized that “at present, new technologies and applications such as artificial intelligence and big data are constantly emerging, posing challenges to the governance of the online ecosystem while also providing new enabling conditions.” The Suggestions of the CPC Central Committee on Formulating the Fifteenth Five-Year Plan for National Economic and Social Development, adopted at the Fourth Plenary Session of the 20th CPC Central Committee, call for strengthening AI governance and improving relevant laws and regulations, policy frameworks, application norms, and ethical standards. Adhering to the principle of balancing development and security, the Cyberspace Administration of China actively encourages innovation in AI while steadily advancing law-based AI governance. Focusing on the critical stage of identifying generated and synthesized content, it has promulgated and implemented the system for labeling AI-generated and synthesized content (hereinafter referred to as the “AI Labeling System”), laying a solid foundation for China’s full life-cycle AI governance framework.
Series of outreach briefings on the labeling system for AI-generated and synthesized content (Zhejiang stop).
Implementing the decisions and arrangements of the CPC Central Committee and building an AI labeling system to meet the needs of governance in the new era
Establishing an AI labeling system is an important measure to thoroughly implement the decisions and arrangements of the CPC Central Committee. General Secretary Xi Jinping has pointed out that “it is necessary to improve long-term mechanisms for online ecosystem governance, and to focus on enhancing its forward-looking, precision-based, systematic, and coordinated nature, so as to continuously foster a clean and upright cyberspace.” The Decision of the CPC Central Committee on Further Deepening Reform Comprehensively and Advancing Chinese Modernization, adopted at the Third Plenary Session of the 20th CPC Central Committee, calls for improving mechanisms for the development and management of generative artificial intelligence. The establishment and implementation of the AI labeling system represents an important step in carrying out these decisions, marking China’s gradual move toward more precise rule-making in the field of AI governance. It is of great significance for promoting the healthy and orderly development of new technologies and applications, safeguarding cyberspace security, and enhancing overall AI governance capacity.
Establishing an AI labeling system is also a practical necessity for fostering a clean and upright online environment, protecting the rights and interests of netizens, and safeguarding national security. While generative AI applications have greatly enriched the online ecosystem, abuse and malicious use have given rise to security risks such as disinformation, deepfake fraud, and malicious outputs, infringing upon citizens’ rights, disrupting social cognition and market order, continuously eroding public trust in online information, and threatening social stability and national security. The AI labeling system seeks to build a modern governance mechanism driven by institutional innovation and supported by technical capabilities. By providing society with simple and effective means to distinguish generated and synthesized content, it meets the practical needs of creating a clean and upright cyberspace and strengthening comprehensive online governance.
Establishing an AI labeling system is an effective solution for reinforcing the responsibilities of all actors involved in “content creation, content dissemination, and application distribution.” The creation, dissemination, distribution, and use of generated and synthesized content involve multiple stages and a wide range of stakeholders, including content creation platforms, content dissemination platforms, application distribution platforms, and the general public. The AI labeling system clearly defines the responsibilities and obligations of each party: requiring providers of generated and synthesized services to fulfill source-labeling obligations, content dissemination service providers to record processes at the dissemination stage, application distribution platforms to verify labeling functions, and netizens to proactively declare and label content. This allocation of rights and responsibilities provides clear and concrete behavioral guidance for all parties, promotes a governance model in which responsibilities are fulfilled and coordination is shared, and effectively enhances the overall effectiveness of online ecosystem governance.
Upholding Fundamental Principles While Advancing Innovation: The AI Labeling System Contributes a Chinese Approach to AI Governance
In March 2025, the Cyberspace Administration of China, together with the Ministry of Industry and Information Technology, the Ministry of Public Security, and the National Radio and Television Administration, jointly issued the Measures for the Labeling of Artificial Intelligence–Generated and Synthesized Content (hereinafter referred to as the “Labeling Measures”). At the same time, the Standardization Administration of China released the supporting mandatory national standard, Cybersecurity Technology—Methods for Labeling Artificial Intelligence–Generated and Synthesized Content (hereinafter referred to as the “Labeling Standard”). The National Technical Committee on Cybersecurity Standardization subsequently issued a series of supporting practical guidance documents. On September 1, 2025, the AI labeling system officially came into force. With active responses and implementation by all parties, a Chinese approach to AI security governance has initially taken shape, one that is systematic, innovative, and inclusive.
The AI labeling system embodies a systematic approach to AI governance. First, it represents a continuation and expansion of the concept of online ecosystem governance, maintaining consistency and coherence over time. In 2021, the Cyberspace Administration of China took the lead in issuing the Regulations on the Administration of Algorithmic Recommendation Services for Internet Information Services, which for the first time introduced labeling requirements for generated and synthesized content at the dissemination stage. In 2022 and 2023, the Regulations on the Administration of Deep Synthesis of Internet Information Services and the Interim Measures for the Administration of Generative Artificial Intelligence Services were successively issued, requiring deep-synthesis and generative AI content to be labeled. With the promulgation and implementation of the Labeling Measures in 2025, the requirements for AI labeling—combining explicit labels with implicit metadata labels—were further clarified, and a formal technical solution for AI labeling was established.
Second, the AI labeling system integrates regulatory requirements with technical measures and refines them layer by layer. From top to bottom, the system is structured into three tiers: the Labeling Measures set out the administrative requirements for AI labeling; the Labeling Standard specifies standardized labeling methods; and the supporting practical guidance documents provide concrete technical solutions for different actors to implement the labeling requirements. Together, these form a “1+1+N” institutional framework consisting of one normative document, one mandatory national standard, and multiple practical guidelines.
Third, the AI labeling system firmly focuses on key stages such as the production and dissemination of generated and synthesized content, enabling governance across the entire content lifecycle. At the content generation stage, production platforms embed labels into the content, providing a foundational basis for end-to-end identification and traceability. At the content dissemination stage, dissemination platforms proactively verify and update labels and provide prominent notices to users, reinforcing the role of labeling initiated by production platforms. Through coordination between production platforms and dissemination platforms under the AI labeling system, core questions surrounding online information—“what content is generated,” “who generated it,” and “where it was generated”—can be effectively addressed.
The AI labeling system also demonstrates innovation in content-labeling governance. Compared with images and videos, text and audio carry lower information density, and adding explicit labels can more easily interfere with users’ reading or listening experiences. To avoid excessive disruption to users’ access to text and audio content, the AI labeling system introduces innovative forms of explicit labeling, such as corner labels and rhythm-based audio cues.
First, a refined corner label is designed for text content. Drawing on established practices in trademark registration (such as the “TM” mark) and mandatory product certification (such as the “CCC” mark), the Labeling Measures innovatively propose an explicit “AI” corner label. This approach provides a clear and prominent notice while minimizing interference with users’ reading experience and secondary editing.
Second, a brief rhythm-based label is designed for audio content. Considering that some audio content is short in duration or unsuitable for lengthy spoken labels, the Labeling Measures propose an audio rhythm label in the form of Morse code—using the “short–long, short–short” signal (corresponding to the Morse code for the letters “A” and “I”). Compared with spoken prompts, this method is more concise and distinctive.
The AI labeling system reflects an inclusive approach that balances development and security. First, experts, scholars, and platform enterprises were deeply involved in the drafting and validation process. The development of the AI labeling system brought together wisdom from multiple stakeholders, with experts in law, technology, and standardization jointly contributing opinions and recommendations. A number of leading internet platforms participated extensively in pilot testing and validation of the labeling system, enabling the research and formulation process to advance in a scientific and orderly manner and significantly enhancing the system’s practicality and implementability.
Second, the technical choices underpinning the AI labeling system fully take into account the technical capabilities of the majority of websites and platforms. Digital watermarking is widely recognized in the industry as a robust implicit labeling solution, but its high complexity requires platforms to possess more advanced technical capabilities. In light of the actual technical capacities of the vast number of small and medium-sized websites and platforms, and following extensive multi-stakeholder consultations, the labeling scheme adopted a combination of explicit labeling and metadata-based implicit labeling with moderate implementation difficulty. By reserving security-protection fields and providing practical technical guidelines for security protection, platforms are allowed to independently choose labeling and protection solutions that match their own technical capacities.
Third, the AI labeling system fully considers the costs associated with platform modification. While implicit labels that record information on all entities along the entire dissemination chain can effectively enable traceability, they also increase transmission, storage, and processing costs for platforms. To reduce the burden on platforms, the AI labeling system only requires implicit labels to retain information related to the final dissemination platform. In addition, taking into account the content recognition capabilities of dissemination platforms, the system does not impose mandatory requirements for platforms to detect generated or synthesized content. Instead, it proposes a low-cost approach that combines users’ proactive declarations with platforms’ identification of generation traces.
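The “retain only the final dissemination platform” design described above can be sketched as a simple metadata update rule: the generation platform writes producer fields once, and each dissemination platform overwrites (rather than appends to) the propagator fields. All field names and value codings below are illustrative assumptions for the sketch, not the exact keys defined in the mandatory national standard.

```python
import json

def make_label(producer: str, content_id: str) -> dict:
    """Implicit metadata label embedded by the generation platform.
    Field names are hypothetical, chosen only to mirror the article's description."""
    return {
        "Label": 1,                    # 1 = AI-generated/synthesized (assumed coding)
        "ContentProducer": producer,   # who generated the content
        "ProduceID": content_id,       # producer-side content identifier
        "ReservedCode": "",            # reserved field for security protection
    }

def update_on_dissemination(label: dict, platform: str, item_id: str) -> dict:
    """Dissemination platform REPLACES the propagator fields rather than
    appending a full chain, so only the most recent platform is retained
    and transmission/storage costs stay bounded."""
    updated = dict(label)
    updated["ContentPropagator"] = platform
    updated["PropagateID"] = item_id
    return updated

# Hypothetical flow: one generation platform, two successive dissemination platforms.
label = make_label("example-genai-platform", "c-0001")
label = update_on_dissemination(label, "platform-A", "a-42")
label = update_on_dissemination(label, "platform-B", "b-7")  # platform-A is overwritten
print(json.dumps(label, indent=2))
```

The trade-off is deliberate: full-chain traceability would require every hop to be recorded, while this scheme keeps the producer fields intact for source attribution and accepts that only the last hop of dissemination is visible.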
Four months after the implementation of the AI labeling system, mainstream content production and dissemination platforms have largely complied with labeling requirements. According to preliminary statistics, major production platforms such as Doubao, DeepSeek, Qianwen, and Wenxin have cumulatively added AI labels to more than 150 billion pieces of generated and synthesized content—including text, images, audio, and video—and have provided users with more than 1 billion content files containing AI labels. Major dissemination platforms such as Douyin, Bilibili, Xiaohongshu, and Weibo have cumulatively added prominent prompts to more than 220 million pieces of generated and synthesized content. At the same time, the AI labeling system has initially formed a closed-loop design of “source labeling–dissemination verification–end-to-end traceability,” effectively narrowing the space for the spread of false information. Its supporting role is becoming increasingly evident in areas such as clarifying intellectual property rights, identifying forgeries in judicial evidence collection, and preventing malicious exploitation of e-commerce refund mechanisms.
The AI labeling system has gained broad public recognition. According to the results of a survey of internet users conducted by the Secretariat of the National Technical Committee on Cybersecurity Standardization, one month after the system’s implementation, 76.4 percent of respondents reported that they had clearly noticed an increase in content labeling on platforms such as social media, news websites, and video platforms. Sixty percent of respondents believed that the labels served an effective alerting function and helped them identify AI-generated and synthesized content. The general public has thus developed awareness of “learning about labels, recognizing labels, and using labels.”
Domestic media outlets and social platforms have actively publicized the AI labeling system, and a broad range of netizens have expressed strong affirmation of its positive role in preventing the spread of false information. Media outlets in multiple countries, including the United States, the United Kingdom, and France, have conducted in-depth coverage of the release and implementation of China’s AI labeling system, recognizing its leading and demonstrative role internationally as well as the tangible results it has achieved.
Sustained Efforts Over Time: Jointly Advancing the Refinement and Effective Implementation of the AI Labeling System
First, continuously accumulate experience and promote the refinement and improvement of the AI labeling system. Closely following trends in the development of artificial intelligence technologies, efforts will continue to establish and improve technical solutions such as digital watermarking, identification of generated and synthesized content, and cross-platform interoperability and mutual recognition of implicit labels, while advancing the development of supporting tools related to AI labeling. In response to new forms of AI applications such as intelligent agents and digital virtual humans, practical guidelines for AI labeling will be researched and formulated. At the same time, implementation experience will be systematically summarized, and new issues raised by stakeholders will be studied, so as to continuously improve the AI labeling system.
Second, strengthen regulatory oversight and law enforcement, and further solidify the responsibilities and obligations of all stakeholders. Ongoing inspections will be carried out to assess the implementation of AI labeling requirements, urging website platforms to fulfill their primary responsibilities and promote the AI labeling system to move from “basic implementation” toward “comprehensive and standardized implementation.” Website platforms that repeatedly refuse to rectify problems despite repeated supervision will be dealt with strictly in accordance with laws and regulations. Firm action will be taken against illegal and non-compliant activities, including the use of AI technologies to generate and synthesize false or harmful information, the perpetration of online fraud, and the provision of tools that undermine AI labeling.
Third, intensify public legal education and awareness-raising to enhance AI literacy across society. Website platforms will be guided to deepen their understanding of the AI labeling system and to respond promptly to questions and concerns. Netizens will be encouraged to “proactively declare and proactively label” generated and synthesized content at the stage of content publication, and to provide feedback on labeling-related issues through various channels, thereby fostering a positive environment in which the whole of society jointly promotes the effective implementation of the AI labeling system.
Fourth, promote the formation of international consensus and actively contribute a Chinese approach to AI governance. In 2015, General Secretary Xi Jinping creatively proposed the concept of building a community with a shared future in cyberspace. In recent years, AI applications have become an increasingly important factor shaping cyberspace. It is therefore necessary to continuously deepen international dialogue and exchanges in cyberspace, promote international cooperation on AI safety governance on a global scale, and share experience from the development and practice of the AI labeling system. By working together with all countries to strike a balance between technological innovation and risk prevention, the global AI governance framework can be guided toward a more beneficial, safe, and equitable direction.
At the beginning of the new year, everything is renewed. Guided by Xi Jinping Thought on Socialism with Chinese Characteristics for a New Era, we will continue to study and implement General Secretary Xi Jinping’s important thinking on building China into a cyber power, firmly uphold the people-centered and “AI for good” philosophy of artificial intelligence development, and persistently promote the effective implementation of the AI labeling system. Through governance enabled by technology, we will provide solid safeguards for the healthy and orderly development of the AI industry and inject strong momentum into the building of a cyber power.


