China's top lawmakers had a lecture on AI
The Standing Committee of China's National People's Congress closed its 9th session on April 26. Following the session, a special lecture titled "The Development of Artificial Intelligence and Intelligent Computing" was delivered by Sun Ninghui (孙凝晖), an academician of the Chinese Academy of Engineering and a researcher at the Institute of Computing Technology of the Chinese Academy of Sciences.
Hosting an AI lecture for lawmakers, a relatively rare event, underscores how seriously China is taking AI and signals a commitment to strengthening its capabilities in the field.
As China's primary legislative authority, the National People's Congress (NPC) is considering AI legislation. However, the NPC is proceeding cautiously, primarily because reviving the economy and restoring entrepreneurs' confidence are China's foremost concerns. During the lecture, Sun highlighted AI's security risks, above all content-security risks.
Sino-U.S. technological competition was a major theme of the lecture. Sun emphasized that AI technology and the intelligent computing industry sit at the core of this competition. Despite significant achievements in recent years, China faces many challenges, particularly those arising from U.S. tech suppression policies.
Strategic choices in AI development, which are critical for China’s sustainable development and its position in competition with the U.S., were also raised in the lecture. Sun outlined three possible strategic choices for China’s AI development:
Opt for a closed-source, unified technology system, or embrace open-source approaches?
Focus on algorithm models, or invest in new infrastructure?
Empower the virtual economy, or strengthen the real economy?
The full text of Sun's lecture was published on the NPC’s official website on April 30. Below is an English translation. All faults are mine.
Security Risks of AI
The development of AI has propelled technological advancement in today's world. Still, it has also brought about numerous security risks that must be addressed from a technical and regulatory standpoint.
Firstly, misinformation on the internet. Here are a few scenarios. One is digital impersonation: AI Yoon was the first official "candidate" synthesized using deepfake technology. The digital persona was based on Yoon Suk-yeol, then the presidential candidate of South Korea's People Power Party. Drawing on more than 20 hours of audio and video of Yoon Suk-yeol, plus over 3,000 sentences recorded specifically for the researchers, a local deepfake company created the virtual image AI Yoon, which quickly gained popularity online. In reality, the content AI Yoon delivered was written by the campaign team, not the candidate himself.
Secondly, forged videos, especially those involving political leaders, can provoke international disputes, disrupt elections, or trigger sudden public-opinion incidents. For example, fabricated videos of Nixon announcing the failure of the first moon landing and of Ukrainian President Zelensky announcing his "surrender" have eroded public trust in the news media.
Thirdly, there is the production of fake news for illicit gain, generating false stories to attract traffic. Some operators use ChatGPT to churn out trending "news" for clicks; as of June 30, 2023, 277 such fake-news websites had been identified worldwide, severely disrupting social order.
Fourthly, there is face-swapping and voice-cloning for fraud. For instance, a Hong Kong-based international company was scammed out of $35 million after AI-generated audio imitated the voice of a company executive.
Fifthly, there is the creation of inappropriate images, especially those targeting public figures. The production of pornographic videos featuring film and television stars, for example, has harmful social impacts. There is therefore an urgent need to develop detection technology for false information.
Furthermore, large AI models face serious credibility problems. These include (1) factual errors presented as authoritative information; (2) narratives biased toward Western values, outputting political bias and erroneous statements; (3) susceptibility to manipulation, resulting in the output of incorrect knowledge and harmful content; and (4) aggravated data-security risks, with large models becoming traps for important sensitive data. ChatGPT, for example, incorporates user input into its training data to improve itself. This gives the U.S. access to Chinese-language data that may not be available through public channels, yielding "Chinese knowledge" that we ourselves may not possess. There is therefore an urgent need to develop security-monitoring technology for large models, as well as trustworthy large models.
In addition to technical measures, ensuring AI security requires legislative effort. In 2021, the Ministry of Science and Technology issued the "Ethical Norms for the New Generation of AI." In August 2022, the National Information Security Standardization Technical Committee released the "Information Security Technology Machine Learning Algorithm Security Assessment Specification." From 2022 to 2023, the Cyberspace Administration of China successively issued regulations including the "Regulations on Algorithm Recommendation Management for Internet Information Services," the "Regulations on Deep Synthesis Management for Internet Information Services," and the "Management Measures for Generative Artificial Intelligence Services." European and American governments have also enacted rules: the EU's General Data Protection Regulation took effect on May 25, 2018; the U.S. issued the "Blueprint for an AI Bill of Rights" on October 4, 2022; and the European Parliament passed the EU AI Act on March 13, 2024.
China should expedite the enactment of an "AI Law" and establish an AI governance system. Such a system should ensure that the development and application of AI adhere to common human values and promote harmonious, friendly human-machine interaction. It should create a policy environment conducive to the research, development, and application of AI technologies; establish reasonable disclosure and audit mechanisms so that the principles and decision-making processes of AI systems can be understood; clarify the security responsibilities and accountability mechanisms of AI systems, so that responsible parties can be traced and remedies provided; and promote the formation of fair, reasonable, open, and inclusive international rules for AI governance.
China's Development Dilemma in Intelligent Computing
AI technology and the intelligent computing industry are at the centre of competition between China and the United States in science and technology. Although China has made great progress in the past few years, it still faces many development dilemmas, especially those brought about by the United States' technology suppression policies.
The first dilemma is that the United States has long led in core AI capabilities, while China remains in catch-up mode. China lags the United States in the number of high-end AI talents, innovation in basic AI algorithms, the capabilities of large AI models (such as large language models, text-to-image models, and text-to-video models), training data for large models, and training compute. This gap is expected to persist for a long time.
The second dilemma is the U.S. ban on selling high-end computing products to China and its long-term restriction of high-end chip technology. High-end intelligent computing chips such as the A100, H100, and B200 are banned from sale to China. Companies including Huawei, Loongson, Cambricon, Sugon, and Hygon have been added to the entity list, and their access to chip manufacturing is restricted. As a result, domestic manufacturing processes, and with them the performance of core computing chips, lag the international state of the art by two to three generations.
The third dilemma is the weak domestic intelligent computing ecosystem and the insufficient penetration of AI development frameworks. NVIDIA's CUDA (Compute Unified Device Architecture) ecosystem is well-established and has formed a monopoly. The weaknesses in the domestic ecosystem include insufficient research and development personnel, lack of development tools, inadequate funding, and dominance of foreign AI development frameworks like TensorFlow and PyTorch over domestic ones like Baidu's PaddlePaddle. Moreover, the fragmentation among domestic companies prevents the formation of a competitive technical system across intelligent applications, development frameworks, system software, and intelligent chips.
The fourth dilemma is the high cost and barriers of applying AI across industries. AI applications in China are currently concentrated in the internet industry and parts of the defence sector. Applying AI to other industries, especially moving from the internet industry to non-internet sectors, requires substantial customization, making migration difficult and costly. In addition, China's shortage of AI talent is acute relative to actual demand.