Top Chinese chip expert: domestic 14 nm logic and 18 nm DRAM processes + software-defined PIM chip tech can deliver performance comparable to NVIDIA’s 4 nm chips
On November 25, 2025, at the Global CEO Leadership Summit, Dr. Wei Shaojun(魏少军), Chairman of the IC Design Branch of the China Semiconductor Industry Association and Professor at Tsinghua University’s Institute of Integrated Circuits, delivered a speech on the theme “The Development Path of China’s Compute Power Semiconductor Companies.”
He emphasized that today’s AI competition ultimately centers on the interplay of algorithms, data, and computing power, with computing power emerging as the most critical bottleneck. Foundation models are pushing AI from “narrow intelligence” toward “general intelligence,” but this shift requires massive datasets and exponentially growing computational resources.
As global data volume surges and model sizes expand, computing power has become the decisive factor in AI competitiveness. Chip architectures have evolved from CPUs/GPUs to domain-specific accelerators and then to software-defined chips, while global AI development remains heavily dependent on NVIDIA’s GPGPU ecosystem. However, due to restrictions on advanced manufacturing processes (≤5 nm), China still faces significant shortcomings in computing power and therefore must pursue disruptive technological innovation.
He argued that processing-in-memory (PIM) chips and software-defined PIM chips represent the technological path China should follow. PIM technology integrates memory and logic through 3D hybrid bonding, dramatically improving energy efficiency. Software-defined PIM chips combine the flexibility of software-defined architectures with the high efficiency of PIM, enabling major performance breakthroughs.
Based on domestic 14 nm logic processes and 18 nm DRAM, software-defined PIM chip technology can achieve performance comparable to NVIDIA’s 4 nm chips. Using this technology to build a Z-scale (10²¹ FLOPS) supercomputing center can significantly reduce both energy consumption and cost. For example, compared with traditional architectures, software-defined PIM chips can reduce energy consumption by 85% and lower costs to just 57% of conventional solutions.
Below is the full translation of the speech (any errors in the translation are mine).
The Development Path of China’s Compute Power Semiconductor Companies
The Technological Evolution of Artificial Intelligence
The origins of artificial intelligence can be traced back to the 1940s. In 1943, American neurophysiologist Warren McCulloch and mathematician Walter Pitts, drawing an analogy with the brain’s neurons, were the first to propose the concept of artificial neural networks. This pioneering idea laid the foundation for basic computational models of the neuron and became a critical starting point for the development of AI.
The 1956 Dartmouth Conference marked the formal birth of artificial intelligence as an academic field. Scientists at the conference introduced the term “artificial intelligence,” explicitly defining the research goal as enabling machines to imitate human learning and other forms of intelligence. This meeting set the core direction for AI research and initiated decades of technological exploration.
In 1981, David Hunter Hubel, together with Torsten Wiesel, received the Nobel Prize in Physiology or Medicine for research on information processing in the visual system. Their work revealed how networks of neurons integrate information through their connections, laying the biological foundation for the later development of deep neural networks. As network structures grew deeper, wider, and more complex, deep learning technologies gradually matured, enabling computers to achieve capabilities approaching human visual cognition.
The Evolution and Meaning of the Three Pillars of AI
Algorithms, data, and computing power constitute the three core elements driving the development of artificial intelligence.
Algorithms function as the methods of production, defining how data is transformed into intelligent outputs. Data serves as the means of production, the raw material on which algorithms operate. Computing power acts as the productive force, the essential resource required to execute algorithms. These three interact much like the methods of production, means of production, and productive forces of traditional industry, collectively propelling the advancement of AI technologies.
From an algorithmic perspective, traditional AI algorithms are designed to solve specific problems. They are lightweight, highly interpretable, and suitable for small-scale data and specialized tasks. By contrast, large-scale models—an advanced subset of algorithms—achieve a leap from “narrow” to “general” intelligence through massive parameter counts and highly complex architectures.
More specifically, traditional algorithms typically involve thousands to millions of parameters and rely on limited labeled data, making them well-suited for vertical applications with low resource consumption and good interpretability. Large models, by contrast, contain billions of parameters and require terabytes of unlabeled data for pretraining. They extract generalized features through self-supervised learning and can handle open-ended tasks—but at the cost of high training expenses, reliance on distributed computing, and reduced interpretability.
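To make the scale gap concrete, here is a rough back-of-envelope sketch; the parameter counts, FP16 storage, and 80 GB device size are illustrative assumptions, not figures from the speech:

```python
# Back-of-envelope: why billion-parameter models outgrow a single accelerator.
# All numbers below are illustrative assumptions, not figures from the speech.

BYTES_PER_PARAM_FP16 = 2      # assume weights stored in FP16
DEVICE_MEMORY_GB = 80         # assume one accelerator with 80 GB of memory

def weights_gb(num_params: float) -> float:
    """Memory needed just to hold the weights, in gigabytes."""
    return num_params * BYTES_PER_PARAM_FP16 / 1e9

for name, params in [("traditional vision model (~5M params)", 5e6),
                     ("large language model (~175B params)", 175e9)]:
    gb = weights_gb(params)
    print(f"{name}: {gb:,.2f} GB of weights "
          f"({gb / DEVICE_MEMORY_GB:.2f}x one device)")
```

Training multiplies this footprint further with gradients, optimizer states, and activations, which is why large models depend on distributed computing, as noted above.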
On the data front, the global volume of data is projected to reach 175 zettabytes by 2025. This explosive growth in data places unprecedented demands on computing capacity. Computational requirements are increasing by roughly an order of magnitude each year. Adequate computing power has become a fundamental prerequisite for AI development. From AlphaFold2’s protein structure predictions to the training of GPT-3, advancements in computing power have reduced tasks that once required months to mere days—leading to the widespread belief that “those who command computing power command the future.”
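As a rough illustration of how aggregate throughput translates into training time: the ~3.14×10²³ FLOPs total for GPT-3 is a commonly cited estimate, and the sustained throughput levels below are assumptions chosen only for illustration.

```python
# Illustrative only: wall-clock training time at different sustained throughputs.

GPT3_TRAINING_FLOPS = 3.14e23   # commonly cited estimate of GPT-3's total training compute
SECONDS_PER_DAY = 86_400

def training_days(sustained_flops: float) -> float:
    """Days needed to run the total compute at a given sustained rate."""
    return GPT3_TRAINING_FLOPS / sustained_flops / SECONDS_PER_DAY

for label, flops in [("1 PFLOPS sustained", 1e15),
                     ("10 PFLOPS sustained", 1e16),
                     ("100 PFLOPS sustained", 1e17)]:
    print(f"{label}: ~{training_days(flops):,.0f} days")
```

At these assumed rates the same workload shrinks from roughly a decade to about a month, which is the sense in which more computing power turns months into days.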
Computing Power Challenges in the Development of Artificial Intelligence
The implementation of AI involves three key stages—training, deduction, and inference—each mirroring aspects of human cognition but requiring very different levels of computational resources.
For example, the training stage requires large datasets to drive parameter optimization, making it highly computationally intensive and often taking days or even months to complete. The deduction stage, based on knowledge or logical rules, emphasizes interpretability of the reasoning process. The inference stage demands low latency and high efficiency, requiring predictions to be generated within milliseconds.
Current AI algorithms remain in the early stages of development. Representative models such as LeNet, AlexNet, and LRCN specialize respectively in handwriting recognition, image classification, and video understanding. Their widely differing architectures demonstrate that no single algorithm can yet cover all application scenarios. Each model’s specialized design also brings substantial computational challenges.
Addressing these challenges requires a new generation of intelligent computing engines with three key attributes: programmability, to adapt to diverse applications; the capacity for compute- and memory-intensive workloads, to support large-scale data processing; and high energy efficiency, to enable seamless migration from the cloud to the edge. Against this backdrop, AI is giving rise to the sixth generation of electronic computers: AI supercomputers, characterized by ultra-high performance, high precision, and the ability to process massive parameter sets.
The Evolution and Innovation of AI Chip Architectures
In the field of chip development, no technology has yet emerged that can replace semiconductors. The evolution of AI chips has progressed through several stages: from early reliance on existing CPUs, GPUs, and FPGAs, to domain-specific architectures (DSA) such as Google’s TPU and Huawei’s Ascend, then to general-purpose GPU-based architectures like NVIDIA’s H100 and Iluvatar’s DTU, and ultimately toward software-defined chips (SDC), which aim to balance flexibility with efficiency.
Today, global AI development is deeply bound to NVIDIA’s GPGPU architecture and its CUDA ecosystem, forming a triple dependency among models, architectures, and software ecosystems. This dependency means that China’s AI development remains tightly linked to U.S. technology—posing significant challenges under current geopolitical conditions.
Moreover, the comparative advantages of China and the United States across the three pillars of AI are shifting. China holds an advantage in data volume; the United States continues to lead in algorithms; and in computing power, China possesses only about one-seventh of U.S. capacity. This imbalance places China under considerable pressure in the global AI competition.
China’s Path to Independent Innovation in Compute Power Chips
Today, China faces restrictions in advanced semiconductor manufacturing processes (≤5 nm) and must therefore pursue disruptive technological innovations. Processing-in-memory (PIM) chip technology integrates memory and logic through 3D hybrid bonding, significantly improving energy efficiency. Software-defined PIM chips combine the flexibility of software-defined architectures with the high efficiency of PIM, enabling substantial performance breakthroughs.
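A minimal sketch of the energy argument behind PIM: in a memory-bound kernel, most of the energy goes to moving data rather than computing on it. The per-operation figures below are rough 45 nm-era estimates often quoted in the architecture literature (on the order of the Horowitz ISSCC 2014 numbers), not measurements of any PIM chip:

```python
# Illustrative energy split between arithmetic and data movement (rough 45 nm-era figures).

PJ_FP32_MAC        = 4.6    # ~energy of one FP32 multiply-accumulate
PJ_SRAM_READ_32BIT = 5.0    # ~32-bit read from a small on-chip SRAM
PJ_DRAM_READ_32BIT = 640.0  # ~32-bit read from off-chip DRAM

# Worst case: every operand is streamed from off-chip DRAM.
per_mac_offchip = PJ_FP32_MAC + PJ_DRAM_READ_32BIT
# PIM-style case: the operand stays next to the compute, in on-chip memory.
per_mac_near_memory = PJ_FP32_MAC + PJ_SRAM_READ_32BIT

print(f"off-chip operand fetch : {per_mac_offchip:6.1f} pJ per MAC")
print(f"near-memory operand    : {per_mac_near_memory:6.1f} pJ per MAC")
print(f"data-movement share in the off-chip case: "
      f"{PJ_DRAM_READ_32BIT / per_mac_offchip:.0%}")
```

Real workloads reuse operands through caches, so realized savings are smaller than this worst case, but the gap is the headroom that integrating memory and logic through 3D hybrid bonding targets.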
Based on domestic 14 nm logic processes paired with 18 nm DRAM, software-defined PIM chip technology can achieve performance comparable to NVIDIA’s 4 nm GPUs. Leveraging this technology to build a Z-scale (10²¹ FLOPS) supercomputing center would dramatically reduce both energy consumption and cost. For example, compared with traditional architectures, software-defined PIM chips can reduce energy consumption by 85%, with costs amounting to only 57% of conventional solutions.
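For a feel of the scale involved, here is a hedged back-of-envelope sketch: the 10²¹ FLOPS target and the 85% and 57% figures come from the speech, while the per-chip throughput and power are hypothetical placeholders chosen only for illustration.

```python
# Illustrative sizing of a Z-scale (1e21 FLOPS) computing center.
# Per-chip throughput and power are hypothetical; 85% / 57% are the speech's claims.

TARGET_FLOPS     = 1e21    # Z-scale target
CHIP_FLOPS       = 2e15    # assume ~2 PFLOPS per accelerator (hypothetical)
CHIP_POWER_W     = 700.0   # assume ~700 W per accelerator (hypothetical)
ENERGY_REDUCTION = 0.85    # claimed energy reduction vs. a traditional architecture
COST_RATIO       = 0.57    # claimed cost relative to a conventional solution

chips_needed = TARGET_FLOPS / CHIP_FLOPS
baseline_power_mw = chips_needed * CHIP_POWER_W / 1e6
reduced_power_mw = baseline_power_mw * (1 - ENERGY_REDUCTION)

print(f"accelerators needed        : {chips_needed:,.0f}")
print(f"baseline accelerator power : {baseline_power_mw:,.0f} MW")
print(f"with 85% energy savings    : {reduced_power_mw:,.1f} MW")
print(f"cost vs. conventional      : {COST_RATIO:.0%}")
```

This ignores cooling, interconnect, and memory power, and says nothing about how the 85% figure was derived; it only shows what the claimed ratios would mean at Z-scale.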
Conclusion and Outlook
Intelligence is the defining characteristic of technological development in contemporary human society. AI represents the highest stage of a “people-centered” technological revolution and constitutes a strategic high ground for humanity’s future. Semiconductor technology provides the essential foundation for AI, and in the foreseeable future, no alternative technology is likely to replace semiconductors.
It is essential to recognize that the era of “harmonious” coexistence between China and the United States is over. The most probable scenario ahead is comprehensive competition; the worst-case scenario is decoupling. China must seize the opportunities brought by AI and semiconductor development, decisively abandon the U.S. technological route, and reduce dependence on NVIDIA chips. Innovation is the only path for China to claim the technological high ground and secure strategic initiative. China must focus on disruptive technological breakthroughs and build its own AI technology system. Only in this way can the security of supply chains and the resilience of industrial chains be ensured.
At the same time, it is important to recognize that AI is not omnipotent. Its development must be closely integrated with the country’s culture, institutions, and language to achieve genuine adoption and real-world application. Only by advancing steadfastly along the road of independent innovation can China gain the initiative in the global AI competition and make meaningful contributions to humanity’s scientific and technological progress.