A CCTV conversation: why are Chinese large models becoming a global AI foundation
On January 25, CCTV Finance’s program Dialogue: Innovation China hosted a discussion around the theme “Why are Chinese large models becoming a global AI foundation?”
The panel brought together Zhou Jingren, CTO of Alibaba Cloud Intelligence Group, and Zhang Liang, Vice President of Alibaba Cloud Intelligence Group, along with industry pioneers from robotics, smart hardware, AIGC video generation, AI short-form animation, and education: Qian Dongqi, Chairman of Ecovacs Group; Wang Qian, Founder and CEO of Invariant Robot; Wang Changhu, Founder and CEO of Aishi Technology; Xiong Yihui, Founder of JuriLu Technology; and Wang Qingbang, Founder and CEO of PigaiBang.
The guests agreed that Chinese open-source large models — with Alibaba Cloud’s Qwen as a leading example — are rapidly entering the global mainstream and are increasingly becoming a shared technical foundation for industries and developers worldwide.
Zhou Jingren noted that model capabilities have advanced at remarkable speed over the past two years, with Chinese models moving from followers to the global top tier. Qwen now has around 180,000 derivative models, making it the world’s largest open-source model family. At the same time, Alibaba has turned the idea of “Model as a Service” into reality, delivering models through a stable, flexible, and cost-effective cloud platform, while using open source to bring more developers and companies into a shared innovation ecosystem.
According to the panel, breakthroughs in large models are not the result of a single technical leap, but of full-stack collaboration and joint optimization across models, cloud computing, and infrastructure. Alibaba began early model research in 2018, launched pre-trained and multimodal models from 2019 onward, and has invested in cloud computing since 2009 — building an end-to-end capability that only a handful of companies globally can match.
From the user side, enterprise representatives highlighted why they chose open models: better security, more room for secondary development, flexibility for local deployment, and significantly better cost performance. For embodied-intelligence and robotics companies in particular, open source is essential, because only open models can deeply integrate language and vision to support foundation models for the physical world.
Finally, the discussion showed that an application boom is already underway. Robot vacuum cleaners are evolving from simple automation into intelligent agents that understand environments and interact with users.
In content creation, open-source video generation models have cut production cycles to about a month while dramatically reducing costs. Startups in video generation and education emphasized that open ecosystems lower trial-and-error costs and significantly boost innovation efficiency.
The shared conclusion was that China’s rich application scenarios and massive SME demand are, in turn, accelerating model evolution — and that Alibaba Cloud aims to act as the “water, electricity, and fuel” of the AI era, using openness, open source, and infrastructure investment to turn AI into sustainable productivity across industries.
Wang Qian recalled that when they were preparing to start the company two or three years ago, people constantly questioned whether China had enough talent density, whether it lagged far behind the U.S. in AI capabilities, and whether such work should be done in the United States instead. Today, those doubts have largely disappeared. Most people now accept that, at least in terms of talent density and frontier research, China is operating on a level comparable to the U.S. While gaps remain in areas such as computing power and semiconductors, there is now broad confidence that world-class large models can be developed in China, a significant shift from just a few years ago.
On January 27, Alibaba Cloud’s official WeChat account published selected excerpts from this discussion.
Below is the full transcript of the conversation, shared here for readers. Any translation or transcription errors are entirely my own.
Host:
Welcome to today’s conversation. At the Central Economic Work Conference held at the end of 2025, China emphasized using technological innovation to drive the development of a modern industrial system — and within that, AI is clearly the core engine of this intelligent transformation.
If you’ve been paying attention to global trends in AI, you may have noticed some interesting shifts. Brian Chesky, CEO of the Silicon Valley standout Airbnb, has publicly said that their business now relies heavily on a Chinese AI model. Nvidia CEO Jensen Huang has singled out the same model on earnings calls, calling it one of the best open-source models in the world. Even Singapore’s National AI Programme has moved toward a Chinese open-source solution.
The name that keeps coming up across both Eastern and Western discussions is Alibaba Cloud’s Qwen model. Almost without people realizing it, Chinese open-source models seem to be turning into a shared technical foundation for global industries and developers alike.
That’s why today we’ve invited some of the key drivers and participants of this AI transformation to join us here — let’s give them a warm welcome.
Earlier in the program, we’ve already heard how different sectors around the world are thinking about and observing AI’s development. I’d especially like to ask Mr. Zhou: in your interactions with users and developers across different regions globally, how are they viewing the development of AI technology in China?
Zhou Jingren, Chief Technology Officer, Alibaba Cloud Intelligence Group:
Yes — first of all, over the past two years, the entire AI field has been moving incredibly fast. Models are iterating at a rapid pace, and their capabilities are improving at an astonishing rate.
We’re very encouraged to see that in global benchmarks and rankings, Chinese models are now appearing more and more frequently. In particular, the Tongyi, or Qwen, family of models has begun to rank near the top across a wide range of technical metrics and leaderboards worldwide.
Over the past two years, I’d say we’ve moved from being followers to firmly entering the global first tier. That’s where we are today.
At the same time, the global influence of the Qwen model family continues to grow. Especially when you look at derivative models built on top of our base models, there are now around 180,000 such models worldwide — the largest number of derivatives in any major model ecosystem. In that sense, Qwen has become the world’s largest model family, with developers around the globe actively using it.
Wired, the well-known U.S. tech magazine, even predicted toward the end of last year that 2026 would be the year of Qwen.
Host:
Behind this recognition, we can see that a lot of foundational work had already been done earlier on. What do you think were the key areas where the groundwork was solid enough to make today’s results possible?
Zhou Jingren, Chief Technology Officer, Alibaba Cloud Intelligence Group:
What we’re seeing today with large models isn’t just a breakthrough in AI itself. More importantly, it’s the result of an entire full-stack technology system working together, and being optimized as a whole.
AI models are obviously critical, but so is the cloud computing infrastructure behind them. None of the progress we’ve made with Qwen happened overnight — it’s the result of many years of sustained investment and accumulation.
On the model side, we actually started doing early research as far back as 2018. From 2019 onward, we released various pre-trained large models, including multimodal models like M6.
At the same time, large models today are completely inseparable from cloud computing. From training to inference, everything depends on cloud infrastructure — and cloud computing has long been one of Alibaba’s strengths. We began building cloud computing capabilities back in 2009, and over the years accumulated deep technical expertise.
What we’re especially proud of today is that Alibaba has both strong cloud computing capabilities and strong large-model capabilities. Globally, only a small number of companies have this kind of end-to-end strength. Because we have this full stack, we can optimize systems and models together — and in this era, that kind of joint optimization is absolutely critical.
Host:
As you said, it’s really the result of long-term, full-stack technology accumulation. Seeing today’s progress feels like a natural outcome. I’d love to ask Mr. Qian and Mr. Wang — as users, what were the key considerations behind your decision to choose these models?
Qian Dongqi, Chairman, Ecovacs Group:
Among Chinese AI models, the one we rely on most for enterprise use is Tongyi (Qwen), and there are a few reasons for that.
First, Qwen is an open-source model, and that’s extremely important. Open source gives us much better security and flexibility, especially when it comes to secondary development. In many cases, we can even deploy it locally to some extent. That kind of flexibility makes enterprise operations far more customizable and adaptable.
Second, cost-effectiveness really matters. Using a model means paying for token usage, and for enterprises, cost always has to make sense. On this front, Chinese models offer significantly better value for money than foreign models — there’s no question about that.
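As a rough illustration of the local-deployment point above: running an open-weight Qwen checkpoint on a company's own hardware typically takes only a few lines with standard tooling. This is a minimal sketch assuming the Hugging Face transformers stack; the checkpoint name, prompt, and generation settings are illustrative assumptions, not details from the discussion.

```python
# Minimal sketch, assuming the Hugging Face transformers stack and an
# open-weight Qwen checkpoint; model name and prompt are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-7B-Instruct"  # any open Qwen release would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

messages = [{"role": "user", "content": "Summarize this cleaning log for the user."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Inference runs entirely on local hardware, so no data has to leave the machine.
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Because the weights are local, the same setup also supports the kind of secondary development mentioned above, such as swapping prompts, adding adapters, or pairing the model with a device's own sensors.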
Host:
Mr. Wang, when you made that choice back then, what were the most important considerations behind it?
Wang Qian, Founder and CEO, Invariant Robot:
Well, there are a couple of things that make our situation a bit special. First, we’re both users of large models and developers of large models ourselves. Second, when people hear about large-scale intelligence or AI engines, they often think of them mainly as AI applications. But we actually see things a bit differently.
What we’re building is really another kind of foundation model — a foundation model for the physical world. In that sense, it runs in parallel with, or independently from, language models that live in the virtual world. It’s a foundation model specifically designed for real-world physical environments.
Because of that, there are certain things we don’t try to build ourselves. For example, we don’t redo the language layer, and we don’t redo some parts of vision either. So where do those capabilities come from? They can really only come from open-source models. With closed-source models, we simply can’t do secondary development or deeply integrate them with our own models.
That’s why, without question, the Qwen series stands out to us as having the strongest foundational capabilities among all open-source models globally. And I’m saying this not just as a user, but as a researcher in this field — in my own judgment, it’s genuinely the strongest open-source model in the world right now.
So naturally, it became our first choice when it came to language and vision capabilities that could help us the most. Our field has been moving incredibly fast — in just two years, we’ve gone through what might have taken ten or even twenty years before. And that progress wouldn’t have been possible without the broader Chinese AI industry and ecosystem, especially models like Qwen.
I remember very clearly when we were just preparing to start the company two or three years ago. Everyone kept asking the same questions: Isn’t China’s talent density much lower than the U.S.? Isn’t China’s overall AI capability far behind America? Shouldn’t you be doing this in the U.S. instead of China?
Today, no one asks those questions anymore.
At this point, most people accept that — at least in terms of talent density and frontier-level exploration — China is on the same playing field as the U.S. Of course, there are still gaps in areas like computing power and semiconductors. But I think people now genuinely believe that world-class large models can be built in China. That’s a huge change from just a couple of years ago.
Host:
I think everything you just described really resonates with many of us, because we’ve all lived through that process together. Beyond the rapid technological progress of Chinese companies, what else is helping turn these ideas into reality?
Zhou Jingren, Chief Technology Officer, Alibaba Cloud Intelligence Group:
If we look at this current AI wave, what we’re really pursuing is general artificial intelligence — in other words, AI that can get closer and closer to human intelligence. That kind of capability naturally has very strong generalization.
And we’ve also seen that China has some unique advantages here. In particular, Chinese companies and developers tend to be more willing to experiment with new models. Because of that, in many areas of business innovation and real-world application scenarios, we’re actually moving ahead of the rest of the world.
Host:
For large models, those rich real-world scenarios are also extremely demanding — almost like one tough exam after another. Today, Qwen already holds more than one-third of China’s large-model market share, and it’s the most widely used model among domestic enterprises.
We’d really like to understand, in the field of embodied intelligence, how this close integration with large models has changed things — especially processes that people wanted to change before but couldn’t.
Wang Qian, Founder and CEO, Invariant Robot:
For us, the most important thing about combining embodied intelligence with large models is that large models bring a completely new methodology. It’s a data-centric approach, a scaling-law mindset, and a methodology that’s tightly integrated with infrastructure.
With that methodology, we’re finally tackling a core problem that hasn’t really been solved in the past 80 years: enabling robots to handle truly open and unpredictable environments. Think about scenarios like elderly care facilities, where situations are constantly changing and full of randomness.
What this gives us is a fundamental shift in how we approach the problem. That’s exactly what we’re trying to do — to move robots beyond being automated machines confined to factories or specialized tasks, and turn them into systems that can genuinely solve real-world physical problems that humans face every day.
Host:
Mr. Qian, robots like robot vacuum cleaners are already in millions of homes. When a small machine like that is combined with AI, what kinds of everyday experiences does it actually change or unlock?
Qian Dongqi, Chairman, Ecovacs Group:
Robot vacuum cleaners actually existed long before the AI boom. In theory, they were automated devices. But in today’s AI era, even products like these have fundamentally changed.
Our robot vacuums now have a built-in intelligent agent called Xiao Ke. When this agent runs, it doesn’t just follow preset rules — it understands the environment on its own. It learns users’ habits, interacts with them, and can even explain how to use the product better. If something goes wrong, it can proactively tell the user what the problem is and how to fix it.
With AI, what used to be a simple automated device has truly become an intelligent device.
Zhou Jingren, Chief Technology Officer, Alibaba Cloud Intelligence Group:
The scenarios the two guests just talked about are actually the best proof of how widely large models are being applied today. That’s also why we call them foundation models. At their core, these models represent a form of human intelligence.
Today, you can put a foundation model into a robot and give it the ability to converse and see. You can also use the same model to help people work, study, and solve problems across all kinds of settings. That kind of general-purpose capability is incredibly important. We’re very pleased to see that, so far, the Qwen model family has already served more than one million enterprise customers.
To make that possible, there are several key technical pieces. First, as a model provider, our goal is to offer a full family of models — covering different sizes, modalities, and application scenarios. Different industries have very different needs, and as we mentioned earlier, cost-effectiveness really matters. Developers and enterprises need the freedom to choose models with the right balance of capability and cost so they can best fit their own use cases.
Once you have these models, their use is tightly linked to cloud computing. We want models to be available anytime, anywhere, and that requires a cloud platform that is stable, elastic, high-performance, and cost-effective. Even before the ChatGPT boom, we introduced the concept of “Model as a Service.” In the AI era, models themselves are a core production resource. What we’ve done is turn that idea from a concept into something real.
It’s been very encouraging to see how Model as a Service has gone from being a new idea to becoming an industry-wide consensus. Based on these models, you need a robust, flexible, high-performance service platform to truly deliver AI capabilities to different industries and allow them to integrate naturally with real business scenarios. Alibaba Cloud has always aimed to be the most open AI cloud in China. That’s why, beyond developing advanced models, we place great importance on open-sourcing them — so enterprises and developers can actively participate in this new wave of technology.
Host:
Open source really is a major strategic choice, and behind it there must be some difficult decisions. We’ve seen Meta move from open source back to closed models, while Qwen has taken a different path by sticking firmly with open source. Its global downloads have now surpassed 700 million. That clearly reflects technical confidence and courage — but beyond that, what else are you aiming for?
Zhou Jingren, Chief Technology Officer, Alibaba Cloud Intelligence Group:
AI development today requires enormous amounts of computing power, and in reality, only a small number of companies can afford that scale of resources. That creates a real risk: the most advanced technologies could end up concentrated in the hands of just a few tech giants.
We believe that a core technology as important as AI shouldn’t be owned by only a few players. Instead, it should enable broad participation across society and across industries. By open-sourcing our models, we’re allowing far more developers to contribute their ideas and creativity to large-model innovation, which naturally expands the entire ecosystem.
For enterprises, open-source models also lower the barrier to experimentation. Companies can try new technologies more easily and integrate them directly into their own business scenarios, accelerating both technological progress and industry adoption.
Over the past two years, since we committed to this open-source strategy, we’ve seen many more Chinese companies join the same path. Open source has gradually become a defining feature of China’s technology landscape. As this ecosystem continues to grow, Chinese models are increasingly gaining the potential to help shape technical standards, not just follow them.
Beyond the models themselves, we also believe a strong open-source community is essential. That’s why over the past two years we built the ModelScope (Modao) community, which now has more than 30 million developers actively participating, sharing, and collaborating. All of these pieces together are critical — they help enterprises innovate, and they help China chart a distinctive and sustainable technological path forward.
Host:
As you mentioned, the power of open source really has helped AI integrate more deeply with many industries. But if AI is going to truly deliver value, it can’t just be for big companies. A key question is whether China’s vast number of small and medium-sized businesses can actually use AI — and use it well. On this, Mr. Zhang Liang definitely has a lot to say.
Zhang Liang, Vice President, Alibaba Cloud Intelligence Group:
We often used to say that the internet democratized information. Now that AI has arrived, we’re seeing something similar — AI is democratizing technology itself. Many technologies that used to be accessible only to large companies can now be used by small businesses and even individual developers, and at much lower cost.
The data clearly shows this trend. Over the past period, the number of companies using AI has grown tenfold within just six months to a year. What we’re seeing is that AI is no longer just a concept — both large enterprises and SMEs are already using it directly in their day-to-day operations.
From our perspective, the first thing we need to do is serve companies with very different technical capabilities in different ways. For customers who have the ability to manage computing resources directly, we provide our flagship platform — the PAI platform — where they can directly access compute power and deploy open-source models.
For customers with some development capability — those who can work with APIs or like to build things themselves — they can use our Bailian platform to quickly build their own enterprise agents and turn ideas into real, working products.
We’re also seeing manufacturers in places like Huaqiangbei, Yiwu, and Chenghai adopting AI at scale. These are traditional manufacturing hubs. Whether it’s smart devices, glasses, toys, headphones, or other products, many of these companies are now embedding AI capabilities into their hardware. These were never “high-tech companies” in the traditional sense.
For hardware companies like these, we’ve packaged Tongyi’s models — including the Bailing speech model, Qwen’s language and dialogue model, and Wanxiang’s video generation model — together with industry-specific agents into a complete solution for intelligent hardware. That allows us to deliver more targeted, vertical solutions.
And finally, there are many companies that simply don’t have technical staff, but still really want to use AI. For them, Alibaba Cloud has launched a range of no-code, plug-and-play AI products. Our goal is to lower the AI barrier step by step — from advanced users to beginners — and make our capabilities accessible to as many customers as possible.
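For the second tier Zhang Liang describes, building on a hosted model through APIs, many model-service platforms expose an OpenAI-compatible chat interface. The sketch below assumes such an endpoint; the base URL, credential variable, and model id are placeholders for illustration, not documented specifics from the program.

```python
# Hedged sketch of calling a hosted Qwen model via an OpenAI-compatible
# chat-completions endpoint; the URL, env var, and model id are assumptions.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["MODEL_API_KEY"],  # hypothetical credential variable
    base_url="https://example-model-platform.com/compatible-mode/v1",  # assumed endpoint
)

response = client.chat.completions.create(
    model="qwen-plus",  # illustrative model id
    messages=[
        {"role": "system", "content": "You are a support agent for a smart-hardware brand."},
        {"role": "user", "content": "My robot vacuum cannot find its dock. What should I check?"},
    ],
)
print(response.choices[0].message.content)
```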
Host:
I agree — lowering the barrier to use is crucial for small and medium-sized businesses. Only when the barrier comes down can AI truly become a real productivity tool. Sitting next to you today are business leaders from different industries, and I hear they’re all using the Qwen model. Let’s have them briefly introduce themselves.
Xiong Yihui, Founder, JuriLu Technology:
Hello everyone, I’m Xiong Yihui, founder of JuriLu Technology. Our product provides AI tools specifically for professionals in short-form dramas and animated series. Using our tools, creators can take a script and produce a complete short drama or animated series. At the moment, we hold the number-one market share in this vertical.
The open-sourcing of Wanxiang has been incredibly helpful for us. In content creation, there’s a big gap between making something and making it well. A lot of that gap comes down to small details.
Take actors as an example: the performance requirements for film, TV dramas, stage plays, and short-form dramas are all different. When we build tools specifically for short-form drama creators, the output needs to naturally fit those requirements. Open-source models allow us to fine-tune and fill in those details.
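One concrete way that "fill in those details" step works with open weights is adapter-style fine-tuning. The sketch below is a minimal, hypothetical example using LoRA on an open Qwen language checkpoint; Xiong Yihui is describing video models, but the same adapter idea is what open weights make possible, and every name, file path, and hyperparameter here is an assumption.

```python
# Minimal LoRA fine-tuning sketch on an open Qwen checkpoint; dataset path,
# model name, and hyperparameters are illustrative assumptions only.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "Qwen/Qwen2.5-7B-Instruct"          # assumed open checkpoint
tok = AutoTokenizer.from_pretrained(base)
if tok.pad_token is None:
    tok.pad_token = tok.eos_token          # ensure padding works for batching
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

# Train small low-rank adapters instead of updating all base weights.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32,
                                         target_modules=["q_proj", "v_proj"]))

# Hypothetical corpus of scripts annotated with short-drama performance notes.
data = load_dataset("json", data_files="short_drama_style.jsonl")["train"]
data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=1024))

Trainer(
    model=model,
    args=TrainingArguments(output_dir="qwen-short-drama-lora",
                           per_device_train_batch_size=1,
                           num_train_epochs=1,
                           learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
```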
Host:
Do you still need real actors, or are the characters now entirely AI-generated?
Xiong Yihui, Founder, JuriLu Technology:
To be honest, from a technical standpoint, we can already create content without any real actors at all. In the short-form drama space, at the beginning of 2025, nearly 90% of industry professionals didn’t believe AI could make dramas. By the end of 2025, about 90% of them were using AI to produce complete content. That shift is dramatic.
We’ve been constantly optimizing and adapting. And honestly, we’re very grateful to Alibaba Cloud — over the past year, the speed at which these technologies have evolved is almost hard to believe.
To give you a sense of how extreme this is: using AI, we’ve compressed the production cycle of a series down to about one month. The pace of improvement is so fast that when we start a project at the beginning of the month, by the time we finish episode one and later reach episode eighty, we look back at episode one and feel it’s already outdated. That’s how fast iteration has become.
Host:
So the speed of technological iteration has already surpassed our ability to keep up operationally.
Xiong Yihui, Founder, JuriLu Technology:
Exactly. We’re experiencing what I’d call an epic leap in efficiency — truly epic. We now have users with content teams of just 50 people who can produce 50 animated series in a single month. A year ago, no one would have believed that.
The second major change is cost. A budget that used to produce a single short drama can now, with AI, produce 10 to 20 pieces of content. That's a massive shift.
Host:
Mr. Wang, hearing all of this, does it get you excited? What industry are you working in?
Wang Changhu, Founder and CEO, Aishi Technology:
What we’re doing is teaching AI how to create video. We focus on two things. First, we’ve developed a world-leading video generation foundation model of our own. Second, based on that model, we’ve built products that are popular with users around the world.
Our app, known in China as Jipai AI, now has over 100 million users worldwide. Our goal is to let anyone turn a sentence into a video, or bring a single photo to life, easily. We're not just serving individual users; a lot of enterprises are benefiting too, including companies in animation, short-form dramas, film and TV, advertising, and marketing. We're helping them upgrade their creative workflows with AI.
Throughout this process, Alibaba Cloud has supported us across the board — from underlying infrastructure to large-model services to global deployment.
Host:
From your own experience, what has open source really brought to innovative companies like yours?
Wang Changhu:
The two biggest things are lower trial-and-error costs and, as a result, much higher innovation efficiency. We’ve felt this very directly. A stronger ecosystem means we can see new scenarios earlier, experiment faster, and upgrade our products more quickly.
Wang Qingbang, Founder and CEO, PigaiBang:
PigaiBang is focused on building simple, easy-to-use AI assistants for educators across China. By combining our own models with Tongyi’s open-source large models, we were able to roll out new capabilities very quickly.
We were among the first to introduce AI-powered grading for essays and open-ended questions based on original handwritten images, as well as teacher-plus-AI collaborative grading scenarios. Today, our platform serves 800,000 teachers nationwide, and processes about 600,000 essays every day.
Host:
On the surface, it sounds like this really frees up teachers’ time — they don’t have to spend so many hours grading assignments anymore, right?
Wang Qingbang:
Exactly. It boosts students’ motivation to write, while also freeing teachers from repetitive grading work.
Host:
And it gives teachers a powerful tool for more personalized instruction.
Wang Qingbang:
Yes. The government has been encouraging AI in classrooms for some time now. Let me give a simple example. In traditional teaching, a teacher might say, “Turn to page X, let’s read a poem by Li Bai or Du Fu.” For many students, that doesn’t really come alive.
With Tongyi’s multimodal models, we can turn historical figures from textbooks into vivid animations that interact with students. From our perspective, large models are opening a completely new path toward personalized education, especially true “teach-to-the-student” learning.
Zhang Liang, Vice President, Alibaba Cloud Intelligence Group:
If I were to sum it up: technology creates possibilities, ecosystems create choices, and services provide assurance. In my view, the biggest beneficiaries of this new industrial revolution will be small and medium-sized enterprises. With AI empowerment, SMEs can become one of the most important driving forces of both China’s and the global economy.
Zhou Jingren, Chief Technology Officer, Alibaba Cloud Intelligence Group:
What we’ve seen today is that many innovations happening in real business scenarios go far beyond what we originally imagined when building foundation models or cloud services. That’s the power of the ecosystem. We provide the foundation — models and cloud — and industries bring their creativity. Together, that’s what truly drives the intelligent transformation forward.
Host:
So is it fair to say that China’s vast market and rich application scenarios are giving large models a unique boost, helping them become smarter faster?
Zhou Jingren:
Yes. We’re still in the early stages of this technological revolution. Models are continuing to iterate, and capabilities are improving rapidly. We hope to fully leverage the combined strengths of cloud-scale computing and large models, through joint optimization, to deliver the most advanced AI capabilities possible.
Host:
The media has reported that Alibaba plans to invest hundreds of billions of yuan in AI in the coming years. Mr. Zhou, could you explain where that investment will go?
Zhou Jingren:
A major focus will be on AI infrastructure — building a globally efficient computing foundation. Many Chinese companies are already operating internationally, and we want to support them wherever they go with strong AI computing capabilities. We’ll continue to follow an open and open-source approach — not just sharing, but building together — to push China’s AI industry forward as a whole.
Host:
If we extend the timeline a bit, looking five or ten years ahead, how do you see global AI competition evolving?
Zhou Jingren:
We believe AI capabilities are almost doubling every six months. In the coming years, we’ll see major advances across general intelligence — not just reasoning, but vision, multimodality, and more. That trend is already clear.
We’ll also see an explosion of AI applications — possibly within five years, AI will be woven into nearly every aspect of daily life. It will dramatically boost productivity and create entirely new forms of value, while also improving quality of life.
Host:
Everyone here today is both a participant and a driver of this transformation. From your own roles, what responsibilities should we each take on to help this transformation move faster?
Qian Dongqi, Chairman, Ecovacs Group:
From our perspective, the key is application — and more application. We focus on two layers. One is product-level applications, like embodied intelligence. The other, which may be even more important, is how AI transforms the entire enterprise.
In the future, companies won’t be run by people alone — they’ll be run by AI capabilities end to end. We call this “carbon–silicon integration”: AI agents do the work, and humans oversee the process.
That means large-scale use of large models, and real costs come with that. In the past three months alone, our spending on large-model calls was about 1 million yuan. This year, even with conservative estimates, that could reach 10 million yuan. Multiply that across tens of millions of enterprises, and you begin to see the enormous scale of AI-driven demand. That sustained, deep application by ordinary companies is what will truly push this era forward.
Wang Qian, Founder and CEO, Invariant Robot:
We really hope to move forward together with foundation models and get to general AI — and eventually even superintelligence — as quickly as possible. The reality is that foundation models follow a kind of logarithmic scaling: to get linear gains in intelligence, you actually need exponential increases in resources.
So far, we haven’t hit a hard wall yet, but we almost certainly will — and probably sooner than we think. In five or ten years, we’re going to run into the same question: where do we get exponential growth in compute, energy, and data? That’s not a new question in the physical world. For hundreds of years, the bottleneck has always been the same — no production process could truly break free from human labor.
That’s why we believe that once we achieve true embodied general intelligence, things change fundamentally. At that point, the physical world itself can start scaling exponentially, much like Moore’s Law, where performance jumps by multiples every year or even every few months.
If that happens, we’ll finally have a real, material foundation to support superintelligence. And for us, the goal isn’t just to improve everyday life — it’s to help open the door, together with language and multimodal foundation models, to an entirely new era for humanity.
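A compact way to write down the scaling intuition Wang Qian describes, with linear capability gains demanding exponential resource growth, is the following; it is a sketch of the general argument, not a formula from the panel.

```latex
% Sketch of the scaling intuition, assuming capability C grows roughly
% logarithmically with resources R (compute, energy, data):
C(R) \approx a \log R + b
% Then a fixed capability increment \Delta C requires multiplying resources
% by a constant factor, i.e. exponential growth for linear gains:
C(R') - C(R) = \Delta C \;\Longrightarrow\; R' \approx R \, e^{\Delta C / a}
```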
Wang Qingbang, Founder and CEO, PigaiBang:
Some people say large models aim for the stars. We’re doing the opposite — we’re focused on the ground. Our mission is to solve the “last mile” problem between large models and everyday users.
Large models don’t lack capability — what they lack is real-world application and clear value for ordinary people. We act as the bridge between models and users, finding practical scenarios in classrooms and daily teaching where these capabilities actually matter.
Wang Changhu, Founder and CEO, Aishi Technology:
Over the past 20 years, every major upgrade in how people interact with content has reshaped the world. Search defined one era. Recommendation defined the next.
With AI-driven video generation evolving so quickly, we may soon reach a point where people interact with video in real time — and the line between creator and audience disappears. That’s what we’re relentlessly exploring: new ways for humans and video to interact.
Xiong Yihui, Founder, JuriLu Technology:
Our challenge going forward is to keep delivering discontinuous innovation, continuously. In just the past year, user habits and expectations have shifted multiple times in fundamental ways.
That’s counterintuitive and uncomfortable, but we have to constantly overcome instinct, iterate rapidly, and deliver better solutions layer by layer, again and again.
Host:
Thank you. After hearing all of this, Mr. Zhou — as a calm, technical thinker — do you feel energized?
Zhou Jingren:
I feel both excited and deeply aware of the responsibility on our shoulders. In this AI era, we want Alibaba Cloud to provide what I’d call the “water, electricity, and fuel” of AI — the basic infrastructure that every industry needs. Our goal is to turn AI into real productivity that improves lives and creates lasting value.
Host:
In today's China, when the most open technologies meet the richest application scenarios, they can give rise to a force that supports innovation worldwide. When the world chooses Chinese AI, perhaps it's choosing a future that feels both ambitious and tangible. Thank you for watching, and we'll see you next time.


