Full Transcript: Six Little Dragons Wuzhen Dialogue
Wang Jian (Moderator): I’ve always been curious — what’s the connection between the “Six Little Dragons” and the World Internet Conference?
Wang Xingxing: Our company has been around for over nine years now, almost ten. It means a lot to be back in Wuzhen. The first time I attended the World Internet Conference was in late 2017, when I brought our first-generation robot system. I still remember meeting a few well-known entrepreneurs back then — it left a deep impression on me.
Looking back over the past decade, we started in 2016 with just three people, then grew to a dozen, a few dozen, and now we have over a thousand employees. We’ve made some solid progress along the way.
We’re really grateful for the support we’ve had from Hangzhou and from China more broadly. The whole startup ecosystem has been amazing — it’s given us room to pursue our passion, to build what we dreamed of as kids, and to make a real contribution to society.
Han Bicheng: Our company focuses on brain–computer interface technology. Ten years ago, our core team was still studying at Harvard, and we were lucky to witness some incredible technologies early on. For example, some children with autism who couldn’t speak were able to do so through neural modulation, and people suffering from insomnia due to stress could fall asleep quickly with brain–computer interface technology.
We were amazed. We thought, if we could reinvent these traditional, bulky machines that have existed for a century, maybe more people could benefit from them — so we started our own company.
Those early days were pretty tough. I remember we were studying and building the company at the same time, often working late into the night in a basement full of flashing EEG headsets, testing our own brainwaves.
I’ll never forget one night around 3 a.m., an elderly American lady peered through the window, and we were all terrified. The next day, a rumor spread around the neighborhood that a group of Chinese students were “charging their brains” every night to get better grades.
But over time, we proved that this technology could really change lives — helping people without limbs regain movement, helping children who couldn’t speak find their voices.
I eventually made two big decisions: first, after four years in my PhD program, I dropped out of Harvard to work on this full-time; second, we realized progress was too slow in the U.S., so in 2018 we moved our headquarters back to Hangzhou, Zhejiang. With strong local support, we grew fast.
Now, the brain–computer interface field is booming. Even Elon Musk has his own company, releasing new products several times a year. China’s 14th Five-Year Plan has officially listed brain–computer interfaces as one of six key frontier technologies. Things have changed dramatically, and we’re more confident than ever in what we’re building.
Huang Xiaohuang: I did my undergrad at the Chu Kochen Honors College at Zhejiang University, then went to the University of Illinois at Urbana-Champaign on a full NVIDIA scholarship for my PhD in computer science, focusing on GPU-based high-performance computing. After graduating, I joined NVIDIA, developing the parallel programming framework for GPU chips — including CUDA.
Since I was working on CUDA, I started thinking about bringing GPU computing to the cloud — building a real internet company. So I built a GPU cluster, wrote physically accurate rendering software using CUDA, and after returning to China, co-founded Many Core with two partners. The name actually comes from the GPU architecture itself.
Over the past decade, the internet has developed at lightning speed, and our company’s journey mirrors that. We grew a big user base and collected massive amounts of data. And interestingly, hardware companies like NVIDIA — once overlooked in Silicon Valley — are now at the center of the AI boom.
When we look back, all that data accumulated from China’s vast internet user base has become the “fuel” of the AI era. The same goes for companies like OpenAI — their models are built on enormous amounts of online data.
So I think moving from the internet era to the AI era is part of a global tech wave, and also a reflection of China’s internet story. Our company started out wanting to be an internet firm; now we’ve transformed into a spatial intelligence company. We’ve gone from using GPUs for the internet to using GPUs for spatial intelligence — not just to serve people, but also robots. This transformation is huge, and it aligns closely with the spirit of the Wuzhen Summit.
I used to sneak into the conference just to listen and learn. This year, it’s an honor to finally be up here sharing my story. Thank you.
Zhu Qiuguo: We’ve been working on humanoid robots for quite some time. Around 2006, there was this prediction in the robotics world that by 2050, a team of humanoid robots would be able to beat the world’s top soccer players.
But I thought — if we have to wait another 40 years, I’ll be in my seventies or eighties! I didn’t want to wait that long. So in 2015, we started building quadruped robots instead. At that time, Boston Dynamics’ robots were already amazing — they could move outdoors with great agility.
So we asked ourselves, could we make Chinese robots just as capable — able to move outdoors and handle complex motion? That became our goal.
After ten years of effort, China’s quadruped robots have made huge progress. They can now navigate difficult terrains and are being used in many practical scenarios. My hope is that one day, our robot dogs will be able to reach every corner of the land. It’ll take a lot more work, but we’re committed to that goal. Thank you.
Feng Ji: Back in 2016, our company decided to raise funding and wrote a business plan to make a single-player game.
That year, we noticed an interesting data point: on Steam, the world’s biggest PC game platform, Simplified Chinese users had reached the same share as English users — around 32 to 33 percent. That convinced us there was a massive base of Chinese gamers, especially for single-player games.
We also saw a similar pattern in film. Around 2006, a decade earlier, China’s movie box office had started growing fast, and by 2016 it had reached U.S. levels. The number of movie screens in China caught up with the U.S. in much the same way PC gamer numbers did by 2016. That was the first sign — a solid starting point.
From 2006 to 2016, audiences in China went from watching imported hits like Titanic, Avatar, and Transformers to watching Chinese-made blockbusters topping the charts.
We realized that once local teams start producing world-class content in any creative industry — especially storytelling — Chinese users will reward them generously.
The rise of China’s gaming industry, and the trust it’s built for homegrown teams, really moves me.
And one last thing: even though we’re a Chinese team making a game rooted in Chinese culture, if the content or quality isn’t there — if it’s subpar or fake — Chinese users will see right through it. They’ve got sharp eyes and won’t hesitate to call you out.
Victor Chen: The pace of China’s technological progress over the past decade has been incredible. DeepSeek was founded in 2023, with the goal of pursuing and achieving artificial general intelligence. From day one, we’ve focused on pushing the boundaries of core technology.
One of our biggest strengths is long-term focus — we’ve stuck to the main line of frontier intelligence research and avoided chasing short-term trends or easy wins.
At the same time, we share the belief in openness, cooperation, and mutual benefit that underpins the idea of a “shared future in cyberspace.” We’ve been committed to open-sourcing our technology to make innovation more accessible. Through active engagement with the developer community, we’ve received valuable feedback that’s helped us grow.
We truly believe that open collaboration and knowledge sharing will remain one of our key advantages going forward.
Wang Jian (Moderator):
A lot has changed over the past decade. As entrepreneurs, you’ve not only witnessed those changes — you’ve helped create them. But now I’m even more curious, because each of your stories reveals something different.
Let’s start with Xingxing. You know, the first time I really took notice of Unitree was actually at the end of 2024. I happened to meet Marc Raibert, the founder of Boston Dynamics, and we talked for over two hours. He kept bringing up one company — Unitree. To be honest, he seemed to know and admire your work even more than I did, and as someone from Hangzhou, I felt a bit embarrassed about that.
So I’d like to ask: what has made developing humanoid robots so technically challenging all these years? As many now recognize, Boston Dynamics may no longer be the clear leader in this field — so what technologies do you think will drive the next phase, and how might humanoid robots evolve from here?
Wang Xingxing:
You raised two questions — let me start with the first one. The progress we and many other Chinese robotics companies have made in recent years is really built on China’s strong manufacturing capabilities. The whole industrial base here — especially for key robotic components — is incredibly solid.
Starting around 2016, we began developing many of our core parts in-house. Over time, we’ve managed to produce cheaper yet more capable quadruped and humanoid robots, which have since shipped to leading labs, universities, and companies all over the world. Those teams have built software, applications, and open-source AI algorithms on top of our hardware.
In a sense, the rapid progress in robotics over the past few years has been the result of global co-creation.
Wang Jian (Moderator):
Marc himself launched a new company this year, right? And I heard he even bought one of your robots?
Wang Xingxing:
That’s right. And that’s the beauty of it — everyone’s working together on the same kind of hardware platform. It reminds me of the early days of personal computers. Back then, PCs weren’t much use to ordinary people, but researchers and developers kept building new software and functions, and that collective effort created the whole ecosystem.
If you think about it, just last year, a humanoid robot that could walk steadily was already considered impressive. But now — especially over the past few months — we’ve seen robots around the world that can dance and perform fluid movements. Why? Because so many people are contributing to this shared platform, pushing the whole industry forward together.
In AI and robotics, every region has its strengths, and I think Hangzhou has a real advantage here too. These collective efforts are what keep driving the field ahead.
As for the second question — about the future — I think AI is now accelerating the development of embodied intelligence as a whole, so progress will probably come even faster.
In the past six months or so, embodied AI has started to feel almost surreal — like science fiction turning real. Over the next few years, I think we’ll see even more of that happening at a faster pace.
And compared to ultra-frontier areas like nuclear fusion or Mars colonization, embodied intelligence and humanoid robots are actually more achievable. I believe we’re much closer to realizing those dreams. I’d say the next year or two will bring even more surprises than this one.
Wang Jian (Moderator):
Thank you, I’m really looking forward to those surprises.
Now, Bicheng — let’s talk about brain–computer interfaces. Technically speaking, we actually come from the same background. What used to be called human–computer interaction has now become part of artificial intelligence. You’re literally creating a new era — doing something people didn’t even imagine before.
So I’m curious: how did you turn such a narrow research field into something people now understand and accept? And maybe more importantly, how can your technology truly reach ordinary people?
Han Bicheng:
Brain–computer interfaces are an incredibly broad and deep tech stack — and we’re just a group of people who really believe in it. When we founded the company, hardly anyone in China knew what “BCI” even meant, so we were a bit nervous at first. About a year and a half later, we heard that Elon Musk had launched Neuralink.
Our team actually visited their three-story office in San Francisco back then, and it turned out that another company shared the building — OpenAI, which Musk also co-founded.
Over the years, brain–computer interface technology has moved out of the lab and into real life. Neuralink, for instance, is working on products to help blind people see again.
But what’s really fascinating is that someday they might not just help blind people see what we see — they might help them see what we can’t see. Humans only perceive visible light, but those sensors could detect ultraviolet, infrared, maybe even signals through walls.
Our approach is different — we focus on non-invasive brain–computer interfaces — but we’re also trying to bring this tech into everyday life.
Our product roadmap follows what I call the “pain to public” path.
“Pain” means we start by helping the people who need it the most. For example, people who have lost their limbs. When we were developing our prosthetic control systems, we lived with people with disabilities for months. One thing we noticed was that many of them rarely went outside. So we thought, if we could help them use their thoughts to control new hands or legs, they could go back to work, live independently — live normally.
Over the past eight or nine years, we’ve spent nearly every day with these patients and with nonverbal autistic children. Now we’re expanding into broader areas — next year we’ll launch a sleep product to help people with insomnia or poor sleep.
And after that, we’re planning a weight management product. I’ve actually been studying weight management my whole life — you can probably tell from my body shape! (laughs)
A lot of overeating comes from the feeling of hunger, not actual hunger — and that can be adjusted through neural regulation. Judging from my size, that product’s not ready yet, but hopefully by the year after next it will be.
So that’s our direction — turning brain–computer interface technology into products that genuinely improve people’s lives, one step at a time.
Wang Jian (Moderator):
Thank you. I really hope your technology becomes accessible to everyone. It might take many more years of hard work, but it’s worth it.
Now, Xiaohuang — before I ask my question, let me share something interesting. When people talk about AI startups, they always mention talent. I heard a funny story about Many Core — apparently you have a couple who are both key technical leads, both in Hangzhou, but working at different companies.
No one’s figured out who came to Hangzhou first and who followed the other. I guess talent mobility has its own mysterious ways.
Anyway, back to my question. When I think of Many Core, I can’t help but think of NVIDIA. Ten years ago, no one thought of NVIDIA as an AI company — everyone thought it was a game company. Now it’s the heart of AI.
Many Core started out in CAD and computer graphics, right? So how did you end up working on “physical AI”? What’s the link between what you do and artificial intelligence — or even robotics? And what’s your take on this new direction in AI?
Huang Xiaohuang:
Let me start with the talent story you mentioned.
In 2021, our company hit a major turning point. Before that, we were still, as I saw it, in the internet era — user traffic and data were growing fast, but I could sense we were nearing a ceiling. I kept thinking: what happens after that?
Actually, back in 2018, we partnered with Imperial College London, the University of Southern California, and Zhejiang University to create InteriorNet, the world’s first open spatial dataset for indoor environment understanding, 3D reconstruction, and robot interaction. It became quite well-known in academic circles.
That project was how we first connected with the husband-and-wife researchers you mentioned — they were professors in the U.S. at the time.
We had already seen models like Transformers emerging, and we realized that these spatial datasets could train something like a “large model for spatial cognition.”
I had been their classmate back in the U.S., where I also studied computer graphics, so I reached out to the husband and said: “Why don’t you join us as Chief Scientist? The internet era is ending, the AI era is coming. Other companies are doing language models — let’s build a spatial model instead. You lead it.”
What I didn’t know was that your lab was trying to recruit his wife at the same time. So yes, fate works in mysterious ways — both companies were trying to hire the same couple! If I’d known how fast the world would change, I probably wouldn’t have sold my NVIDIA stock back then. (laughs)
That moment was both accidental and inevitable. The internet era had reached its limit, but AI represented a new stage once data accumulation crossed a threshold.
I think spatial intelligence will be the next major field after large language models. It applies to physical AI — everything related to robotics — and also to video generation, since video needs spatial consistency.
We even found that the scaling laws behind large language models — like in DeepSeek or GPT — still hold true for spatial cognition and reasoning models.
We noticed this back in 2021 and 2022, though at the time we didn’t have many use cases, so it stayed more of a research project. But now, seeing companies like Unitree booming, we’re thrilled.
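The scaling laws referenced here are conventionally stated as power laws: held-out loss falls predictably as model size and data grow. A standard LLM-era formulation looks like this (the constants are fitted empirically per domain; whether they carry over to spatial models is the speaker’s claim, not something shown here):

$$
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}
$$

where $N$ is parameter count, $D$ is dataset size, $L$ is test loss, and $N_c$, $D_c$, $\alpha_N$, $\alpha_D$ are empirically fitted constants.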
I really believe the future will be full of robots — maybe each person will have ten robot “assistants.” Whether at home, in offices, or in factories, there will be tons of robots, and we’ll need spatial intelligence to coordinate and manage them. That’s what we’re focused on now — building the spatial intelligence systems that can serve humanity, together with others in the ecosystem.
Wang Jian (Moderator):
Thank you — that was very thought-provoking.
We’ve talked so much about AI and large language models, but those are all based on language, which limits how we perceive the world. With spatial intelligence, the boundary of the world itself becomes the real boundary — we need to expand the limits of language into the limits of the physical world.
Now, Qiuguo — let’s talk about your story. You mentioned “2050.” That actually reminds me of an event we organize for young innovators called 2050. I remember your first robot dog was there in 2018 — it couldn’t even stand on its own! You had to use a metal frame and two steel wires to hold it up for the demo.
I’ve always wondered — how did your company have the courage to show something so unfinished back then? Most companies today wouldn’t dare to do that.
But now, you not only have robot dogs but also humanoid robots. I’ve seen your factory in Ningbo — your robots are already working in some really tough, even dangerous, environments. So I’d love to hear your thoughts on that journey — from robot dogs to humanoids — and what kind of impact you think this evolution will have on society.
Zhu Qiuguo:
I do have a special bond with 2050. Back then our robot dog really was hanging from steel wires, for two reasons. First, we hadn’t yet solved how to get up from a prone position, which sounds funny now. Second, its gait was unstable and needed a frame for support. We founded the company at the end of 2017 and took our first robot to the 2050 event in 2018. The event welcomed us anyway, which is why I say we were really lucky.
Since we target industrial use, we got a lot of tough questions. People asked where a robot dog would be used and how it would be used. From 2018 on, we kept thinking about how to put robot dogs to work. At that time no one in the world had figured it out.
We decided to try power inspection in substations as our first scenario. We spent three months testing on site. A robot dog that ran fine indoors would wear out its rubber pads after a few hours there. The machine overheated. Rain could fry it. One problem after another.
From then on we built products around actual application scenarios. We did a lot of application-side innovation, including power inspection, emergency firefighting, and what we are working on now, the last mile.
Today we feel the path for industrial applications is basically open, but building a full solution is still hard. We still face issues: the robot dog has a docking kennel, and sometimes the kennel won’t open and customers complain. Overall, though, people now understand where robot dogs can be useful.
I sometimes joke that we used to make dogs and now we can finally make people. This year we released a humanoid.
The question is the same: what problems are we ultimately trying to solve? As Academician Wang said, we need to replace or assist humans in dangerous, harsh, and complex environments. So we built a protected, ruggedized model. It can go outdoors, handle wind and rain, and even fall into puddles and keep going.
Some people ask if we can ship a simpler intermediate version first. I keep coming back to starting with the end in mind. We build toward the final goal. It is hard, but we built the world’s first protected humanoid that can truly go outdoors and face the elements.
We want to keep the original mission of solving real problems and delivering industrial deployments. We will keep polishing the product. I hope that in the near future, when people choose industrial humanoids, they will choose ours. Thank you.
Wang Jian (Moderator):
Thank you, Qiuguo. This shows something important. Babies are not always cute when they are born. They grow into their looks. Startups are the same.
Now, over to Feng Ji. Your company name is unusual, Game Science. When I first heard it, I wondered if you were being a bit cheeky. Then in 2020 I saw something that changed my mind. An eighties-born woman who had been to the 2050 event saw your Black Myth: Wukong showcase and wrote a reflection afterward. I was very moved by it.
She said she had been a gamer for many years but never felt comfortable telling her family or colleagues. After Black Myth: Wukong, she could finally say openly that she plays games. She was thrilled to share that feeling.
She also talked about what she saw on site. When people who had never played games looked at your art on a 360-degree wraparound screen, many asked if the image came from Dunhuang. That player spent a long time explaining that it did not and walked them through all your scenes.
There is more. She is not only an eighties-born woman, she also has a physics degree from Oxford. I doubt she is your typical user. You brought culture and craft to people who were not reachable before.
From that angle, how did you bring technology innovation and culture together so well?
Feng Ji:
Thank you, Professor Wang. That example is very interesting. It feels like Black Myth: Wukong helped many people shed their gaming shame and made playing it feel legitimate.
Let me offer another way to think about it. People sometimes say the seventh steamed bun is the one that makes you full. Does that mean the seventh bun is especially filling? Of course not; it only fills you up because of the six you ate before it.
If we look back at the development of China’s game industry, we might realize that over ten or more years, China already had the world’s largest user base and market size.
The industry is enormous. Game Science joined a long time ago. Our team has worked together for more than 15 years. Many of us built other titles at big companies, including online and mobile games.
So while Black Myth: Wukong has many unique qualities, it may also be the product of China’s big river and big fish. The industry nurtured massive user bases and talent pools. We were born on that foundation and might just be a small crest on a big wave.
Maybe we chose the right theme, the right time, and the right business model. But we should not forget that many Chinese studios are doing great work without the same luck or timing.
Here is a data point. Among the ten highest-grossing games worldwide last year, not including Black Myth: Wukong, four were developed by Chinese teams and three had Chinese investment or involvement. I think this is the real foundation that allows works like Black Myth to stand out. That is my first point.
You also asked why we are called Game Science. Many people think games have nothing to do with science. I want to clarify a bit.
Video games are highly technical and bring together many achievements in computer science. Historically, many tech giants, like NVIDIA, Microsoft, and Intel, have grown hand in hand with gaming.
Of course games are not only science. They are also art. Games are called the ninth art because eight came before them: literature, painting, music, theater, dance, film, sculpture, and architecture.
If you think about it, games feel like the ninth art that emerged when the first eight fused with cutting-edge technology. Games integrate all of those elements. We are proud to keep building in such an integrative industry.
Wang Jian (Moderator):
Thank you, Feng Ji. It is fascinating. From BCIs to robots to games, there is a common thread. All of them take people to places they could not perceive before. That speaks to the spirit of human exploration.
Now, Victor from DeepSeek. At the start of the year DeepSeek became a real phenomenon. What struck me most was this. For the first time, an open-source technology from China received global respect. Developers around the world felt that China had made a major contribution to open-source AI.
As someone who has lived it, tell us how open-sourcing and the pursuit of the best AI models push the field forward. How can openness help AI benefit society more widely?
Victor Chen: Thank you, Academician Wang. From our perspective, AI will bring huge change to society. In the short term there are more opportunities. In the long run the risks may be bigger.
In the short term, over the next three to five years, humans and AI may be in a honeymoon phase. AI cannot do many jobs on its own yet, but people can use it to create more value. One plus one becomes greater than two. We can tackle more complex problems and create bigger wins. In this period, tech companies should act as evangelists. We should make the tech as accessible as possible so the public can reach cutting-edge AI at low cost, use it, learn it, and raise productivity in their work.
In the medium term, five to ten years out, AI will start replacing some human jobs. Society will face unemployment risks. Tech companies should act as whistleblowers then. We should warn the public about which jobs will be automated first and which skills will lose value. That way people can build risk awareness.
In the long term, ten to twenty years out, things could be more dangerous. AI may replace most jobs and our current social order will face major shocks. By then tech companies should act as guardians of humanity. At a minimum we must protect human safety and help reshape social order.
This is not alarmism. This AI revolution is very different from the Industrial Revolution. Back then we invented tools. Tools only assisted human intelligence. Humans remained the unquestioned subject of intelligence. Old jobs disappeared, new ones appeared, and overall productivity rose.
This time we are inventing intelligence itself. AI can become a subject in society and become smarter than humans. More and more jobs will be replaced. In the end humans will be completely “freed” from work. That might sound good, but it will shake society to its core.
Can we slow down or stop development? Not really. There will always be companies that push history forward. If a company wants excess profits from AI, it must capture the value created by replacing human labor. You could even say the mark of success for this AI revolution is that it replaces the vast majority of human jobs.
So I am an optimist on technology and a pessimist on society.
Wang Jian (Moderator): After listening to you, I have been thinking about something. People call you the Six Little Dragons. Whether that label sticks is not important. No matter what people call you or how successful you are, I believe you all face the same thing. You have all hit real technological boundaries and challenges in your fields.
If you share those boundaries and challenges, others can build on your work and push further. So what challenges have you run into? How do you think they should be overcome? What advice would you give other founders or technical contributors?
Wang Xingxing: The biggest challenge in robotics right now is the AI model for embodied intelligence.
It is not the same as a general large language model. Since ChatGPT arrived in 2022, language models have taken a huge leap. Their model structures and, especially, their data are readily available: the internet provides massive ready-made data, the modality is essentially a single one, language, and even multimodal models can pull in video from the internet.
In robotics we have a very different problem. Model architectures and data scale are both insufficient. Every manufacturer’s robot is different. Data collected on different machines does not match. When robots go to work there is no consensus on basics like where to place cameras or how to combine touch and vision. Should the camera be on the hand, the head, or the chest? There is no global agreement.
So the challenge is the model architecture for embodied intelligence and everything around data. How to collect data, how much to collect, how to train, and how to get higher quality data. These questions are extremely valuable.
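To make the data problem concrete: pooling episodes across manufacturers means agreeing on a record format that carries the hardware metadata Wang lists. Here is a minimal sketch of what such a record might hold; every field name is hypothetical, not any existing dataset standard:

```python
from dataclasses import dataclass, field

@dataclass
class CameraConfig:
    """Where a camera sits on the robot: hand, head, or chest."""
    mount: str                   # e.g. "head", "left_wrist" (no industry consensus)
    extrinsics: list[float]      # 6-DoF pose relative to the robot base

@dataclass
class EpisodeRecord:
    """One demonstration episode, tagged with enough hardware metadata
    that data collected on different machines could be pooled."""
    robot_model: str                 # every manufacturer's robot differs
    cameras: list[CameraConfig]      # camera placement varies across vendors
    joint_names: list[str]           # kinematic layout varies too
    observations: list[dict] = field(default_factory=list)    # per-step sensor frames
    actions: list[list[float]] = field(default_factory=list)  # per-step joint targets
```

Until something like this is agreed on industry-wide, data collected on one robot is hard to reuse on another, which is exactly the bottleneck described above.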
I also feel that in a sense, while DeepSeek’s dream is AGI, a general model for embodied robotics might itself be AGI. It may be the path most likely to reach the AGI we imagine.
Han Bicheng: Brain–computer interfaces get a lot of attention, but there is a second side to the story. Why does this matter so much?
Roughly 36 percent of current medical spending is related to the brain. Yet for major brain disorders like Alzheimer’s, autism, and chronic insomnia, we do not have drugs that fundamentally cure them.
BCI is seen as a technology that could treat these conditions and enable the next generation of interaction. The obstacles are severe. Data collection and data interpretation are the main ones.
The brain is extremely complex. It has about 86 to 100 billion neurons. Decoding those signals is very hard.
Our products face this difficulty every day. People may not realize how hard it is to make a neurally controlled hand. The hand is the most complex part of the body. A lightning-fast thought must map to very fine motor control.
I remember one case vividly. We were training data for a man who had lost his right hand. I said, imagine moving your thumb. He did. I said, imagine moving your pinky, now your middle finger. He tried all afternoon, but the signals looked identical. We could not tell them apart. I urged him to try harder, then realized he had simply forgotten. He had lived without a hand for decades and no longer remembered what it felt like to move each finger. What do you do then?
We built an AI model that learns like a baby. It imagines having a hand and relearns control. The same goes for legs. Few companies make intelligent prosthetic thighs, even though many people lose legs. If the leg control is not good, the person falls.
Say a person walks 5,000 steps a day; that is more than 1.8 million steps a year. If each step needs about 100 neural control calculations, that is over 180 million calculations per leg per year. One bad calculation can cause a fall. We are using AI to tackle this.
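A quick back-of-the-envelope check of those numbers, in Python. The step and per-step calculation counts are from the talk; the one-fall-per-year threshold is an added assumption for illustration, not a product figure:

```python
# Rough reliability math for a neurally controlled prosthetic leg.
STEPS_PER_DAY = 5_000
CALCS_PER_STEP = 100                               # assumed control decisions per step

steps_per_year = STEPS_PER_DAY * 365               # 1,825,000 (~1.8 million)
calcs_per_year = steps_per_year * CALCS_PER_STEP   # 182,500,000 (~180 million)

# If any single bad calculation could cause a fall, keeping expected falls
# below one per year bounds the tolerable per-calculation error rate:
max_error_rate = 1 / calcs_per_year                # ~5.5e-9

print(f"steps/year: {steps_per_year:,}")
print(f"calcs/year: {calcs_per_year:,}")
print(f"max tolerable error rate: {max_error_rate:.1e}")
```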
There is something beautiful here. Many famous AI ideas were inspired by neuroscience. Demis Hassabis trained in neuroscience. Geoffrey Hinton studied psychology as an undergrad. Now we have AI that was inspired by the brain helping us crack hard brain science problems. It is a lovely full circle.
Huang Xiaohuang: In spatial intelligence we serve two groups. People and machines. Before the LLM boom we mostly served people. Serving machines was mostly about publishing papers, raising our academic profile, and hiring top talent. Revenue came from serving people.
After AI took off we saw a worrying trend. We had thought creative office jobs were very secure, but large models are already reducing those roles.
That means our human customers will shrink. The machines that work for us will grow. So from 2022 to 2023 we shifted strategy. Our goal moved from charging people to charging machines.
Machines come in many forms. Humanoids sit at the top and are the hardest. Below that you have AGVs and traditional robotic arms. All are machine species that work for humans and will get smarter.
We think the number of machines will be ten times the number of people. Can we switch from charging each person to charging each machine? That is our transformation.
It is hard, as Academician Wang said. When we served people the client’s R&D ability varied a lot. With machines our clients are companies like Unitree with very strong R&D. They are the best of the best. If you want to be the water seller in a gold rush among top players, your water had better be extraordinary.
As we pivoted to spatial intelligence and invested more in R&D, we found our average counterpart was at the level of a university professor. The bar is high. This year we launched a spatial cognition model called SpatialLM to grow our impact in academia and industry. When we released it, it trended on Hugging Face right behind DeepSeek V3 0324.
On November 6 at the World Internet Conference we officially launched SpatialTwin, a cloud-native industrial AI twin platform for embodied intelligence in industry. Internally we are planning a suite of tools to help the future population of machines, which we think will be ten times the number of humans, serve us better.
Zhu Qiuguo: I see two big challenges in embodied AI.
First is embodied mobility. Can a robot move from one place to another without prior knowledge? This is difficult. Collecting enough data and using massive compute is tough for startups. I think robotics and autonomous driving companies can overcome this together soon, but for us the resource burden of data and compute is very real.
Second is embodied manipulation. Someone has said that the last bastion of human dignity lies in our two hands, and Musk has also said that making hands is hard. Building a human-like hand is difficult, and getting two robot hands to execute long, complex, uncertain tasks is even harder.
Right now the path is not clear. Can current models really solve these problems? I think that is an open question.
We need innovation to reduce the need for compute and data, and we need new model architectures that truly solve the problems. Maybe in five to ten years we can bring robots into factories and homes at scale.
Feng Ji: The previous speakers went deep on their technical fields. I make content, so let me keep it a bit abstract.
This AI wave brings two challenges.
First, who does AI ultimately serve? If progress concentrates advantages in fewer hands and fewer companies, and they use those advantages to monopolize or push others around, that is a huge risk.
DeepSeek’s speaker said he is a pessimist at the social level. I became more optimistic because of DeepSeek. I see a Chinese answer: an open-source company whose API costs an order of magnitude less, and which is not focused on maximizing its own interests, has let many people around the world use advanced AI.
If the outcome of AI development is to be optimistic, it should be because we find ways to empower more ordinary people, not because the technology gets concentrated in the hands of a few companies.
The second issue is panic. After AI arrived, many of us who spent 20 years mastering cognitive work suddenly feel outmatched. Think about radiology or pathology: reading scans and judging lesions. AI can draw on experience and data millions of times larger than any individual’s, so its accuracy can surpass a human expert’s.
My suggestion is this. If you browse Bilibili, there is a creator who focuses on Journey to the West themed music; I recommend him. The channel is called “漫游会议室.” He has written more than a dozen AI-made songs and turned them into music videos. People joke that it is “AAA music,” like AAA games. What is AAA music? AI writes the lyrics, AI composes the music, AI shoots the video, and it goes straight online.
A video like this used to take a team months. Now one person can publish several each week. I think the quality is a milestone for Chinese AI-created content. If you check out his work, you will see the upside. In the past you needed years of training in composition, performance, and filming; one person could not do it all. AI gives people who never had the time or chance a real boost in capability.
So looking ahead, AI can let more people with taste and artistic judgment create far more content that is richer and better than what we have today.
For other founders or anyone who is unsure, my advice is to embrace the newest and strongest AI. You may feel a more optimistic future is possible.
If you only scroll through WeChat public accounts and read alarmist headlines, you might think we are in a terrible era. But if AI keeps advancing and we solve the first challenge I mentioned, it could bring a time when everyone has more free hours, does not spend most of life on survival, and can use their skills freely. That is possible.
One small personal tip. Take care of your health and watch how things unfold. As Bicheng said, health problems are still hard to solve and may take time. Keep your body in good shape and let the experts see if they can make us healthier than we are today.
Victor Chen: Thank you all for sharing. Let me add a few points about current bottlenecks in AI.
If we only look at today’s AI, there are many limits. For example, today’s AI does not have stable, cross domain general intelligence like humans. It can excel in very complex fields, yet perform strangely poorly on tasks humans find simple. Its intelligence is incomplete. You could call it jagged intelligence.
Why does this happen? After training, the model parameters are fixed. AI cannot keep learning and evolving in the real world the way people do. Think about the human brain. It provides core learning algorithms and a few instincts. The rest of our knowledge comes from lifelong learning.
So we need AI to have stable, generalizable learning algorithms and to build more links to the real world. That means more multimodality and embodied intelligence. Let the model learn continually and autonomously in realistic environments, the way people do.
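A toy contrast between today’s frozen-weights deployment and the continual loop described here, as a minimal sketch on a linear model; it illustrates the idea only, not any real system:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)        # weights frozen at "training time"

def predict(x):
    return x @ w              # deployed model: w never changes

def continual_step(w, x, y, lr=0.01):
    """One online SGD update per real-world interaction."""
    err = x @ w - y
    return w - lr * err * x

# Lifelong loop: observe, act, get feedback, update.
for _ in range(1000):
    x = rng.normal(size=3)
    y = x.sum()               # stand-in for environment feedback
    w = continual_step(w, x, y)

print(w)                      # drifts toward the environment's true mapping [1, 1, 1]
```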
Those are near term issues. If we zoom out to a 10 to 20 year horizon for AGI, today’s problems can be solved. Technology tends to accelerate. Three years ago, when ChatGPT first appeared, it often got grade school math wrong. Now it can win a gold medal at the International Mathematical Olympiad. Once we cross certain thresholds, the progress becomes discontinuous.
We should stay optimistic about technological progress. I will say it again. I am an optimist about technology. We are still in the first half of this AI revolution, maybe even the early part of the first half. Thank you.
Wang Jian (Moderator): Thank you all. I felt excited listening to this. Feng Ji gave a great conclusion. He emphasized that AI should help others. Helping others should be a main theme.
In the past people talked about who AI would replace. Today we talked about who AI can help. That is what I have always believed. If AI is clear about who it helps, it will be on the right path.
When Hinton won the Nobel Prize in Physics last year, one report used a subtitle that said this marked AI’s “penicillin and X-ray moment.” Imagine what penicillin and X-rays did for human health. I think that points to a key goal for AI.
Today you are the Six Little Dragons. Looking at the stage, I actually see one dragon. I believe this dragon will make greater contributions to China and to global technology in the future. Thank you.


