- The Hangzhou company is building interactive world models using a fully data-driven approach and domestic AI chips
- Moxin says its technology has already entered film, autonomous driving and robotics applications
Moxin Technology (魔芯科技), an embodied AI and world model startup led by a 25-year-old Zhejiang University doctoral student, has raised just shy of 100 million yuan ($14.72 million) in a Pre-Series A++ funding round, as investors continue pouring money into next-generation spatial intelligence systems.
The latest financing was backed by Fullhan Microelectronics, Lenovo Capital-affiliated venture fund Legend Star and Zhejiang Venture Capital, with participation from existing shareholders.
Headquartered in Hangzhou, the company said a Series A financing round is already in the pipeline.
The raise comes just months after Moxin announced a separate Pre-Series A+ round led by Hubble Technology Investment, Huawei’s investment affiliate.
Hubble now holds a 7.98% stake, making it the company’s largest external shareholder; that round valued the startup at 840 million yuan post-money.

Another ‘Zheda’ alumnus
Founded in 2021, Moxin is led by Chief Executive Chen Tianrun, a PhD student at the prestigious Zhejiang University, or Zheda for short, studying under Chinese computer graphics pioneer Pan Yunhe, a member of the Chinese Academy of Engineering and a former president of the university.
While global competitors pursue different technical routes for world models — including 3D Gaussian splatting representations at Fei-Fei Li’s World Labs, JEPA-based architectures at Yann LeCun’s AMI Labs, and real-time interactive systems such as Google DeepMind’s Genie 3 — Moxin said it chose a fully implicit, end-to-end data-driven approach that avoids handcrafted intermediate representations.
To support that strategy, the company built a petabyte-scale 3D data library covering dynamic natural environments and complex physical scenes.
It also developed internal data-generation tools and hired designers and digital artists to expand its content pipeline.
‘Scaling Law’ for spatial intelligence?
Moxin said its research led the company to identify what it describes as the “Scaling Law” for spatial intelligence, in which reconstruction quality and spatial consistency improve predictably as datasets, scene diversity, compute resources and model parameters scale simultaneously.
The company said it later validated the pattern through research on feed-forward 4D foundation models, in which systems trained on datasets exceeding one million samples, with more than 10 billion parameters, began demonstrating long-horizon spatial consistency similar to early scaling patterns seen in large language models.
Based on those findings, Moxin launched KOKONI-World, which it described as China’s first multi-minute interactive world model trained entirely on domestic computing infrastructure powered by Huawei Ascend 910C chips.
The model uses 14 billion parameters, roughly ten times more than some competing systems, and adopts a cascaded knowledge-distillation framework to reduce inference cost and latency.
The architecture also incorporates camera-aware memory structures designed to preserve geometric consistency and visual stability during viewpoint changes in virtual environments.
KOKONI-World
According to the company, KOKONI-World can generate continuous predictions for up to 2,000 frames, or roughly two minutes, while supporting real-time 1080P interactive output and six-degree-of-freedom camera control.
A prototype debuted at Huawei’s Hangzhou training center in December 2025.

Moxin said it has already deployed its technology in film production, digital twins, autonomous driving and embodied robotics.
In entertainment applications, the company said it has reduced video generation costs to below 0.1 yuan ($0.01) per second for projects including AI-generated short dramas, immersive tourism experiences and 3D animations of traditional paintings.
Beyond entertainment
In autonomous driving, Moxin is working with automakers and expects commercial deployment next year. Its edge computing systems can reportedly run on NPUs from Rockchip (瑞芯微电子) and Horizon Robotics (地平线机器人) without relying on high-end GPUs.
The company is also developing embodied AI systems aimed at improving robotic spatial perception and navigation, while opening parts of its large-scale 3D data assets to outside AI research teams through collaborative partnerships.
