Alibaba launches interactive world model as AI race shifts beyond video

  • “HappyOyster” enables real-time generation and manipulation of 3D environments
  • Release signals move from static video tools toward interactive AI content creation

Alibaba Group has unveiled a new AI system capable of generating and modifying virtual worlds in real time, marking its latest push into interactive content-creation tools.

The product, called HappyOyster, was released on April 16 by the tech titan’s ATH (Alibaba Token Hub) division and is currently in limited closed beta testing.

Unlike conventional AI video generators that rely on prompt-based rendering workflows, the system allows users to continuously adjust scenes as they are being created.

The shift lets creators modify visual elements, perspectives and motion dynamically, removing the need to restart the rendering process with each iteration and shortening production cycles.

HappyOyster is designed to generate dynamic three-dimensional environments for use in film production and game development.

In filmmaking, it can support rapid prototyping and previsualization through text or image inputs, while in gaming it allows developers to build interactive environments at early stages with lower upfront design costs.

The system is part of Alibaba’s broader effort to expand beyond video generation into so-called world models—AI systems that simulate environments and enable interaction within them.

It sits alongside the company’s earlier viral video model HappyHorse, developed under the same unit.

Alibaba’s ATH unit, established in March and led by Chief Executive Eddie Wu, consolidates the company’s AI model research, platform services and application development into a unified structure.

The move comes as competition intensifies across generative AI and multimodal systems, pitting Alibaba against established rivals Tencent, ByteDance and SenseTime, as well as several startups, in the race to produce a technologically and commercially viable world model.