MagicWorld: Interactive Geometry-driven Video World Exploration

1College of Computer Science and Technology, Zhejiang University
2vivo Mobile Communication Co., Ltd
3National University of Singapore
* Corresponding authors

Abstract

Recent interactive video world model methods generate scene evolution conditioned on user instructions. Although they achieve impressive results, two key limitations remain. First, they fail to fully exploit the correspondence between instruction-driven scene motion and the underlying 3D geometry, which results in structural instability under viewpoint changes. Second, they easily forget historical information during multi-step interaction, leading to error accumulation and progressive drift in scene semantics and structure. To address these issues, we propose MagicWorld, an interactive video world model that integrates 3D geometric priors and historical retrieval. MagicWorld starts from a single scene image, employs user actions to drive dynamic scene evolution, and autoregressively synthesizes continuous scenes. We introduce the Action-Guided 3D Geometry Module (AG3D), which constructs a point cloud from the first frame of each interaction and the corresponding action, providing explicit geometric constraints for viewpoint transitions and thereby improving structural consistency. We further propose a History Cache Retrieval (HCR) mechanism, which retrieves relevant historical frames during generation and injects them as conditioning signals, helping the model utilize past scene information and mitigate error accumulation. Experimental results demonstrate that MagicWorld achieves notable improvements in scene stability and continuity across interaction iterations.
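To make the HCR mechanism concrete, the sketch below shows one plausible way a history cache with top-3 retrieval could be implemented. The cache structure and the use of cosine similarity over flattened latents are assumptions for illustration only; they are not the paper's released design.

```python
import torch
import torch.nn.functional as F

class HistoryCache:
    """Illustrative history cache for HCR (structure and scoring are assumptions)."""

    def __init__(self, top_k: int = 3):
        self.top_k = top_k
        self.latents: list[torch.Tensor] = []  # one latent per stored historical frame

    def add(self, latent: torch.Tensor) -> None:
        """Store a generated frame latent after each interaction step."""
        self.latents.append(latent.detach())

    def retrieve(self, query: torch.Tensor) -> torch.Tensor:
        """Return the top-k cached latents most similar to the current frame latent.

        Similarity here is cosine similarity over flattened latents; the actual
        scoring used by MagicWorld may differ.
        """
        if not self.latents:
            return query.new_zeros((0, *query.shape))
        q = query.flatten().unsqueeze(0)                          # (1, D)
        keys = torch.stack([l.flatten() for l in self.latents])   # (N, D)
        scores = F.cosine_similarity(q, keys)                     # (N,)
        k = min(self.top_k, len(self.latents))
        idx = scores.topk(k).indices
        return torch.stack([self.latents[i] for i in idx])        # (k, ...) history references
```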

Method


Overview of the MagicWorld inference pipeline. Given a single scene image and keyboard actions, MagicWorld interactively generates a dynamic world. At each interaction step, the Action-Guided 3D Geometry Module produces an action-driven point cloud, which is rendered into a point-cloud video and concatenated with the first frame of the current interaction and noise as input to the camera-based video DiT. Meanwhile, the latent of the current frame is used to retrieve the three most similar historical latents from the cache, which are concatenated as history references. The generated frames are finally decoded into a video, and the history cache is updated accordingly.
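The following sketch ties the steps in the caption together as a single interaction step. The component names (`ag3d_point_cloud`, `render_point_cloud_video`, `video_dit`, `vae`) and the conditioning interface are hypothetical stand-ins, not the released implementation.

```python
import torch

def interaction_step(first_frame, action, history_cache, vae, video_dit,
                     ag3d_point_cloud, render_point_cloud_video, num_frames=16):
    """One MagicWorld interaction step (illustrative sketch; component names are hypothetical).

    first_frame: the first frame of the current interaction (image tensor)
    action:      the user's keyboard action for this step
    """
    # 1. AG3D: build an action-driven point cloud from the first frame and the action,
    #    then render it into a point-cloud video encoding the viewpoint transition.
    point_cloud = ag3d_point_cloud(first_frame, action)
    pc_video = render_point_cloud_video(point_cloud, action, num_frames)

    # 2. Encode the conditioning inputs into the latent space of the video DiT.
    frame_latent = vae.encode(first_frame)
    pc_latent = vae.encode(pc_video)
    noise = torch.randn_like(pc_latent)

    # 3. HCR: retrieve the most similar historical latents as history references.
    history = history_cache.retrieve(frame_latent)

    # 4. Camera-based video DiT: denoise with the point-cloud video, first frame,
    #    and retrieved history concatenated as conditioning signals.
    generated_latents = video_dit(noise, cond=[pc_latent, frame_latent], history=history)

    # 5. Decode the generated frames and update the history cache.
    video = vae.decode(generated_latents)
    history_cache.add(generated_latents[-1])
    return video
```

Used autoregressively, the last generated frame of one interaction would serve as the first frame of the next, so the cache accumulates references across the whole session.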