arXiv:2602.11564

LUVE: Latent-Cascaded Ultra-High-Resolution Video Generation with Dual Frequency Experts

Published on Feb 12

Abstract

AI-generated summary: A latent-cascaded ultra-high-resolution video generation framework that combines low-resolution motion modeling, latent-space upsampling, and dual-frequency-expert content refinement.

Recent advances in video diffusion models have significantly improved visual quality, yet ultra-high-resolution (UHR) video generation remains a formidable challenge due to the compounded difficulties of motion modeling, semantic planning, and detail synthesis. To address these limitations, we propose LUVE, a Latent-cascaded UHR Video generation framework built upon dual frequency Experts. LUVE employs a three-stage architecture comprising low-resolution motion generation for motion-consistent latent synthesis; video latent upsampling, which increases resolution directly in the latent space to mitigate memory and computational overhead; and high-resolution content refinement, which integrates low-frequency and high-frequency experts to jointly enhance semantic coherence and fine-grained detail. Extensive experiments demonstrate that LUVE achieves superior photorealism and content fidelity in UHR video generation, and comprehensive ablation studies further validate the effectiveness of each component. The project is available at https://unicornanrocinu.github.io/LUVE_web/.
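
To make the three-stage cascade concrete, here is a minimal PyTorch sketch of the pipeline as the abstract describes it. Every name below (motion_model, upsample_latents, split_frequencies, the two experts) is an illustrative assumption rather than the authors' released code, and the blur-based band split is only one plausible way to realize dual frequency experts.

```python
# Minimal sketch of the three-stage LUVE cascade described in the abstract.
# All module and function names are illustrative assumptions, not the
# authors' released API; the blur-based frequency split is one plausible
# reading of "dual frequency experts".
import torch
import torch.nn as nn
import torch.nn.functional as F


def upsample_latents(z: torch.Tensor, scale: int = 4) -> torch.Tensor:
    """Stage 2: upsample video latents spatially while staying in latent
    space, avoiding a decode-to-pixels round trip."""
    _, _, t, h, w = z.shape
    return F.interpolate(z, size=(t, h * scale, w * scale),
                         mode="trilinear", align_corners=False)


def split_frequencies(z: torch.Tensor, kernel: int = 5):
    """Split a latent into low- and high-frequency bands with a spatial blur.
    The blurred band carries coarse semantics; the residual carries detail."""
    pad = kernel // 2
    low = F.avg_pool3d(z, kernel_size=(1, kernel, kernel),
                       stride=1, padding=(0, pad, pad))
    return low, z - low


def luve_generate(prompt_emb: torch.Tensor,
                  motion_model: nn.Module,
                  low_expert: nn.Module,
                  high_expert: nn.Module) -> torch.Tensor:
    # Stage 1: motion-consistent latent synthesis at low resolution.
    z_lr = motion_model(prompt_emb)      # (B, C, T, h, w)
    # Stage 2: resolution upsampling performed directly in latent space.
    z_hr = upsample_latents(z_lr)        # (B, C, T, H, W)
    # Stage 3: the low-frequency expert refines semantic coherence, the
    # high-frequency expert refines fine detail; recombine the two bands.
    low, high = split_frequencies(z_hr)
    return low_expert(low) + high_expert(high)  # decode with the video VAE afterwards
```

The point of stage 2 is that the upsampling operates on latents, which a video VAE typically compresses many times over in each spatial dimension, so the memory and compute savings compound across the frames of a UHR video.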

Community

Amazing paper, people should read it
