---
license: gpl-3.0
---
# Frame In-N-Out: Unbounded Controllable Image-to-Video Generation
## Introduction
Frame In-N-Out is a controllable Image-to-Video Diffusion Transformer in which objects can enter or exit the scene along user-specified motion trajectories and identity (ID) references. Our method introduces a new dataset curation pipeline, an evaluation protocol, and a motion-controllable, identity-preserving, unbounded-canvas Video Diffusion Transformer to achieve Frame In and Frame Out in the cinematic domain.
## Model Zoo 🤗
| Model | Description | Hugging Face |
| ----- | ----------- | ------------ |
| CogVideoX-I2V-5B V1.0 (Stage 1 - Motion Control) | Paper Weight v1.0 | [Download](https://huggingface.co/uva-cv-lab/FrameINO_CogVideoX_Stage1_Motion_v1.0) |
| CogVideoX-I2V-5B (Stage 2 - Motion + In-N-Out Control) | Paper Weight v1.0 | [Download](https://huggingface.co/uva-cv-lab/FrameINO_CogVideoX_Stage2_MotionINO_v1.0) |
| Wan2.2-TI2V-5B (Stage 1 - Motion Control) | New Weight v1.5 on 704P | [Download](https://huggingface.co/uva-cv-lab/FrameINO_Wan2.2_5B_Stage1_Motion_v1.5) |
| Wan2.2-TI2V-5B (Stage 2 - Motion + In-N-Out Control) | New Weight v1.5 on 704P | [Download](https://huggingface.co/uva-cv-lab/FrameINO_Wan2.2_5B_Stage2_MotionINO_v1.5) |
| Wan2.2-TI2V-5B (Stage 2 - Motion + In-N-Out Control) | New Weight v1.6 on Arbitrary Resolution | [Download](https://huggingface.co/uva-cv-lab/FrameINO_Wan2.2_5B_Stage2_MotionINO_v1.6) |
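The checkpoints above can be fetched programmatically with `huggingface_hub`'s `snapshot_download`. A minimal sketch follows; the repo ids are copied from the Model Zoo table, while the short dictionary keys and the `./checkpoints` cache root are illustrative assumptions, not part of the release:

```python
# Repo ids copied from the Model Zoo table above; keys and cache root are assumptions.
FRAME_INO_REPOS = {
    "cogvideox_stage1_motion_v1.0": "uva-cv-lab/FrameINO_CogVideoX_Stage1_Motion_v1.0",
    "cogvideox_stage2_ino_v1.0": "uva-cv-lab/FrameINO_CogVideoX_Stage2_MotionINO_v1.0",
    "wan2.2_stage1_motion_v1.5": "uva-cv-lab/FrameINO_Wan2.2_5B_Stage1_Motion_v1.5",
    "wan2.2_stage2_ino_v1.5": "uva-cv-lab/FrameINO_Wan2.2_5B_Stage2_MotionINO_v1.5",
    "wan2.2_stage2_ino_v1.6": "uva-cv-lab/FrameINO_Wan2.2_5B_Stage2_MotionINO_v1.6",
}

def fetch_checkpoint(key: str, cache_root: str = "./checkpoints") -> str:
    """Download one checkpoint snapshot and return the local directory it lands in."""
    from huggingface_hub import snapshot_download  # lazy import; only needed at download time
    repo_id = FRAME_INO_REPOS[key]
    return snapshot_download(repo_id=repo_id,
                             local_dir=f"{cache_root}/{repo_id.split('/')[-1]}")
```

For example, `fetch_checkpoint("wan2.2_stage2_ino_v1.6")` downloads the arbitrary-resolution v1.6 weights (network access and the `huggingface_hub` package are required at call time).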
## Data
This repository provides a mini sample of 300 instances for trial training runs. It also contains our testing benchmark for Frame In and Frame Out. See our [GitHub repository](https://github.com/UVA-Computer-Vision-Lab/FrameINO) for more details.
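The dataset snapshot can be pulled the same way as the model weights, using `repo_type="dataset"`. A minimal sketch, assuming a placeholder repo id (`uva-cv-lab/FrameINO_Data` below is hypothetical; substitute the actual dataset repo id from this page):

```python
def dataset_local_dir(repo_id: str, root: str = "./data") -> str:
    """Local directory a dataset snapshot is materialized into (repo name under root)."""
    return f"{root}/{repo_id.split('/')[-1]}"

def fetch_frameino_data(repo_id: str, root: str = "./data") -> str:
    """Download the mini training sample + Frame In/Out benchmark; returns the local path."""
    from huggingface_hub import snapshot_download  # lazy import; only needed at download time
    return snapshot_download(repo_id=repo_id, repo_type="dataset",
                             local_dir=dataset_local_dir(repo_id, root))

# Usage (network required; repo id is a placeholder assumption):
# path = fetch_frameino_data("uva-cv-lab/FrameINO_Data")
```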
## 📚 Citation
```bibtex
@article{wang2025frame,
  title={Frame In-N-Out: Unbounded Controllable Image-to-Video Generation},
  author={Wang, Boyang and Chen, Xuweiyi and Gadelha, Matheus and Cheng, Zezhou},
  journal={arXiv preprint arXiv:2505.21491},
  year={2025}
}
```