Xiuli Bi · Jianfei Yuan · Bo Liu · Yong Zhang · Xiaodong Cun · Chi-Man Pun · Bin Xiao
- [17/3/2025] We released the Mobius code based on CogVideoX. We also released the Mobius-VC2 code based on VideoCrafter2, along with more examples!
- [27/2/2025] We released the Mobius paper. Mobius is a novel method for generating seamlessly looping videos directly from text descriptions, without any user annotations.
```bash
git clone git@github.com:YisuiTT/Mobius.git
cd Mobius
```
2. Start with CogVideoX
```bash
conda create -n mobius python=3.10
conda activate mobius
pip install -r requirements.txt
```
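Before downloading the weights, a quick sanity check such as the one below (assuming a CUDA-capable GPU) can confirm the environment is usable:

```bash
# Optional check: verify that PyTorch is installed and can see the GPU.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```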
| Model | Resolution | Checkpoint |
|---|---|---|
| CogVideoX-5b | 720x480 | Hugging Face |
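As one way to fetch the weights, the Hugging Face CLI can download them into a local directory; the `THUDM/CogVideoX-5b` repo id and the target path below are assumptions to adapt to your setup:

```bash
# Example (assumed repo id): download the CogVideoX-5b weights into a local models directory.
huggingface-cli download THUDM/CogVideoX-5b --local-dir /path/to/your_models_path/CogVideoX-5b
```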
```bash
python mobius_cli_demo.py \
  --prompts_path "./prompts/samples.txt" \
  --model_path /path/to/your_models_path/CogVideoX-5b \
  --output_path "./results/samples" \
  --shift_skip 6 \
  --frame_invariance_decoding
```
Parameters (a usage sketch follows this list):
- `prompts_path` (str): Path to the text file describing the video(s) to generate.
- `model_path` (str): Path to the pre-trained model to use.
- `output_path` (str): Path where the generated videos will be saved.
- `shift_skip` (int): Skip step of the latent shift.
- `frame_invariance_decoding` (bool): Enable or disable frame-invariance decoding.
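The sketch below writes a custom prompts file and runs the demo end to end; the one-prompt-per-line file format and the prompt texts themselves are assumptions for illustration, not taken from the released samples:

```bash
# Sketch: write a custom prompts file (assumed one prompt per line) and generate looping videos.
mkdir -p results/my_samples
cat > prompts/my_prompts.txt << 'EOF'
A red panda dozing on a mossy branch, gentle breeze, cinematic lighting
Ocean waves rolling onto a tropical beach at sunset
EOF

python mobius_cli_demo.py \
  --prompts_path "./prompts/my_prompts.txt" \
  --model_path /path/to/your_models_path/CogVideoX-5b \
  --output_path "./results/my_samples" \
  --shift_skip 6 \
  --frame_invariance_decoding
```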
```bash
python gradio_web_demo.py
```
Tips:
Set the following environment variable in your system (see the example below):
- `MODEL_PATH = your_models_path`
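For example (assuming `MODEL_PATH` should point at the directory that contains the downloaded weights; check `gradio_web_demo.py` for the exact layout it expects):

```bash
# Set MODEL_PATH as described in the tips above, then launch the Gradio interface.
export MODEL_PATH=/path/to/your_models_path
python gradio_web_demo.py
```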
3. Start with VideoCrafter2
```bash
conda create -n mobius_vc2 python=3.8.5
conda activate mobius_vc2
pip install torch==2.4.1 torchvision==0.19.1 torchaudio==2.4.1 --index-url https://download.pytorch.org/whl/cu121
pip install -r requirements_vc2.txt
```
| Model | Resolution | Checkpoint |
|---|---|---|
| VideoCrafter2 (Text2Video) | 512x320 | Hugging Face |
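As with CogVideoX, the checkpoint can be fetched with the Hugging Face CLI; the `VideoCrafter/VideoCrafter2` repo id below is an assumption, while the `model.ckpt` filename matches the command in the next step:

```bash
# Example (assumed repo id): download the VideoCrafter2 Text2Video checkpoint.
huggingface-cli download VideoCrafter/VideoCrafter2 model.ckpt \
  --local-dir /path/to/your_models_path/VideoCrafter2
```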
```bash
python mobius_vc2_demo.py \
  --prompt_file "prompts/samples.txt" \
  --ckpt_path /path/to/your_models_path/VideoCrafter2/model.ckpt \
  --savedir "results_vc2/samples" \
  --shift_skip 9
```
Parameters:
- `prompt_file` (str): Path to the text file describing the video(s) to generate.
- `ckpt_path` (str): Path to the pre-trained model checkpoint to use.
- `savedir` (str): Path where the generated videos will be saved.
- `shift_skip` (int): Skip step of the latent shift.
TODO:
- Release the paper.
- Release the code based on CogVideoX-5b.
- Release the code based on VideoCrafter2.
- Release the longer-looping and RoPE-interpolation code.
If you use this code for your research, please cite the following work:
```bibtex
@article{2025mobius,
  author  = {Bi, Xiuli and Yuan, Jianfei and Liu, Bo and Zhang, Yong and Cun, Xiaodong and Pun, Chi-Man and Xiao, Bin},
  title   = {Mobius: Text to Seamless Looping Video Generation via Latent Shift},
  journal = {arXiv preprint},
  year    = {2025},
}
```
This project is built on top of the following repositories: CogVideoX and VideoCrafter2.
Thanks to the authors for sharing their awesome codebases!
This work is licensed under the MIT License.