AI prompts
base on StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text <div align="center">
<h1>
StreamingSVD
</h1>
<h3>An Enhanced Autoregressive Method Turning SVD Into A High-Quality Long Video Generator </h3>
<strong> <a href="#news"> 📰 News </a> | <a href="#results"> ✨ Results </a> | <a href="#Setup">🔧 Setup</a> | <a href="#Inference">🚀 Inference</a> </strong>
</div>
<br>
<!---
[![X (formerly Twitter) URL](https://img.shields.io/twitter/url?url=https%3A%2F%2Fx.com%2Fhumphrey_shi%2Fstatus%2F1806731418686591142)](https://x.com/humphrey_shi/status/1806731418686591142)--->
[![Project Page](https://img.shields.io/badge/Project-Website-orange)](https://streamingt2v.github.io/) [![YouTube](https://badges.aleen42.com/src/youtube.svg)](https://youtu.be/md4lp42vOGU)
<h2 id="meet-streamingi2v"> 🔥 Meet StreamingSVD - A StreamingT2V Method </h2>
StreamingSVD is an advanced autoregressive technique for text-to-video and image-to-video generation, generating long hiqh-quality videos with rich motion dynamics, turning [SVD](https://stability.ai/research/stable-video-diffusion-scaling-latent-video-diffusion-models-to-large-datasets) into a long video generator. Our method ensures temporal consistency throughout the video, aligns closely to the input text/image, and maintains high frame-level image quality. Our demonstrations include successful examples of videos up to 200 frames, spanning 8 seconds, and can be extended for even longer durations.
The effectiveness of the underlying autoregressive approach is not limited to the specific base model used, indicating that improvements in base models can yield even higher-quality videos. StreamingSVD is part of the [StreamingT2V](https://arxiv.org/abs/2403.14773) family. Another successful implementation is [StreamingModelscope](https://github.com/Picsart-AI-Research/StreamingT2V/tree/StreamingModelscope), which is turning [Modelscope](https://arxiv.org/abs/2308.06571) into a long-video generator. This approach enables to generate videos of up to 2 minutes length, featuring high motion amount and no stagnation.
<h2 id="news">📰 NEWS</h2>
* [11/28/2024] Memory optimized version released!</br>
* [08/30/2024] Code and model released! The model weights are available on <a href="https://huggingface.co/PAIR/StreamingSVD">🤗HuggingFace</a>
<h2 id="results">✨ Results</h2>
Detailed results can be found in the [Project page](https://streamingt2v.github.io/).
## Requirements
Our code requires 60 GB of VRAM in the default configuration when generating 200 frames. The memory-optimized version significantly reduces this to 24 GB of VRAM but operates approximately 50% slower than the default. To further reduce memory usage, consider lowering the number of frames or enabling randomized blending. Our code was tested on linux, using Python 3.9 and CUDA >= 11.8.
<h2 id="Setup">🔧 Setup</h2>
1. Clone this repository and install requirements using CUDA >= 11.8:
``` shell
git clone https://github.com/Picsart-AI-Research/StreamingT2V.git
cd StreamingT2V/
virtualenv -p python3.9 venv
source venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
```
2. Make sure [FFMPEG](https://www.ffmpeg.org) is installed.
<h2 id="Inference"> 🚀 Inference </h2>
## Image-To-Video
To run the entire pipeline consisting of image-to-video, video enhancement (including our randomized blending) and video-frame interpolation do from the `StreamingT2V` folder:
``` shell
cd code
python inference_i2v.py --input $INPUT --output $OUTPUT
```
`$INPUT` must be the path to an image file or a folder containing images. Each image is expected to have the aspect ratio 16:9.
`$OUTPUT` must be the path to a folder where the results will be stored.
### Adjust Hyperparameters
* number of generated frames
Add `--num_frames $FRAMES` to the call to define the number of frames to be generated. Default value: `$FRAMES=200`
* use randomized blending
Add `--use_randomized_blending $RB` to the call to define whether to use randomized blending. Default value: `$RB=False`. When using randomized blending, the recommended values for `chunk_size` and `overlap_size` parameters are `--chunk_size 38` and `--overlap_size 12`, respectively. Please be aware that randomized blending will slow down the generation process, so try to avoid it if you have enough GPU memory.
* output FPS
Add `--out_fps $FPS` to the call to define the FPS of the output video. Default value: `$FPS=24`
Use `--use_memopt` to enable memory optimizations for hardware with 24GB VRAM. If using a previously cloned repository, update the environment and delete the `code/checkpoint/i2v_enhance` folder to ensure the correct version is used.
## 💡 Future Plans
* Technical report describing StreamingSVD.
* Release of StreamingSVD for text-to-video.
* <s>VRAM memory reduction.</s>
## MAWE (Motion Aware Warp Error)
Our proposed **Motion Aware Warp Error** (see our [paper](https://arxiv.org/abs/2403.14773)) is provided [here](https://github.com/Picsart-AI-Research/StreamingT2V/tree/StreamingModelscope?tab=readme-ov-file#mawe-motion-aware-warp-error).
## StreamingModelscope
The code for the StreamingT2V model based on Modelscope, as described in our [paper](https://arxiv.org/abs/2403.14773), can be now found [here](https://github.com/Picsart-AI-Research/StreamingT2V/tree/StreamingModelscope).
## License
Our code and model is published under the MIT license.
We include codes and model weights of [SVD](https://github.com/Stability-AI/generative-models), [EMA-VFI](https://github.com/MCG-NJU/EMA-VFI) and [I2VGen-XL](https://i2vgen-xl.github.io). Please refer to their original licenses regarding their codes and weights. Due to these dependencies, StreamingSVD can be used only for non-commercial, research purposes.
## Acknowledgments
* [SVD](https://github.com/Stability-AI/generative-models): An image-to-video method.
* [Align your steps](https://research.nvidia.com/labs/toronto-ai/AlignYourSteps): A method for optimizing sampling schedules.
* [I2VGen-XL](https://i2vgen-xl.github.io): An image-to-video method.
* [EMA-VFI](https://github.com/MCG-NJU/EMA-VFI): A state-of-the-art video-frame interpolation method.
* [Diffusers](https://github.com/huggingface/diffusers): A framework for diffusion models.
## BibTex
If you use our work in your research, please cite our publication:
```
@article{henschel2024streamingt2v,
title={StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text},
author={Henschel, Roberto and Khachatryan, Levon and Hayrapetyan, Daniil and Poghosyan, Hayk and Tadevosyan, Vahram and Wang, Zhangyang and Navasardyan, Shant and Shi, Humphrey},
journal={arXiv preprint arXiv:2403.14773},
year={2024}
}
```
", Assign "at most 3 tags" to the expected json: {"id":"9177","tags":[]} "only from the tags list I provide: [{"id":77,"name":"3d"},{"id":89,"name":"agent"},{"id":17,"name":"ai"},{"id":54,"name":"algorithm"},{"id":24,"name":"api"},{"id":44,"name":"authentication"},{"id":3,"name":"aws"},{"id":27,"name":"backend"},{"id":60,"name":"benchmark"},{"id":72,"name":"best-practices"},{"id":39,"name":"bitcoin"},{"id":37,"name":"blockchain"},{"id":1,"name":"blog"},{"id":45,"name":"bundler"},{"id":58,"name":"cache"},{"id":21,"name":"chat"},{"id":49,"name":"cicd"},{"id":4,"name":"cli"},{"id":64,"name":"cloud-native"},{"id":48,"name":"cms"},{"id":61,"name":"compiler"},{"id":68,"name":"containerization"},{"id":92,"name":"crm"},{"id":34,"name":"data"},{"id":47,"name":"database"},{"id":8,"name":"declarative-gui "},{"id":9,"name":"deploy-tool"},{"id":53,"name":"desktop-app"},{"id":6,"name":"dev-exp-lib"},{"id":59,"name":"dev-tool"},{"id":13,"name":"ecommerce"},{"id":26,"name":"editor"},{"id":66,"name":"emulator"},{"id":62,"name":"filesystem"},{"id":80,"name":"finance"},{"id":15,"name":"firmware"},{"id":73,"name":"for-fun"},{"id":2,"name":"framework"},{"id":11,"name":"frontend"},{"id":22,"name":"game"},{"id":81,"name":"game-engine "},{"id":23,"name":"graphql"},{"id":84,"name":"gui"},{"id":91,"name":"http"},{"id":5,"name":"http-client"},{"id":51,"name":"iac"},{"id":30,"name":"ide"},{"id":78,"name":"iot"},{"id":40,"name":"json"},{"id":83,"name":"julian"},{"id":38,"name":"k8s"},{"id":31,"name":"language"},{"id":10,"name":"learning-resource"},{"id":33,"name":"lib"},{"id":41,"name":"linter"},{"id":28,"name":"lms"},{"id":16,"name":"logging"},{"id":76,"name":"low-code"},{"id":90,"name":"message-queue"},{"id":42,"name":"mobile-app"},{"id":18,"name":"monitoring"},{"id":36,"name":"networking"},{"id":7,"name":"node-version"},{"id":55,"name":"nosql"},{"id":57,"name":"observability"},{"id":46,"name":"orm"},{"id":52,"name":"os"},{"id":14,"name":"parser"},{"id":74,"name":"react"},{"id":82,"name":"real-time"},{"id":56,"name":"robot"},{"id":65,"name":"runtime"},{"id":32,"name":"sdk"},{"id":71,"name":"search"},{"id":63,"name":"secrets"},{"id":25,"name":"security"},{"id":85,"name":"server"},{"id":86,"name":"serverless"},{"id":70,"name":"storage"},{"id":75,"name":"system-design"},{"id":79,"name":"terminal"},{"id":29,"name":"testing"},{"id":12,"name":"ui"},{"id":50,"name":"ux"},{"id":88,"name":"video"},{"id":20,"name":"web-app"},{"id":35,"name":"web-server"},{"id":43,"name":"webassembly"},{"id":69,"name":"workflow"},{"id":87,"name":"yaml"}]" returns me the "expected json"