# TokenFlow: Consistent Diffusion Features for Consistent Video Editing (ICLR 2024)
## [<a href="https://diffusion-tokenflow.github.io/" target="_blank">Project Page</a>]
[![arXiv](https://img.shields.io/badge/arXiv-TokenFlow-b31b1b.svg)](https://arxiv.org/abs/2307.10373) [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/weizmannscience/tokenflow)
![Pytorch](https://img.shields.io/badge/PyTorch->=1.10.0-Red?logo=pytorch)
https://github.com/omerbt/TokenFlow/assets/52277000/93dccd63-7e9a-4540-a941-31962361b0bb
**TokenFlow** is a framework that enables consistent video editing, using a pre-trained text-to-image diffusion model, without any further training or finetuning.
### Abstract
>The generative AI revolution has recently expanded to videos. Nevertheless, current state-of-the-art video models are still lagging behind image models in terms of visual quality and user control over the generated content. In this work, we present a framework that harnesses the power of a text-to-image diffusion model for the task of text-driven video editing. Specifically, given a source video and a target text-prompt, our method generates a high-quality video that adheres to the target text, while preserving the spatial layout and dynamics of the input video. Our method is based on our key observation that consistency in the edited video can be obtained by enforcing consistency in the diffusion feature space. We achieve this by explicitly propagating diffusion features based on inter-frame correspondences, readily available in the model. Thus, our framework does not require any training or fine-tuning, and can work in conjunction with any off-the-shelf text-to-image editing method. We demonstrate state-of-the-art editing results on a variety of real-world videos.
For more details, see the [project webpage](https://diffusion-tokenflow.github.io).
## Sample results
<img src="assets/videos.gif">
## Environment
```
conda create -n tokenflow python=3.9
conda activate tokenflow
pip install -r requirements.txt
```
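To quickly sanity-check the installation, you can confirm that PyTorch meets the minimum version above and that a GPU is visible (CUDA availability depends on your local driver setup):
```
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```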
## Preprocess
Preprocess your video by running the following command:
```
python preprocess.py --data_path <data/myvideo.mp4> \
--inversion_prompt <'' or a string describing the video content>
```
Additional arguments:
```
--save_dir <latents>
--H <video height>
--W <video width>
--sd_version <Stable-Diffusion version>
--steps <number of inversion steps>
--save_steps <number of sampling steps that will be used later for editing>
--n_frames <number of frames>
```
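For example, a full invocation with the optional arguments spelled out might look like the following (the video path, prompt, and values here are illustrative, not the script's defaults):
```
python preprocess.py --data_path data/myvideo.mp4 \
                     --inversion_prompt 'a woman running on the beach' \
                     --save_dir latents \
                     --H 512 --W 512 \
                     --sd_version 2.1 \
                     --steps 500 \
                     --save_steps 50 \
                     --n_frames 40
```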
More information on these arguments can be found in `preprocess.py`.
### Note:
The video reconstruction will be saved as `inverted.mp4`. A good reconstruction is required for successful editing with our method.
## Editing
- TokenFlow is designed for structure-preserving edits of videos.
- Our method is built on top of an image editing technique (e.g., Plug-and-Play, ControlNet, etc.); it is therefore important to ensure that the edit works well with the chosen base technique.
- The LDM decoder may introduce some jitter, depending on the original video.
To edit your video, first create a yaml config as in ``configs/config_pnp.yaml``.
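A minimal sketch of such a config is shown below; the field names and values are assumptions for illustration only, so copy ``configs/config_pnp.yaml`` and edit it rather than writing one from scratch:
```
# Illustrative PnP editing config: field names are assumptions,
# consult configs/config_pnp.yaml for the authoritative schema.
data_path: data/myvideo.mp4       # source video used in preprocessing
latents_path: latents             # the save_dir used during preprocessing
prompt: a marble sculpture of a woman running   # target edit prompt
guidance_scale: 7.5               # classifier-free guidance strength
n_timesteps: 50                   # number of denoising steps
```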
Then run
```
python run_tokenflow_pnp.py
```
Similarly, to use ControlNet or SDEdit, create a yaml config as in ``configs/config_controlnet.yaml`` or ``configs/config_SDEdit.yaml`` and run ``python run_tokenflow_controlnet.py`` or ``python run_tokenflow_SDEdit.py``, respectively.
## Citation
```
@article{tokenflow2023,
  title   = {TokenFlow: Consistent Diffusion Features for Consistent Video Editing},
  author  = {Geyer, Michal and Bar-Tal, Omer and Bagon, Shai and Dekel, Tali},
  journal = {arXiv preprint arXiv:2307.10373},
  year    = {2023}
}
```
", Assign "at most 3 tags" to the expected json: {"id":"175","tags":[]} "only from the tags list I provide: [{"id":77,"name":"3d"},{"id":89,"name":"agent"},{"id":17,"name":"ai"},{"id":54,"name":"algorithm"},{"id":24,"name":"api"},{"id":44,"name":"authentication"},{"id":3,"name":"aws"},{"id":27,"name":"backend"},{"id":60,"name":"benchmark"},{"id":72,"name":"best-practices"},{"id":39,"name":"bitcoin"},{"id":37,"name":"blockchain"},{"id":1,"name":"blog"},{"id":45,"name":"bundler"},{"id":58,"name":"cache"},{"id":21,"name":"chat"},{"id":49,"name":"cicd"},{"id":4,"name":"cli"},{"id":64,"name":"cloud-native"},{"id":48,"name":"cms"},{"id":61,"name":"compiler"},{"id":68,"name":"containerization"},{"id":92,"name":"crm"},{"id":34,"name":"data"},{"id":47,"name":"database"},{"id":8,"name":"declarative-gui "},{"id":9,"name":"deploy-tool"},{"id":53,"name":"desktop-app"},{"id":6,"name":"dev-exp-lib"},{"id":59,"name":"dev-tool"},{"id":13,"name":"ecommerce"},{"id":26,"name":"editor"},{"id":66,"name":"emulator"},{"id":62,"name":"filesystem"},{"id":80,"name":"finance"},{"id":15,"name":"firmware"},{"id":73,"name":"for-fun"},{"id":2,"name":"framework"},{"id":11,"name":"frontend"},{"id":22,"name":"game"},{"id":81,"name":"game-engine "},{"id":23,"name":"graphql"},{"id":84,"name":"gui"},{"id":91,"name":"http"},{"id":5,"name":"http-client"},{"id":51,"name":"iac"},{"id":30,"name":"ide"},{"id":78,"name":"iot"},{"id":40,"name":"json"},{"id":83,"name":"julian"},{"id":38,"name":"k8s"},{"id":31,"name":"language"},{"id":10,"name":"learning-resource"},{"id":33,"name":"lib"},{"id":41,"name":"linter"},{"id":28,"name":"lms"},{"id":16,"name":"logging"},{"id":76,"name":"low-code"},{"id":90,"name":"message-queue"},{"id":42,"name":"mobile-app"},{"id":18,"name":"monitoring"},{"id":36,"name":"networking"},{"id":7,"name":"node-version"},{"id":55,"name":"nosql"},{"id":57,"name":"observability"},{"id":46,"name":"orm"},{"id":52,"name":"os"},{"id":14,"name":"parser"},{"id":74,"name":"react"},{"id":82,"name":"real-time"},{"id":56,"name":"robot"},{"id":65,"name":"runtime"},{"id":32,"name":"sdk"},{"id":71,"name":"search"},{"id":63,"name":"secrets"},{"id":25,"name":"security"},{"id":85,"name":"server"},{"id":86,"name":"serverless"},{"id":70,"name":"storage"},{"id":75,"name":"system-design"},{"id":79,"name":"terminal"},{"id":29,"name":"testing"},{"id":12,"name":"ui"},{"id":50,"name":"ux"},{"id":88,"name":"video"},{"id":20,"name":"web-app"},{"id":35,"name":"web-server"},{"id":43,"name":"webassembly"},{"id":69,"name":"workflow"},{"id":87,"name":"yaml"}]" returns me the "expected json"