# LVM: Sequential Modeling Enables Scalable Learning for Large Vision Models
[LVM](https://arxiv.org/abs/2312.00785) is a vision pretraining model that converts various kinds of visual data into visual sentences and performs next-token prediction autoregressively. It is compatible with both GPU and TPU.
LVM is built on top of [OpenLLaMA](https://github.com/openlm-research/open_llama) (an autoregressive model) and [OpenMuse](https://github.com/huggingface/open-muse) (a VQGAN that converts images into visual tokens).
This model was trained in collaboration with HuggingFace. Thanks to [Victor Sanh](https://github.com/VictorSanh) for supporting this project.
## Abstract
We introduce a novel sequential modeling approach which enables learning a Large Vision Model (LVM) without making use of any linguistic data.
To do this, we define a common format, "visual sentences", in which we can represent raw images and videos as well as annotated data sources such as semantic segmentations and depth reconstructions without needing any meta-knowledge beyond the pixels. Once this wide variety of visual data (comprising 420 billion tokens) is represented as sequences, the model can be trained to minimize a cross-entropy loss for next token prediction. By training across various scales of model architecture and data diversity, we provide empirical evidence that our models scale effectively. Many different vision tasks can be solved by designing suitable visual prompts at test time.
## Visual Sentence
<div align="center">
<img src="images/visual_sentences.jpg"/>
</div>
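As the figure illustrates, a visual sentence is simply the VQGAN token grids of consecutive images concatenated into one 1-D sequence that the autoregressive model predicts token by token. The sketch below is illustrative only: `vqgan_encode` is a hypothetical placeholder for the released VQ-VAE encoder, and the 256-tokens-per-image figure follows the paper's setup.

```python
# Illustrative sketch of visual-sentence assembly (not the repo's actual API).
# `vqgan_encode` is a hypothetical placeholder for the released VQ-VAE encoder.
import numpy as np

TOKENS_PER_IMAGE = 256  # each image is quantized into a fixed-length token grid


def vqgan_encode(image: np.ndarray) -> np.ndarray:
    """Placeholder: return the discrete codebook indices for one image."""
    raise NotImplementedError("use the released VQ-VAE checkpoint in practice")


def build_visual_sentence(images: list) -> np.ndarray:
    """Concatenate per-image token grids into one 1-D next-token-prediction sequence."""
    chunks = [vqgan_encode(img).reshape(-1) for img in images]
    assert all(chunk.size == TOKENS_PER_IMAGE for chunk in chunks)
    return np.concatenate(chunks)  # shape: (len(images) * TOKENS_PER_IMAGE,)
```

With the 4096-token context used in the training script below, one sequence holds up to 16 images (assuming 256 tokens per image).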
## Key Differences from the Original Paper Version
1. We are currently releasing the 7B model (previously 3B). Additional model size variants will be available later.
2. Deep filtering (including quality filters, deduplication, and known CSAM content removal) has been applied to the LAION dataset, reducing the dataset size from 1.5B to 1.2B images.
3. The tokenizer has been improved for better performance.
## License
LVM is licensed under the Apache 2.0 License.
## Installation
```shell
git clone https://github.com/ytongbai/LVM
cd LVM
export PYTHONPATH="${PWD}:$PYTHONPATH"
```
## Environment Setup
```shell
conda env create -f scripts/gpu_environment.yml
conda activate LVM
```
## Dataset Preparation
Please refer to `DATASET.md` for detailed instructions on preparing the dataset.
After preparing the dataset, you will get a pretokenized file `dataset.jsonl`.
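The exact pretokenized layout is defined by the steps in `DATASET.md`. As a rough illustration, and judging from the `--train_dataset.text_processor.fields=',{tokens},'` flag in the training script below, each line of `dataset.jsonl` holds one visual sentence as a JSON object with a `tokens` field; the field name and layout here are assumptions for illustration only.

```python
# Illustrative only: the real dataset.jsonl is produced by the DATASET.md pipeline.
# The {"tokens": ...} layout is an assumption inferred from the training flag
# --train_dataset.text_processor.fields=',{tokens},'.
import json


def append_visual_sentence(path: str, token_ids: list) -> None:
    """Append one pretokenized visual sentence as a single JSON line."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps({"tokens": token_ids}) + "\n")


append_visual_sentence("dataset.jsonl", [17, 403, 9, 2048])  # dummy token ids
```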
## Training Script
We provide an example training script for the 7B model; for more details about the distributed training setup, please refer to [EasyLM](https://github.com/young-geng/EasyLM).
For other model sizes, we provide model definitions for 100M, 300M, 600M, 1B, 3B, 7B, 13B, 20B, and 30B in `./EasyLM/models/llama/llama_model.py`. Adjust `--total_steps` (and the learning-rate decay schedule) to match the size of your dataset; a rough sizing sketch follows the command.
```shell
python -u -m EasyLM.models.llama.llama_train \
--jax_distributed.initialize_jax_distributed=True \
--jax_distributed.coordinator_address="$MASTER_ADDR:$MASTER_PORT" \
--jax_distributed.local_device_ids='0,1,2,3,4,5,6,7' \
--mesh_dim="$SLURM_NNODES,-1,1" \
--dtype='bf16' \
--total_steps=400000 \
--log_freq=10 \
--save_model_freq=1000 \
--save_milestone_freq=2000 \
--load_llama_config='vqlm_7b' \
--optimizer.type='adamw' \
--optimizer.adamw_optimizer.weight_decay=0.1 \
--optimizer.adamw_optimizer.lr=1.5e-4 \
--optimizer.adamw_optimizer.end_lr=3e-5 \
--optimizer.adamw_optimizer.lr_warmup_steps=8000 \
--optimizer.adamw_optimizer.lr_decay_steps=288000 \
--optimizer.accumulate_gradient_steps=4 \
--train_dataset.type='json' \
--train_dataset.text_processor.fields=',{tokens},' \
--train_dataset.json_dataset.path='/path/to/dataset.jsonl' \
--train_dataset.json_dataset.seq_length=4096 \
--train_dataset.json_dataset.batch_size=32 \
--train_dataset.json_dataset.tokenizer_processes=16 \
--checkpointer.save_optimizer_state=True \
--logger.online=True \
--logger.output_dir="/path/to/checkpoint/$RUN_NAME" \
--logger.wandb_dir='/path/to/wandb' \
--logger.notes='' \
--logger.experiment_id=$EXPERIMENT_ID
```
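`--total_steps` should be changed according to the amount of data. The helper below is a back-of-the-envelope sketch, not part of the repo; it assumes the effective batch is `batch_size * accumulate_gradient_steps * num_hosts` sequences per optimizer step, so adjust if your EasyLM mesh configuration shards batches differently.

```python
# Back-of-the-envelope helper for picking --total_steps (a sketch, not part of the repo).
# Assumption: effective batch = batch_size * accumulate_gradient_steps * num_hosts
# sequences per optimizer step.
import math


def estimate_total_steps(num_tokens: int,
                         seq_length: int = 4096,
                         batch_size: int = 32,
                         accumulate_gradient_steps: int = 4,
                         num_hosts: int = 1,
                         epochs: int = 1) -> int:
    sequences = num_tokens // seq_length
    effective_batch = batch_size * accumulate_gradient_steps * num_hosts
    return math.ceil(sequences * epochs / effective_batch)


# Example: a 420B-token corpus trained for one epoch across 8 hosts.
print(estimate_total_steps(420_000_000_000, num_hosts=8))
```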
## Convert to HuggingFace Checkpoint
```shell
python -m EasyLM.models.llama.convert_easylm_to_hf --load_checkpoint='trainstate_params::/path/to/checkpoint/streaming_train_state' --model_size='vqlm_7b' --output_dir='/path/to/output/checkpoint/'
```
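After conversion, the checkpoint is in HuggingFace format and can be loaded with `transformers`. Below is a minimal loading sketch, assuming the converted model exposes a standard causal-LM interface; check the generated `config.json` for the exact model class.

```python
# Minimal loading sketch (assumes a standard Hugging Face causal-LM layout).
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "/path/to/output/checkpoint/",  # directory written by the converter above
    torch_dtype=torch.bfloat16,
)
model.eval()
print(model.config)
```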
## Demo & Inference
Download the [few-shot examples dataset](https://livejohnshopkins-my.sharepoint.com/:f:/g/personal/ybai20_jh_edu/Ei0xiLdFFqJPnwAlFWar29EBUAvB0O3CVaJykZl-f11KDQ?e=Bx9SXZ).
There are two main types of visual prompting: sequential prompting and analogy prompting.
### Analogy Prompting
Describe the task with a few-shot prompt: a sequence of (x, y) pairs, where x is an input image and y is its "annotated" counterpart, followed by a single query image at the end. We provide more few-shot examples at [this link](https://livejohnshopkins-my.sharepoint.com/:f:/g/personal/ybai20_jh_edu/Ei0xiLdFFqJPnwAlFWar29EBUAvB0O3CVaJykZl-f11KDQ?e=Bx9SXZ); simply swap in a different query image at the end for testing.
### Sequential Prompting
Input a sequence of consecutive frames and let the model generate the next one.
Check out our demo and additional inference code on HuggingFace Spaces: [LVM Demo](https://huggingface.co/spaces/Emma02/LVM).
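Both prompting styles reduce to the same mechanics: encode the prompt images into one visual sentence, autoregressively sample the next image's worth of tokens, and decode them back to pixels. The sketch below is illustrative only; `encode_image`, `generate_tokens`, and `decode_tokens` are hypothetical placeholders, so refer to the HuggingFace Space above for the actual inference code.

```python
# Illustrative prompting sketch; encode_image / generate_tokens / decode_tokens
# are hypothetical placeholders, not the repo's actual API (see the demo code).
import numpy as np

TOKENS_PER_IMAGE = 256  # assumed tokens per image, matching the paper's setup


def encode_image(image) -> np.ndarray:
    """Placeholder: VQGAN encoder returning one image's token ids."""
    raise NotImplementedError


def decode_tokens(tokens: np.ndarray):
    """Placeholder: VQGAN decoder turning token ids back into an image."""
    raise NotImplementedError


def generate_tokens(prompt: np.ndarray, n: int) -> np.ndarray:
    """Placeholder: autoregressively sample n tokens from the LVM given the prompt."""
    raise NotImplementedError


def analogy_prompt(pairs, query):
    """(x1, y1), ..., (xk, yk), query  ->  predicted 'annotation' for the query."""
    chunks = [encode_image(img) for x, y in pairs for img in (x, y)]
    chunks.append(encode_image(query))
    prompt = np.concatenate(chunks)
    return decode_tokens(generate_tokens(prompt, TOKENS_PER_IMAGE))


def sequential_prompt(frames):
    """frame_1, ..., frame_k  ->  predicted next frame."""
    prompt = np.concatenate([encode_image(f) for f in frames])
    return decode_tokens(generate_tokens(prompt, TOKENS_PER_IMAGE))
```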
## Evaluation
Please refer to `evaluation/EVAL.md` for evaluation instructions.
## Models
- [LVM Checkpoints](https://huggingface.co/Emma02/LVM_ckpts)
- [VQ-VAE Checkpoints](https://huggingface.co/Emma02/vqvae_ckpts)
## Finetuning
LVM is a pretrained model without instruction tuning or other kinds of post-training. To adapt it to a specific task, we recommend organizing your data into the visual sentence format and then finetuning with a smaller learning rate using the training script provided above.
## Citation
If you find LVM useful in your research or applications, please cite our work using the following BibTeX:
```bibtex
@article{bai2023sequential,
title={Sequential modeling enables scalable learning for large vision models},
author={Bai, Yutong and Geng, Xinyang and Mangalam, Karttikeya and Bar, Amir and Yuille, Alan and Darrell, Trevor and Malik, Jitendra and Efros, Alexei A},
journal={arXiv preprint arXiv:2312.00785},
year={2023}
}
```
", Assign "at most 3 tags" to the expected json: {"id":"5658","tags":[]} "only from the tags list I provide: [{"id":77,"name":"3d"},{"id":89,"name":"agent"},{"id":17,"name":"ai"},{"id":54,"name":"algorithm"},{"id":24,"name":"api"},{"id":44,"name":"authentication"},{"id":3,"name":"aws"},{"id":27,"name":"backend"},{"id":60,"name":"benchmark"},{"id":72,"name":"best-practices"},{"id":39,"name":"bitcoin"},{"id":37,"name":"blockchain"},{"id":1,"name":"blog"},{"id":45,"name":"bundler"},{"id":58,"name":"cache"},{"id":21,"name":"chat"},{"id":49,"name":"cicd"},{"id":4,"name":"cli"},{"id":64,"name":"cloud-native"},{"id":48,"name":"cms"},{"id":61,"name":"compiler"},{"id":68,"name":"containerization"},{"id":92,"name":"crm"},{"id":34,"name":"data"},{"id":47,"name":"database"},{"id":8,"name":"declarative-gui "},{"id":9,"name":"deploy-tool"},{"id":53,"name":"desktop-app"},{"id":6,"name":"dev-exp-lib"},{"id":59,"name":"dev-tool"},{"id":13,"name":"ecommerce"},{"id":26,"name":"editor"},{"id":66,"name":"emulator"},{"id":62,"name":"filesystem"},{"id":80,"name":"finance"},{"id":15,"name":"firmware"},{"id":73,"name":"for-fun"},{"id":2,"name":"framework"},{"id":11,"name":"frontend"},{"id":22,"name":"game"},{"id":81,"name":"game-engine "},{"id":23,"name":"graphql"},{"id":84,"name":"gui"},{"id":91,"name":"http"},{"id":5,"name":"http-client"},{"id":51,"name":"iac"},{"id":30,"name":"ide"},{"id":78,"name":"iot"},{"id":40,"name":"json"},{"id":83,"name":"julian"},{"id":38,"name":"k8s"},{"id":31,"name":"language"},{"id":10,"name":"learning-resource"},{"id":33,"name":"lib"},{"id":41,"name":"linter"},{"id":28,"name":"lms"},{"id":16,"name":"logging"},{"id":76,"name":"low-code"},{"id":90,"name":"message-queue"},{"id":42,"name":"mobile-app"},{"id":18,"name":"monitoring"},{"id":36,"name":"networking"},{"id":7,"name":"node-version"},{"id":55,"name":"nosql"},{"id":57,"name":"observability"},{"id":46,"name":"orm"},{"id":52,"name":"os"},{"id":14,"name":"parser"},{"id":74,"name":"react"},{"id":82,"name":"real-time"},{"id":56,"name":"robot"},{"id":65,"name":"runtime"},{"id":32,"name":"sdk"},{"id":71,"name":"search"},{"id":63,"name":"secrets"},{"id":25,"name":"security"},{"id":85,"name":"server"},{"id":86,"name":"serverless"},{"id":70,"name":"storage"},{"id":75,"name":"system-design"},{"id":79,"name":"terminal"},{"id":29,"name":"testing"},{"id":12,"name":"ui"},{"id":50,"name":"ux"},{"id":88,"name":"video"},{"id":20,"name":"web-app"},{"id":35,"name":"web-server"},{"id":43,"name":"webassembly"},{"id":69,"name":"workflow"},{"id":87,"name":"yaml"}]" returns me the "expected json"