<h1 align="center">awesome foundation and multimodal models</h1>

Curated list of top foundation and multimodal models! [Paper + Code + Examples + Tutorials]
## 👁️ + 💬 + 🎧 = 🤖
**foundation model** - a pre-trained machine learning model that serves as a base for a wide range of downstream tasks. It captures general knowledge from a large dataset and can be fine-tuned to perform specific tasks more effectively.
**multimodal model** - a model that can process multiple modalities (e.g. text, image, video, audio) at the same time.
## 🤖 models
<!--- AUTOGENERATED_PAPERS_LIST -->
<!---
WARNING: DO NOT EDIT THIS LIST MANUALLY. IT IS AUTOMATICALLY GENERATED.
HEAD OVER TO CONTRIBUTING.MD FOR MORE DETAILS ON HOW TO MAKE CHANGES PROPERLY.
-->
### YOLO-World: Real-Time Open-Vocabulary Object Detection
[![arXiv](https://img.shields.io/badge/arXiv-2401.17270-b31b1b.svg)](https://arxiv.org/abs/2401.17270) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/AILab-CVC/YOLO-World) [![YouTube](https://badges.aleen42.com/src/youtube.svg)](https://youtu.be/X7gKBGVz4vs) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/SkalskiP/YOLO-World) [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/zero-shot-object-detection-with-yolo-world.ipynb)
Tianheng Cheng, Lin Song, Yixiao Ge, Wenyu Liu, Xinggang Wang, Ying Shan
- **Date:** 2024-01-30
- **Modalities:** 👁️ + 💬
- **Tasks:** Zero-Shot Object Detection
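
A minimal sketch of open-vocabulary detection with YOLO-World via the `ultralytics` package; the `YOLOWorld` wrapper, the `yolov8s-world.pt` weight name, and the image path are assumptions, so treat the linked repo and Colab as the reference usage.

```python
# pip install ultralytics
from ultralytics import YOLOWorld

model = YOLOWorld("yolov8s-world.pt")               # assumed weight name; downloads on first use
model.set_classes(["person", "bus", "backpack"])    # free-text vocabulary, no retraining needed
results = model.predict("street.jpg", conf=0.25)
results[0].show()                                   # draw boxes for the prompted classes
```
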
### Depth Anything
[![arXiv](https://img.shields.io/badge/arXiv-2401.10891-b31b1b.svg)](https://arxiv.org/abs/2401.10891) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/LiheYoung/Depth-Anything) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/LiheYoung/Depth-Anything) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/LiheYoung/depth_anything_vitl14) [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/Depth%20Anything/Predicting_depth_in_an_image_with_Depth_Anything.ipynb)
Lihe Yang, Bingyi Kang, Zilong Huang, Xiaogang Xu, Jiashi Feng, Hengshuang Zhao
- **Date:** 2024-01-19
- **Modalities:** 👁️
- **Tasks:** Depth Estimation
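
A quick way to try monocular depth estimation is the `transformers` depth-estimation pipeline; the checkpoint id below is an assumption (a converted "-hf" variant), the official weights are linked above.

```python
# pip install transformers pillow torch
from PIL import Image
from transformers import pipeline

# checkpoint id is an assumption; see the repo above for the official releases
depth = pipeline("depth-estimation", model="LiheYoung/depth-anything-small-hf")
out = depth(Image.open("room.jpg"))
out["depth"].save("room_depth.png")   # PIL image of the predicted relative depth map
```
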
### EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything
[![arXiv](https://img.shields.io/badge/arXiv-2312.00863-b31b1b.svg)](https://arxiv.org/abs/2312.00863) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/yformer/EfficientSAM) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/SkalskiP/EfficientSAM) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/merve/EfficientSAM)
Yunyang Xiong, Bala Varadarajan, Lemeng Wu, Xiaoyu Xiang, Fanyi Xiao, Chenchen Zhu, Xiaoliang Dai, Dilin Wang, Fei Sun, Forrest Iandola, Raghuraman Krishnamoorthi, Vikas Chandra
- **Date:** 2023-12-01
- **Modalities:** 👁️
- **Tasks:** Zero-Shot Object Segmentation
### Qwen-VL-Plus / Max
[![arXiv](https://img.shields.io/badge/arXiv-2308.12966-b31b1b.svg)](https://arxiv.org/abs/2308.12966) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/QwenLM/Qwen-VL#qwen-vl-plus) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/Qwen/Qwen-VL-Plus) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/Qwen/Qwen-VL)
Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou
- **Date:** 2023-11-28
- **Modalities:** 👁️ + 💬
- **Tasks:** Image Captioning, VQA, Zero-Shot Object Detection
### CogVLM: Visual Expert for Pretrained Language Models
[![arXiv](https://img.shields.io/badge/arXiv-2311.03079-b31b1b.svg)](https://arxiv.org/abs/2311.03079) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/THUDM/CogVLM) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/lykeven/CogVLM) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/THUDM/CogVLM)
Weihan Wang, Qingsong Lv, Wenmeng Yu, Wenyi Hong, Ji Qi, Yan Wang, Junhui Ji, Zhuoyi Yang, Lei Zhao, Xixuan Song, Jiazheng Xu, Bin Xu, Juanzi Li, Yuxiao Dong, Ming Ding, Jie Tang
- **Date:** 2023-11-06
- **Modalities:** 👁️ + 💬
- **Tasks:** Image Captioning, VQA
### Fuyu-8B: A Multimodal Architecture for AI Agents
[![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/adept/fuyu-8b-demo) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/adept/fuyu-8b)
Rohan Bavishi, Erich Elsen, Curtis Hawthorne, Maxwell Nye, Augustus Odena, Arushi Somani, Sağnak Taşırlar
- **Date:** 2023-10-17
- **Modalities:** 👁️ + 💬
- **Tasks:** Image Classification, Image Captioning, VQA, Find Text in Image
### Ferret: Refer and Ground Anything Anywhere at Any Granularity
[![arXiv](https://img.shields.io/badge/arXiv-2310.07704-b31b1b.svg)](https://arxiv.org/abs/2310.07704) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/apple/ml-ferret)
Haoxuan You, Haotian Zhang, Zhe Gan, Xianzhi Du, Bowen Zhang, Zirui Wang, Liangliang Cao, Shih-Fu Chang, Yinfei Yang
- **Date:** 2023-10-11
- **Modalities:** 👁️ + 💬
- **Tasks:** Image Captioning, VQA, Phrase Grounding, Object Detection
### MetaCLIP: Demystifying CLIP Data
[![arXiv](https://img.shields.io/badge/arXiv-2309.16671-b31b1b.svg)](https://arxiv.org/abs/2309.16671) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/facebookresearch/MetaCLIP) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/SkalskiP/SAM_and_MetaCLIP) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/facebook/metaclip-b32-400m) [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1V0Rv1QQJkcolTjiwJuRsqWycROvYjOwg?usp=sharing)
Hu Xu, Saining Xie, Xiaoqing Ellen Tan, Po-Yao Huang, Russell Howes, Vasu Sharma, Shang-Wen Li, Gargi Ghosh, Luke Zettlemoyer, Christoph Feichtenhofer
- **Date:** 2023-09-28
- **Modalities:** 👁️ + 💬
- **Tasks:** Zero-Shot Classification
### Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
[![arXiv](https://img.shields.io/badge/arXiv-2308.12966-b31b1b.svg)](https://arxiv.org/abs/2308.12966) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/QwenLM/Qwen-VL) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/Qwen/Qwen-VL-Max) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/Qwen/Qwen-VL)
Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou
- **Date:** 2023-09-24
- **Modalities:** 👁️ + 💬
- **Tasks:** Image Captioning, VQA
### SigLIP: Sigmoid Loss for Language Image Pre-Training
[![arXiv](https://img.shields.io/badge/arXiv-2303.15343-b31b1b.svg)](https://arxiv.org/abs/2303.15343) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/google-research/big_vision) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/merve/compare_clip_siglip) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/openai/clip-vit-base-patch16) [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/SigLIP/Inference_with_(multilingual)_SigLIP%2C_a_better_CLIP_model.ipynb)
Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, Lucas Beyer
- **Date:** 2023-08-27
- **Modalities:** 👁️ + 💬
- **Tasks:** Zero-Shot Image Classification
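
A short sketch of zero-shot classification with SigLIP through `transformers`. The checkpoint id is an assumption (the model badge above points at a CLIP checkpoint); note that SigLIP scores each label independently with a sigmoid rather than a softmax over labels.

```python
# pip install transformers pillow torch
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

ckpt = "google/siglip-base-patch16-224"   # assumed checkpoint id
model = AutoModel.from_pretrained(ckpt)
processor = AutoProcessor.from_pretrained(ckpt)

image = Image.open("cat.jpg")
labels = ["a photo of a cat", "a photo of a dog"]
inputs = processor(text=labels, images=image, padding="max_length", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# sigmoid loss at training time -> independent probability per label at inference
probs = torch.sigmoid(outputs.logits_per_image)
print(dict(zip(labels, probs[0].tolist())))
```
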
### Nougat: Neural Optical Understanding for Academic Documents
[![arXiv](https://img.shields.io/badge/arXiv-2308.13418-b31b1b.svg)](https://arxiv.org/abs/2308.13418) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/facebookresearch/nougat) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/hf-vision/nougat-transformers) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/facebook/nougat-small) [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/Nougat/Inference_with_Nougat_to_read_scientific_PDFs.ipynb)
Lukas Blecher, Guillem Cucurull, Thomas Scialom, Robert Stojnic
- **Date:** 2023-08-25
- **Modalities:** 👁️ + 💬
- **Tasks:** Visual Question Answering
### AudioLDM 2: Learning Holistic Audio Generation with Self-supervised Pretraining
[![arXiv](https://img.shields.io/badge/arXiv-2308.05734-b31b1b.svg)](https://arxiv.org/abs/2308.05734) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/haoheliu/AudioLDM2) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/haoheliu/AudioLDM_48K_Text-to-HiFiAudio_Generation) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/spaces/haoheliu/audioldm2-text2audio-text2music)
Haohe Liu, Qiao Tian, Yi Yuan, Xubo Liu, Xinhao Mei, Qiuqiang Kong, Yuping Wang, Wenwu Wang, Yuxuan Wang, Mark D. Plumbley
- **Date:** 2023-08-10
- **Modalities:** 💬 + 🎧
- **Tasks:** Text-to-Audio, Text-to-Speech
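
A minimal text-to-audio sketch using the `diffusers` AudioLDM 2 pipeline; the checkpoint id, prompt, and GPU usage are assumptions, the repo above holds the official weights and recommended settings.

```python
# pip install diffusers transformers scipy torch
import scipy.io.wavfile
import torch
from diffusers import AudioLDM2Pipeline

# checkpoint id is an assumption
pipe = AudioLDM2Pipeline.from_pretrained("cvssp/audioldm2", torch_dtype=torch.float16).to("cuda")
audio = pipe("gentle rain on a tin roof",
             num_inference_steps=200,
             audio_length_in_s=10.0).audios[0]
scipy.io.wavfile.write("rain.wav", rate=16000, data=audio)  # AudioLDM 2 generates 16 kHz audio
```
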
### OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
[![arXiv](https://img.shields.io/badge/arXiv-2308.01390-b31b1b.svg)](https://arxiv.org/abs/2308.01390) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/mlfoundations/open_flamingo) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/openflamingo/OpenFlamingo) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/openflamingo/OpenFlamingo-9B-vitl-mpt7b)
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt
- **Date:** 2023-08-02
- **Modalities:** 👁️ + 💬
- **Tasks:** Image Classification, Image Captioning, VQA
### Kosmos-2: Grounding Multimodal Large Language Models to the World
[![arXiv](https://img.shields.io/badge/arXiv-2306.14824-b31b1b.svg)](https://arxiv.org/abs/2306.14824) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/microsoft/unilm/tree/master/kosmos-2) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/ydshieh/Kosmos-2) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/microsoft/kosmos-2-patch14-224)
Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei
- **Date:** 2023-07-26
- **Modalities:** 👁️ + 💬
- **Tasks:** Image Captioning, VQA, Phrase Grounding
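
A sketch of grounded captioning with the checkpoint linked above; the image path and prompt are placeholders, and the post-processing call reflects the model card usage at the time of writing, so double-check against the linked repo.

```python
# pip install transformers pillow torch
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor

ckpt = "microsoft/kosmos-2-patch14-224"
model = AutoModelForVision2Seq.from_pretrained(ckpt)
processor = AutoProcessor.from_pretrained(ckpt)

image = Image.open("snowman.jpg")
prompt = "<grounding>An image of"          # the <grounding> tag asks for boxed entities
inputs = processor(text=prompt, images=image, return_tensors="pt")
ids = model.generate(**inputs, max_new_tokens=64)
text = processor.batch_decode(ids, skip_special_tokens=True)[0]
caption, entities = processor.post_process_generation(text)  # entities carry bounding boxes
print(caption, entities)
```
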
### OWLv2: Scaling Open-Vocabulary Object Detection
[![arXiv](https://img.shields.io/badge/arXiv-2306.09683-b31b1b.svg)](https://arxiv.org/abs/2306.09683) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/google-research/scenic/tree/main/scenic/projects/owl_vit) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/merve/owlv2) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/google/owlv2-base-patch16-ensemble) [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/OWLv2/Zero_and_one_shot_object_detection_with_OWLv2.ipynb)
Matthias Minderer, Alexey Gritsenko, Neil Houlsby
- **Date:** 2023-06-17
- **Modalities:** 👁️
- **Tasks:** Zero-Shot Object Detection
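
A zero-shot detection sketch with the OWLv2 checkpoint linked above; the image path and text queries are placeholders.

```python
# pip install transformers pillow torch
import torch
from PIL import Image
from transformers import Owlv2ForObjectDetection, Owlv2Processor

ckpt = "google/owlv2-base-patch16-ensemble"
processor = Owlv2Processor.from_pretrained(ckpt)
model = Owlv2ForObjectDetection.from_pretrained(ckpt)

image = Image.open("street.jpg")
texts = [["a photo of a dog", "a photo of a bicycle"]]
inputs = processor(text=texts, images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = processor.post_process_object_detection(outputs, threshold=0.3, target_sizes=target_sizes)
for score, label, box in zip(results[0]["scores"], results[0]["labels"], results[0]["boxes"]):
    print(texts[0][label], round(score.item(), 3), box.tolist())
```
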
### ImageBind: One Embedding Space To Bind Them All
[![arXiv](https://img.shields.io/badge/arXiv-2305.05665-b31b1b.svg)](https://arxiv.org/abs/2305.05665) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/facebookresearch/ImageBind) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/JustinLin610/ImageBind_zeroshot_demo)
Rohit Girdhar, Alaaeldin El-Nouby, Zhuang Liu, Mannat Singh, Kalyan Vasudev Alwala, Armand Joulin, Ishan Misra
- **Date:** 2023-05-09
- **Modalities:** 👁️ + 💬 + 🎧
- **Tasks:**
### LLaVA: Large Language and Vision Assistant
[![arXiv](https://img.shields.io/badge/arXiv-2304.08485-b31b1b.svg)](https://arxiv.org/abs/2304.08485) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/haotian-liu/LLaVA) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/badayvedat/LLaVA) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/liuhaotian/llava-v1.6-34b)
Haotian Liu, Chunyuan Li, Qingyang Wu, Yong Jae Lee
- **Date:** 2023-04-17
- **Modalities:** 👁️ + 💬
- **Tasks:** Vision Language Modeling
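
A quick VQA-style sketch; the `llava-hf` community conversion, the USER/ASSISTANT prompt template, and the image path are assumptions, so defer to the linked repo for the official prompting conventions.

```python
# pip install transformers pillow torch
from PIL import Image
from transformers import pipeline

# checkpoint id and prompt template are assumptions based on the community "-hf" conversions
pipe = pipeline("image-to-text", model="llava-hf/llava-1.5-7b-hf")
prompt = "USER: <image>\nWhat is unusual about this picture?\nASSISTANT:"
out = pipe(Image.open("scene.jpg"), prompt=prompt, generate_kwargs={"max_new_tokens": 100})
print(out[0]["generated_text"])
```
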
### Segment Anything
[![arXiv](https://img.shields.io/badge/arXiv-2304.02643-b31b1b.svg)](https://arxiv.org/abs/2304.02643) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/facebookresearch/segment-anything) [![YouTube](https://badges.aleen42.com/src/youtube.svg)](https://youtu.be/D-D6ZmadzPE) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/radames/candle-segment-anything-wasm) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/facebook/sam-vit-base) [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/how-to-segment-anything-with-sam.ipynb)
Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollár, Ross Girshick
- **Date:** 2023-04-05
- **Modalities:** 👁️
- **Tasks:** Zero-Shot Object Segmentation
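
A prompt-based segmentation sketch using the `segment_anything` package from the repo above; the checkpoint file name, image path, and click coordinates are placeholders you would swap for your own.

```python
# pip install git+https://github.com/facebookresearch/segment-anything.git opencv-python
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# download the ViT-B checkpoint from the repo's model zoo first
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

image_rgb = cv2.cvtColor(cv2.imread("truck.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image_rgb)                 # HxWx3 uint8 array in RGB order
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),       # one foreground click
    point_labels=np.array([1]),
    multimask_output=True,                     # return three candidate masks
)
best_mask = masks[scores.argmax()]
```
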
### Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection
[![arXiv](https://img.shields.io/badge/arXiv-2303.05499-b31b1b.svg)](https://arxiv.org/abs/2303.05499) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/IDEA-Research/GroundingDINO) [![YouTube](https://badges.aleen42.com/src/youtube.svg)](https://youtu.be/cMa77r3YrDk) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/ShilongLiu/Grounding_DINO_demo) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/spaces/merve/Grounding_DINO_demo) [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/zero-shot-object-detection-with-grounding-dino.ipynb)
Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang
- **Date:** 2023-03-09
- **Modalities:** 👁️ + 💬
- **Tasks:** Phrase Grounding, Zero-Shot Object Detection
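
A text-prompted detection sketch through `transformers`; the `grounding-dino-tiny` checkpoint id and the prompt are assumptions (the official repo above ships its own inference API), and prompts are expected as lowercase phrases separated by periods.

```python
# pip install transformers pillow torch
import torch
from PIL import Image
from transformers import AutoModelForZeroShotObjectDetection, AutoProcessor

ckpt = "IDEA-Research/grounding-dino-tiny"     # assumed checkpoint id
processor = AutoProcessor.from_pretrained(ckpt)
model = AutoModelForZeroShotObjectDetection.from_pretrained(ckpt)

image = Image.open("living_room.jpg")
text = "a cat. a remote control."              # lowercase phrases separated by periods
inputs = processor(images=image, text=text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

results = processor.post_process_grounded_object_detection(
    outputs, inputs.input_ids,
    box_threshold=0.4, text_threshold=0.3,
    target_sizes=[image.size[::-1]],
)
print(results[0]["labels"], results[0]["boxes"])
```
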
### BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models
[![arXiv](https://img.shields.io/badge/arXiv-2301.12597-b31b1b.svg)](https://arxiv.org/abs/2301.12597) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/salesforce/LAVIS/tree/main/projects/blip2) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/merve/BLIP2-with-transformers) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/Salesforce/blip2-opt-6.7b) [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/salesforce/LAVIS/blob/main/examples/blip2_instructed_generation.ipynb)
Junnan Li, Dongxu Li, Silvio Savarese, Steven Hoi
- **Date:** 2023-01-30
- **Modalities:** 👁️ + 💬
- **Tasks:** Image Captioning, Visual Question Answering
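
A captioning and VQA sketch with BLIP-2 via `transformers`; the smaller OPT-2.7B sibling of the checkpoint linked above is used here for memory reasons, and the image path, question, and GPU usage are assumptions.

```python
# pip install transformers pillow torch
import torch
from PIL import Image
from transformers import Blip2ForConditionalGeneration, Blip2Processor

ckpt = "Salesforce/blip2-opt-2.7b"   # smaller sibling of the checkpoint linked above
processor = Blip2Processor.from_pretrained(ckpt)
model = Blip2ForConditionalGeneration.from_pretrained(ckpt, torch_dtype=torch.float16).to("cuda")

image = Image.open("beach.jpg")

# unconditional captioning
inputs = processor(images=image, return_tensors="pt").to("cuda", torch.float16)
print(processor.decode(model.generate(**inputs, max_new_tokens=30)[0], skip_special_tokens=True))

# visual question answering uses the "Question: ... Answer:" template
inputs = processor(images=image,
                   text="Question: how many people are there? Answer:",
                   return_tensors="pt").to("cuda", torch.float16)
print(processor.decode(model.generate(**inputs, max_new_tokens=10)[0], skip_special_tokens=True))
```
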
### Whisper: Robust Speech Recognition via Large-Scale Weak Supervision
[![arXiv](https://img.shields.io/badge/arXiv-2212.04356-b31b1b.svg)](https://arxiv.org/abs/2212.04356) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/openai/whisper) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/openai/whisper) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/openai/whisper-large-v3) [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/openai/whisper/blob/master/notebooks/LibriSpeech.ipynb)
Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever
- **Date:** 2022-12-06
- **Modalities:** 💬 + 🎧
- **Tasks:** Speech-to-Text
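
A transcription sketch with the `openai-whisper` package from the repo above; the model size and audio file are placeholders.

```python
# pip install -U openai-whisper
import whisper

model = whisper.load_model("base")           # tiny / base / small / medium / large-v3
result = model.transcribe("meeting.mp3")     # language is auto-detected by default
print(result["text"])
for segment in result["segments"]:
    print(f'{segment["start"]:.1f}s - {segment["end"]:.1f}s: {segment["text"]}')
```
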
### OWL-ViT: Simple Open-Vocabulary Object Detection with Vision Transformers
[![arXiv](https://img.shields.io/badge/arXiv-2205.06230-b31b1b.svg)](https://arxiv.org/abs/2205.06230) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/google-research/scenic/tree/main/scenic/projects/owl_vit) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/adirik/OWL-ViT) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/google/owlvit-base-patch32)
Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, Neil Houlsby
- **Date:** 2022-05-12
- **Modalities:** 👁️ + 💬
- **Tasks:** Zero-Shot Object Detection
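
The quickest way to try OWL-ViT is the `transformers` zero-shot object detection pipeline with the checkpoint linked above; the image path and candidate labels are placeholders.

```python
# pip install transformers pillow torch
from PIL import Image
from transformers import pipeline

detector = pipeline("zero-shot-object-detection", model="google/owlvit-base-patch32")
preds = detector(Image.open("kitchen.jpg"), candidate_labels=["mug", "kettle", "spoon"])
for p in preds:
    print(p["label"], round(p["score"], 3), p["box"])   # box is a dict of xmin/ymin/xmax/ymax
```
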
### CLIP: Learning Transferable Visual Models From Natural Language Supervision
[![arXiv](https://img.shields.io/badge/arXiv-2103.00020-b31b1b.svg)](https://arxiv.org/abs/2103.00020) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/openai/CLIP) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/openai/clip-vit-large-patch14) [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/how-to-use-openai-clip-classification.ipynb)
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever
- **Date:** 2021-02-26
- **Modalities:** 👁️ + 💬
- **Tasks:** Zero-Shot Classification
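
A zero-shot classification sketch with the CLIP checkpoint linked above; the image path and label prompts are placeholders.

```python
# pip install transformers pillow torch
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

ckpt = "openai/clip-vit-large-patch14"
model = CLIPModel.from_pretrained(ckpt)
processor = CLIPProcessor.from_pretrained(ckpt)

image = Image.open("photo.jpg")
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image   # image-text similarity scores
probs = logits.softmax(dim=1)                   # softmax over the candidate labels
print(dict(zip(labels, probs[0].tolist())))
```
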
<!--- AUTOGENERATED_PAPERS_LIST -->
## 🦸 contribution
We would love your help in making this repository even better! If you know of an
amazing paper that isn't listed here, or if you have any suggestions for improvement,
feel free to open an [issue](https://github.com/SkalskiP/awesome-foundation-and-multimodal-models/issues)
or submit a [pull request](https://github.com/SkalskiP/awesome-foundation-and-multimodal-models/pulls).