[![Upload Python Package](https://github.com/Blaizzy/mlx-vlm/actions/workflows/python-publish.yml/badge.svg)](https://github.com/Blaizzy/mlx-vlm/actions/workflows/python-publish.yml)

# MLX-VLM

MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) and Omni Models (VLMs with audio and video support) on your Mac using MLX.

## Table of Contents

- [Installation](#installation)
- [Usage](#usage)
  - [Command Line Interface (CLI)](#command-line-interface-cli)
  - [Chat UI with Gradio](#chat-ui-with-gradio)
  - [Python Script](#python-script)
  - [Server (FastAPI)](#server-fastapi)
- [Multi-Image Chat Support](#multi-image-chat-support)
  - [Usage Examples](#usage-examples)
- [Video Understanding](#video-understanding)
  - [Supported Models](#supported-models)
- [Fine-tuning](#fine-tuning)

## Installation

The easiest way to get started is to install the `mlx-vlm` package using pip:

```sh
pip install -U mlx-vlm
```

## Usage

### Command Line Interface (CLI)

Generate output from a model using the CLI:

```sh
# Text generation
mlx_vlm.generate --model mlx-community/Qwen2-VL-2B-Instruct-4bit --max-tokens 100 --prompt "Hello, how are you?"

# Image generation
mlx_vlm.generate --model mlx-community/Qwen2-VL-2B-Instruct-4bit --max-tokens 100 --temperature 0.0 --image http://images.cocodataset.org/val2017/000000039769.jpg

# Audio generation (New)
mlx_vlm.generate --model mlx-community/gemma-3n-E2B-it-4bit --max-tokens 100 --prompt "Describe what you hear" --audio /path/to/audio.wav

# Multi-modal generation (Image + Audio)
mlx_vlm.generate --model mlx-community/gemma-3n-E2B-it-4bit --max-tokens 100 --prompt "Describe what you see and hear" --image /path/to/image.jpg --audio /path/to/audio.wav
```

### Chat UI with Gradio

Launch a chat interface using Gradio:

```sh
mlx_vlm.chat_ui --model mlx-community/Qwen2-VL-2B-Instruct-4bit
```

### Python Script

Here's an example of how to use MLX-VLM in a Python script:

```python
import mlx.core as mx
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

# Load the model
model_path = "mlx-community/Qwen2-VL-2B-Instruct-4bit"
model, processor = load(model_path)
config = load_config(model_path)

# Prepare input
image = ["http://images.cocodataset.org/val2017/000000039769.jpg"]
# image = [Image.open("...")] can also be used with PIL.Image.Image objects
prompt = "Describe this image."

# Apply chat template
formatted_prompt = apply_chat_template(
    processor, config, prompt, num_images=len(image)
)

# Generate output
output = generate(model, processor, formatted_prompt, image, verbose=False)
print(output)
```

#### Audio Example

```python
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

# Load model with audio support
model_path = "mlx-community/gemma-3n-E2B-it-4bit"
model, processor = load(model_path)
config = model.config

# Prepare audio input
audio = ["/path/to/audio1.wav", "/path/to/audio2.mp3"]
prompt = "Describe what you hear in these audio files."

# Apply chat template with audio
formatted_prompt = apply_chat_template(
    processor, config, prompt, num_audios=len(audio)
)

# Generate output with audio
output = generate(model, processor, formatted_prompt, audio=audio, verbose=False)
print(output)
```

#### Multi-Modal Example (Image + Audio)

```python
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

# Load multi-modal model
model_path = "mlx-community/gemma-3n-E2B-it-4bit"
model, processor = load(model_path)
config = model.config

# Prepare inputs
image = ["/path/to/image.jpg"]
audio = ["/path/to/audio.wav"]
prompt = "Describe what you see and hear."

# Apply chat template
formatted_prompt = apply_chat_template(
    processor, config, prompt, num_images=len(image), num_audios=len(audio)
)

# Generate output
output = generate(model, processor, formatted_prompt, image, audio=audio, verbose=False)
print(output)
```

### Server (FastAPI)

Start the server:

```sh
mlx_vlm.server
```

The server provides multiple endpoints for different use cases and supports dynamic model loading/unloading with caching (one model at a time).

#### Available Endpoints

- `/models` - List models available locally
- `/chat/completions` - OpenAI-compatible chat-style interaction endpoint with support for images, audio, and text
- `/responses` - OpenAI-compatible responses endpoint
- `/health` - Check server status
- `/unload` - Unload the current model from memory

#### Usage Examples

##### List available models

```sh
curl "http://localhost:8080/models"
```

##### Text Input

```sh
curl -X POST "http://localhost:8080/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mlx-community/Qwen2-VL-2B-Instruct-4bit",
    "messages": [
      {
        "role": "user",
        "content": "Hello, how are you?"
      }
    ],
    "stream": true,
    "max_tokens": 100
  }'
```
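The same request can also be sent from a Python script. The sketch below is illustrative rather than part of the package: it assumes the third-party `requests` library, a server started with `mlx_vlm.server` and reachable at the `localhost:8080` address used in the curl examples, and that `/health` and `/unload` accept plain GET/POST calls.

```python
import requests  # assumption: third-party HTTP client, `pip install requests`

BASE_URL = "http://localhost:8080"  # address used in the curl examples

# Check that the server is up (see "Available Endpoints").
# Assumption: /health responds to GET.
print(requests.get(f"{BASE_URL}/health").text)

# Same text-only request as the curl example above, without streaming.
payload = {
    "model": "mlx-community/Qwen2-VL-2B-Instruct-4bit",
    "messages": [{"role": "user", "content": "Hello, how are you?"}],
    "stream": False,
    "max_tokens": 100,
}
response = requests.post(f"{BASE_URL}/chat/completions", json=payload)
print(response.json())

# Free the cached model when done (see "Available Endpoints").
# Assumption: /unload responds to POST.
requests.post(f"{BASE_URL}/unload")
```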
##### Image Input

```sh
curl -X POST "http://localhost:8080/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mlx-community/Qwen2.5-VL-32B-Instruct-8bit",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "This is today'\''s chart for energy demand in California. Can you provide an analysis of the chart and comment on the implications for renewable energy in California?"
          },
          {
            "type": "input_image",
            "image_url": "/path/to/repo/examples/images/renewables_california.png"
          }
        ]
      }
    ],
    "stream": true,
    "max_tokens": 1000
  }'
```

##### Audio Support (New)

```sh
curl -X POST "http://localhost:8080/generate" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mlx-community/gemma-3n-E2B-it-4bit",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe what you hear in these audio files"
          },
          {"type": "input_audio", "input_audio": "/path/to/audio1.wav"},
          {"type": "input_audio", "input_audio": "https://example.com/audio2.mp3"}
        ]
      }
    ],
    "stream": true,
    "max_tokens": 500
  }'
```

##### Multi-Modal (Image + Audio)

```sh
curl -X POST "http://localhost:8080/generate" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mlx-community/gemma-3n-E2B-it-4bit",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "input_image", "image_url": "/path/to/image.jpg"},
          {"type": "input_audio", "input_audio": "/path/to/audio.wav"}
        ]
      }
    ],
    "max_tokens": 100
  }'
```

##### Responses Endpoint

```sh
curl -X POST "http://localhost:8080/responses" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mlx-community/Qwen2-VL-2B-Instruct-4bit",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "input_text", "text": "What is in this image?"},
          {"type": "input_image", "image_url": "/path/to/image.jpg"}
        ]
      }
    ],
    "max_tokens": 100
  }'
```

#### Request Parameters

- `model`: Model identifier (required)
- `messages`: Chat messages for chat/OpenAI endpoints
- `max_tokens`: Maximum number of tokens to generate
- `temperature`: Sampling temperature
- `top_p`: Top-p sampling parameter
- `stream`: Enable streaming responses

## Multi-Image Chat Support

MLX-VLM supports analyzing multiple images simultaneously with select models. This feature enables more complex visual reasoning tasks and comprehensive analysis across multiple images in a single conversation.

### Usage Examples

#### Python Script

```python
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "mlx-community/Qwen2-VL-2B-Instruct-4bit"
model, processor = load(model_path)
config = model.config

images = ["path/to/image1.jpg", "path/to/image2.jpg"]
prompt = "Compare these two images."

formatted_prompt = apply_chat_template(
    processor, config, prompt, num_images=len(images)
)

output = generate(model, processor, formatted_prompt, images, verbose=False)
print(output)
```

#### Command Line

```sh
mlx_vlm.generate --model mlx-community/Qwen2-VL-2B-Instruct-4bit --max-tokens 100 --prompt "Compare these images" --image path/to/image1.jpg path/to/image2.jpg
```

These examples demonstrate how to use multiple images with MLX-VLM for more complex visual reasoning tasks.

## Video Understanding

MLX-VLM also supports video analysis such as captioning, summarization, and more, with select models.

### Supported Models

The following models support video chat:

1. Qwen2-VL
2. Qwen2.5-VL
3. Idefics3
4. LLaVA

With more coming soon.

### Usage Examples

#### Command Line

```sh
mlx_vlm.video_generate --model mlx-community/Qwen2-VL-2B-Instruct-4bit --max-tokens 100 --prompt "Describe this video" --video path/to/video.mp4 --max-pixels 224 224 --fps 1.0
```

## Fine-tuning

MLX-VLM supports fine-tuning models with LoRA and QLoRA.

### LoRA & QLoRA

To learn more about LoRA, please refer to the [LoRA.md](./mlx_vlm/LORA.MD) file.