<img src="/docs/assets/images/logotype.svg" alt="RubyLLM" height="120" width="250">

**A delightful Ruby way to work with AI.** RubyLLM provides **one** beautiful, Ruby-like interface to interact with modern AI models. Chat, generate images, create embeddings, and use tools – all with clean, expressive code that feels like Ruby, not like patching together multiple services.

<div class="provider-icons">
  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/anthropic-text.svg" alt="Anthropic" class="logo-small">
  &nbsp;
  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/bedrock-color.svg" alt="Bedrock" class="logo-medium">
  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/bedrock-text.svg" alt="Bedrock" class="logo-small">
  &nbsp;
  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/deepseek-color.svg" alt="DeepSeek" class="logo-medium">
  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/deepseek-text.svg" alt="DeepSeek" class="logo-small">
  &nbsp;
  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/gemini-brand-color.svg" alt="Gemini" class="logo-large">
  &nbsp;
  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/ollama.svg" alt="Ollama" class="logo-medium">
  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/ollama-text.svg" alt="Ollama" class="logo-medium">
  &nbsp;
  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/openai.svg" alt="OpenAI" class="logo-medium">
  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/openai-text.svg" alt="OpenAI" class="logo-medium">
  &nbsp;
  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/openrouter.svg" alt="OpenRouter" class="logo-medium">
  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/openrouter-text.svg" alt="OpenRouter" class="logo-small">
</div>

<div class="badge-container">
  <a href="https://badge.fury.io/rb/ruby_llm"><img src="https://badge.fury.io/rb/ruby_llm.svg" alt="Gem Version" /></a>
  <a href="https://github.com/testdouble/standard"><img src="https://img.shields.io/badge/code_style-standard-brightgreen.svg" alt="Ruby Style Guide" /></a>
  <a href="https://rubygems.org/gems/ruby_llm"><img alt="Gem Downloads" src="https://img.shields.io/gem/dt/ruby_llm"></a>
  <a href="https://codecov.io/gh/crmne/ruby_llm"><img src="https://codecov.io/gh/crmne/ruby_llm/branch/main/graph/badge.svg" alt="codecov" /></a>
</div>

🤺 Battle tested at [šŸ’¬ Chat with Work](https://chatwithwork.com)

## The problem with AI libraries

Every AI provider comes with its own client library, its own response format, its own conventions for streaming, and its own way of handling errors. Want to use multiple providers? Prepare to juggle incompatible APIs and bloated dependencies.

RubyLLM fixes all that. One beautiful API for everything. One consistent format. Minimal dependencies: just Faraday, Zeitwerk, and Marcel. Because working with AI should be a joy, not a chore.
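In practice, switching providers is a one-line change. A minimal sketch, assuming both keys are configured; the model IDs below are illustrative, so substitute whichever models your providers expose:

```ruby
# Same interface, different providers: pick a model, keep your code.
gpt    = RubyLLM.chat(model: 'gpt-4.1-nano')      # OpenAI
claude = RubyLLM.chat(model: 'claude-3-5-haiku')  # Anthropic

question = "What's the best way to learn Ruby?"
puts gpt.ask(question).content
puts claude.ask(question).content
```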
## What makes it great

```ruby
# Just ask questions
chat = RubyLLM.chat
chat.ask "What's the best way to learn Ruby?"

# Analyze images, audio, documents, and text files
chat.ask "What's in this image?", with: "ruby_conf.jpg"
chat.ask "Describe this meeting", with: "meeting.wav"
chat.ask "Summarize this document", with: "contract.pdf"
chat.ask "Explain this code", with: "app.rb"

# Multiple files at once - types automatically detected
chat.ask "Analyze these files", with: ["diagram.png", "report.pdf", "notes.txt"]

# Stream responses in real-time
chat.ask "Tell me a story about a Ruby programmer" do |chunk|
  print chunk.content
end

# Generate images
RubyLLM.paint "a sunset over mountains in watercolor style"

# Create vector embeddings
RubyLLM.embed "Ruby is elegant and expressive"

# Let AI use your code
class Weather < RubyLLM::Tool
  description "Gets current weather for a location"
  param :latitude, desc: "Latitude (e.g., 52.5200)"
  param :longitude, desc: "Longitude (e.g., 13.4050)"

  def execute(latitude:, longitude:)
    url = "https://api.open-meteo.com/v1/forecast?latitude=#{latitude}&longitude=#{longitude}&current=temperature_2m,wind_speed_10m"

    response = Faraday.get(url)
    JSON.parse(response.body)
  rescue => e
    # Method-level rescue: return the error to the model instead of raising
    { error: e.message }
  end
end

chat.with_tool(Weather).ask "What's the weather in Berlin? (52.5200, 13.4050)"
```

## Core Capabilities

* šŸ’¬ **Unified Chat:** Converse with models from OpenAI, Anthropic, Gemini, Bedrock, OpenRouter, DeepSeek, Ollama, or any OpenAI-compatible API using `RubyLLM.chat`.
* šŸ‘ļø **Vision:** Analyze images within chats.
* šŸ”Š **Audio:** Transcribe and understand audio content.
* šŸ“„ **Document Analysis:** Extract information from PDFs, text files, and other documents.
* šŸ–¼ļø **Image Generation:** Create images with `RubyLLM.paint`.
* šŸ“Š **Embeddings:** Generate text embeddings for vector search with `RubyLLM.embed`.
* šŸ”§ **Tools (Function Calling):** Let AI models call your Ruby code using `RubyLLM::Tool`.
* šŸš‚ **Rails Integration:** Easily persist chats, messages, and tool calls using `acts_as_chat` and `acts_as_message`.
* 🌊 **Streaming:** Process responses in real-time with idiomatic Ruby blocks.

## Installation

Add to your Gemfile:

```ruby
gem 'ruby_llm'
```

Then `bundle install`.

Configure your API keys (using environment variables is recommended):

```ruby
# config/initializers/ruby_llm.rb or similar
RubyLLM.configure do |config|
  config.openai_api_key = ENV.fetch('OPENAI_API_KEY', nil)
  # Add keys ONLY for providers you intend to use
  # config.anthropic_api_key = ENV.fetch('ANTHROPIC_API_KEY', nil)
  # ... see the Configuration guide for all options ...
end
```

See the [Installation Guide](https://rubyllm.com/installation) for full details.
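Once at least one key is configured, a quick sanity check confirms everything is wired up. This is a sketch: it assumes `OPENAI_API_KEY` is set, and relies on `ask` returning a message whose `content` holds the reply, as in the streaming example above:

```ruby
# Smoke test after configuration.
chat = RubyLLM.chat
response = chat.ask "Reply with the single word: pong"
puts response.content
```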
## Rails Integration

Add persistence to your chat models effortlessly:

```ruby
# app/models/chat.rb
class Chat < ApplicationRecord
  acts_as_chat # Automatically saves messages & tool calls
  # ... your other model logic ...
end

# app/models/message.rb
class Message < ApplicationRecord
  acts_as_message
  # ...
end

# app/models/tool_call.rb (if using tools)
class ToolCall < ApplicationRecord
  acts_as_tool_call
  # ...
end

# Now interacting with a Chat record persists the conversation:
chat_record = Chat.create!(model_id: "gpt-4.1-nano")
chat_record.ask("Explain Active Record callbacks.") # User & Assistant messages saved

# Works seamlessly with file attachments - types automatically detected
chat_record.ask("What's in this file?", with: "report.pdf")
chat_record.ask("Analyze these", with: ["image.jpg", "data.csv", "notes.txt"])
```
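Persisted chats keep the rest of the RubyLLM API. A sketch, assuming the `acts_as_chat` model delegates chat methods such as streaming blocks and `with_tool` (with `Weather` being the tool class defined earlier):

```ruby
# Streaming on a persisted chat; the assistant message is
# saved once the stream completes.
chat_record.ask("Tell me a story about a Ruby programmer") do |chunk|
  print chunk.content
end

# Tool calls are persisted via the acts_as_tool_call model.
chat_record.with_tool(Weather).ask("What's the weather in Berlin? (52.5200, 13.4050)")
```

Check the [Rails Integration Guide](https://rubyllm.com/guides/rails) for more.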
"},{"id":23,"name":"graphql"},{"id":84,"name":"gui"},{"id":91,"name":"http"},{"id":5,"name":"http-client"},{"id":51,"name":"iac"},{"id":30,"name":"ide"},{"id":78,"name":"iot"},{"id":40,"name":"json"},{"id":83,"name":"julian"},{"id":38,"name":"k8s"},{"id":31,"name":"language"},{"id":10,"name":"learning-resource"},{"id":33,"name":"lib"},{"id":41,"name":"linter"},{"id":28,"name":"lms"},{"id":16,"name":"logging"},{"id":76,"name":"low-code"},{"id":90,"name":"message-queue"},{"id":42,"name":"mobile-app"},{"id":18,"name":"monitoring"},{"id":36,"name":"networking"},{"id":7,"name":"node-version"},{"id":55,"name":"nosql"},{"id":57,"name":"observability"},{"id":46,"name":"orm"},{"id":52,"name":"os"},{"id":14,"name":"parser"},{"id":74,"name":"react"},{"id":82,"name":"real-time"},{"id":56,"name":"robot"},{"id":65,"name":"runtime"},{"id":32,"name":"sdk"},{"id":71,"name":"search"},{"id":63,"name":"secrets"},{"id":25,"name":"security"},{"id":85,"name":"server"},{"id":86,"name":"serverless"},{"id":70,"name":"storage"},{"id":75,"name":"system-design"},{"id":79,"name":"terminal"},{"id":29,"name":"testing"},{"id":12,"name":"ui"},{"id":50,"name":"ux"},{"id":88,"name":"video"},{"id":20,"name":"web-app"},{"id":35,"name":"web-server"},{"id":43,"name":"webassembly"},{"id":69,"name":"workflow"},{"id":87,"name":"yaml"}]" returns me the "expected json"