[![PyPI - Version](https://img.shields.io/pypi/v/llama-deploy.svg)](https://pypi.org/project/llama-deploy)
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/llama-deploy.svg)](https://pypi.org/project/llama-deploy)
[![Static Badge](https://img.shields.io/badge/docs-latest-blue)](https://docs.llamaindex.ai/en/latest/module_guides/llama_deploy/)
[![Unit Testing](https://github.com/run-llama/llama_deploy/actions/workflows/unit_test.yml/badge.svg)](https://github.com/run-llama/llama_deploy/actions/workflows/unit_test.yml)
[![E2E Testing](https://github.com/run-llama/llama_deploy/actions/workflows/e2e_test.yml/badge.svg)](https://github.com/run-llama/llama_deploy/actions/workflows/e2e_test.yml)
[![Coverage Status](https://coveralls.io/repos/github/run-llama/llama_deploy/badge.svg?branch=main)](https://coveralls.io/github/run-llama/llama_deploy?branch=main)
# 🦙 LlamaDeploy 🤖

Deploy your agentic workflows to production.
LlamaDeploy (formerly `llama-agents`) is an async-first framework for deploying, scaling, and productionizing agentic
multi-service systems based on [workflows from `llama_index`](https://docs.llamaindex.ai/en/stable/understanding/workflows/).
With LlamaDeploy, you can build any number of workflows in `llama_index` and then run them as services, accessible
through an HTTP API by a user interface or by other services that are part of your system.
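
For context, a workflow is just a Python class built with `llama_index` primitives. The sketch below is adapted from the quick start example in this repository; the `EchoWorkflow` name and the `message` argument are illustrative:

```python
# A minimal llama_index workflow. The same class runs locally in a
# notebook and, unchanged, as a LlamaDeploy service.
import asyncio

from llama_index.core.workflow import StartEvent, StopEvent, Workflow, step


class EchoWorkflow(Workflow):
    """A trivial workflow that echoes back the input message."""

    @step()
    async def run_step(self, ev: StartEvent) -> StopEvent:
        message = str(ev.get("message", ""))
        return StopEvent(result=f"Message received: {message}")


async def main() -> None:
    # Local smoke test before deploying the workflow as a service.
    result = await EchoWorkflow().run(message="Hello!")
    print(result)


if __name__ == "__main__":
    asyncio.run(main())
```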
The goal of LlamaDeploy is to make it easy to transition something you built in a notebook into something running in
the cloud, with minimal changes to the original code, possibly none. To make this transition a
pleasant one, you can interact with LlamaDeploy in two ways:
- Using the [`llamactl`](https://docs.llamaindex.ai/en/latest/module_guides/llama_deploy/50_llamactl/) CLI from a shell.
- Through the [_LlamaDeploy SDK_](https://docs.llamaindex.ai/en/latest/module_guides/llama_deploy/40_python_sdk/) from a Python application or script.
Both the SDK and the CLI are part of the LlamaDeploy Python package. To install, just run:
```bash
pip install llama_deploy
```
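
Once a deployment is up (see the examples below), interacting with it from Python can look like the following sketch. The deployment and service names, the default control plane address, and the `message` argument are assumptions borrowed from the quick start example, not a fixed API:

```python
# A minimal SDK sketch; assumes a LlamaDeploy control plane is running
# at its default address and a service named "echo_workflow" is deployed.
from llama_deploy import ControlPlaneConfig, LlamaDeployClient

client = LlamaDeployClient(ControlPlaneConfig())
session = client.create_session()  # sessions carry state across runs
result = session.run("echo_workflow", message="Hello!")
print(result)
```

The same interaction from a shell would be along the lines of `llamactl run --deployment QuickStart --arg message 'Hello!'`.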
> [!TIP]
> For a comprehensive guide to LlamaDeploy's architecture and detailed descriptions of its components, visit our
> [official documentation](https://docs.llamaindex.ai/en/latest/module_guides/llama_deploy/).
## Why LlamaDeploy?
1. **Seamless Deployment**: It bridges the gap between development and production, allowing you to deploy `llama_index`
workflows with minimal changes to your code.
2. **Scalability**: The microservices architecture enables easy scaling of individual components as your system grows.
3. **Flexibility**: By using a hub-and-spoke architecture, you can easily swap out components (like message queues) or
add new services without disrupting the entire system.
4. **Fault Tolerance**: With built-in retry mechanisms and failure handling, LlamaDeploy adds robustness in
production environments.
5. **State Management**: The control plane manages state across services, simplifying complex multi-step processes.
6. **Async-First**: Designed for high-concurrency scenarios, making it suitable for real-time and high-throughput
applications.
> [!NOTE]
> This project was initially released under the name `llama-agents`, but the introduction of [Workflows](https://docs.llamaindex.ai/en/stable/module_guides/workflow/#workflows) in `llama_index` turned out to be the most intuitive way for our users to develop agentic applications. We then decided to add new agentic features in `llama_index` directly, and focus LlamaDeploy on closing the gap between local development and remote execution of agents as services.
## Getting Started
The fastest way to get started with LlamaDeploy is to play with a practical example. This repository contains a few applications you can use as a reference:
- [Quick start](examples/quick_start)
- [Use a deployment from a web-based user interface](examples/python_fullstack)
- [Message queue examples](examples/message-queue-integrations)
We recommend starting with the [Quick start](examples/quick_start) example and moving
to [Use a deployment from a web-based user interface](examples/python_fullstack)
immediately after. Each folder contains a README file that will guide you through
the process.
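
To give a flavor of what a deployment looks like, the quick start describes it with a YAML file along these lines. This is a sketch: field values such as the source path and the `workflow:echo_workflow` module path are placeholders to adapt to your project:

```yaml
name: QuickStart

control-plane:
  port: 8000

default-service: echo_workflow

services:
  echo_workflow:
    name: Echo Workflow
    # Where to fetch the workflow code from, and the `module:variable`
    # path to the workflow instance inside it.
    source:
      type: local
      name: ./src
    path: workflow:echo_workflow
```

You would then create the deployment with something like `llamactl deploy quick_start.yml` (file name assumed).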
", Assign "at most 3 tags" to the expected json: {"id":"11722","tags":[]} "only from the tags list I provide: [{"id":77,"name":"3d"},{"id":89,"name":"agent"},{"id":17,"name":"ai"},{"id":54,"name":"algorithm"},{"id":24,"name":"api"},{"id":44,"name":"authentication"},{"id":3,"name":"aws"},{"id":27,"name":"backend"},{"id":60,"name":"benchmark"},{"id":72,"name":"best-practices"},{"id":39,"name":"bitcoin"},{"id":37,"name":"blockchain"},{"id":1,"name":"blog"},{"id":45,"name":"bundler"},{"id":58,"name":"cache"},{"id":21,"name":"chat"},{"id":49,"name":"cicd"},{"id":4,"name":"cli"},{"id":64,"name":"cloud-native"},{"id":48,"name":"cms"},{"id":61,"name":"compiler"},{"id":68,"name":"containerization"},{"id":92,"name":"crm"},{"id":34,"name":"data"},{"id":47,"name":"database"},{"id":8,"name":"declarative-gui "},{"id":9,"name":"deploy-tool"},{"id":53,"name":"desktop-app"},{"id":6,"name":"dev-exp-lib"},{"id":59,"name":"dev-tool"},{"id":13,"name":"ecommerce"},{"id":26,"name":"editor"},{"id":66,"name":"emulator"},{"id":62,"name":"filesystem"},{"id":80,"name":"finance"},{"id":15,"name":"firmware"},{"id":73,"name":"for-fun"},{"id":2,"name":"framework"},{"id":11,"name":"frontend"},{"id":22,"name":"game"},{"id":81,"name":"game-engine "},{"id":23,"name":"graphql"},{"id":84,"name":"gui"},{"id":91,"name":"http"},{"id":5,"name":"http-client"},{"id":51,"name":"iac"},{"id":30,"name":"ide"},{"id":78,"name":"iot"},{"id":40,"name":"json"},{"id":83,"name":"julian"},{"id":38,"name":"k8s"},{"id":31,"name":"language"},{"id":10,"name":"learning-resource"},{"id":33,"name":"lib"},{"id":41,"name":"linter"},{"id":28,"name":"lms"},{"id":16,"name":"logging"},{"id":76,"name":"low-code"},{"id":90,"name":"message-queue"},{"id":42,"name":"mobile-app"},{"id":18,"name":"monitoring"},{"id":36,"name":"networking"},{"id":7,"name":"node-version"},{"id":55,"name":"nosql"},{"id":57,"name":"observability"},{"id":46,"name":"orm"},{"id":52,"name":"os"},{"id":14,"name":"parser"},{"id":74,"name":"react"},{"id":82,"name":"real-time"},{"id":56,"name":"robot"},{"id":65,"name":"runtime"},{"id":32,"name":"sdk"},{"id":71,"name":"search"},{"id":63,"name":"secrets"},{"id":25,"name":"security"},{"id":85,"name":"server"},{"id":86,"name":"serverless"},{"id":70,"name":"storage"},{"id":75,"name":"system-design"},{"id":79,"name":"terminal"},{"id":29,"name":"testing"},{"id":12,"name":"ui"},{"id":50,"name":"ux"},{"id":88,"name":"video"},{"id":20,"name":"web-app"},{"id":35,"name":"web-server"},{"id":43,"name":"webassembly"},{"id":69,"name":"workflow"},{"id":87,"name":"yaml"}]" returns me the "expected json"