<p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="docs/logo/dark.png" />
<source media="(prefers-color-scheme: light)" srcset="docs/logo/light.png" />
<img alt="PraisonAI Logo" src="docs/logo/light.png" />
</picture>
</p>
<!-- mcp-name: io.github.MervinPraison/praisonai -->
<p align="center">
<a href="https://github.com/MervinPraison/PraisonAI"><img src="https://static.pepy.tech/badge/PraisonAI" alt="Total Downloads" /></a>
<a href="https://github.com/MervinPraison/PraisonAI"><img src="https://img.shields.io/github/v/release/MervinPraison/PraisonAI" alt="Latest Stable Version" /></a>
<a href="https://github.com/MervinPraison/PraisonAI"><img src="https://img.shields.io/badge/License-MIT-yellow.svg" alt="License" /></a>
<a href="https://registry.modelcontextprotocol.io/servers/io.github.MervinPraison/praisonai"><img src="https://img.shields.io/badge/MCP-Registry-blue" alt="MCP Registry" /></a>
</p>
<div align="center">
# Praison AI
<a href="https://trendshift.io/repositories/9130" target="_blank"><img src="https://trendshift.io/api/badge/repositories/9130" alt="MervinPraison%2FPraisonAI | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
</div>
PraisonAI is a production-ready Multi-AI Agents framework with self-reflection, designed to create AI Agents to automate and solve problems ranging from simple tasks to complex challenges. By integrating PraisonAI Agents, AG2 (Formerly AutoGen), and CrewAI into a low-code solution, it streamlines the building and management of multi-agent LLM systems, emphasising simplicity, customisation, and effective human-agent collaboration.
<div align="center">
<a href="https://docs.praison.ai">
<p align="center">
<img src="https://img.shields.io/badge/๐_Documentation-Visit_docs.praison.ai-blue?style=for-the-badge&logo=bookstack&logoColor=white" alt="Documentation" />
</p>
</a>
</div>
---
> **Quick Paths:**
> - 🚀 **New here?** → [Quick Start](#-quick-start)
> - 📦 **Installing?** → [Installation](#-installation)
> - 💻 **Python SDK?** → [Python Examples](#-using-python-code)
> - 🎯 **CLI user?** → [CLI Reference](#-cli--no-code-interface)
> - 🔧 **Need config?** → [Configuration](#-configuration--integration)
> - 🤝 **Contributing?** → [Development](#-development)
---
## 📚 Table of Contents
### Getting Started
- [⚡ Performance](#-performance)
- [🚀 Quick Start](#-quick-start)
- [📦 Installation](#-installation)
- [Python SDK](#python-sdk)
- [JavaScript SDK](#javascript-sdk)
- [Environment Variables](#environment-variables)
### Core Concepts
- [✨ Key Features](#-key-features)
- [🌐 Supported Providers](#-supported-providers)
### Python SDK
- [🐍 Python Examples](#-using-python-code)
- [1. Single Agent](#1-single-agent)
- [2. Multi Agents](#2-multi-agents)
- [3. Planning Mode](#3-agent-with-planning-mode)
- [4. Deep Research](#4-deep-research-agent)
- [5. Query Rewriter](#5-query-rewriter-agent)
- [6. Agent Memory](#6-agent-memory-zero-dependencies)
- [7. Rules & Instructions](#7-rules--instructions)
- [8. Auto-Generated Memories](#8-auto-generated-memories)
- [9. Agentic Workflows](#9-agentic-workflows)
- [10. Hooks](#10-hooks)
- [11. Shadow Git Checkpoints](#11-shadow-git-checkpoints)
- [12. Background Tasks](#12-background-tasks)
- [13. Policy Engine](#13-policy-engine)
- [14. Thinking Budgets](#14-thinking-budgets)
- [15. Output Styles](#15-output-styles)
- [16. Context Compaction](#16-context-compaction)
- [17. Field Names Reference](#17-field-names-reference-a-i-g-s)
- [18. Extended agents.yaml](#18-extended-agentsyaml-with-workflow-patterns)
- [19. MCP Protocol](#19-mcp-model-context-protocol)
- [20. A2A Protocol](#20-a2a-agent2agent-protocol)
- [🛠️ Custom Tools](#๏ธ-custom-tools)
### JavaScript SDK
- [💻 JavaScript Examples](#-using-javascript-code)
### CLI Reference
- [🎯 CLI Overview](#-cli--no-code-interface)
- [Auto Mode](#auto-mode)
- [Interactive Mode](#interactive-mode-cli)
- [Deep Research CLI](#deep-research-cli)
- [Planning Mode CLI](#planning-mode-cli)
- [Memory CLI](#memory-cli)
- [Workflow CLI](#workflow-cli)
- [Knowledge CLI](#knowledge-cli)
- [Session CLI](#session-cli)
- [Tools CLI](#tools-cli)
- [MCP Config CLI](#mcp-config-cli)
- [External Agents CLI](#external-agents-cli)
- [CLI Features Summary](#cli-features)
### Configuration
- [🔧 Configuration & Integration](#-configuration--integration)
- [Ollama Integration](#ollama-integration)
- [Groq Integration](#groq-integration)
- [100+ Models Support](#100-models-support)
- [📖 Agents Playbook](#-agents-playbook)
### Advanced Features
- [🔬 Advanced Features](#-advanced-features)
- [Research & Intelligence](#research--intelligence)
- [Memory & Caching](#memory--caching)
- [Planning & Workflows](#planning--workflows)
- [Specialised Agents](#specialised-agents)
- [MCP Protocol](#mcp-protocol)
- [A2A & A2UI Protocols](#a2a--a2ui-protocols)
- [Safety & Control](#safety--control)
- [Developer Tools](#developer-tools)
### Architecture
- [📊 Process Types & Patterns](#-process-types--patterns)
- [Sequential Process](#sequential-process)
- [Hierarchical Process](#hierarchical-process)
- [Workflow Process](#workflow-process)
- [Agentic Patterns](#agentic-patterns)
### Data & Persistence
- [💾 Persistence (Databases)](#-persistence-databases)
- [📚 Knowledge & Retrieval (RAG)](#-knowledge--retrieval-rag)
- [🔧 Tools Table](#-tools-table)
### Learning & Community
- [🎥 Video Tutorials](#-video-tutorials)
- [⭐ Star History](#-star-history)
- [👥 Contributing](#-contributing)
- [🔧 Development](#-development)
- [❓ FAQ & Troubleshooting](#-faq--troubleshooting)
---
## ⚡ Performance
PraisonAI Agents has the **fastest agent instantiation time** of the frameworks in the benchmark below.
| Framework | Avg Time (μs) | Relative |
|-----------|---------------|----------|
| **PraisonAI** | **3.77** | **1.00x (fastest)** |
| OpenAI Agents SDK | 5.26 | 1.39x |
| Agno | 5.64 | 1.49x |
| PraisonAI (LiteLLM) | 7.56 | 2.00x |
| PydanticAI | 226.94 | 60.16x |
| LangGraph | 4,558.71 | 1,209x |
| CrewAI | 15,607.92 | 4,138x |
<details>
<summary>Run benchmarks yourself</summary>
```bash
cd praisonai-agents
python benchmarks/simple_benchmark.py
```
</details>
---
## 🚀 Quick Start
Get started with PraisonAI in under 1 minute:
```bash
# Install
pip install praisonaiagents
# Set API key
export OPENAI_API_KEY=your_key_here
# Create a simple agent
python -c "from praisonaiagents import Agent; Agent(instructions='You are a helpful AI assistant').start('Write a haiku about AI')"
```
---
## 📦 Installation
### Python SDK
A lightweight package dedicated to coding:
```bash
pip install praisonaiagents
```
For the full framework with CLI support:
```bash
pip install praisonai
```
### JavaScript SDK
```bash
npm install praisonai
```
### Environment Variables
| Variable | Required | Description |
|----------|----------|-------------|
| `OPENAI_API_KEY` | Yes* | OpenAI API key |
| `ANTHROPIC_API_KEY` | No | Anthropic Claude API key |
| `GOOGLE_API_KEY` | No | Google Gemini API key |
| `GROQ_API_KEY` | No | Groq API key |
| `OPENAI_BASE_URL` | No | Custom API endpoint (for Ollama, Groq, etc.) |
> *At least one LLM provider API key is required.
```bash
# Set your API key
export OPENAI_API_KEY=your_key_here
# For Ollama (local models)
export OPENAI_BASE_URL=http://localhost:11434/v1
# For Groq
export OPENAI_API_KEY=your_groq_key
export OPENAI_BASE_URL=https://api.groq.com/openai/v1
```
---
## 💻 Usage
### Python Code Examples
- [🐍 Complete Python Examples](#-using-python-code)
- [🔧 Custom Tools](#๏ธ-custom-tools)
- [🧠 Memory Configuration](#-memory--context)
### CLI / No-Code Interface
- [🎯 Auto Mode](#auto-mode)
- [🔬 Deep Research](#deep-research-cli)
- [📋 Planning Mode](#planning-mode-cli)
- [💾 Memory Management](#memory-cli)
- [🔄 Workflow Execution](#workflow-cli)
### JavaScript Code Examples
- [💻 JavaScript Usage](#-using-javascript-code)
---
## ✨ Key Features
<details open>
<summary><strong>๐ค Core Agents</strong></summary>
| Feature | Code | Docs |
|---------|:----:|:----:|
| Single Agent | [Example](examples/python/agents/single-agent.py) | [๐](https://docs.praison.ai/agents/single) |
| Multi Agents | [Example](examples/python/general/mini_agents_example.py) | [๐](https://docs.praison.ai/concepts/agents) |
| Auto Agents | [Example](examples/python/general/auto_agents_example.py) | [๐](https://docs.praison.ai/features/autoagents) |
| Self Reflection AI Agents | [Example](examples/python/concepts/self-reflection-details.py) | [๐](https://docs.praison.ai/features/selfreflection) |
| Reasoning AI Agents | [Example](examples/python/concepts/reasoning-extraction.py) | [๐](https://docs.praison.ai/features/reasoning) |
| Multi Modal AI Agents | [Example](examples/python/general/multimodal.py) | [๐](https://docs.praison.ai/features/multimodal) |
</details>
<details>
<summary><strong>๐ Workflows</strong></summary>
| Feature | Code | Docs |
|---------|:----:|:----:|
| Simple Workflow | [Example](examples/python/workflows/simple_workflow.py) | [๐](https://docs.praison.ai/features/workflows) |
| Workflow with Agents | [Example](examples/python/workflows/workflow_with_agents.py) | [๐](https://docs.praison.ai/features/workflows) |
| Agentic Routing (`route()`) | [Example](examples/python/workflows/workflow_routing.py) | [๐](https://docs.praison.ai/features/routing) |
| Parallel Execution (`parallel()`) | [Example](examples/python/workflows/workflow_parallel.py) | [๐](https://docs.praison.ai/features/parallelisation) |
| Loop over List/CSV (`loop()`) | [Example](examples/python/workflows/workflow_loop_csv.py) | [๐](https://docs.praison.ai/features/repetitive) |
| Evaluator-Optimizer (`repeat()`) | [Example](examples/python/workflows/workflow_repeat.py) | [๐](https://docs.praison.ai/features/evaluator-optimiser) |
| Conditional Steps | [Example](examples/python/workflows/workflow_conditional.py) | [๐](https://docs.praison.ai/features/workflows) |
| Workflow Branching | [Example](examples/python/workflows/workflow_branching.py) | [๐](https://docs.praison.ai/features/workflows) |
| Workflow Early Stop | [Example](examples/python/workflows/workflow_early_stop.py) | [๐](https://docs.praison.ai/features/workflows) |
| Workflow Checkpoints | [Example](examples/python/workflows/workflow_checkpoints.py) | [๐](https://docs.praison.ai/features/workflows) |
</details>
<details>
<summary><strong>๐ป Code & Development</strong></summary>
| Feature | Code | Docs |
|---------|:----:|:----:|
| Code Interpreter Agents | [Example](examples/python/agents/code-agent.py) | [๐](https://docs.praison.ai/features/codeagent) |
| AI Code Editing Tools | [Example](examples/python/code/code_editing_example.py) | [๐](https://docs.praison.ai/code/editing) |
| External Agents (All) | [Example](examples/python/code/external_agents_example.py) | [๐](https://docs.praison.ai/code/external-agents) |
| Claude Code CLI | [Example](examples/python/code/claude_code_example.py) | [๐](https://docs.praison.ai/code/claude-code) |
| Gemini CLI | [Example](examples/python/code/gemini_cli_example.py) | [๐](https://docs.praison.ai/code/gemini-cli) |
| Codex CLI | [Example](examples/python/code/codex_cli_example.py) | [๐](https://docs.praison.ai/code/codex-cli) |
| Cursor CLI | [Example](examples/python/code/cursor_cli_example.py) | [๐](https://docs.praison.ai/code/cursor-cli) |
</details>
<details>
<summary><strong>๐ง Memory & Knowledge</strong></summary>
| Feature | Code | Docs |
|---------|:----:|:----:|
| Memory (Short & Long Term) | [Example](examples/python/general/memory_example.py) | [๐](https://docs.praison.ai/concepts/memory) |
| File-Based Memory | [Example](examples/python/general/memory_example.py) | [๐](https://docs.praison.ai/concepts/memory) |
| Claude Memory Tool | [Example](#claude-memory-tool-cli) | [๐](https://docs.praison.ai/features/claude-memory-tool) |
| Add Custom Knowledge | [Example](examples/python/concepts/knowledge-agents.py) | [๐](https://docs.praison.ai/features/knowledge) |
| RAG Agents | [Example](examples/python/concepts/rag-agents.py) | [๐](https://docs.praison.ai/features/rag) |
| Chat with PDF Agents | [Example](examples/python/concepts/chat-with-pdf.py) | [๐](https://docs.praison.ai/features/chat-with-pdf) |
| Data Readers (PDF, DOCX, etc.) | [CLI](#knowledge-cli) | [๐](https://docs.praison.ai/api/praisonai/knowledge-readers-api) |
| Vector Store Selection | [CLI](#knowledge-cli) | [๐](https://docs.praison.ai/api/praisonai/knowledge-vector-store-api) |
| Retrieval Strategies | [CLI](#knowledge-cli) | [๐](https://docs.praison.ai/api/praisonai/knowledge-retrieval-api) |
| Rerankers | [CLI](#knowledge-cli) | [๐](https://docs.praison.ai/api/praisonai/knowledge-reranker-api) |
| Index Types (Vector/Keyword/Hybrid) | [CLI](#knowledge-cli) | [๐](https://docs.praison.ai/api/praisonai/knowledge-index-api) |
| Query Engines (Sub-Question, etc.) | [CLI](#knowledge-cli) | [๐](https://docs.praison.ai/api/praisonai/knowledge-query-engine-api) |
</details>
<details>
<summary><strong>๐ฌ Research & Intelligence</strong></summary>
| Feature | Code | Docs |
|---------|:----:|:----:|
| Deep Research Agents | [Example](examples/python/agents/research-agent.py) | [๐](https://docs.praison.ai/agents/deep-research) |
| Query Rewriter Agent | [Example](#5-query-rewriter-agent) | [๐](https://docs.praison.ai/agents/query-rewriter) |
| Native Web Search | [Example](examples/python/agents/websearch-agent.py) | [๐](https://docs.praison.ai/agents/websearch) |
| Built-in Search Tools | [Example](examples/python/agents/websearch-agent.py) | [๐](https://docs.praison.ai/tools/tavily) |
| Unified Web Search | [Example](src/praisonai-agents/examples/web_search_example.py) | [๐](https://docs.praison.ai/tools/web-search) |
| Web Fetch (Anthropic) | [Example](#web-search-web-fetch--prompt-caching) | [๐](https://docs.praison.ai/features/model-capabilities) |
</details>
<details>
<summary><strong>๐ Planning & Execution</strong></summary>
| Feature | Code | Docs |
|---------|:----:|:----:|
| Planning Mode | [Example](examples/python/agents/planning-agent.py) | [๐](https://docs.praison.ai/features/planning-mode) |
| Planning Tools | [Example](#3-agent-with-planning-mode) | [๐](https://docs.praison.ai/features/planning-mode) |
| Planning Reasoning | [Example](#3-agent-with-planning-mode) | [๐](https://docs.praison.ai/features/planning-mode) |
| Prompt Chaining | [Example](examples/python/general/prompt_chaining.py) | [๐](https://docs.praison.ai/features/promptchaining) |
| Evaluator Optimiser | [Example](examples/python/general/evaluator-optimiser.py) | [๐](https://docs.praison.ai/features/evaluator-optimiser) |
| Orchestrator Workers | [Example](examples/python/general/orchestrator-workers.py) | [๐](https://docs.praison.ai/features/orchestrator-worker) |
</details>
<details>
<summary><strong>๐ฅ Specialized Agents</strong></summary>
| Feature | Code | Docs |
|---------|:----:|:----:|
| Data Analyst Agent | [Example](examples/python/agents/data-analyst-agent.py) | [๐](https://docs.praison.ai/agents/data-analyst) |
| Finance Agent | [Example](examples/python/agents/finance-agent.py) | [๐](https://docs.praison.ai/agents/finance) |
| Shopping Agent | [Example](examples/python/agents/shopping-agent.py) | [๐](https://docs.praison.ai/agents/shopping) |
| Recommendation Agent | [Example](examples/python/agents/recommendation-agent.py) | [๐](https://docs.praison.ai/agents/recommendation) |
| Wikipedia Agent | [Example](examples/python/agents/wikipedia-agent.py) | [๐](https://docs.praison.ai/agents/wikipedia) |
| Programming Agent | [Example](examples/python/agents/programming-agent.py) | [๐](https://docs.praison.ai/agents/programming) |
| Math Agents | [Example](examples/python/agents/math-agent.py) | [๐](https://docs.praison.ai/features/mathagent) |
| Markdown Agent | [Example](examples/python/agents/markdown-agent.py) | [๐](https://docs.praison.ai/agents/markdown) |
| Prompt Expander Agent | [Example](#prompt-expansion) | [๐](https://docs.praison.ai/agents/prompt-expander) |
</details>
<details>
<summary><strong>๐จ Media & Multimodal</strong></summary>
| Feature | Code | Docs |
|---------|:----:|:----:|
| Image Generation Agent | [Example](examples/python/image/image-agent.py) | [๐](https://docs.praison.ai/features/image-generation) |
| Image to Text Agent | [Example](examples/python/agents/image-to-text-agent.py) | [๐](https://docs.praison.ai/agents/image-to-text) |
| Video Agent | [Example](examples/python/agents/video-agent.py) | [๐](https://docs.praison.ai/agents/video) |
| Camera Integration | [Example](examples/python/camera/) | [๐](https://docs.praison.ai/features/camera-integration) |
</details>
<details>
<summary><strong>๐ Protocols & Integration</strong></summary>
| Feature | Code | Docs |
|---------|:----:|:----:|
| MCP Transports | [Example](examples/python/mcp/mcp-transports-overview.py) | [๐](https://docs.praison.ai/mcp/transports) |
| WebSocket MCP | [Example](examples/python/mcp/websocket-mcp.py) | [๐](https://docs.praison.ai/mcp/sse-transport) |
| MCP Security | [Example](examples/python/mcp/mcp-security.py) | [๐](https://docs.praison.ai/mcp/transports) |
| MCP Resumability | [Example](examples/python/mcp/mcp-resumability.py) | [๐](https://docs.praison.ai/mcp/sse-transport) |
| MCP Config Management | [Example](#mcp-config-cli) | [๐](https://docs.praison.ai/docs/cli/mcp) |
| LangChain Integrated Agents | [Example](examples/python/general/langchain_example.py) | [๐](https://docs.praison.ai/features/langchain) |
</details>
<details>
<summary><strong>๐ก๏ธ Safety & Control</strong></summary>
| Feature | Code | Docs |
|---------|:----:|:----:|
| Guardrails | [Example](examples/python/guardrails/comprehensive-guardrails-example.py) | [๐](https://docs.praison.ai/features/guardrails) |
| Human Approval | [Example](examples/python/general/human_approval_example.py) | [๐](https://docs.praison.ai/features/approval) |
| Rules & Instructions | [Example](#7-rules--instructions) | [๐](https://docs.praison.ai/features/rules) |
</details>
<details>
<summary><strong>โ๏ธ Advanced Features</strong></summary>
| Feature | Code | Docs |
|---------|:----:|:----:|
| Async & Parallel Processing | [Example](examples/python/general/async_example.py) | [๐](https://docs.praison.ai/features/async) |
| Parallelisation | [Example](examples/python/general/parallelisation.py) | [๐](https://docs.praison.ai/features/parallelisation) |
| Repetitive Agents | [Example](examples/python/concepts/repetitive-agents.py) | [๐](https://docs.praison.ai/features/repetitive) |
| Agent Handoffs | [Example](examples/python/handoff/handoff_basic.py) | [๐](https://docs.praison.ai/features/handoffs) |
| Stateful Agents | [Example](examples/python/stateful/workflow-state-example.py) | [๐](https://docs.praison.ai/features/stateful-agents) |
| Autonomous Workflow | [Example](examples/python/general/autonomous-agent.py) | [๐](https://docs.praison.ai/features/autonomous-workflow) |
| Structured Output Agents | [Example](examples/python/general/structured_agents_example.py) | [๐](https://docs.praison.ai/features/structured) |
| Model Router | [Example](examples/python/agents/router-agent-cost-optimization.py) | [๐](https://docs.praison.ai/features/model-router) |
| Prompt Caching | [Example](#web-search-web-fetch--prompt-caching) | [๐](https://docs.praison.ai/features/model-capabilities) |
| Fast Context | [Example](examples/context/00_agent_fast_context_basic.py) | [๐](https://docs.praison.ai/features/fast-context) |
</details>
<details>
<summary><strong>๐ ๏ธ Tools & Configuration</strong></summary>
| Feature | Code | Docs |
|---------|:----:|:----:|
| 100+ Custom Tools | [Example](examples/python/general/tools_example.py) | [๐](https://docs.praison.ai/tools/tools) |
| YAML Configuration | [Example](examples/cookbooks/yaml/secondary_market_research_agents.yaml) | [๐](https://docs.praison.ai/developers/agents-playbook) |
| 100+ LLM Support | [Example](examples/python/providers/openai/openai_gpt4_example.py) | [๐](https://docs.praison.ai/models) |
| Callback Agents | [Example](examples/python/general/advanced-callback-systems.py) | [๐](https://docs.praison.ai/features/callbacks) |
| Hooks | [Example](#10-hooks) | [๐](https://docs.praison.ai/features/hooks) |
| Middleware System | [Example](examples/middleware/basic_middleware.py) | [๐](https://docs.praison.ai/features/middleware) |
| Configurable Model | [Example](examples/middleware/configurable_model.py) | [๐](https://docs.praison.ai/features/configurable-model) |
| Rate Limiter | [Example](examples/middleware/rate_limiter.py) | [๐](https://docs.praison.ai/features/rate-limiter) |
| Injected Tool State | [Example](examples/middleware/injected_state.py) | [๐](https://docs.praison.ai/features/injected-state) |
| Shadow Git Checkpoints | [Example](#11-shadow-git-checkpoints) | [๐](https://docs.praison.ai/features/checkpoints) |
| Background Tasks | [Example](examples/background/basic_background.py) | [๐](https://docs.praison.ai/features/background-tasks) |
| Policy Engine | [Example](examples/policy/basic_policy.py) | [๐](https://docs.praison.ai/features/policy-engine) |
| Thinking Budgets | [Example](examples/thinking/basic_thinking.py) | [๐](https://docs.praison.ai/features/thinking-budgets) |
| Output Styles | [Example](examples/output/basic_output.py) | [๐](https://docs.praison.ai/features/output-styles) |
| Context Compaction | [Example](examples/compaction/basic_compaction.py) | [๐](https://docs.praison.ai/features/context-compaction) |
</details>
<details>
<summary><strong>๐ Monitoring & Management</strong></summary>
| Feature | Code | Docs |
|---------|:----:|:----:|
| Sessions Management | [Example](examples/python/sessions/comprehensive-session-management.py) | [๐](https://docs.praison.ai/features/sessions) |
| Auto-Save Sessions | [Example](#session-management-python) | [๐](https://docs.praison.ai/docs/cli/session) |
| History in Context | [Example](#session-management-python) | [๐](https://docs.praison.ai/docs/cli/session) |
| Telemetry | [Example](examples/python/telemetry/production-telemetry-example.py) | [๐](https://docs.praison.ai/features/telemetry) |
| Project Docs (.praison/docs/) | [Example](#docs-cli) | [๐](https://docs.praison.ai/docs/cli/docs) |
| AI Commit Messages | [Example](#ai-commit-cli) | [๐](https://docs.praison.ai/docs/cli/commit) |
| @Mentions in Prompts | [Example](#mentions-in-prompts) | [๐](https://docs.praison.ai/docs/cli/mentions) |
</details>
<details>
<summary><strong>๐ฅ๏ธ CLI Features</strong></summary>
| Feature | Code | Docs |
|---------|:----:|:----:|
| Slash Commands | [Example](examples/python/cli/slash_commands_example.py) | [๐](https://docs.praison.ai/docs/cli/slash-commands) |
| Autonomy Modes | [Example](examples/python/cli/autonomy_modes_example.py) | [๐](https://docs.praison.ai/docs/cli/autonomy-modes) |
| Cost Tracking | [Example](examples/python/cli/cost_tracking_example.py) | [๐](https://docs.praison.ai/docs/cli/cost-tracking) |
| Repository Map | [Example](examples/python/cli/repo_map_example.py) | [๐](https://docs.praison.ai/docs/cli/repo-map) |
| Interactive TUI | [Example](examples/python/cli/interactive_tui_example.py) | [๐](https://docs.praison.ai/docs/cli/interactive-tui) |
| Git Integration | [Example](examples/python/cli/git_integration_example.py) | [๐](https://docs.praison.ai/docs/cli/git-integration) |
| Sandbox Execution | [Example](examples/python/cli/sandbox_execution_example.py) | [๐](https://docs.praison.ai/docs/cli/sandbox-execution) |
| CLI Compare | [Example](examples/compare/cli_compare_basic.py) | [๐](https://docs.praison.ai/docs/cli/compare) |
| Profile/Benchmark | [Example](#profile-benchmark) | [๐](https://docs.praison.ai/docs/cli/profile) |
| Auto Mode | [Example](#auto-mode) | [๐](https://docs.praison.ai/docs/cli/auto) |
| Init | [Example](#init) | [๐](https://docs.praison.ai/docs/cli/init) |
| File Input | [Example](#file-input) | [๐](https://docs.praison.ai/docs/cli/file-input) |
| Final Agent | [Example](#final-agent) | [๐](https://docs.praison.ai/docs/cli/final-agent) |
| Max Tokens | [Example](#max-tokens) | [๐](https://docs.praison.ai/docs/cli/max-tokens) |
</details>
<details>
<summary><strong>๐งช Evaluation</strong></summary>
| Feature | Code | Docs |
|---------|:----:|:----:|
| Accuracy Evaluation | [Example](examples/eval/accuracy_example.py) | [๐](https://docs.praison.ai/docs/cli/eval) |
| Performance Evaluation | [Example](examples/eval/performance_example.py) | [๐](https://docs.praison.ai/docs/cli/eval) |
| Reliability Evaluation | [Example](examples/eval/reliability_example.py) | [๐](https://docs.praison.ai/docs/cli/eval) |
| Criteria Evaluation | [Example](examples/eval/criteria_example.py) | [๐](https://docs.praison.ai/docs/cli/eval) |
</details>
<details>
<summary><strong>๐ฏ Agent Skills</strong></summary>
| Feature | Code | Docs |
|---------|:----:|:----:|
| Skills Management | [Example](examples/skills/basic_skill_usage.py) | [๐](https://docs.praison.ai/features/skills) |
| Custom Skills | [Example](examples/skills/custom_skill_example.py) | [๐](https://docs.praison.ai/features/skills) |
</details>
<details>
<summary><strong>โฐ 24/7 Scheduling</strong></summary>
| Feature | Code | Docs |
|---------|:----:|:----:|
| Agent Scheduler | [Example](examples/python/scheduled_agents/news_checker_live.py) | [๐](https://docs.praison.ai/docs/cli/scheduler) |
</details>
---
## 🌐 Supported Providers
PraisonAI supports 100+ LLM providers; each row below links to a runnable example:
| Provider | Example |
|----------|:-------:|
| OpenAI | [Example](examples/python/providers/openai/openai_gpt4_example.py) |
| Anthropic | [Example](examples/python/providers/anthropic/anthropic_claude_example.py) |
| Google Gemini | [Example](examples/python/providers/google/google_gemini_example.py) |
| Ollama | [Example](examples/python/providers/ollama/ollama-agents.py) |
| Groq | [Example](examples/python/providers/groq/kimi_with_groq_example.py) |
| DeepSeek | [Example](examples/python/providers/deepseek/deepseek_example.py) |
| xAI Grok | [Example](examples/python/providers/xai/xai_grok_example.py) |
| Mistral | [Example](examples/python/providers/mistral/mistral_example.py) |
| Cohere | [Example](examples/python/providers/cohere/cohere_example.py) |
| Perplexity | [Example](examples/python/providers/perplexity/perplexity_example.py) |
| Fireworks | [Example](examples/python/providers/fireworks/fireworks_example.py) |
| Together AI | [Example](examples/python/providers/together/together_ai_example.py) |
| OpenRouter | [Example](examples/python/providers/openrouter/openrouter_example.py) |
| HuggingFace | [Example](examples/python/providers/huggingface/huggingface_example.py) |
| Azure OpenAI | [Example](examples/python/providers/azure/azure_openai_example.py) |
| AWS Bedrock | [Example](examples/python/providers/aws/aws_bedrock_example.py) |
| Google Vertex | [Example](examples/python/providers/vertex/vertex_example.py) |
| Databricks | [Example](examples/python/providers/databricks/databricks_example.py) |
| Cloudflare | [Example](examples/python/providers/cloudflare/cloudflare_example.py) |
| AI21 | [Example](examples/python/providers/ai21/ai21_example.py) |
| Replicate | [Example](examples/python/providers/replicate/replicate_example.py) |
| SageMaker | [Example](examples/python/providers/sagemaker/sagemaker_example.py) |
| Moonshot | [Example](examples/python/providers/moonshot/moonshot_example.py) |
| vLLM | [Example](examples/python/providers/vllm/vllm_example.py) |
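Switching between providers usually only requires changing the model string (and setting the matching API key). A minimal sketch, assuming the `llm` parameter accepts LiteLLM-style `provider/model` identifiers as used in the provider examples above; the model names are illustrative:
```python
from praisonaiagents import Agent

# Same agent definition, pointed at different providers.
# Substitute the models and providers you actually have access to.
openai_agent = Agent(instructions="You are a helpful assistant", llm="gpt-4o-mini")
gemini_agent = Agent(instructions="You are a helpful assistant", llm="gemini/gemini-1.5-flash")
ollama_agent = Agent(instructions="You are a helpful assistant", llm="ollama/llama3.2")

openai_agent.start("Summarise the benefits of multi-agent systems")
```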
---
## 🐍 Using Python Code
A lightweight package dedicated to coding:
```bash
pip install praisonaiagents
export OPENAI_API_KEY=xxxxxxxxxxxxxxxxxxxxxx
```
### 1. Single Agent
Create an `app.py` file and add the code below:
```python
from praisonaiagents import Agent
agent = Agent(instructions="Your are a helpful AI assistant")
agent.start("Write a movie script about a robot in Mars")
```
Run:
```bash
python app.py
```
### 2. Multi Agents
Create an `app.py` file and add the code below:
```python
from praisonaiagents import Agent, PraisonAIAgents
research_agent = Agent(instructions="Research about AI")
summarise_agent = Agent(instructions="Summarise research agent's findings")
agents = PraisonAIAgents(agents=[research_agent, summarise_agent])
agents.start()
```
Run:
```bash
python app.py
```
### 3. Agent with Planning Mode
Enable planning for any agent so that it first creates a plan and then executes it step by step:
```python
from praisonaiagents import Agent
def search_web(query: str) -> str:
    return f"Search results for: {query}"
agent = Agent(
name="AI Assistant",
instructions="Research and write about topics",
planning=True, # Enable planning mode
planning_tools=[search_web], # Tools for planning research
planning_reasoning=True # Chain-of-thought reasoning
)
result = agent.start("Research AI trends in 2025 and write a summary")
```
**What happens:**
1. Agent creates a multi-step plan
2. Executes each step sequentially
3. Shows progress with context passing
4. Returns the final result
### 4. Deep Research Agent
Automated research with real-time streaming, web search, and citations using OpenAI or Gemini Deep Research APIs.
```python
from praisonaiagents import DeepResearchAgent
# OpenAI Deep Research
agent = DeepResearchAgent(
model="o4-mini-deep-research", # or "o3-deep-research"
verbose=True
)
result = agent.research("What are the latest AI trends in 2025?")
print(result.report)
print(f"Citations: {len(result.citations)}")
```
```python
# Gemini Deep Research
from praisonaiagents import DeepResearchAgent
agent = DeepResearchAgent(
model="deep-research-pro", # Auto-detected as Gemini
verbose=True
)
result = agent.research("Research quantum computing advances")
print(result.report)
```
**Features:**
- Multi-provider support (OpenAI, Gemini, LiteLLM)
- Real-time streaming with reasoning summaries
- Structured citations with URLs
- Built-in tools: web search, code interpreter, MCP, file search
- Automatic provider detection from model name
### 5. Query Rewriter Agent
Transform user queries to improve RAG retrieval quality using multiple strategies.
```python
from praisonaiagents import QueryRewriterAgent, RewriteStrategy
agent = QueryRewriterAgent(model="gpt-4o-mini")
# Basic - expands abbreviations, adds context
result = agent.rewrite("AI trends")
print(result.primary_query) # "What are the current trends in Artificial Intelligence?"
# HyDE - generates hypothetical document for semantic matching
result = agent.rewrite("What is quantum computing?", strategy=RewriteStrategy.HYDE)
# Step-back - generates broader context question
result = agent.rewrite("GPT-4 vs Claude 3?", strategy=RewriteStrategy.STEP_BACK)
# Sub-queries - decomposes complex questions
result = agent.rewrite("RAG setup and best embedding models?", strategy=RewriteStrategy.SUB_QUERIES)
# Contextual - resolves references using chat history
result = agent.rewrite("What about cost?", chat_history=[...])
```
**Strategies:**
- **BASIC**: Expand abbreviations, fix typos, add context
- **HYDE**: Generate hypothetical document for semantic matching
- **STEP_BACK**: Generate higher-level concept questions
- **SUB_QUERIES**: Decompose multi-part questions
- **MULTI_QUERY**: Generate multiple paraphrased versions
- **CONTEXTUAL**: Resolve references using conversation history
- **AUTO**: Automatically detect best strategy
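If you would rather let the agent pick, the `AUTO` strategy listed above can be selected explicitly; a minimal sketch, assuming it is exposed as `RewriteStrategy.AUTO` like the other strategy names shown earlier:
```python
from praisonaiagents import QueryRewriterAgent, RewriteStrategy

agent = QueryRewriterAgent(model="gpt-4o-mini")

# AUTO detects the most suitable rewrite strategy for the query
result = agent.rewrite("cheapest way to run llama 3 locally?", strategy=RewriteStrategy.AUTO)
print(result.primary_query)
```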
### 6. Agent Memory (Zero Dependencies)
Enable persistent memory for agents - works out of the box without any extra packages.
```python
from praisonaiagents import Agent
from praisonaiagents.memory import FileMemory
# Enable memory with a single parameter
agent = Agent(
name="Personal Assistant",
instructions="You are a helpful assistant that remembers user preferences.",
memory=True, # Enables file-based memory (no extra deps!)
user_id="user123" # Isolate memory per user
)
# Memory is automatically injected into conversations
result = agent.start("My name is John and I prefer Python")
# Agent will remember this for future conversations
```
**Memory Types:**
- **Short-term**: Rolling buffer of recent context (auto-expires)
- **Long-term**: Persistent important facts (sorted by importance)
- **Entity**: People, places, organizations with attributes
- **Episodic**: Date-based interaction history
**Advanced Features:**
```python
from praisonaiagents.memory import FileMemory
memory = FileMemory(user_id="user123")
# Session Save/Resume
memory.save_session("project_session", conversation_history=[...])
memory.resume_session("project_session")
# Context Compression
memory.compress(llm_func=lambda p: agent.chat(p), max_items=10)
# Checkpointing
memory.create_checkpoint("before_refactor", include_files=["main.py"])
memory.restore_checkpoint("before_refactor", restore_files=True)
# Slash Commands
memory.handle_command("/memory show")
memory.handle_command("/memory save my_session")
```
**Storage Options:**
| Option | Dependencies | Description |
|--------|-------------|-------------|
| `memory=True` | None | File-based JSON storage (default) |
| `memory="file"` | None | Explicit file-based storage |
| `memory="sqlite"` | Built-in | SQLite with indexing |
| `memory="chromadb"` | chromadb | Vector/semantic search |
### 7. Rules & Instructions
PraisonAI auto-discovers instruction files from your project root and git root:
| File | Description | Priority |
|------|-------------|----------|
| `PRAISON.md` | PraisonAI native instructions | High |
| `PRAISON.local.md` | Local overrides (gitignored) | Higher |
| `CLAUDE.md` | Claude Code memory file | High |
| `CLAUDE.local.md` | Local overrides (gitignored) | Higher |
| `AGENTS.md` | OpenAI Codex CLI instructions | High |
| `GEMINI.md` | Gemini CLI memory file | High |
| `.cursorrules` | Cursor IDE rules | High |
| `.windsurfrules` | Windsurf IDE rules | High |
| `.claude/rules/*.md` | Claude Code modular rules | Medium |
| `.windsurf/rules/*.md` | Windsurf modular rules | Medium |
| `.cursor/rules/*.mdc` | Cursor modular rules | Medium |
| `.praison/rules/*.md` | Workspace rules | Medium |
| `~/.praison/rules/*.md` | Global rules | Low |
```python
from praisonaiagents import Agent
# Agent auto-discovers CLAUDE.md, AGENTS.md, GEMINI.md, etc.
agent = Agent(name="Assistant", instructions="You are helpful.")
# Rules are injected into system prompt automatically
```
**@Import Syntax:**
```markdown
# CLAUDE.md
See @README for project overview
See @docs/architecture.md for system design
@~/.praison/my-preferences.md
```
**Rule File Format (with YAML frontmatter):**
```markdown
---
description: Python coding guidelines
globs: ["**/*.py"]
activation: always # always, glob, manual, ai_decision
---
# Guidelines
- Use type hints
- Follow PEP 8
```
### 8. Auto-Generated Memories
```python
from praisonaiagents.memory import FileMemory, AutoMemory
memory = FileMemory(user_id="user123")
auto = AutoMemory(memory, enabled=True)
# Automatically extracts and stores memories from conversations
memories = auto.process_interaction(
"My name is John and I prefer Python for backend work"
)
# Extracts: name="John", preference="Python for backend"
```
### 9. Agentic Workflows
Create powerful multi-agent workflows with the `Workflow` class:
```python
from praisonaiagents import Agent, Workflow
# Create agents
researcher = Agent(
name="Researcher",
role="Research Analyst",
goal="Research topics thoroughly",
instructions="Provide concise, factual information."
)
writer = Agent(
name="Writer",
role="Content Writer",
goal="Write engaging content",
instructions="Write clear, engaging content based on research."
)
# Create workflow with agents as steps
workflow = Workflow(steps=[researcher, writer])
# Run workflow - agents process sequentially
result = workflow.start("What are the benefits of AI agents?")
print(result["output"])
```
**Key Features:**
- **Agent-first** - Pass `Agent` objects directly as workflow steps
- **Pattern helpers** - Use `route()`, `parallel()`, `loop()`, `repeat()`
- **Planning mode** - Enable with `planning=True`
- **Callbacks** - Monitor with `on_step_complete`, `on_workflow_complete`
- **Async execution** - Use `workflow.astart()` for async
### Workflow Patterns (route, parallel, loop, repeat)
```python
from praisonaiagents import Agent, Workflow
from praisonaiagents.workflows import route, parallel, loop, repeat
# 1. ROUTING - Classifier agent routes to specialized agents
classifier = Agent(name="Classifier", instructions="Respond with 'technical' or 'creative'")
tech_agent = Agent(name="TechExpert", role="Technical Expert")
creative_agent = Agent(name="Creative", role="Creative Writer")
workflow = Workflow(steps=[
classifier,
route({
"technical": [tech_agent],
"creative": [creative_agent]
})
])
# 2. PARALLEL - Multiple agents work concurrently
market_agent = Agent(name="Market", role="Market Researcher")
competitor_agent = Agent(name="Competitor", role="Competitor Analyst")
aggregator = Agent(name="Aggregator", role="Synthesizer")
workflow = Workflow(steps=[
parallel([market_agent, competitor_agent]),
aggregator
])
# 3. LOOP - Agent processes each item
processor = Agent(name="Processor", role="Item Processor")
summarizer = Agent(name="Summarizer", role="Summarizer")
workflow = Workflow(
steps=[loop(processor, over="items"), summarizer],
variables={"items": ["AI", "ML", "NLP"]}
)
# 4. REPEAT - Evaluator-Optimizer pattern
generator = Agent(name="Generator", role="Content Generator")
evaluator = Agent(name="Evaluator", instructions="Say 'APPROVED' if good")
workflow = Workflow(steps=[
generator,
repeat(evaluator, until=lambda ctx: "approved" in ctx.previous_result.lower(), max_iterations=3)
])
# 5. CALLBACKS
workflow = Workflow(
steps=[researcher, writer],
on_step_complete=lambda name, r: print(f"✅ {name} done")
)
# 6. WITH PLANNING & REASONING
workflow = Workflow(
steps=[researcher, writer],
planning=True,
reasoning=True
)
# 7. ASYNC EXECUTION
import asyncio
result = asyncio.run(workflow.astart("input"))
# 8. STATUS TRACKING
workflow.status # "not_started" | "running" | "completed"
workflow.step_statuses # {"step1": "completed", "step2": "skipped"}
```
### YAML Workflow Template
```yaml
# .praison/workflows/research.yaml
name: Research Workflow
description: Research and write content with all patterns
agents:
researcher:
role: Research Expert
goal: Find accurate information
tools: [tavily_search, web_scraper]
writer:
role: Content Writer
goal: Write engaging content
editor:
role: Editor
goal: Polish content
steps:
# Sequential
- agent: researcher
action: Research {{topic}}
output_variable: research_data
# Routing
- name: classifier
action: Classify content type
route:
technical: [tech_handler]
creative: [creative_handler]
default: [general_handler]
# Parallel
- name: parallel_research
parallel:
- agent: researcher
action: Research market
- agent: researcher
action: Research competitors
# Loop
- agent: writer
action: Write about {{item}}
loop_over: topics
loop_var: item
# Repeat (evaluator-optimizer)
- agent: editor
action: Review and improve
repeat:
until: "quality > 8"
max_iterations: 3
# Output to file
- agent: writer
action: Write final report
output_file: output/{{topic}}_report.md
variables:
topic: AI trends
topics: [ML, NLP, Vision]
workflow:
planning: true
planning_llm: gpt-4o
memory_config:
provider: chroma
persist: true
```
### Loading YAML Workflows
```python
from praisonaiagents.workflows import YAMLWorkflowParser, WorkflowManager
# Option 1: Parse YAML string
parser = YAMLWorkflowParser()
workflow = parser.parse_string(yaml_content)
result = workflow.start("Research AI trends")
# Option 2: Load from file with WorkflowManager
manager = WorkflowManager()
workflow = manager.load_yaml("research_workflow.yaml")
result = workflow.start("Research AI trends")
# Option 3: Execute YAML directly
result = manager.execute_yaml(
"research_workflow.yaml",
input_data="Research AI trends",
variables={"topic": "Machine Learning"}
)
```
### Complete workflow.yaml Reference
```yaml
# workflow.yaml - Full feature reference
name: Complete Workflow
description: Demonstrates all workflow.yaml features
framework: praisonai # praisonai, crewai, autogen
process: workflow # sequential, hierarchical, workflow
workflow:
planning: true
planning_llm: gpt-4o
reasoning: true
verbose: true
memory_config:
provider: chroma
persist: true
variables:
topic: AI trends
items: [ML, NLP, Vision]
agents:
researcher:
name: Researcher
role: Research Analyst
goal: Research topics thoroughly
instructions: "Provide detailed research findings"
backstory: "Expert researcher with 10 years experience" # alias for instructions
llm: gpt-4o-mini
function_calling_llm: gpt-4o # For tool calls
max_rpm: 10 # Rate limiting
max_execution_time: 300 # Timeout in seconds
reflect_llm: gpt-4o # For self-reflection
min_reflect: 1
max_reflect: 3
system_template: "You are a helpful assistant"
tools:
- tavily_search
writer:
name: Writer
role: Content Writer
goal: Write clear content
instructions: "Write engaging content"
steps:
- name: research_step
agent: researcher
action: "Research {{topic}}"
expected_output: "Comprehensive research report"
output_file: "output/research.md"
create_directory: true
- name: writing_step
agent: writer
action: "Write article based on research"
context: # Task dependencies
- research_step
output_json: # Structured output
type: object
properties:
title: { type: string }
content: { type: string }
callbacks:
on_workflow_start: log_start
on_step_complete: log_step
on_workflow_complete: log_complete
```
### 10. Hooks
Intercept and modify agent behavior at various lifecycle points:
```python
from praisonaiagents.hooks import (
HookRegistry, HookRunner, HookEvent, HookResult,
BeforeToolInput
)
# Create a hook registry
registry = HookRegistry()
# Log all tool calls
@registry.on(HookEvent.BEFORE_TOOL)
def log_tools(event_data: BeforeToolInput) -> HookResult:
    print(f"Tool: {event_data.tool_name}")
    return HookResult.allow()
# Block dangerous operations
@registry.on(HookEvent.BEFORE_TOOL)
def security_check(event_data: BeforeToolInput) -> HookResult:
    if "delete" in event_data.tool_name.lower():
        return HookResult.deny("Delete operations blocked")
    return HookResult.allow()
# Execute hooks
runner = HookRunner(registry)
```
**CLI Commands:**
```bash
praisonai hooks list # List registered hooks
praisonai hooks test before_tool # Test hooks for an event
praisonai hooks run "echo test" # Run a command hook
praisonai hooks validate hooks.json # Validate configuration
```
### 11. Shadow Git Checkpoints
File-level undo/restore using shadow git:
```python
import asyncio
from praisonaiagents.checkpoints import CheckpointService

async def main():
    service = CheckpointService(workspace_dir="./my_project")
    await service.initialize()
    # Save checkpoint before changes
    result = await service.save("Before refactoring")
    # Make changes...
    # Restore if needed
    await service.restore(result.checkpoint.id)
    # View diff
    diff = await service.diff()

asyncio.run(main())
```
**CLI Commands:**
```bash
praisonai checkpoint save "Before changes" # Save checkpoint
praisonai checkpoint list # List checkpoints
praisonai checkpoint diff # Show changes
praisonai checkpoint restore abc123 # Restore to checkpoint
```
**Links:**
- [๐ Coding Docs](https://docs.praison.ai/features/checkpoints)
- [๐ CLI Docs](https://docs.praison.ai/docs/cli/checkpoint)
- [๐ป Example](examples/checkpoints/basic_checkpoints.py)
---
### 12. Background Tasks
Run agent tasks asynchronously without blocking:
```python
import asyncio
from praisonaiagents.background import BackgroundRunner, BackgroundConfig
async def main():
    config = BackgroundConfig(max_concurrent_tasks=3)
    runner = BackgroundRunner(config=config)
    async def my_task(name: str) -> str:
        await asyncio.sleep(2)
        return f"Task {name} completed"
    task = await runner.submit(my_task, args=("example",), name="my_task")
    await task.wait(timeout=10.0)
    print(task.result)
asyncio.run(main())
```
**CLI Commands:**
```bash
praisonai background list # List running tasks
praisonai background status <id> # Check task status
praisonai background cancel <id> # Cancel a task
praisonai background clear # Clear completed tasks
```
**Links:**
- [๐ Coding Docs](https://docs.praison.ai/features/background-tasks)
- [๐ CLI Docs](https://docs.praison.ai/docs/cli/background)
- [๐ป Example](examples/background/basic_background.py)
---
### 13. Policy Engine
Control what agents can and cannot do with policy-based execution:
```python
from praisonaiagents.policy import (
PolicyEngine, Policy, PolicyRule, PolicyAction
)
engine = PolicyEngine()
policy = Policy(
name="no_delete",
rules=[
PolicyRule(
action=PolicyAction.DENY,
resource="tool:delete_*",
reason="Delete operations blocked"
)
]
)
engine.add_policy(policy)
result = engine.check("tool:delete_file", {})
print(f"Allowed: {result.allowed}")
```
**CLI Commands:**
```bash
praisonai policy list # List policies
praisonai policy check "tool:name" # Check if allowed
praisonai policy init # Create template
```
**Links:**
- [๐ Coding Docs](https://docs.praison.ai/features/policy-engine)
- [๐ CLI Docs](https://docs.praison.ai/docs/cli/policy)
- [๐ป Example](examples/policy/basic_policy.py)
---
### 14. Thinking Budgets
Configure token budgets for extended thinking:
```python
from praisonaiagents.thinking import ThinkingBudget, ThinkingTracker
# Use predefined levels
budget = ThinkingBudget.high() # 16,000 tokens
# Track usage
tracker = ThinkingTracker()
session = tracker.start_session(budget_tokens=16000)
tracker.end_session(session, tokens_used=12000)
summary = tracker.get_summary()
print(f"Utilization: {summary['average_utilization']:.1%}")
```
**CLI Commands:**
```bash
praisonai thinking status # Show current budget
praisonai thinking set high # Set budget level
praisonai thinking stats # Show usage statistics
```
**Links:**
- [๐ Coding Docs](https://docs.praison.ai/features/thinking-budgets)
- [๐ CLI Docs](https://docs.praison.ai/docs/cli/thinking)
- [๐ป Example](examples/thinking/basic_thinking.py)
---
### 15. Output Styles
Configure how agents format their responses:
```python
from praisonaiagents.output import OutputStyle, OutputFormatter
# Use preset styles
style = OutputStyle.concise()
formatter = OutputFormatter(style)
# Format output
text = "# Hello\n\nThis is **bold** text."
plain = formatter.format(text)
print(plain)
```
**CLI Commands:**
```bash
praisonai output status # Show current style
praisonai output set concise # Set output style
```
**Links:**
- [๐ Coding Docs](https://docs.praison.ai/features/output-styles)
- [๐ CLI Docs](https://docs.praison.ai/docs/cli/output-style)
- [๐ป Example](examples/output/basic_output.py)
---
### 16. Context Compaction
Automatically manage context window size:
```python
from praisonaiagents.compaction import (
ContextCompactor, CompactionStrategy
)
compactor = ContextCompactor(
max_tokens=4000,
strategy=CompactionStrategy.SLIDING,
preserve_recent=3
)
messages = [...] # Your conversation history
compacted, result = compactor.compact(messages)
print(f"Compression: {result.compression_ratio:.1%}")
```
**CLI Commands:**
```bash
praisonai compaction status # Show settings
praisonai compaction set sliding # Set strategy
praisonai compaction stats # Show statistics
```
**Links:**
- [๐ Coding Docs](https://docs.praison.ai/features/context-compaction)
- [๐ CLI Docs](https://docs.praison.ai/docs/cli/compaction)
- [๐ป Example](examples/compaction/basic_compaction.py)
---
### 17. Field Names Reference (A-I-G-S)
PraisonAI accepts both old (agents.yaml) and new (workflow.yaml) field names. Use the **canonical names** for new projects:
| Canonical (Recommended) | Alias (Also Works) | Purpose |
|-------------------------|-------------------|---------|
| `agents` | `roles` | Define agent personas |
| `instructions` | `backstory` | Agent behavior/persona |
| `action` | `description` | What the step does |
| `steps` | `tasks` (nested) | Define work items |
| `name` | `topic` | Workflow identifier |
**A-I-G-S Mnemonic** - Easy to remember:
- **A**gents - Who does the work
- **I**nstructions - How they behave
- **G**oal - What they achieve
- **S**teps - What they do
```yaml
# Quick Reference - Canonical Format
name: My Workflow # Workflow name (not 'topic')
agents: # Define agents (not 'roles')
my_agent:
role: Job Title # Agent's role
goal: What to achieve # Agent's goal
instructions: How to act # Agent's behavior (not 'backstory')
steps: # Define steps (not 'tasks')
- agent: my_agent
action: What to do # Step action (not 'description')
```
> **Note:** The parser accepts both old and new names. Run `praisonai workflow validate <file.yaml>` to see suggestions for canonical names.
### 18. Extended agents.yaml with Workflow Patterns
**Feature Parity:** Both `agents.yaml` and `workflow.yaml` now support the same features:
- All workflow patterns (route, parallel, loop, repeat)
- All agent fields (function_calling_llm, max_rpm, max_execution_time, reflect_llm, templates)
- All step fields (expected_output, context, output_json, create_directory, callback)
- Framework support (praisonai, crewai, autogen)
- Process types (sequential, hierarchical, workflow)
You can use advanced workflow patterns directly in agents.yaml by setting `process: workflow`:
```yaml
# agents.yaml with workflow patterns
framework: praisonai
process: workflow # Enables workflow mode
topic: "Research AI trends"
workflow:
planning: true
reasoning: true
verbose: true
variables:
topic: AI trends
agents: # Canonical: use 'agents' instead of 'roles'
classifier:
role: Request Classifier
instructions: "Classify requests into categories" # Canonical: use 'instructions' instead of 'backstory'
goal: Classify requests
researcher:
role: Research Analyst
instructions: "Expert researcher" # Canonical: use 'instructions' instead of 'backstory'
goal: Research topics
tools:
- tavily_search
steps:
# Sequential step
- agent: classifier
action: "Classify: {{topic}}"
# Route pattern - decision-based branching
- name: routing
route:
technical: [tech_expert]
default: [researcher]
# Parallel pattern - concurrent execution
- name: parallel_research
parallel:
- agent: researcher
action: "Research market trends"
- agent: researcher
action: "Research competitors"
# Loop pattern - iterate over items
- agent: researcher
action: "Analyze {{item}}"
loop:
over: topics
# Repeat pattern - evaluator-optimizer
- agent: aggregator
action: "Synthesize findings"
repeat:
until: "comprehensive"
max_iterations: 3
```
Run with the same simple command:
```bash
praisonai agents.yaml
```
### 19. MCP (Model Context Protocol)
PraisonAI supports MCP Protocol Revision 2025-11-25 with multiple transports.
#### MCP Client (Consume MCP Servers)
```python
from praisonaiagents import Agent, MCP
# stdio - Local NPX/Python servers
agent = Agent(tools=MCP("npx @modelcontextprotocol/server-memory"))
# Streamable HTTP - Production servers
agent = Agent(tools=MCP("https://api.example.com/mcp"))
# WebSocket - Real-time bidirectional
agent = Agent(tools=MCP("wss://api.example.com/mcp", auth_token="token"))
# SSE (Legacy) - Backward compatibility
agent = Agent(tools=MCP("http://localhost:8080/sse"))
# With environment variables
agent = Agent(
tools=MCP(
command="npx",
args=["-y", "@modelcontextprotocol/server-brave-search"],
env={"BRAVE_API_KEY": "your-key"}
)
)
# Multiple MCP servers + regular functions
def my_custom_tool(query: str) -> str:
    """Custom tool function."""
    return f"Result: {query}"
agent = Agent(
name="MultiToolAgent",
instructions="Agent with multiple MCP servers",
tools=[
MCP("uvx mcp-server-time"), # Time tools
MCP("npx @modelcontextprotocol/server-memory"), # Memory tools
my_custom_tool # Regular function
]
)
```
#### MCP Server (Expose Tools as MCP Server)
Expose your Python functions as MCP tools for Claude Desktop, Cursor, and other MCP clients:
```python
from praisonaiagents.mcp import ToolsMCPServer
def search_web(query: str, max_results: int = 5) -> dict:
    """Search the web for information."""
    return {"results": [f"Result for {query}"]}
def calculate(expression: str) -> dict:
    """Evaluate a mathematical expression."""
    return {"result": eval(expression)}  # eval() is for illustration only; avoid untrusted input
# Create and run MCP server
server = ToolsMCPServer(name="my-tools")
server.register_tools([search_web, calculate])
server.run() # stdio for Claude Desktop
# server.run_sse(host="0.0.0.0", port=8080) # SSE for web clients
```
#### MCP Features
| Feature | Description |
|---------|-------------|
| Session Management | Automatic Mcp-Session-Id handling |
| Protocol Versioning | Mcp-Protocol-Version header |
| Resumability | SSE stream recovery via Last-Event-ID |
| Security | Origin validation, DNS rebinding prevention |
| WebSocket | Auto-reconnect with exponential backoff |
### 20. A2A (Agent2Agent Protocol)
PraisonAI supports the [A2A Protocol](https://a2a-protocol.org) for agent-to-agent communication, enabling your agents to be discovered and collaborate with other AI agents.
#### A2A Server (Expose Agent as A2A Server)
```python
from praisonaiagents import Agent, A2A
from fastapi import FastAPI
# Create an agent with tools
def search_web(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"
agent = Agent(
name="Research Assistant",
role="Research Analyst",
goal="Help users research topics",
tools=[search_web]
)
# Expose as A2A Server
a2a = A2A(agent=agent, url="http://localhost:8000/a2a")
app = FastAPI()
app.include_router(a2a.get_router())
# Run: uvicorn app:app --reload
# Agent Card: GET /.well-known/agent.json
# Status: GET /status
```
#### A2A Features
| Feature | Description |
|---------|-------------|
| Agent Card | JSON metadata for agent discovery |
| Skills Extraction | Auto-generate skills from tools |
| Task Management | Stateful task lifecycle |
| Streaming | SSE streaming for real-time updates |
> **Documentation**: [docs.praison.ai/a2a](https://docs.praison.ai/a2a) | **Examples**: [examples/python/a2a](https://github.com/MervinPraison/PraisonAI/tree/main/examples/python/a2a)
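Once the server above is running, other agents (or plain HTTP clients) can discover it through the Agent Card endpoint noted in the code; a minimal sketch using `requests`, assuming the card exposes standard A2A fields such as `name` and `skills`:
```python
import requests

# Fetch the Agent Card published by the A2A server above
card = requests.get("http://localhost:8000/.well-known/agent.json").json()
print(card.get("name"))
print([skill.get("name") for skill in card.get("skills", [])])
```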
---
## 🎯 CLI / No-Code Interface
PraisonAI provides a powerful CLI for no-code automation and quick prototyping.
### CLI Quick Reference
| Category | Commands |
|----------|----------|
| **Execution** | `praisonai`, `--auto`, `--interactive`, `--chat` |
| **Research** | `research`, `--query-rewrite`, `--deep-research` |
| **Planning** | `--planning`, `--planning-tools`, `--planning-reasoning` |
| **Workflows** | `workflow run`, `workflow list`, `workflow auto` |
| **Memory** | `memory show`, `memory add`, `memory search`, `memory clear` |
| **Knowledge** | `knowledge add`, `knowledge query`, `knowledge list` |
| **Sessions** | `session list`, `session resume`, `session delete` |
| **Tools** | `tools list`, `tools info`, `tools search` |
| **MCP** | `mcp list`, `mcp create`, `mcp enable` |
| **Development** | `commit`, `docs`, `checkpoint`, `hooks` |
| **Scheduling** | `schedule start`, `schedule list`, `schedule stop` |
### Auto Mode
```bash
pip install praisonai
export OPENAI_API_KEY=xxxxxxxxxxxxxxxxxxxxxx
praisonai --auto create a movie script about Robots in Mars
```
### Interactive Mode CLI:
```bash
# Start interactive terminal mode (inspired by Gemini CLI, Codex CLI, Claude Code)
praisonai --interactive
praisonai -i
# Features:
# - Streaming responses (no boxes)
# - Built-in tools: read_file, write_file, list_files, execute_command, internet_search
# - Slash commands: /help, /exit, /tools, /clear
# Chat mode - single prompt with interactive style (for testing/scripting)
# Use --chat (or --chat-mode for backward compatibility)
praisonai "list files in current folder" --chat
praisonai "search the web for AI news" --chat
praisonai "read README.md" --chat
```
### Chat UI (Web Interface):
```bash
# Start web-based Chainlit chat interface (requires praisonai[chat])
pip install "praisonai[chat]"
praisonai chat
# Opens browser at http://localhost:8084
```
### Query Rewriting (works with any command):
```bash
# Rewrite query for better results (uses QueryRewriterAgent)
praisonai "AI trends" --query-rewrite
# Rewrite with search tools (agent decides when to search)
praisonai "latest developments" --query-rewrite --rewrite-tools "internet_search"
# Works with any prompt
praisonai "explain quantum computing" --query-rewrite -v
```
### Deep Research CLI:
```bash
# Default: OpenAI (o4-mini-deep-research)
praisonai research "What are the latest AI trends in 2025?"
# Use Gemini
praisonai research --model deep-research-pro "Your research query"
# Rewrite query before research
praisonai research --query-rewrite "AI trends"
# Rewrite with search tools
praisonai research --query-rewrite --rewrite-tools "internet_search" "AI trends"
# Use custom tools from file (gathers context before deep research)
praisonai research --tools tools.py "Your research query"
praisonai research -t my_tools.py "Your research query"
# Use built-in tools by name (comma-separated)
praisonai research --tools "internet_search,wiki_search" "Your query"
praisonai research -t "yfinance,calculator_tools" "Stock analysis query"
# Save output to file (output/research/{query}.md)
praisonai research --save "Your research query"
praisonai research -s "Your research query"
# Combine options
praisonai research --query-rewrite --tools tools.py --save "Your research query"
# Verbose mode (show debug logs)
praisonai research -v "Your research query"
```
### Planning Mode CLI:
```bash
# Enable planning mode - agent creates a plan before execution
praisonai "Research AI trends and write a summary" --planning
# Planning with tools for research
praisonai "Analyze market trends" --planning --planning-tools tools.py
# Planning with chain-of-thought reasoning
praisonai "Complex analysis task" --planning --planning-reasoning
# Auto-approve plans without confirmation
praisonai "Task" --planning --auto-approve-plan
```
### Tool Approval CLI:
```bash
# Auto-approve ALL tool executions (use with caution!)
praisonai "run ls command" --trust
# Auto-approve tools up to a risk level (prompt for higher)
# Levels: low, medium, high, critical
praisonai "write to file" --approve-level high # Prompts for critical tools only
praisonai "task" --approve-level medium # Prompts for high and critical
# Default behavior (no flags): prompts for all dangerous tools
praisonai "run shell command" # Will prompt for approval
```
### Memory CLI:
```bash
# Enable memory for agent (persists across sessions)
praisonai "My name is John" --memory
# Memory with user isolation
praisonai "Remember my preferences" --memory --user-id user123
# Memory management commands
praisonai memory show # Show memory statistics
praisonai memory add "User prefers Python" # Add to long-term memory
praisonai memory search "Python" # Search memories
praisonai memory clear # Clear short-term memory
praisonai memory clear all # Clear all memory
praisonai memory save my_session # Save session
praisonai memory resume my_session # Resume session
praisonai memory sessions # List saved sessions
praisonai memory checkpoint # Create checkpoint
praisonai memory restore <checkpoint_id> # Restore checkpoint
praisonai memory checkpoints # List checkpoints
praisonai memory help # Show all commands
```
### Rules CLI:
```bash
# List all loaded rules (from PRAISON.md, CLAUDE.md, etc.)
praisonai rules list
# Show specific rule details
praisonai rules show <rule_name>
# Create a new rule
praisonai rules create my_rule "Always use type hints"
# Delete a rule
praisonai rules delete my_rule
# Show rules statistics
praisonai rules stats
# Include manual rules with prompts
praisonai "Task" --include-rules security,testing
```
### Workflow CLI:
```bash
# List available workflows
praisonai workflow list
# Execute a workflow with tools and save output
praisonai workflow run "Research Blog" --tools tavily --save
# Execute with variables
praisonai workflow run deploy --workflow-var environment=staging --workflow-var branch=main
# Execute with planning mode (AI creates sub-steps for each workflow step)
praisonai workflow run "Research Blog" --planning --verbose
# Execute with reasoning mode (chain-of-thought)
praisonai workflow run "Analysis" --reasoning --verbose
# Execute with memory enabled
praisonai workflow run "Research" --memory
# Show workflow details
praisonai workflow show deploy
# Create a new workflow template
praisonai workflow create my_workflow
# Inline workflow (no template file needed)
praisonai "What is AI?" --workflow "Research,Summarize" --save
# Inline workflow with step actions
praisonai "GPT-5" --workflow "Research:Search for info,Write:Write blog" --tools tavily
# Workflow CLI help
praisonai workflow help
```
#### YAML Workflow Files:
```bash
# Run a YAML workflow file
praisonai workflow run research.yaml
# Run with variables
praisonai workflow run research.yaml --var topic="AI trends"
# Validate a YAML workflow
praisonai workflow validate research.yaml
# Create from template (simple, routing, parallel, loop, evaluator-optimizer)
praisonai workflow template routing --output my_workflow.yaml
```
#### Auto-Generate Workflows:
```bash
# Auto-generate a sequential workflow from topic
praisonai workflow auto "Research AI trends"
# Generate parallel workflow (multiple agents work concurrently)
praisonai workflow auto "Research AI trends" --pattern parallel
# Generate routing workflow (classifier routes to specialists)
praisonai workflow auto "Build a chatbot" --pattern routing
# Generate orchestrator-workers workflow (central orchestrator delegates)
praisonai workflow auto "Comprehensive market analysis" --pattern orchestrator-workers
# Generate evaluator-optimizer workflow (iterative refinement)
praisonai workflow auto "Write and refine article" --pattern evaluator-optimizer
# Specify output file
praisonai workflow auto "Research AI" --pattern sequential --output my_workflow.yaml
```
**Workflow CLI Options:**
| Flag | Description |
|------|-------------|
| `--workflow-var key=value` | Set workflow variable (can be repeated) |
| `--var key=value` | Set variable for YAML workflows |
| `--pattern <pattern>` | Pattern for auto-generation (sequential, parallel, routing, loop, orchestrator-workers, evaluator-optimizer) |
| `--output <file>` | Output file for auto-generation |
| `--llm <model>` | LLM model (e.g., openai/gpt-4o-mini) |
| `--tools <tools>` | Tools (comma-separated, e.g., tavily) |
| `--planning` | Enable planning mode |
| `--reasoning` | Enable reasoning mode |
| `--memory` | Enable memory |
| `--verbose` | Enable verbose output |
| `--save` | Save output to file |
### Hooks CLI:
```bash
# List configured hooks
praisonai hooks list
# Show hooks statistics
praisonai hooks stats
# Create hooks.json template
praisonai hooks init
```
### Claude Memory Tool CLI:
```bash
# Enable Claude Memory Tool (Anthropic models only)
praisonai "Research and remember findings" --claude-memory --llm anthropic/claude-sonnet-4-20250514
```
### Guardrail CLI:
```bash
# Validate output with LLM guardrail
praisonai "Write code" --guardrail "Ensure code is secure and follows best practices"
# Combine with other flags
praisonai "Generate SQL query" --guardrail "No DROP or DELETE statements" --save
```
### Metrics CLI:
```bash
# Display token usage and cost metrics
praisonai "Analyze this data" --metrics
# Combine with other features
praisonai "Complex task" --metrics --planning
```
### Scheduler CLI:
```bash
praisonai schedule start <name> "task" --interval hourly
praisonai schedule list
praisonai schedule logs <name> [--follow]
praisonai schedule stop <name>
praisonai schedule restart <name>
praisonai schedule delete <name>
praisonai schedule describe <name>
praisonai schedule save <name> [file.yaml]
praisonai schedule "task" --interval hourly # foreground mode
praisonai schedule agents.yaml # foreground mode
```
### Image Processing CLI:
```bash
# Process images with vision-based tasks
praisonai "Describe this image" --image path/to/image.png
# Analyze image content
praisonai "What objects are in this photo?" --image photo.jpg --llm openai/gpt-4o
```
### Telemetry CLI:
```bash
# Enable usage monitoring and analytics
praisonai "Task" --telemetry
# Combine with metrics for full observability
praisonai "Complex analysis" --telemetry --metrics
```
### MCP (Model Context Protocol) CLI:
```bash
# Use MCP server tools
praisonai "Search files" --mcp "npx -y @modelcontextprotocol/server-filesystem ."
# MCP with environment variables
praisonai "Search web" --mcp "npx -y @modelcontextprotocol/server-brave-search" --mcp-env "BRAVE_API_KEY=your_key"
# Multiple MCP options
praisonai "Task" --mcp "npx server" --mcp-env "KEY1=value1,KEY2=value2"
```
### Fast Context CLI:
```bash
# Search codebase for relevant context
praisonai "Find authentication code" --fast-context ./src
# Add code context to any task
praisonai "Explain this function" --fast-context /path/to/project
```
### Knowledge CLI:
```bash
# Add documents to knowledge base
praisonai knowledge add document.pdf
praisonai knowledge add ./docs/
# Search knowledge base
praisonai knowledge search "API authentication"
# List indexed documents
praisonai knowledge list
# Clear knowledge base
praisonai knowledge clear
# Show knowledge base info
praisonai knowledge info
# Show all commands
praisonai knowledge help
```
### Session CLI:
```bash
# List all saved sessions
praisonai session list
# Show session details
praisonai session show my-project
# Resume a session (load into memory)
praisonai session resume my-project
# Delete a session
praisonai session delete my-project
# Auto-save session after each run
praisonai "Analyze this code" --auto-save my-project
# Load history from last N sessions into context
praisonai "Continue our discussion" --history 5
```
### Session Management (Python):
```python
from praisonaiagents import Agent
# Auto-save session after each run
agent = Agent(
name="Assistant",
memory=True,
auto_save="my-project"
)
# Load history from last 5 sessions
agent = Agent(
name="Assistant",
memory=True,
history_in_context=5
)
```
### Workflow Checkpoints:
```python
from praisonaiagents.memory.workflows import WorkflowManager
manager = WorkflowManager()
# Save checkpoint after each step
result = manager.execute("deploy", checkpoint="deploy-v1")
# Resume from checkpoint
result = manager.execute("deploy", resume="deploy-v1")
# List/delete checkpoints
manager.list_checkpoints()
manager.delete_checkpoint("deploy-v1")
```
### Tools CLI:
```bash
praisonai tools list
praisonai tools info internet_search
praisonai tools search "web"
praisonai tools doctor
praisonai tools resolve shell_tool
praisonai tools discover
praisonai tools show-sources
praisonai tools show-sources --template ai-video-editor
```
| Command | Example | Docs |
|---------|---------|------|
| `tools list` | [example](examples/tools/) | [docs](https://docs.praison.ai/docs/cli/tools) |
| `tools resolve` | [example](examples/tools/example_tools_resolve.py) | [docs](https://docs.praison.ai/docs/cli/tools-resolve) |
| `tools discover` | [example](examples/tools/example_tools_discover.py) | [docs](https://docs.praison.ai/docs/cli/tools-discover) |
| `tools show-sources` | [example](examples/tools/example_tools_sources.py) | [docs](https://docs.praison.ai/docs/cli/tools-show-sources) |
### Handoff CLI:
```bash
# Enable agent-to-agent task delegation
praisonai "Research and write article" --handoff "researcher,writer,editor"
# Complex multi-agent workflow
praisonai "Analyze data and create report" --handoff "analyst,visualizer,writer"
```
### Auto Memory CLI:
```bash
# Enable automatic memory extraction
praisonai "Learn about user preferences" --auto-memory
# Combine with user isolation
praisonai "Remember my settings" --auto-memory --user-id user123
```
### Todo CLI:
```bash
# Generate todo list from task
praisonai "Plan the project" --todo
# Add a todo item
praisonai todo add "Implement feature X"
# List all todos
praisonai todo list
# Complete a todo
praisonai todo complete 1
# Delete a todo
praisonai todo delete 1
# Clear all todos
praisonai todo clear
# Show all commands
praisonai todo help
```
### Router CLI:
```bash
# Auto-select best model based on task complexity
praisonai "Simple question" --router
# Specify preferred provider
praisonai "Complex analysis" --router --router-provider anthropic
# Router automatically selects:
# - Simple tasks → gpt-4o-mini, claude-3-haiku
# - Complex tasks → gpt-4-turbo, claude-3-opus
# Create workflow with model routing template
praisonai workflow create --template model-routing --output my_workflow.yaml
```
Custom models can be configured in `agents.yaml`. See [Model Router Docs](https://docs.praison.ai/features/model-router) for details.
### Flow Display CLI:
```bash
# Enable visual workflow tracking
praisonai agents.yaml --flow-display
# Combine with other features
praisonai "Multi-step task" --planning --flow-display
```
### Docs CLI:
```bash
# List all project docs
praisonai docs list
# Create a new doc
praisonai docs create project-overview "This project is a Python web app..."
# Show a specific doc
praisonai docs show project-overview
# Delete a doc
praisonai docs delete old-doc
# Show all commands
praisonai docs help
```
### MCP Config CLI:
```bash
# List all MCP configurations
praisonai mcp list
# Create a new MCP config
praisonai mcp create filesystem npx -y @modelcontextprotocol/server-filesystem .
# Show a specific config
praisonai mcp show filesystem
# Enable/disable a config
praisonai mcp enable filesystem
praisonai mcp disable filesystem
# Delete a config
praisonai mcp delete filesystem
# Show all commands
praisonai mcp help
```
### AI Commit CLI:
```bash
# Full auto mode: stage all, security check, commit, and push
praisonai commit -a
# Interactive mode (requires git add first)
praisonai commit
# Interactive with auto-push
praisonai commit --push
# Skip security check (not recommended)
praisonai commit -a --no-verify
```
**Features:**
- ๐ค AI-generated conventional commit messages
- ๐ Built-in security scanning (API keys, passwords, secrets, sensitive files)
- ๐ฆ Auto-staging with `-a` flag
- ๐ Auto-push in full auto mode
- โ๏ธ Edit message before commit in interactive mode
**Security Detection:**
- API keys, secrets, tokens (AWS, GitHub, GitLab, Slack)
- Passwords and private keys
- Sensitive files (`.env`, `id_rsa`, `.pem`, `.key`, etc.)
### Serve CLI (API Server):
```bash
# Start API server for agents defined in YAML
praisonai serve agents.yaml
# With custom port and host
praisonai serve agents.yaml --port 8005 --host 0.0.0.0
# Alternative flag style
praisonai agents.yaml --serve
# The server provides:
# POST /agents - Run all agents sequentially
# POST /agents/{name} - Run specific agent (e.g., /agents/researcher)
# GET /agents/list - List available agents
```
### n8n Integration CLI:
```bash
# Export workflow to n8n and open in browser
praisonai agents.yaml --n8n
# With custom n8n URL
praisonai agents.yaml --n8n --n8n-url http://localhost:5678
# Set N8N_API_KEY for auto-import
export N8N_API_KEY="your-api-key"
praisonai agents.yaml --n8n
```
### External Agents CLI:
Use external AI coding CLI tools (Claude Code, Gemini CLI, Codex CLI, Cursor CLI) as agent tools:
```bash
# Use Claude Code for coding tasks
praisonai "Refactor the auth module" --external-agent claude
# Use Gemini CLI for code analysis
praisonai "Analyze codebase architecture" --external-agent gemini
# Use OpenAI Codex CLI
praisonai "Fix all bugs in src/" --external-agent codex
# Use Cursor CLI
praisonai "Add comprehensive tests" --external-agent cursor
```
**Python API:**
```python
from praisonai.integrations import (
ClaudeCodeIntegration,
GeminiCLIIntegration,
CodexCLIIntegration,
CursorCLIIntegration
)
# Create integration
claude = ClaudeCodeIntegration(workspace="/project")
# Execute a coding task
result = await claude.execute("Refactor the auth module")
# Use as agent tool
from praisonai import Agent
tool = claude.as_tool()
agent = Agent(tools=[tool])
```
**Environment Variables:**
```bash
export ANTHROPIC_API_KEY=your-key # Claude Code
export GEMINI_API_KEY=your-key # Gemini CLI
export OPENAI_API_KEY=your-key # Codex CLI
export CURSOR_API_KEY=your-key # Cursor CLI
```
See [External Agents Documentation](https://docs.praison.ai/code/external-agents) for more details.
### @Mentions in Prompts:
```bash
# Include file content in prompt
praisonai "@file:src/main.py explain this code"
# Include project doc
praisonai "@doc:project-overview help me add a feature"
# Search the web
praisonai "@web:python best practices give me tips"
# Fetch URL content
praisonai "@url:https://docs.python.org summarize this"
# Combine multiple mentions
praisonai "@file:main.py @doc:coding-standards review this code"
```
## Prompt Expansion
Expand short prompts into detailed, actionable prompts:
### CLI Usage
```bash
# Expand a short prompt into detailed prompt
praisonai "write a movie script in 3 lines" --expand-prompt
# With verbose output
praisonai "blog about AI" --expand-prompt -v
# With tools for context gathering
praisonai "latest AI trends" --expand-prompt --expand-tools tools.py
# Combine with query rewrite
praisonai "AI news" --query-rewrite --expand-prompt
```
### Programmatic Usage
```python
from praisonaiagents import PromptExpanderAgent, ExpandStrategy
# Basic usage
agent = PromptExpanderAgent()
result = agent.expand("write a movie script in 3 lines")
print(result.expanded_prompt)
# With specific strategy
result = agent.expand("blog about AI", strategy=ExpandStrategy.DETAILED)
# Available strategies: BASIC, DETAILED, STRUCTURED, CREATIVE, AUTO
```
**Key Difference:**
- `--query-rewrite`: Optimizes queries for search/retrieval (RAG)
- `--expand-prompt`: Expands prompts for detailed task execution
## Web Search, Web Fetch & Prompt Caching
### CLI Usage
```bash
# Web Search - Get real-time information
praisonai "What are the latest AI news today?" --web-search --llm openai/gpt-4o-search-preview
# Web Fetch - Retrieve and analyze URL content (Anthropic only)
praisonai "Summarize https://docs.praison.ai" --web-fetch --llm anthropic/claude-sonnet-4-20250514
# Prompt Caching - Reduce costs for repeated prompts
praisonai "Analyze this document..." --prompt-caching --llm anthropic/claude-sonnet-4-20250514
```
### Programmatic Usage
```python
from praisonaiagents import Agent
# Web Search
agent = Agent(
instructions="You are a research assistant",
llm="openai/gpt-4o-search-preview",
web_search=True
)
# Web Fetch (Anthropic only)
agent = Agent(
instructions="You are a content analyzer",
llm="anthropic/claude-sonnet-4-20250514",
web_fetch=True
)
# Prompt Caching
agent = Agent(
instructions="You are an AI assistant..." * 50, # Long system prompt
llm="anthropic/claude-sonnet-4-20250514",
prompt_caching=True
)
```
**Supported Providers:**
| Feature | Providers |
|---------|----------|
| Web Search | OpenAI, Gemini, Anthropic, xAI, Perplexity |
| Web Fetch | Anthropic |
| Prompt Caching | OpenAI (auto), Anthropic, Bedrock, Deepseek |
## CLI Features
| Feature | Docs |
|---------|:----:|
| ๐ Query Rewrite - RAG optimization | [๐](https://docs.praison.ai/docs/cli/query-rewrite) |
| ๐ฌ Deep Research - Automated research | [๐](https://docs.praison.ai/docs/cli/deep-research) |
| ๐ Planning - Step-by-step execution | [๐](https://docs.praison.ai/docs/cli/planning) |
| ๐พ Memory - Persistent agent memory | [๐](https://docs.praison.ai/docs/cli/memory) |
| ๐ Rules - Auto-discovered instructions | [๐](https://docs.praison.ai/docs/cli/rules) |
| ๐ Workflow - Multi-step workflows | [๐](https://docs.praison.ai/docs/cli/workflow) |
| ๐ช Hooks - Event-driven actions | [๐](https://docs.praison.ai/docs/cli/hooks) |
| ๐ง Claude Memory - Anthropic memory tool | [๐](https://docs.praison.ai/docs/cli/claude-memory) |
| ๐ก๏ธ Guardrail - Output validation | [๐](https://docs.praison.ai/docs/cli/guardrail) |
| ๐ Metrics - Token usage tracking | [๐](https://docs.praison.ai/docs/cli/metrics) |
| ๐ผ๏ธ Image - Vision processing | [๐](https://docs.praison.ai/docs/cli/image) |
| ๐ก Telemetry - Usage monitoring | [๐](https://docs.praison.ai/docs/cli/telemetry) |
| ๐ MCP - Model Context Protocol | [๐](https://docs.praison.ai/docs/cli/mcp) |
| โก Fast Context - Codebase search | [๐](https://docs.praison.ai/docs/cli/fast-context) |
| ๐ Knowledge - RAG management | [๐](https://docs.praison.ai/docs/cli/knowledge) |
| ๐ฌ Session - Conversation management | [๐](https://docs.praison.ai/docs/cli/session) |
| ๐ง Tools - Tool discovery | [๐](https://docs.praison.ai/docs/cli/tools) |
| ๐ค Handoff - Agent delegation | [๐](https://docs.praison.ai/docs/cli/handoff) |
| ๐ง Auto Memory - Memory extraction | [๐](https://docs.praison.ai/docs/cli/auto-memory) |
| ๐ Todo - Task management | [๐](https://docs.praison.ai/docs/cli/todo) |
| ๐ฏ Router - Smart model selection | [๐](https://docs.praison.ai/docs/cli/router) |
| ๐ Flow Display - Visual workflow | [๐](https://docs.praison.ai/docs/cli/flow-display) |
| โจ Prompt Expansion - Detailed prompts | [๐](https://docs.praison.ai/docs/cli/prompt-expansion) |
| ๐ Web Search - Real-time search | [๐](https://docs.praison.ai/docs/cli/web-search) |
| ๐ฅ Web Fetch - URL content retrieval | [๐](https://docs.praison.ai/docs/cli/web-fetch) |
| ๐พ Prompt Caching - Cost reduction | [๐](https://docs.praison.ai/docs/cli/prompt-caching) |
| ๐ฆ Template Catalog - Browse & discover templates | [๐](https://docs.praison.ai/docs/cli/template-catalog) |
### Template Catalog CLI
| Command | Description |
|---------|-------------|
| `praisonai templates browse` | Open template catalog in browser |
| `praisonai templates browse --print` | Print catalog URL only |
| `praisonai templates validate` | Validate template YAML files |
| `praisonai templates validate --source <dir>` | Validate specific directory |
| `praisonai templates validate --strict` | Strict validation mode |
| `praisonai templates validate --json` | JSON output format |
| `praisonai templates catalog build` | Build catalog locally |
| `praisonai templates catalog build --out <dir>` | Build to specific directory |
| `praisonai templates catalog sync` | Sync template sources |
| `praisonai templates catalog sync --source <name>` | Sync specific source |
**Examples:** [examples/catalog/](examples/catalog/) | **Docs:** [Code](https://docs.praison.ai/docs/cli/template-catalog-code) | [CLI](https://docs.praison.ai/docs/cli/template-catalog)
---
## ๐ป Using JavaScript Code
```bash
npm install praisonai
export OPENAI_API_KEY=xxxxxxxxxxxxxxxxxxxxxx
```
```javascript
const { Agent } = require('praisonai');
const agent = new Agent({ instructions: 'You are a helpful AI assistant' });
agent.start('Write a movie script about a robot on Mars');
```

---
## โญ Star History
[Star History](https://docs.praison.ai)
---
## ๐ Process Types & Patterns
### AI Agents Flow
```mermaid
graph LR
%% Define the main flow
Start([โถ Start]) --> Agent1
Agent1 --> Process[โ Process]
Process --> Agent2
Agent2 --> Output([โ Output])
Process -.-> Agent1
%% Define subgraphs for agents and their tasks
subgraph Agent1[ ]
Task1[๐ Task]
AgentIcon1[๐ค AI Agent]
Tools1[๐ง Tools]
Task1 --- AgentIcon1
AgentIcon1 --- Tools1
end
subgraph Agent2[ ]
Task2[๐ Task]
AgentIcon2[๐ค AI Agent]
Tools2[๐ง Tools]
Task2 --- AgentIcon2
AgentIcon2 --- Tools2
end
classDef input fill:#8B0000,stroke:#7C90A0,color:#fff
classDef process fill:#189AB4,stroke:#7C90A0,color:#fff
classDef tools fill:#2E8B57,stroke:#7C90A0,color:#fff
classDef transparent fill:none,stroke:none
class Start,Output,Task1,Task2 input
class Process,AgentIcon1,AgentIcon2 process
class Tools1,Tools2 tools
class Agent1,Agent2 transparent
```
## AI Agents with Tools
Create AI agents that can use tools to interact with external systems and perform actions.
```mermaid
flowchart TB
subgraph Tools
direction TB
T3[Internet Search]
T1[Code Execution]
T2[Formatting]
end
Input[Input] ---> Agents
subgraph Agents
direction LR
A1[Agent 1]
A2[Agent 2]
A3[Agent 3]
end
Agents ---> Output[Output]
T3 --> A1
T1 --> A2
T2 --> A3
style Tools fill:#189AB4,color:#fff
style Agents fill:#8B0000,color:#fff
style Input fill:#8B0000,color:#fff
style Output fill:#8B0000,color:#fff
```
## AI Agents with Memory
Create AI agents with memory capabilities for maintaining context and information across tasks.
```mermaid
flowchart TB
subgraph Memory
direction TB
STM[Short Term]
LTM[Long Term]
end
subgraph Store
direction TB
DB[(Vector DB)]
end
Input[Input] ---> Agents
subgraph Agents
direction LR
A1[Agent 1]
A2[Agent 2]
A3[Agent 3]
end
Agents ---> Output[Output]
Memory <--> Store
Store <--> A1
Store <--> A2
Store <--> A3
style Memory fill:#189AB4,color:#fff
style Store fill:#2E8B57,color:#fff
style Agents fill:#8B0000,color:#fff
style Input fill:#8B0000,color:#fff
style Output fill:#8B0000,color:#fff
```
## AI Agents with Different Processes
### Sequential Process
The simplest form of task execution where tasks are performed one after another.
```mermaid
graph LR
Input[Input] --> A1
subgraph Agents
direction LR
A1[Agent 1] --> A2[Agent 2] --> A3[Agent 3]
end
A3 --> Output[Output]
classDef input fill:#8B0000,stroke:#7C90A0,color:#fff
classDef process fill:#189AB4,stroke:#7C90A0,color:#fff
classDef transparent fill:none,stroke:none
class Input,Output input
class A1,A2,A3 process
class Agents transparent
```
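As a rough Python equivalent of this diagram, the `Agent` and `PraisonAIAgents` classes shown in the examples above can be chained in listed order (a minimal sketch, assuming the default process runs agents sequentially):
```python
from praisonaiagents import Agent, PraisonAIAgents

researcher = Agent(instructions="Research the given topic and list the key facts")
writer = Agent(instructions="Turn the research notes into a short article")
editor = Agent(instructions="Polish the article for clarity and tone")

# Agents run one after another, each building on the previous output
agents = PraisonAIAgents(agents=[researcher, writer, editor])
agents.start()
```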
### Hierarchical Process
Uses a manager agent to coordinate task execution and agent assignments.
```mermaid
graph TB
Input[Input] --> Manager
subgraph Agents
Manager[Manager Agent]
subgraph Workers
direction LR
W1[Worker 1]
W2[Worker 2]
W3[Worker 3]
end
Manager --> W1
Manager --> W2
Manager --> W3
end
W1 --> Manager
W2 --> Manager
W3 --> Manager
Manager --> Output[Output]
classDef input fill:#8B0000,stroke:#7C90A0,color:#fff
classDef process fill:#189AB4,stroke:#7C90A0,color:#fff
classDef transparent fill:none,stroke:none
class Input,Output input
class Manager,W1,W2,W3 process
class Agents,Workers transparent
```
### Workflow Process
Advanced process type supporting complex task relationships and conditional execution.
```mermaid
graph LR
Input[Input] --> Start
subgraph Workflow
direction LR
Start[Start] --> C1{Condition}
C1 --> |Yes| A1[Agent 1]
C1 --> |No| A2[Agent 2]
A1 --> Join
A2 --> Join
Join --> A3[Agent 3]
end
A3 --> Output[Output]
classDef input fill:#8B0000,stroke:#7C90A0,color:#fff
classDef process fill:#189AB4,stroke:#7C90A0,color:#fff
classDef decision fill:#2E8B57,stroke:#7C90A0,color:#fff
classDef transparent fill:none,stroke:none
class Input,Output input
class Start,A1,A2,A3,Join process
class C1 decision
class Workflow transparent
```
#### Agentic Routing Workflow
Create AI agents that can dynamically route tasks to specialized LLM instances.
```mermaid
flowchart LR
In[In] --> Router[LLM Call Router]
Router --> LLM1[LLM Call 1]
Router --> LLM2[LLM Call 2]
Router --> LLM3[LLM Call 3]
LLM1 --> Out[Out]
LLM2 --> Out
LLM3 --> Out
style In fill:#8B0000,color:#fff
style Router fill:#2E8B57,color:#fff
style LLM1 fill:#2E8B57,color:#fff
style LLM2 fill:#2E8B57,color:#fff
style LLM3 fill:#2E8B57,color:#fff
style Out fill:#8B0000,color:#fff
```
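A minimal Python sketch of the routing idea using the `Agent` API from the examples above; the one-word classifier output and the keyword lookup are illustrative assumptions rather than a built-in router:
```python
from praisonaiagents import Agent

router = Agent(
    instructions="Classify the request as billing, technical or general. Reply with one word."
)
specialists = {
    "billing": Agent(instructions="You answer billing questions"),
    "technical": Agent(instructions="You answer technical questions"),
    "general": Agent(instructions="You answer general questions"),
}

def route(request: str) -> str:
    label = router.chat(request).strip().lower()  # assumes chat() returns the reply text
    specialist = specialists.get(label, specialists["general"])  # fall back to the general agent
    return specialist.chat(request)

print(route("My last invoice was charged twice"))
```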
#### Agentic Orchestrator Worker
Create AI agents that orchestrate and distribute tasks among specialized workers.
```mermaid
flowchart LR
In[In] --> Router[LLM Call Router]
Router --> LLM1[LLM Call 1]
Router --> LLM2[LLM Call 2]
Router --> LLM3[LLM Call 3]
LLM1 --> Synthesizer[Synthesizer]
LLM2 --> Synthesizer
LLM3 --> Synthesizer
Synthesizer --> Out[Out]
style In fill:#8B0000,color:#fff
style Router fill:#2E8B57,color:#fff
style LLM1 fill:#2E8B57,color:#fff
style LLM2 fill:#2E8B57,color:#fff
style LLM3 fill:#2E8B57,color:#fff
style Synthesizer fill:#2E8B57,color:#fff
style Out fill:#8B0000,color:#fff
```
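A hedged sketch of the orchestrator-workers pattern built from plain `Agent` calls (the one-sub-task-per-line convention is an assumption for illustration):
```python
from praisonaiagents import Agent

orchestrator = Agent(
    instructions="Split the task into exactly three short, independent sub-tasks, one per line"
)
worker = Agent(instructions="Complete the given sub-task concisely")
synthesizer = Agent(instructions="Combine the partial results into one coherent answer")

def orchestrate(task: str) -> str:
    plan = orchestrator.chat(task)  # assumes chat() returns plain text
    sub_tasks = [line.strip() for line in plan.splitlines() if line.strip()]
    partial_results = [worker.chat(sub_task) for sub_task in sub_tasks]
    return synthesizer.chat("\n\n".join(partial_results))

print(orchestrate("Produce a short market analysis for electric bikes"))
```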
#### Agentic Autonomous Workflow
Create AI agents that can autonomously monitor, act, and adapt based on environment feedback.
```mermaid
flowchart LR
Human[Human] <--> LLM[LLM Call]
LLM -->|ACTION| Environment[Environment]
Environment -->|FEEDBACK| LLM
LLM --> Stop[Stop]
style Human fill:#8B0000,color:#fff
style LLM fill:#2E8B57,color:#fff
style Environment fill:#8B0000,color:#fff
style Stop fill:#333,color:#fff
```
#### Agentic Parallelization
Create AI agents that can execute tasks in parallel for improved performance.
```mermaid
flowchart LR
In[In] --> LLM2[LLM Call 2]
In --> LLM1[LLM Call 1]
In --> LLM3[LLM Call 3]
LLM1 --> Aggregator[Aggregator]
LLM2 --> Aggregator
LLM3 --> Aggregator
Aggregator --> Out[Out]
style In fill:#8B0000,color:#fff
style LLM1 fill:#2E8B57,color:#fff
style LLM2 fill:#2E8B57,color:#fff
style LLM3 fill:#2E8B57,color:#fff
style Aggregator fill:#fff,color:#000
style Out fill:#8B0000,color:#fff
```
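A minimal parallelisation sketch using Python threads around the `Agent` API (it assumes `chat()` returns the response text and can be called from worker threads):
```python
from concurrent.futures import ThreadPoolExecutor
from praisonaiagents import Agent

agents = [
    Agent(instructions="Summarise the topic from a technical angle"),
    Agent(instructions="Summarise the topic from a business angle"),
    Agent(instructions="Summarise the topic from a user-experience angle"),
]
aggregator = Agent(instructions="Merge the summaries into one balanced overview")

topic = "Edge AI inference"
with ThreadPoolExecutor(max_workers=len(agents)) as pool:
    results = list(pool.map(lambda agent: agent.chat(topic), agents))  # fan out the same input

print(aggregator.chat("\n\n".join(results)))  # fan the results back in
```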
#### Agentic Prompt Chaining
Create AI agents with sequential prompt chaining for complex workflows.
```mermaid
flowchart LR
In[In] --> LLM1[LLM Call 1] --> Gate{Gate}
Gate -->|Pass| LLM2[LLM Call 2] -->|Output 2| LLM3[LLM Call 3] --> Out[Out]
Gate -->|Fail| Exit[Exit]
style In fill:#8B0000,color:#fff
style LLM1 fill:#2E8B57,color:#fff
style LLM2 fill:#2E8B57,color:#fff
style LLM3 fill:#2E8B57,color:#fff
style Out fill:#8B0000,color:#fff
style Exit fill:#8B0000,color:#fff
```
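A small sketch of prompt chaining with a conditional gate, using plain `Agent` calls (the PASS/FAIL convention is an illustrative assumption):
```python
from praisonaiagents import Agent

drafter = Agent(instructions="Draft a one-paragraph product description")
gate = Agent(instructions="Answer PASS if the text mentions a concrete benefit, otherwise FAIL")
expander = Agent(instructions="Expand the paragraph into a short landing-page section")
formatter = Agent(instructions="Format the section as Markdown with a heading")

draft = drafter.chat("A solar-powered bike light")
if "PASS" in gate.chat(draft).upper():  # conditional gate between chain steps
    print(formatter.chat(expander.chat(draft)))
else:
    print("Gate failed - stopping the chain")
```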
#### Agentic Evaluator Optimizer
Create AI agents that can generate and optimize solutions through iterative feedback.
```mermaid
flowchart LR
In[In] --> Generator[LLM Call Generator]
Generator -->|SOLUTION| Evaluator[LLM Call Evaluator] -->|ACCEPTED| Out[Out]
Evaluator -->|REJECTED + FEEDBACK| Generator
style In fill:#8B0000,color:#fff
style Generator fill:#2E8B57,color:#fff
style Evaluator fill:#2E8B57,color:#fff
style Out fill:#8B0000,color:#fff
```
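A bounded evaluator-optimizer loop sketched with two `Agent` instances (the ACCEPTED keyword and the three-round limit are assumptions for illustration):
```python
from praisonaiagents import Agent

generator = Agent(instructions="Write a short tagline for the given product")
evaluator = Agent(
    instructions="Reply ACCEPTED if the tagline is under 8 words and memorable; otherwise give one line of feedback"
)

task = "Tagline for a privacy-first note-taking app"
feedback = ""
for _ in range(3):  # bounded number of refinement rounds
    prompt = task if not feedback else f"{task}\nRevise using this feedback: {feedback}"
    tagline = generator.chat(prompt)  # assumes chat() returns the reply text
    verdict = evaluator.chat(tagline)
    if "ACCEPTED" in verdict.upper():
        break
    feedback = verdict  # feed the critique back into the next round
print(tagline)
```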
#### Repetitive Agents
Create AI agents that can efficiently handle repetitive tasks through automated loops.
```mermaid
flowchart LR
In[Input] --> LoopAgent[("Looping Agent")]
LoopAgent --> Task[Task]
Task --> |Next iteration| LoopAgent
Task --> |Done| Out[Output]
style In fill:#8B0000,color:#fff
style LoopAgent fill:#2E8B57,color:#fff,shape:circle
style Task fill:#2E8B57,color:#fff
style Out fill:#8B0000,color:#fff
```
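A minimal repetitive-agent sketch: the same `Agent` applied over a list of items in a plain Python loop:
```python
from praisonaiagents import Agent

worker = Agent(instructions="Write a one-sentence changelog entry for the given item")

items = ["add dark mode", "fix login bug", "improve search speed"]
entries = [worker.chat(item) for item in items]  # same agent looped over every item
print("\n".join(entries))
```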
---
## ๐ง Configuration & Integration
### Ollama Integration
```bash
export OPENAI_BASE_URL=http://localhost:11434/v1
```
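With the base URL pointing at Ollama, the regular Python API works unchanged; a minimal sketch (the `llama3.2` model name is an assumption - use any model you have pulled into Ollama):
```python
from praisonaiagents import Agent

# With OPENAI_BASE_URL exported as above, the usual Agent API works unchanged
agent = Agent(
    instructions="You are a helpful AI assistant",
    llm="llama3.2",  # assumed model name - use any model pulled into Ollama
)
agent.start("Write a haiku about local inference")
```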
### Groq Integration
Replace `xxxxxxxxxxx` with your Groq API key:
```bash
export OPENAI_API_KEY=xxxxxxxxxxx
export OPENAI_BASE_URL=https://api.groq.com/openai/v1
```
### 100+ Models Support
PraisonAI supports 100+ LLM models from various providers. Visit our [models documentation](https://docs.praison.ai/models/) for the complete list.
<div align="center">
<a href="https://docs.praison.ai/models">
<p align="center">
<img src="https://img.shields.io/badge/๐_Models_Documentation-Visit_docs.praison.ai-blue?style=for-the-badge&logo=bookstack&logoColor=white" alt="Models Documentation" />
</p>
</a>
</div>
---
## ๐ Agents Playbook
### Simple Playbook Example
Create `agents.yaml` file and add the code below:
```yaml
framework: praisonai
topic: Artificial Intelligence
agents: # Canonical: use 'agents' instead of 'roles'
  screenwriter:
    instructions: "Skilled in crafting scripts with engaging dialogue about {topic}." # Canonical: use 'instructions' instead of 'backstory'
    goal: Create scripts from concepts.
    role: Screenwriter
    tasks:
      scriptwriting_task:
        description: "Develop scripts with compelling characters and dialogue about {topic}."
        expected_output: "Complete script ready for production."
```
*To run the playbook:*
```bash
praisonai agents.yaml
```
---
## ๐ ๏ธ Custom Tools / Create Plugins
PraisonAI supports multiple ways to create and integrate custom tools (plugins) into your agents.
### Using `@tool` Decorator
```python
from praisonaiagents import Agent, tool
@tool
def search(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"
@tool
def calculate(expression: str) -> float:
    """Evaluate a math expression."""
    return eval(expression)  # demo only - avoid eval on untrusted input
agent = Agent(
    instructions="You are a helpful assistant",
    tools=[search, calculate]
)
agent.start("Search for AI news and calculate 15*4")
```
### Using `BaseTool` Class
```python
from praisonaiagents import Agent, BaseTool
class WeatherTool(BaseTool):
    name = "weather"
    description = "Get current weather for a location"
    def run(self, location: str) -> str:
        return f"Weather in {location}: 72°F, Sunny"
agent = Agent(
    instructions="You are a weather assistant",
    tools=[WeatherTool()]
)
agent.start("What's the weather in Paris?")
```
### Creating a Tool Package (pip installable)
```toml
# pyproject.toml
[project]
name = "my-praisonai-tools"
version = "1.0.0"
dependencies = ["praisonaiagents"]
[project.entry-points."praisonaiagents.tools"]
my_tool = "my_package:MyTool"
```
```python
# my_package/__init__.py
from praisonaiagents import BaseTool
class MyTool(BaseTool):
    name = "my_tool"
    description = "My custom tool"
    def run(self, param: str) -> str:
        return f"Result: {param}"
```
After `pip install`, tools are auto-discovered:
```python
agent = Agent(tools=["my_tool"]) # Works automatically!
```
---
## ๐ง Memory & Context
PraisonAI provides zero-dependency persistent memory for agents. For detailed examples, see [section 6. Agent Memory](#6-agent-memory-zero-dependencies) in the Python Code Examples.
---
## ๐ Knowledge & Retrieval (RAG)
PraisonAI provides a complete knowledge stack for building RAG applications with multiple vector stores, retrieval strategies, rerankers, and query modes.
### Knowledge CLI Commands
| Command | Description |
|---------|-------------|
| `praisonai knowledge add <file\|dir\|url>` | Add documents to knowledge base |
| `praisonai knowledge query <question>` | Query knowledge base with RAG |
| `praisonai knowledge list` | List indexed documents |
| `praisonai knowledge clear` | Clear knowledge base |
| `praisonai knowledge stats` | Show knowledge base statistics |
### Knowledge CLI Options
| Option | Values | Description |
|--------|--------|-------------|
| `--vector-store` | `memory`, `chroma`, `pinecone`, `qdrant`, `weaviate` | Vector store backend |
| `--retrieval` | `basic`, `fusion`, `recursive`, `auto_merge` | Retrieval strategy |
| `--reranker` | `simple`, `llm`, `cross_encoder`, `cohere` | Reranking method |
| `--index-type` | `vector`, `keyword`, `hybrid` | Index type |
| `--query-mode` | `default`, `sub_question`, `summarize` | Query mode |
### Knowledge CLI Examples
```bash
# Add documents
praisonai knowledge add ./docs/
praisonai knowledge add https://example.com/page.html
praisonai knowledge add "*.pdf"
# Query with advanced options
praisonai knowledge query "How to authenticate?" --retrieval fusion --reranker llm
# Full advanced query
praisonai knowledge query "authentication flow" \
--vector-store chroma \
--retrieval fusion \
--reranker llm \
--index-type hybrid \
--query-mode sub_question
```
### Knowledge SDK Usage
```python
from praisonaiagents import Agent, Knowledge
# Simple usage with Agent
agent = Agent(
name="Research Assistant",
knowledge=["docs/manual.pdf", "data/faq.txt"],
knowledge_config={
"vector_store": {"provider": "chroma"}
}
)
response = agent.chat("How do I authenticate?")
# Direct Knowledge usage
knowledge = Knowledge()
knowledge.add("document.pdf")
results = knowledge.search("authentication", limit=5)
```
### Knowledge Stack Features Table
| Feature | Description | SDK Docs | CLI Docs |
|---------|-------------|----------|----------|
| **Data Readers** | Load PDF, Markdown, Text, HTML, URLs | [SDK](/docs/sdk/praisonaiagents/knowledge/protocols) | [CLI](/docs/cli/knowledge) |
| **Vector Stores** | ChromaDB, Pinecone, Qdrant, Weaviate, In-Memory | [SDK](/docs/sdk/praisonaiagents/knowledge/protocols) | [CLI](/docs/cli/knowledge) |
| **Retrieval Strategies** | Basic, Fusion (RRF), Recursive, Auto-Merge | [SDK](/docs/sdk/praisonaiagents/knowledge/protocols) | [CLI](/docs/cli/knowledge) |
| **Rerankers** | Simple, LLM, Cross-Encoder, Cohere | [SDK](/docs/sdk/praisonaiagents/knowledge/protocols) | [CLI](/docs/cli/knowledge) |
| **Index Types** | Vector, Keyword (BM25), Hybrid | [SDK](/docs/sdk/praisonaiagents/knowledge/protocols) | [CLI](/docs/cli/knowledge) |
| **Query Engines** | Default, Sub-Question, Summarize | [SDK](/docs/sdk/praisonaiagents/knowledge/protocols) | [CLI](/docs/cli/knowledge) |
---
## ๐ฌ Advanced Features
### Research & Intelligence
- ๐ฌ **Deep Research Agents** - OpenAI & Gemini support for automated research
- ๐ **Query Rewriter Agent** - HyDE, Step-back, Multi-query strategies for RAG optimization
- ๐ **Native Web Search** - Real-time search via OpenAI, Gemini, Anthropic, xAI, Perplexity
- ๐ฅ **Web Fetch** - Retrieve full content from URLs (Anthropic)
- ๐ **Prompt Expander Agent** - Expand short prompts into detailed instructions
### Memory & Caching
- ๐พ **Prompt Caching** - Reduce costs & latency (OpenAI, Anthropic, Bedrock, Deepseek)
- ๐ง **Claude Memory Tool** - Persistent cross-conversation memory (Anthropic Beta)
- ๐พ **File-Based Memory** - Zero-dependency persistent memory for all agents
- ๐ **Built-in Search Tools** - Tavily, You.com, Exa for web search, news, content extraction
### Planning & Workflows
- ๐ **Planning Mode** - Plan before execution for agents & multi-agent systems
- ๐ง **Planning Tools** - Research with tools during planning phase
- ๐ง **Planning Reasoning** - Chain-of-thought planning for complex tasks
- โ๏ธ **Prompt Chaining** - Sequential prompt workflows with conditional gates
- ๐ **Evaluator Optimiser** - Generate and optimize through iterative feedback
- ๐ท **Orchestrator Workers** - Distribute tasks among specialised workers
- โก **Parallelisation** - Execute tasks in parallel for improved performance
- ๐ **Repetitive Agents** - Handle repetitive tasks through automated loops
- ๐ค **Autonomous Workflow** - Monitor, act, adapt based on environment feedback
### Specialised Agents
- ๐ผ๏ธ **Image Generation Agent** - Create images from text descriptions
- ๐ท **Image to Text Agent** - Extract text and descriptions from images
- ๐ฌ **Video Agent** - Analyse and process video content
- ๐ **Data Analyst Agent** - Analyse data and generate insights
- ๐ฐ **Finance Agent** - Financial analysis and recommendations
- ๐ **Shopping Agent** - Price comparison and shopping assistance
- โญ **Recommendation Agent** - Personalised recommendations
- ๐ **Wikipedia Agent** - Search and extract Wikipedia information
- ๐ป **Programming Agent** - Code development and analysis
- ๐ **Markdown Agent** - Generate and format Markdown content
- ๐ **Model Router** - Smart model selection based on task complexity
### MCP Protocol
- ๐ **MCP Transports** - stdio, Streamable HTTP, WebSocket, SSE (Protocol 2025-11-25)
- ๐ **WebSocket MCP** - Real-time bidirectional connections with auto-reconnect
- ๐ **MCP Security** - Origin validation, DNS rebinding prevention, secure sessions
- ๐ **MCP Resumability** - SSE stream recovery via Last-Event-ID
### A2A & A2UI Protocols
- ๐ **A2A Protocol** - Agent-to-Agent communication for inter-agent collaboration
- ๐ผ๏ธ **A2UI Protocol** - Agent-to-User Interface for generating rich UIs from agents
- ๐ **UI Templates** - ChatTemplate, ListTemplate, FormTemplate, DashboardTemplate
- ๐ง **Surface Builder** - Fluent API for building declarative UIs
### Safety & Control
- ๐ค **Agent Handoffs** - Transfer context between specialised agents
- ๐ก๏ธ **Guardrails** - Input/output validation and safety checks
- ✅ **Human Approval** - Require human confirmation for critical actions
- ๐ **Tool Approval CLI** - `--trust` (auto-approve all) and `--approve-level` (risk-based approval)
- ๐ฌ **Sessions Management** - Isolated conversation contexts
- ๐ **Stateful Agents** - Maintain state across interactions
### Developer Tools
- โก **Fast Context** - Rapid parallel code search (10-20x faster)
- ๐ **Rules & Instructions** - Auto-discover CLAUDE.md, AGENTS.md, GEMINI.md
- ๐ช **Hooks** - Pre/post operation hooks for custom logic
- ๐ **Telemetry** - Track agent performance and usage
- ๐น **Camera Integration** - Capture and analyse camera input
### Other Features
- ๐ **CrewAI & AG2 Integration** - Use CrewAI or AG2 (Formerly AutoGen) Framework
- ๐ป **Codebase Chat** - Chat with entire codebase
- ๐จ **Interactive UIs** - Multiple interactive interfaces
- ๐ **YAML Configuration** - YAML-based agent and workflow configuration
- ๐ ๏ธ **Custom Tools** - Easy custom tool integration
- ๐ **Internet Search** - Multiple providers (Tavily, You.com, Exa, DuckDuckGo, Crawl4AI)
- ๐ผ๏ธ **VLM Support** - Vision Language Model support
- ๐๏ธ **Voice Interaction** - Real-time voice interaction
---
## ๐พ Persistence (Databases)
Enable automatic conversation persistence with 2 lines of code:
```python
from praisonaiagents import Agent, db
agent = Agent(
name="Assistant",
db=db(database_url="postgresql://localhost/mydb"), # db(...) shortcut
session_id="my-session" # Optional: defaults to per-hour ID (YYYYMMDDHH)
)
agent.chat("Hello!") # Auto-persists messages, runs, traces
```
### Persistence CLI Commands
| Command | Description |
|---------|-------------|
| `praisonai persistence doctor` | Validate DB connectivity |
| `praisonai persistence run` | Run agent with persistence |
| `praisonai persistence resume` | Resume existing session |
| `praisonai persistence export` | Export session to JSONL |
| `praisonai persistence import` | Import session from JSONL |
| `praisonai persistence migrate` | Apply schema migrations |
| `praisonai persistence status` | Show schema status |
### Knowledge CLI Commands {#knowledge-cli}
| Command | Description |
|---------|-------------|
| `praisonai knowledge add <source>` | Add file, directory, URL, or glob pattern |
| `praisonai knowledge query "<question>"` | Query knowledge base with RAG |
| `praisonai knowledge list` | List indexed documents |
| `praisonai knowledge clear` | Clear knowledge base |
| `praisonai knowledge stats` | Show knowledge base statistics |
**Knowledge Query Flags:**
| Flag | Values | Default |
|------|--------|---------|
| `--vector-store` | `memory`, `chroma`, `pinecone`, `qdrant`, `weaviate` | `chroma` |
| `--retrieval-strategy` | `basic`, `fusion`, `recursive`, `auto_merge` | `basic` |
| `--reranker` | `none`, `simple`, `llm`, `cross_encoder`, `cohere` | `none` |
| `--index-type` | `vector`, `keyword`, `hybrid` | `vector` |
| `--query-mode` | `default`, `sub_question`, `summarize` | `default` |
| `--workspace` | Path to workspace directory | Current dir |
| `--session` | Session ID for persistence | - |
**Examples:**
```bash
# Add documents
praisonai knowledge add document.pdf
praisonai knowledge add ./docs/
praisonai knowledge add "*.md"
# Query with options
praisonai knowledge query "How to authenticate?" \
--vector-store chroma \
--retrieval-strategy fusion \
--reranker simple \
--query-mode sub_question
```
### Databases Table
| Database | Store Type | Install | Example | Docs |
|----------|------------|---------|---------|------|
| PostgreSQL | Conversation | `pip install "praisonai[tools]"` | [simple_db_agent.py](examples/persistence/simple_db_agent.py) | [docs](https://docs.praison.ai/docs/databases/postgres) |
| MySQL | Conversation | `pip install "praisonai[tools]"` | - | [docs](https://docs.praison.ai/docs/databases/overview) |
| SQLite | Conversation | `pip install "praisonai[tools]"` | - | [docs](https://docs.praison.ai/docs/databases/overview) |
| SingleStore | Conversation | `pip install "praisonai[tools]"` | - | [docs](https://docs.praison.ai/docs/databases/overview) |
| Supabase | Conversation | `pip install "praisonai[tools]"` | - | [docs](https://docs.praison.ai/docs/databases/overview) |
| SurrealDB | Conversation | `pip install "praisonai[tools]"` | - | [docs](https://docs.praison.ai/docs/databases/overview) |
| Qdrant | Knowledge | `pip install "praisonai[tools]"` | [knowledge_qdrant.py](examples/persistence/knowledge_qdrant.py) | [docs](https://docs.praison.ai/docs/databases/qdrant) |
| ChromaDB | Knowledge | `pip install "praisonai[tools]"` | - | [docs](https://docs.praison.ai/docs/databases/overview) |
| Pinecone | Knowledge | `pip install pinecone` | [pinecone_wow.py](examples/vector/pinecone_wow.py) | [docs](https://docs.praison.ai/docs/databases/pinecone) |
| Weaviate | Knowledge | `pip install weaviate-client` | [weaviate_wow.py](examples/vector/weaviate_wow.py) | [docs](https://docs.praison.ai/docs/databases/weaviate) |
| LanceDB | Knowledge | `pip install lancedb` | [lancedb_real_wow.py](examples/vector/lancedb_real_wow.py) | [docs](https://docs.praison.ai/docs/databases/lancedb) |
| Milvus | Knowledge | `pip install "praisonai[tools]"` | - | [docs](https://docs.praison.ai/docs/databases/overview) |
| PGVector | Knowledge | `pip install psycopg2-binary` | [pgvector_real_wow.py](examples/vector/pgvector_real_wow.py) | [docs](https://docs.praison.ai/docs/databases/pgvector) |
| Redis Vector | Knowledge | `pip install "praisonai[tools]"` | - | [docs](https://docs.praison.ai/docs/databases/overview) |
| Cassandra | Knowledge | `pip install "praisonai[tools]"` | - | [docs](https://docs.praison.ai/docs/databases/overview) |
| ClickHouse | Knowledge | `pip install "praisonai[tools]"` | - | [docs](https://docs.praison.ai/docs/databases/overview) |
| Redis | State | `pip install "praisonai[tools]"` | [state_redis.py](examples/persistence/state_redis.py) | [docs](https://docs.praison.ai/docs/databases/redis) |
| MongoDB | State | `pip install "praisonai[tools]"` | - | [docs](https://docs.praison.ai/docs/databases/overview) |
| DynamoDB | State | `pip install "praisonai[tools]"` | - | [docs](https://docs.praison.ai/docs/databases/overview) |
| Firestore | State | `pip install "praisonai[tools]"` | - | [docs](https://docs.praison.ai/docs/databases/overview) |
| Upstash | State | `pip install "praisonai[tools]"` | - | [docs](https://docs.praison.ai/docs/databases/overview) |
| Memory | State | `pip install "praisonai[tools]"` | - | [docs](https://docs.praison.ai/docs/databases/overview) |
---
## ๐ง Tools Table
Install all tools with: `pip install "praisonai[tools]"`
| Tool | Category | Import | Docs |
|------|----------|--------|------|
| Tavily | Web Search | `from praisonai_tools import TavilyTool` | [docs](https://docs.praison.ai/docs/tools/external/tavily) |
| DuckDuckGo | Web Search | `from praisonai_tools import DuckDuckGoTool` | [docs](https://docs.praison.ai/docs/tools/external/duckduckgo) |
| Exa | Web Search | `from praisonai_tools import ExaTool` | [docs](https://docs.praison.ai/docs/tools/external/exa) |
| Serper | Web Search | `from praisonai_tools import SerperTool` | [docs](https://docs.praison.ai/docs/tools/external/serper) |
| Jina | Web Reader | `from praisonai_tools import JinaTool` | [docs](https://docs.praison.ai/docs/tools/external/jina) |
| Firecrawl | Web Scraping | `from praisonai_tools import FirecrawlTool` | [docs](https://docs.praison.ai/docs/tools/external/firecrawl) |
| Crawl4AI | Web Scraping | `from praisonai_tools import Crawl4AITool` | [docs](https://docs.praison.ai/docs/tools/external/crawl4ai) |
| Wikipedia | Knowledge | `from praisonai_tools import WikipediaTool` | [docs](https://docs.praison.ai/docs/tools/external/wikipedia) |
| ArXiv | Research | `from praisonai_tools import ArxivTool` | [docs](https://docs.praison.ai/docs/tools/external/arxiv) |
| HackerNews | News | `from praisonai_tools import HackerNewsTool` | [docs](https://docs.praison.ai/docs/tools/external/hackernews) |
| YouTube | Media | `from praisonai_tools import YouTubeTool` | [docs](https://docs.praison.ai/docs/tools/external/youtube) |
| Weather | Data | `from praisonai_tools import WeatherTool` | [docs](https://docs.praison.ai/docs/tools/external/weather) |
| PostgreSQL | Database | `from praisonai_tools import PostgresTool` | [docs](https://docs.praison.ai/docs/tools/external/postgres) |
| MySQL | Database | `from praisonai_tools import MySQLTool` | [docs](https://docs.praison.ai/docs/tools/external/mysql) |
| SQLite | Database | `from praisonai_tools import SQLiteTool` | [docs](https://docs.praison.ai/docs/tools/external/sqlite) |
| MongoDB | Database | `from praisonai_tools import MongoDBTool` | [docs](https://docs.praison.ai/docs/tools/external/mongodb) |
| Redis | Database | `from praisonai_tools import RedisTool` | [docs](https://docs.praison.ai/docs/tools/external/redis) |
| Qdrant | Vector DB | `from praisonai_tools import QdrantTool` | [docs](https://docs.praison.ai/docs/tools/external/qdrant) |
| GitHub | DevOps | `from praisonai_tools import GitHubTool` | [docs](https://docs.praison.ai/docs/tools/external/github) |
| Slack | Communication | `from praisonai_tools import SlackTool` | [docs](https://docs.praison.ai/docs/tools/external/slack) |
| Discord | Communication | `from praisonai_tools import DiscordTool` | [docs](https://docs.praison.ai/docs/tools/external/discord) |
| Telegram | Communication | `from praisonai_tools import TelegramTool` | [docs](https://docs.praison.ai/docs/tools/external/telegram) |
| Email | Communication | `from praisonai_tools import EmailTool` | [docs](https://docs.praison.ai/docs/tools/external/email) |
| Notion | Productivity | `from praisonai_tools import NotionTool` | [docs](https://docs.praison.ai/docs/tools/external/notion) |
| File | File System | `from praisonai_tools import FileTool` | [docs](https://docs.praison.ai/docs/tools/external/file) |
| Shell | System | `from praisonai_tools import ShellTool` | [docs](https://docs.praison.ai/docs/tools/external/shell) |
| Python | Code | `from praisonai_tools import PythonTool` | [docs](https://docs.praison.ai/docs/tools/external/python) |
| JSON | Data | `from praisonai_tools import JSONTool` | [docs](https://docs.praison.ai/docs/tools/external/json) |
| CSV | Data | `from praisonai_tools import CSVTool` | [docs](https://docs.praison.ai/docs/tools/external/csv) |
| Calculator | Math | `from praisonai_tools import CalculatorTool` | [docs](https://docs.praison.ai/docs/tools/external/calculator) |
> See [full tools documentation](https://docs.praison.ai/docs/tools/tools) for all 100+ available tools.
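For example, a tool from the table can be passed to an agent just like the custom tools above (a minimal sketch; assumes `praisonai[tools]` is installed and the relevant API key, e.g. `TAVILY_API_KEY`, is set):
```python
from praisonaiagents import Agent
from praisonai_tools import TavilyTool

agent = Agent(
    instructions="You are a research assistant that cites sources",
    tools=[TavilyTool()],  # web search tool from the table above
)
agent.start("Find three recent articles about on-device LLMs")
```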
---
## ๐ Video Tutorials
Learn PraisonAI through our comprehensive video series:
| Topic | Video |
|-------|--------|
| AI Agents with Self Reflection | [Watch](https://www.youtube.com/watch?v=vLXobEN2Vc8) |
| Reasoning Data Generating Agent | [Watch](https://www.youtube.com/watch?v=fUT332Y2zA8) |
| AI Agents with Reasoning | [Watch](https://www.youtube.com/watch?v=KNDVWGN3TpM) |
| Multimodal AI Agents | [Watch](https://www.youtube.com/watch?v=hjAWmUT1qqY) |
| AI Agents Workflow | [Watch](https://www.youtube.com/watch?v=yWTH44QPl2A) |
| Async AI Agents | [Watch](https://www.youtube.com/watch?v=VhVQfgo00LE) |
| Mini AI Agents | [Watch](https://www.youtube.com/watch?v=OkvYp5aAGSg) |
| AI Agents with Memory | [Watch](https://www.youtube.com/watch?v=1hVfVxvPnnQ) |
| Repetitive Agents | [Watch](https://www.youtube.com/watch?v=dAYGxsjDOPg) |
| Introduction | [Watch](https://www.youtube.com/watch?v=Fn1lQjC0GO0) |
| Tools Overview | [Watch](https://www.youtube.com/watch?v=XaQRgRpV7jo) |
| Custom Tools | [Watch](https://www.youtube.com/watch?v=JSU2Rndh06c) |
| Firecrawl Integration | [Watch](https://www.youtube.com/watch?v=UoqUDcLcOYo) |
| User Interface | [Watch](https://www.youtube.com/watch?v=tg-ZjNl3OCg) |
| Crawl4AI Integration | [Watch](https://www.youtube.com/watch?v=KAvuVUh0XU8) |
| Chat Interface | [Watch](https://www.youtube.com/watch?v=sw3uDqn2h1Y) |
| Code Interface | [Watch](https://www.youtube.com/watch?v=_5jQayO-MQY) |
| Mem0 Integration | [Watch](https://www.youtube.com/watch?v=KIGSgRxf1cY) |
| Training | [Watch](https://www.youtube.com/watch?v=aLawE8kwCrI) |
| Realtime Voice Interface | [Watch](https://www.youtube.com/watch?v=frRHfevTCSw) |
| Call Interface | [Watch](https://www.youtube.com/watch?v=m1cwrUG2iAk) |
| Reasoning Extract Agents | [Watch](https://www.youtube.com/watch?v=2PPamsADjJA) |
---
## ๐ฅ Contributing
We welcome contributions from the community! Here's how you can contribute:
1. **Fork on GitHub** - Use the "Fork" button on the [repository page](https://github.com/MervinPraison/PraisonAI)
2. **Clone your fork** - `git clone https://github.com/yourusername/PraisonAI.git`
3. **Create a branch** - `git checkout -b new-feature`
4. **Make changes and commit** - `git commit -am "Add some feature"`
5. **Push to your fork** - `git push origin new-feature`
6. **Submit a pull request** - Via GitHub's web interface
7. **Await feedback** - From project maintainers
---
## ๐ง Development
### Using uv
```bash
# Install uv if you haven't already
pip install uv
# Install from requirements
uv pip install -r pyproject.toml
# Install with extras
uv pip install -r pyproject.toml --extra code
uv pip install -r pyproject.toml --extra "crewai,autogen"
```
### Bump and Release
```bash
# From project root - bumps version and releases in one command
python src/praisonai/scripts/bump_and_release.py 2.2.99
# With praisonaiagents dependency
python src/praisonai/scripts/bump_and_release.py 2.2.99 --agents 0.0.169
# Then publish
cd src/praisonai && uv publish
```
---
## โ FAQ & Troubleshooting
<details>
<summary><strong>ModuleNotFoundError: No module named 'praisonaiagents'</strong></summary>
Install the package:
```bash
pip install praisonaiagents
```
</details>
<details>
<summary><strong>API key not found / Authentication error</strong></summary>
Ensure your API key is set:
```bash
export OPENAI_API_KEY=your_key_here
```
For other providers, see [Environment Variables](#environment-variables).
</details>
<details>
<summary><strong>How do I use a local model (Ollama)?</strong></summary>
```bash
# Start Ollama server first
ollama serve
# Set environment variable
export OPENAI_BASE_URL=http://localhost:11434/v1
```
See [Ollama Integration](#ollama-integration) for more details.
</details>
<details>
<summary><strong>How do I persist conversations to a database?</strong></summary>
Use the `db` parameter:
```python
from praisonaiagents import Agent, db
agent = Agent(
name="Assistant",
db=db(database_url="postgresql://localhost/mydb"),
session_id="my-session"
)
```
See [Persistence (Databases)](#-persistence-databases) for supported databases.
</details>
<details>
<summary><strong>How do I enable agent memory?</strong></summary>
```python
from praisonaiagents import Agent
agent = Agent(
name="Assistant",
memory=True, # Enables file-based memory (no extra deps!)
user_id="user123"
)
```
See [Agent Memory](#6-agent-memory-zero-dependencies) for more options.
</details>
<details>
<summary><strong>How do I run multiple agents together?</strong></summary>
```python
from praisonaiagents import Agent, PraisonAIAgents
agent1 = Agent(instructions="Research topics")
agent2 = Agent(instructions="Summarize findings")
agents = PraisonAIAgents(agents=[agent1, agent2])
agents.start()
```
See [Multi Agents](#2-multi-agents) for more examples.
</details>
<details>
<summary><strong>How do I use MCP tools?</strong></summary>
```python
from praisonaiagents import Agent, MCP
agent = Agent(
tools=MCP("npx @modelcontextprotocol/server-memory")
)
```
See [MCP Protocol](#19-mcp-model-context-protocol) for all transport options.
</details>
### Getting Help
- ๐ [Full Documentation](https://docs.praison.ai)
- ๐ [Report Issues](https://github.com/MervinPraison/PraisonAI/issues)
- ๐ฌ [Discussions](https://github.com/MervinPraison/PraisonAI/discussions)
---
<div align="center">
<p><strong>Made with ❤️ by the PraisonAI Team</strong></p>
<p>
<a href="https://docs.praison.ai">Documentation</a> •
<a href="https://github.com/MervinPraison/PraisonAI">GitHub</a> •
<a href="https://github.com/MervinPraison/PraisonAI/issues">Issues</a>
</p>
</div>
", Assign "at most 3 tags" to the expected json: {"id":"9130","tags":[]} "only from the tags list I provide: [{"id":77,"name":"3d"},{"id":89,"name":"agent"},{"id":17,"name":"ai"},{"id":54,"name":"algorithm"},{"id":24,"name":"api"},{"id":44,"name":"authentication"},{"id":3,"name":"aws"},{"id":27,"name":"backend"},{"id":60,"name":"benchmark"},{"id":72,"name":"best-practices"},{"id":39,"name":"bitcoin"},{"id":37,"name":"blockchain"},{"id":1,"name":"blog"},{"id":45,"name":"bundler"},{"id":58,"name":"cache"},{"id":21,"name":"chat"},{"id":49,"name":"cicd"},{"id":4,"name":"cli"},{"id":64,"name":"cloud-native"},{"id":48,"name":"cms"},{"id":61,"name":"compiler"},{"id":68,"name":"containerization"},{"id":92,"name":"crm"},{"id":34,"name":"data"},{"id":47,"name":"database"},{"id":8,"name":"declarative-gui "},{"id":9,"name":"deploy-tool"},{"id":53,"name":"desktop-app"},{"id":6,"name":"dev-exp-lib"},{"id":59,"name":"dev-tool"},{"id":13,"name":"ecommerce"},{"id":26,"name":"editor"},{"id":66,"name":"emulator"},{"id":62,"name":"filesystem"},{"id":80,"name":"finance"},{"id":15,"name":"firmware"},{"id":73,"name":"for-fun"},{"id":2,"name":"framework"},{"id":11,"name":"frontend"},{"id":22,"name":"game"},{"id":81,"name":"game-engine "},{"id":23,"name":"graphql"},{"id":84,"name":"gui"},{"id":91,"name":"http"},{"id":5,"name":"http-client"},{"id":51,"name":"iac"},{"id":30,"name":"ide"},{"id":78,"name":"iot"},{"id":40,"name":"json"},{"id":83,"name":"julian"},{"id":38,"name":"k8s"},{"id":31,"name":"language"},{"id":10,"name":"learning-resource"},{"id":33,"name":"lib"},{"id":41,"name":"linter"},{"id":28,"name":"lms"},{"id":16,"name":"logging"},{"id":76,"name":"low-code"},{"id":90,"name":"message-queue"},{"id":42,"name":"mobile-app"},{"id":18,"name":"monitoring"},{"id":36,"name":"networking"},{"id":7,"name":"node-version"},{"id":55,"name":"nosql"},{"id":57,"name":"observability"},{"id":46,"name":"orm"},{"id":52,"name":"os"},{"id":14,"name":"parser"},{"id":74,"name":"react"},{"id":82,"name":"real-time"},{"id":56,"name":"robot"},{"id":65,"name":"runtime"},{"id":32,"name":"sdk"},{"id":71,"name":"search"},{"id":63,"name":"secrets"},{"id":25,"name":"security"},{"id":85,"name":"server"},{"id":86,"name":"serverless"},{"id":70,"name":"storage"},{"id":75,"name":"system-design"},{"id":79,"name":"terminal"},{"id":29,"name":"testing"},{"id":12,"name":"ui"},{"id":50,"name":"ux"},{"id":88,"name":"video"},{"id":20,"name":"web-app"},{"id":35,"name":"web-server"},{"id":43,"name":"webassembly"},{"id":69,"name":"workflow"},{"id":87,"name":"yaml"}]" returns me the "expected json"