# Maestro - A Framework for Claude Opus, GPT, and Local LLMs to Orchestrate Subagents
This Python script demonstrates an AI-assisted task breakdown and execution workflow using the Anthropic API. It utilizes two AI models, Opus and Haiku, to break down an objective into sub-tasks, execute each sub-task, and refine the results into a cohesive final output.
## New: Claude 3.5 Sonnet Support
The original Maestro has been updated to support Claude 3.5 Sonnet. Run it with:
```bash
python maestro.py
```
## Use Maestro with Any API: Anthropic, Gemini, OpenAI, Cohere, etc.
Thanks to a rewrite of the codebase using LiteLLM, it's now much easier to select the model you want.
#### Set environment variables for the API keys of the services you are using
```python
os.environ["OPENAI_API_KEY"] = "YOUR KEY"
os.environ["ANTHROPIC_API_KEY"] = "YOUR KEY"
os.environ["GEMINI_API_KEY"] = "YOUR KEY"
```
#### Define the models to be used for each stage
```python
ORCHESTRATOR_MODEL = "gemini/gemini-1.5-flash-latest"
SUB_AGENT_MODEL = "gemini/gemini-1.5-flash-latest"
REFINER_MODEL = "gemini/gemini-1.5-flash-latest"
```
Any LiteLLM-supported model works here, for example `gpt-3.5-turbo`.
First, install LiteLLM:
```bash
pip install litellm
```
After installing dependencies, run:
```bash
python maestro-anyapi.py
```
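Under the hood, LiteLLM exposes a single `completion()` call that works across providers. Here is a minimal sketch of how a stage might invoke its configured model (the prompt text is just an example):
```python
from litellm import completion

ORCHESTRATOR_MODEL = "gemini/gemini-1.5-flash-latest"

# LiteLLM routes the OpenAI-style request to the provider named in the
# model string, reading the matching API key from the environment.
response = completion(
    model=ORCHESTRATOR_MODEL,
    messages=[{"role": "user", "content": "Break this objective into sub-tasks."}],
)
print(response.choices[0].message.content)
```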
## GPT-4o
The GPT script has been rewritten from the ground up to support the code capabilities of GPT-4o.
After installing dependencies, run:
```bash
python maestro-gpt4o.py
```
## Run locally with LMStudio or Ollama
### LM Studio
First, download the app from https://lmstudio.ai/.
Then start the local server using your preferred method. I also recommend removing any system prompt in the app (leave the prompt field empty so the script's prompts can take full effect).
Then run:
```bash
python maestro-lmstudio.py
```
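LM Studio's local server exposes an OpenAI-compatible endpoint, so the call pattern looks roughly like this sketch (the default port 1234 and the placeholder model name are assumptions; match them to your local server):
```python
from openai import OpenAI

# LM Studio's local server is OpenAI-compatible; the API key is unused.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="local-model",  # placeholder; LM Studio serves whichever model is loaded
    messages=[{"role": "user", "content": "Break this objective into sub-tasks."}],
)
print(response.choices[0].message.content)
```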
### Ollama
Maestro now runs locally thanks to the Ollama platform. Experience the power of Llama 3 locally!
Before running the script, install the Ollama client from https://ollama.com/download, then install the Python package:
```bash
pip install ollama
```
And pull the model you want to use (shown here with the Python client; `ollama pull llama3:70b` from the CLI also works):
```python
ollama.pull('llama3:70b')
```
You only need to do this once per model, or again when you want to update a model after a new version is out.
In the script I am using both versions, but you can customize the models you want to pull:
```python
ollama.pull('llama3:70b')
ollama.pull('llama3:8b')
```
Then run:
```bash
python maestro-ollama.py
```
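For reference, a minimal sketch of calling a local model through the `ollama` Python client, as the script does (the model name and prompt are examples):
```python
import ollama

# Chat with a locally served model; memory of previous sub-tasks would be
# appended to the messages list in the real script.
response = ollama.chat(
    model='llama3:8b',
    messages=[{'role': 'user', 'content': 'Break this objective into sub-tasks.'}],
)
print(response['message']['content'])
```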
## Highly Requested Features
- **Groq support**: Experience the power of Maestro with Groq's super-fast API responses (see the sketch after this list).
```bash
pip install groq
```
Then
```bash
python maestro-groq.py
```
- **Search** 🔍: When creating a task for its subagent, Claude Opus will now perform a search and use the best answer to help the subagent solve that task even better. Make sure you add your Tavily API key for search to work; get one at https://tavily.com/.
- **GPT-4 support**: GPT-4 can be used as the orchestrator in `maestro-gpt.py`. After completing your installs, simply run:
```bash
python maestro-gpt.py
```
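As promised above, a minimal sketch of a Groq call with the `groq` client (the model name is one example of a Groq-hosted model; the prompt is illustrative):
```python
from groq import Groq

client = Groq(api_key="YOUR_GROQ_API_KEY")  # placeholder key

# The Groq SDK follows the familiar OpenAI-style chat interface.
completion = client.chat.completions.create(
    model="llama3-70b-8192",  # example Groq-hosted model
    messages=[{"role": "user", "content": "Summarize this sub-task result."}],
)
print(completion.choices[0].message.content)
```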
## Features
- Breaks down an objective into manageable sub-tasks using the Opus model
- Executes each sub-task using the Haiku model
- Provides the Haiku model with memory of previous sub-tasks for context
- Refines the sub-task results into a final output using the Opus model
- Generates a detailed exchange log capturing the entire task breakdown and execution process
- Saves the exchange log to a Markdown file for easy reference
- Utilizes an improved prompt for the Opus model to better assess task completion
- Creates code files and folders when working on code projects.
## Prerequisites
To run this script, you need to have the following:
- Python installed
- Anthropic API key
- Required Python packages: `anthropic` and `rich`
## Installation
1. Clone the repository or download the script file.
2. Install the required Python packages by running the following command:
```bash
pip install -r requirements.txt
```
3. Replace the placeholder API key in the script with your actual Anthropic API key:
```python
client = Anthropic(api_key="YOUR_API_KEY_HERE")
```
If using search, add your Tavily API key:
```python
tavily = TavilyClient(api_key="YOUR API KEY HERE")
```
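For reference, a minimal sketch of a Tavily search call with the `tavily-python` client (the query string is just an example):
```python
from tavily import TavilyClient

tavily = TavilyClient(api_key="YOUR API KEY HERE")

# Fetch supporting context before handing a sub-task to the subagent.
results = tavily.search(query="how to structure a Python CLI project")
print(results)
```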
## Usage
1. Open a terminal or command prompt and navigate to the directory containing the script.
2. Run the script using the following command:
```bash
python maestro.py
```
3. Enter your objective when prompted:
```bash
Please enter your objective: Your objective here
```
The script will start the task breakdown and execution process. It will display the progress and results in the console using formatted panels.
Once the process is complete, the script will display the refined final output and save the full exchange log to a Markdown file with a filename based on the objective.
## Code Structure
The script consists of the following main functions:
- `opus_orchestrator(objective, previous_results=None)`: Calls the Opus model to break down the objective into sub-tasks or provide the final output. It uses an improved prompt to assess task completion and includes the phrase "The task is complete:" when the objective is fully achieved.
- `haiku_sub_agent(prompt, previous_haiku_tasks=None)`: Calls the Haiku model to execute a sub-task prompt, providing it with the memory of previous sub-tasks.
- `opus_refine(objective, sub_task_results)`: Calls the Opus model to review and refine the sub-task results into a cohesive final output.
The script follows an iterative process, repeatedly calling the `opus_orchestrator` function to break down the objective into sub-tasks until the final output is provided. Each sub-task is then executed by the `haiku_sub_agent` function, and the results are stored in the `task_exchanges` and `haiku_tasks` lists.
The loop terminates when the Opus model includes the phrase "The task is complete:" in its response, indicating that the objective has been fully achieved.
Finally, the `opus_refine` function is called to review and refine the sub-task results into a cohesive final output. The entire exchange log, including the objective, task breakdown, and refined final output, is saved to a Markdown file.
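For illustration, here is a minimal sketch of that loop; the function calls stand in for the real prompt logic in `maestro.py`, and the names mirror the ones described above:
```python
task_exchanges = []  # (sub_task_prompt, sub_task_result) pairs
haiku_tasks = []     # memory passed to the sub-agent

objective = input("Please enter your objective: ")

while True:
    # Opus either emits the next sub-task prompt or signals completion.
    opus_result = opus_orchestrator(objective, previous_results=[r for _, r in task_exchanges])
    if "The task is complete:" in opus_result:
        break
    sub_task_result = haiku_sub_agent(opus_result, previous_haiku_tasks=haiku_tasks)
    haiku_tasks.append({"task": opus_result, "result": sub_task_result})
    task_exchanges.append((opus_result, sub_task_result))

# Refine all sub-task results into the final output.
final_output = opus_refine(objective, [r for _, r in task_exchanges])
```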
## Customization
You can customize the script according to your needs:
- Adjust the `max_tokens` parameter in the `client.messages.create()` calls to control the maximum number of tokens generated by the AI models (see the sketch after this list).
- Change the models to whatever you prefer, for example replacing Haiku with Sonnet or Opus.
- Modify the console output formatting by updating the `rich` library's `Panel` and `Console` configurations.
- Customize the exchange log formatting and file extension by modifying the relevant code sections.
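For example, a hedged sketch of one such call (the model name and token limit are illustrative, and `prompt` stands for whatever message you are sending):
```python
response = client.messages.create(
    model="claude-3-opus-20240229",  # swap in Sonnet or Haiku as preferred
    max_tokens=4096,                 # raise or lower to control output length
    messages=[{"role": "user", "content": prompt}],
)
text = response.content[0].text
```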
## License
This script is released under the MIT License.
## Acknowledgements
- Anthropic for providing the AI models and API.
- Rich for the beautiful console formatting.
## Star History
[![Star History Chart](https://api.star-history.com/svg?repos=Doriandarko/maestro&type=Date)](https://star-history.com/#Doriandarko/maestro&Date)
", Assign "at most 3 tags" to the expected json: {"id":"8730","tags":[]} "only from the tags list I provide: [{"id":77,"name":"3d"},{"id":89,"name":"agent"},{"id":17,"name":"ai"},{"id":54,"name":"algorithm"},{"id":24,"name":"api"},{"id":44,"name":"authentication"},{"id":3,"name":"aws"},{"id":27,"name":"backend"},{"id":60,"name":"benchmark"},{"id":72,"name":"best-practices"},{"id":39,"name":"bitcoin"},{"id":37,"name":"blockchain"},{"id":1,"name":"blog"},{"id":45,"name":"bundler"},{"id":58,"name":"cache"},{"id":21,"name":"chat"},{"id":49,"name":"cicd"},{"id":4,"name":"cli"},{"id":64,"name":"cloud-native"},{"id":48,"name":"cms"},{"id":61,"name":"compiler"},{"id":68,"name":"containerization"},{"id":92,"name":"crm"},{"id":34,"name":"data"},{"id":47,"name":"database"},{"id":8,"name":"declarative-gui "},{"id":9,"name":"deploy-tool"},{"id":53,"name":"desktop-app"},{"id":6,"name":"dev-exp-lib"},{"id":59,"name":"dev-tool"},{"id":13,"name":"ecommerce"},{"id":26,"name":"editor"},{"id":66,"name":"emulator"},{"id":62,"name":"filesystem"},{"id":80,"name":"finance"},{"id":15,"name":"firmware"},{"id":73,"name":"for-fun"},{"id":2,"name":"framework"},{"id":11,"name":"frontend"},{"id":22,"name":"game"},{"id":81,"name":"game-engine "},{"id":23,"name":"graphql"},{"id":84,"name":"gui"},{"id":91,"name":"http"},{"id":5,"name":"http-client"},{"id":51,"name":"iac"},{"id":30,"name":"ide"},{"id":78,"name":"iot"},{"id":40,"name":"json"},{"id":83,"name":"julian"},{"id":38,"name":"k8s"},{"id":31,"name":"language"},{"id":10,"name":"learning-resource"},{"id":33,"name":"lib"},{"id":41,"name":"linter"},{"id":28,"name":"lms"},{"id":16,"name":"logging"},{"id":76,"name":"low-code"},{"id":90,"name":"message-queue"},{"id":42,"name":"mobile-app"},{"id":18,"name":"monitoring"},{"id":36,"name":"networking"},{"id":7,"name":"node-version"},{"id":55,"name":"nosql"},{"id":57,"name":"observability"},{"id":46,"name":"orm"},{"id":52,"name":"os"},{"id":14,"name":"parser"},{"id":74,"name":"react"},{"id":82,"name":"real-time"},{"id":56,"name":"robot"},{"id":65,"name":"runtime"},{"id":32,"name":"sdk"},{"id":71,"name":"search"},{"id":63,"name":"secrets"},{"id":25,"name":"security"},{"id":85,"name":"server"},{"id":86,"name":"serverless"},{"id":70,"name":"storage"},{"id":75,"name":"system-design"},{"id":79,"name":"terminal"},{"id":29,"name":"testing"},{"id":12,"name":"ui"},{"id":50,"name":"ux"},{"id":88,"name":"video"},{"id":20,"name":"web-app"},{"id":35,"name":"web-server"},{"id":43,"name":"webassembly"},{"id":69,"name":"workflow"},{"id":87,"name":"yaml"}]" returns me the "expected json"