<h1 align="center"> 🤖 AgentVerse 🪐 </h1>
<!--
<h3 align="center">
<p>A Framework for Multi-LLM Environment Simulation</p>
</h3>
-->
<p align="center">
<a href="https://github.com/OpenBMB/AgentVerse/blob/main/LICENSE">
<img alt="License: Apache2" src="https://img.shields.io/badge/License-Apache_2.0-green.svg">
</a>
<a href="https://www.python.org/downloads/release/python-3916/">
<img alt="Python Version" src="https://img.shields.io/badge/python-3.9+-blue.svg">
</a>
<a href="https://github.com/OpenBMB/AgentVerse/actions/">
<img alt="Build" src="https://img.shields.io/github/actions/workflow/status/OpenBMB/AgentVerse/test.yml">
</a>
<a href="https://github.com/psf/black">
<img alt="Code Style: Black" src="https://img.shields.io/badge/code%20style-black-black">
</a>
<!--
<a href="https://github.com/OpenBMB/AgentVerse/issues">
<img alt="Contributions: Welcome" src="https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat">
</a> -->
<a href="https://huggingface.co/AgentVerse">
<img alt="HuggingFace" src="https://img.shields.io/badge/hugging_face-play-yellow">
</a>
<a href="https://discord.gg/gDAXfjMw">
<img alt="Discord" src="https://img.shields.io/badge/AgentVerse-Discord-purple?style=flat">
</a>
</p>
<p align="center">
<img src="./imgs/title.png" width="512">
</p>
<p align="center">
【<a href="https://arxiv.org/abs/2308.10848">Paper</a>】
</p>
<p align="center">
【English | <a href="README_zh.md">Chinese</a>】
</p>
**AgentVerse** is designed to facilitate the deployment of multiple LLM-based agents in various applications. AgentVerse primarily provides two frameworks: **task-solving** and **simulation**.
- Task-solving: This framework assembles multiple agents into an automatic multi-agent system ([AgentVerse-Tasksolving](https://arxiv.org/pdf/2308.10848.pdf), [Multi-agent as system](https://arxiv.org/abs/2309.02427)) that collaboratively accomplishes tasks.
Applications: software development system, consulting system, etc.
<p align="center">
<img width="616" alt="Screen Shot 2023-09-01 at 12 08 57 PM" src="https://github.com/OpenBMB/AgentVerse/assets/11704492/6db1c907-b7fc-42f9-946c-89853a28f386">
</p>
- Simulation: This framework allows users to set up custom environments to observe behaviors among, or interact with, multiple agents. ⚠️⚠️⚠️ We're refactoring the code. If you require a stable version that exclusively supports the simulation framework, you can use the [`release-0.1`](https://github.com/OpenBMB/AgentVerse/tree/release-0.1) branch. Applications: games, social behavior research of LLM-based agents, etc.
<p align="center">
<img width="616" alt="Screen Shot 2023-10-16 at 10 53 49 PM" src="https://github.com/OpenBMB/AgentVerse/assets/11704492/4102d1e2-3fe7-4656-aa2c-a218ce1f2c95">
</p>
---
# 📰 What's New
- [2024/3/17] AgentVerse was introduced in NVIDIA's blog - [Building Your First LLM Agent Application](https://developer.nvidia.com/blog/building-your-first-llm-agent-application/).
- [2024/1/17] We're super excited to announce that our paper got accepted at ICLR 2024. More updates will be coming soon!
- [2023/10/17] We're super excited to share our open-source AI community on Hugging Face: [`AgentVerse`](https://huggingface.co/spaces/AgentVerse/agentVerse). You can try out the two simulation applications, NLP Classroom and Prisoner's Dilemma, with your OpenAI API key and OpenAI organization code. Have fun!
- [2023/10/5] We refactored our codebase to enable the deployment of both the simulation and task-solving frameworks! We have placed the code for the Minecraft example in the paper on the [`minecraft`](https://github.com/OpenBMB/AgentVerse/tree/minecraft) branch. Our tool-using example will soon be updated to the `main` branch. Stay tuned!
- [2023/8/22] We're excited to share our paper [AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors in Agents](https://arxiv.org/abs/2308.10848), which illustrates the task-solving framework of AgentVerse in detail.
- [2023/6/5] We are thrilled to present an array of [demos](#-simple-demo-video), including [NLP Classroom](#nlp-classroom), [Prisoner Dilemma](#prisoner-dilemma), [Software Design](#software-design), [Database Administrator](#database-administrator-dba), and a simple [H5 Pokemon Game](#pokemon) that lets you interact with the characters in Pokémon! Try out these demos and have fun!
- [2023/5/1] 🚀 [AgentVerse](https://github.com/OpenBMB/AgentVerse) is officially launched!
# 🗓 Coming Soon
- [x] Code release of our [paper](https://arxiv.org/abs/2308.10848)
- [x] Add support for local LLMs (LLaMA, Vicuna, etc.)
- [ ] Add documentation
- [ ] Support more sophisticated memory for conversation history
<!--
## 👾 Simple Demo Video
We demonstrate the following cases that are expertly crafted by AgentVerse.
#### NLP Classroom
In the NLP class, the professor and students engage in interactive communication. When students have a question, they raise their hands and patiently wait for the professor to call on them. Only after being called on by the professor, can students speak and ask their questions.
Use the following command to launch the NLP Classroom example:
```bash
python agentverse_command/main_simulation_gui.py --task simulation/nlp_classroom_9players
```
https://github.com/OpenBMB/AgentVerse/assets/11704492/6ea07850-595e-4a28-a82e-f863011353c2
#### Prisoner Dilemma
The Prisoner's Dilemma is a thought experiment that challenges two completely rational agents with a dilemma: they can cooperate with their partner for mutual benefit or betray their partner ("defect") for individual reward.
Use the following command to launch the Prisoner Dilemma example:
```bash
python agentverse_command/main_simulation_gui.py --task simulation/prisoner_dilemma
```
https://github.com/OpenBMB/AgentVerse/assets/11704492/017c46e5-c738-4fca-9352-b008e2d518bd
#### Software Design
In the Software Design example, a code writer, a code tester and a code reviewer collaborate on a code generation problem. Given a problem, the code writer first composes the code implementation. The code tester runs the unit tests and provides feedback. The code reviewer then generates a review. After collecting the test feedback and review, the code writer iteratively refines the code.
Use the following command to launch the Software Design example:
```bash
python agentverse_command/main_simulation_gui.py --task simulation/sde_team/sde_team_2players
```
https://github.com/OpenBMB/AgentVerse/assets/11704492/5058066a-abee-490d-8659-b4e54661626a
#### [Database Administrator (DBA)](https://github.com/TsinghuaDatabaseGroup/DB-GPT)
In the database diagnosis scenario, the Chief DBA monitors the database system for anomalies (e.g., slow queries, locks, crashes). If any are detected, the domain experts are alerted to analyze root causes, share insights, and suggest optimization solutions together. The Chief DBA then provides a summarized report to the user.
```bash
python agentverse_command/main_simulation_gui.py --task simulation/db_diag
```
https://github.com/OpenBMB/AgentVerse/assets/11704492/c633419d-afbb-47d4-bb12-6bb512e7af3a
#### [Text Evaluation (ChatEval)](https://github.com/chanchimin/ChatEval)
In the context of the text evaluation scenario, we recommend users explore the [ChatEval](https://github.com/chanchimin/ChatEval) repo. They've implemented a multi-agent referee team on AgentVerse to assess the quality of text generated by different models. When given two distinct pieces of text, roles within ChatEval can autonomously debate the nuances and disparities, drawing upon their assigned personas, and subsequently provide their judgments. Experiments indicate that their referee team, enriched with diverse roles specified in [config.yaml](#2-configuring-the-agents), aligns more closely with human evaluations. This demo is built upon the [Fastchat](https://github.com/lm-sys/FastChat) repo, and we'd like to express our appreciation for their foundational work.
https://github.com/OpenBMB/AgentVerse/assets/75533759/58f33468-f15b-4bac-ae01-8d0780019f85
#### Pokemon
**Currently available only in [`release-0.1`](https://github.com/OpenBMB/AgentVerse/tree/release-0.1)**. In the game, agents can walk around the game world and interact with one another. As a player, you take on the role of an agent and can engage with others at any time. There are 6 characters in the Pokémon environment who appear in Pokémon Emerald: [May](https://bulbapedia.bulbagarden.net/wiki/May_(game)), [Professor Birch](https://bulbapedia.bulbagarden.net/wiki/Professor_Birch), [Steven Stone](https://bulbapedia.bulbagarden.net/wiki/Steven_Stone), [Maxie](https://bulbapedia.bulbagarden.net/wiki/Maxie), [Archie](https://bulbapedia.bulbagarden.net/wiki/Archie) and [Joseph](https://bulbapedia.bulbagarden.net/wiki/Mr._Stone).
To launch the Pokemon game, first launch a local server with the following command:
```bash
uvicorn pokemon_server:app --reload --port 10002
```
Then open another terminal in the project's root path and run the following command:
```bash
cd ui
# If you do not have npm installed, you need to install it before running the following commands
# https://docs.npmjs.com/downloading-and-installing-node-js-and-npm
# We have tested on
[email protected],
[email protected]
npm install
npm run watch
```
Wait for the compilation to complete, and have fun! (WASD for moving around, and SPACE for launching a conversation.)
https://github.com/OpenBMB/AgentVerse/assets/11704492/4d07da68-f942-4205-b558-f155e95782e7
-->
<!--
## Contents
- [✨ Features](#-features)
- [📰 What's New](#-whats-new)
- [🌟 Join Us!](#-join-us)
  - [How Can You Contribute?](#how-can-you-contribute)
- [🗓 Coming Soon](#-coming-soon)
- [👾 Simple Demo Video](#-simple-demo-video)
  - [NLP Classroom](#nlp-classroom)
  - [Prisoner Dilemma](#prisoner-dilemma)
  - [Software Design](#software-design)
  - [Database Administrator (DBA)](#database-administrator-dba)
  - [Text Evaluation (ChatEval)](#text-evaluation-chateval)
  - [Pokemon](#pokemon)
- [Contents](#contents)
- [🚀 Getting Started](#-getting-started)
  - [Installation](#installation)
  - [Simulation Example](#simulation)
  - [Task-Solving Example](#task-solving-cli-example)
- [💡 Philosophy](#-philosophy)
  - [Environment](#environment)
  - [Agent](#agent)
- [✍️ Customize Your Own Environment](#️-customize-your-own-environment)
  - [A Simple Example: Building a Classroom Environment](#a-simple-example-building-a-classroom-environment)
    - [1. Creating a Task Directory and Configuring the Environment](#1-creating-a-task-directory-and-configuring-the-environment)
    - [2. Configuring the Agents](#2-configuring-the-agents)
    - [3. Writing an Output Parser](#3-writing-an-output-parser)
  - [Customization Guide for More Complex Environments](#customization-guide-for-more-complex-environments)
- [🔎 Examples](#-examples)
- [Star History](#star-history)
- [Citation](#citation)
- [Contact](#contact)
-->
# Contents
- [📰 What's New](#-whats-new)
- [🗓 Coming Soon](#-coming-soon)
- [Contents](#contents)
- [🚀 Getting Started](#-getting-started)
  - [Installation](#installation)
  - [Environment Variables](#environment-variables)
  - [Simulation](#simulation)
    - [Framework Required Modules](#framework-required-modules)
    - [CLI Example](#cli-example)
    - [GUI Example](#gui-example)
  - [Task-Solving](#task-solving)
    - [Framework Required Modules](#framework-required-modules-1)
    - [CLI Example](#cli-example-1)
  - [Local Model Support](#local-model-support)
  - [vLLM Support](#vllm-support)
  - [FSChat Support](#fschat-support)
    - [1. Install the Additional Dependencies](#1-install-the-additional-dependencies)
    - [2. Launch the Local Server](#2-launch-the-local-server)
    - [3. Modify the Config File](#3-modify-the-config-file)
- [AgentVerse Showcases](#agentverse-showcases)
  - [Simulation Showcases](#simulation-showcases)
  - [Task-Solving Showcases](#task-solving-showcases)
- [🌟 Join Us!](#-join-us)
  - [Leaders](#leaders)
  - [Contributors](#contributors)
  - [How Can You Contribute?](#how-can-you-contribute)
  - [Social Media and Community](#social-media-and-community)
- [Star History](#star-history)
- [Citation](#citation)
- [Contact](#contact)
# 🚀 Getting Started
## Installation
**Manually Install (Recommended!)**
**Make sure you have Python >= 3.9**
```bash
git clone https://github.com/OpenBMB/AgentVerse.git --depth 1
cd AgentVerse
pip install -e .
```
If you want to use AgentVerse with local models such as LLaMA, you need to install some additional dependencies:
```bash
pip install -r requirements_local.txt
```
**Install with pip**
Or you can install through pip
```bash
pip install -U agentverse
```
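Either way, you can quickly confirm the installation with a short import check. This is a minimal sketch that only assumes the `agentverse` package installed above is importable in your current environment.
```python
# Minimal post-install sanity check; not part of AgentVerse itself.
import agentverse

# Printing the package location confirms which environment the install landed in.
print("AgentVerse imported from:", agentverse.__file__)
```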
## Environment Variables
You need to export your OpenAI API key as follows:
```bash
# Export your OpenAI API key
export OPENAI_API_KEY="your_api_key_here"
```
If you want to use Azure OpenAI services, please export your Azure OpenAI API key and API base as follows:
```bash
export AZURE_OPENAI_API_KEY="your_api_key_here"
export AZURE_OPENAI_API_BASE="your_api_base_here"
```
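Before launching an example, it can help to verify that these variables are actually visible to the Python process. The snippet below is an illustrative check, not part of AgentVerse; the variable names are exactly the ones exported above.
```python
import os

# Look up the credentials exported above; nothing here is AgentVerse-specific.
openai_key = os.environ.get("OPENAI_API_KEY")
azure_key = os.environ.get("AZURE_OPENAI_API_KEY")
azure_base = os.environ.get("AZURE_OPENAI_API_BASE")

if openai_key:
    print("Found OpenAI API key ending in", openai_key[-4:])
elif azure_key and azure_base:
    print("Found Azure OpenAI credentials for endpoint:", azure_base)
else:
    raise SystemExit("No OpenAI or Azure OpenAI credentials found in the environment.")
```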
## Simulation
### Framework Required Modules
```
- agentverse
  - agents
    - simulation_agent
  - environments
    - simulation_env
```
### CLI Example
You can run the multi-agent environments provided by us. Take the classroom scenario as an example: in this scenario, there are nine agents, one playing the role of the professor and the other eight playing students.
```shell
agentverse-simulation --task simulation/nlp_classroom_9players
```
### GUI Example
We also provide a local website demo for this environment. You can launch it with
```shell
agentverse-simulation-gui --task simulation/nlp_classroom_9players
```
After successfully launching the local server, you can visit [http://127.0.0.1:7860/](http://127.0.0.1:7860/) to view the classroom environment.
If you want to run the simulation cases with tools (e.g., `simulation/nlp_classroom_3players_withtool`), you need to install BMTools as follows:
```bash
git clone https://github.com/OpenBMB/BMTools.git
cd BMTools
pip install -r requirements.txt
python setup.py develop
```
This is optional. If you do not install BMTools, the simulation cases without tools can still run normally.
## Task-Solving
### Framework Required Modules
```
- agentverse
  - agents
    - tasksolving_agent
  - environments
    - tasksolving_env
```
### CLI Example
To run the experiments with the task-solving environments proposed in our [paper](https://arxiv.org/abs/2308.10848), you can use the following commands.
To run AgentVerse on a benchmark dataset, you can try
```shell
# Run the HumanEval benchmark using gpt-3.5-turbo (config file `agentverse/tasks/tasksolving/humaneval/gpt-3.5/config.yaml`)
agentverse-benchmark --task tasksolving/humaneval/gpt-3.5 --dataset_path data/humaneval/test.jsonl --overwrite
```
To run AgentVerse on a specific problem, you can try
```shell
# Run a single query (config file `agentverse/tasks/tasksolving/brainstorming/gpt-3.5/config.yaml`). The task is specified in the config file.
agentverse-tasksolving --task tasksolving/brainstorming
```
To run the tool-using cases presented in our paper, i.e., multiple agents using tools such as a web browser, Jupyter notebook, Bing search, etc., you first need to build the ToolServer provided by [XAgent](https://github.com/OpenBMB/XAgent). You can follow their [instructions](https://github.com/OpenBMB/XAgent#%EF%B8%8F-build-and-setup-toolserver) to build and run the ToolServer.
After building and launching the ToolServer, you can use the following command to run the task-solving cases with tools:
```shell
agentverse-tasksolving --task tasksolving/tool_using/24point
```
We have provided more tasks in `agentverse/tasks/tasksolving/tool_using/` that show how multiple agents can use tools to solve problems.
Also, you can take a look at `agentverse/tasks/tasksolving` for more experiments we have done in our paper.
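If you want to run several task-solving configs in one go, you can also script the CLI commands shown above. The sketch below simply shells out to `agentverse-tasksolving` for a list of tasks; the task names are the ones already mentioned in this README, and everything else is illustrative.
```python
import subprocess

# Task configs referenced earlier in this README; add or remove entries as needed.
tasks = [
    "tasksolving/brainstorming",
    "tasksolving/tool_using/24point",  # requires the XAgent ToolServer described above
]

for task in tasks:
    print(f"Running {task} ...")
    # Equivalent to: agentverse-tasksolving --task <task>
    subprocess.run(["agentverse-tasksolving", "--task", task], check=True)
```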
## Local Model Support
## vLLM Support
If you want to use vLLM, follow the guide [here](https://docs.vllm.ai/en/latest/getting_started/quickstart.html) to install and set up the vLLM server, which is used to handle larger inference workloads. Then set the following environment variables to connect to the vLLM server:
```bash
export VLLM_API_KEY="your_api_key_here"
export VLLM_API_BASE="http://your_vllm_url_here"
```
Then modify the `model` in the task config file so that it matches the model name in the vLLM server. For example:
```yaml
model_type: vllm
model: llama-2-7b-chat-hf
```
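Before running a task, you may want to confirm that the vLLM server is reachable with the credentials above. The check below is an illustrative sketch: it assumes the server exposes the standard OpenAI-compatible `GET /v1/models` endpoint and that `VLLM_API_BASE` points at the server root; adjust the path if your deployment already includes the `/v1` prefix.
```python
import os

import requests

# Use the environment variables exported above.
api_base = os.environ["VLLM_API_BASE"].rstrip("/")
api_key = os.environ.get("VLLM_API_KEY", "")

# vLLM's OpenAI-compatible server lists the served models at /v1/models.
resp = requests.get(
    f"{api_base}/v1/models",
    headers={"Authorization": f"Bearer {api_key}"},
    timeout=10,
)
resp.raise_for_status()
served = [m["id"] for m in resp.json().get("data", [])]
print("Models served by vLLM:", served)
print("The `model` field in your task config should match one of these names.")
```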
## FSChat Support
This section provides a step-by-step guide to integrating FSChat into AgentVerse. FSChat (FastChat) is a framework for running local models such as LLaMA and Vicuna on your own machine.
### 1. Install the Additional Dependencies
If you want to use local models such as LLaMA, you need to install some additional dependencies:
```bash
pip install -r requirements_local.txt
```
### 2. Launch the Local Server
Modify the `MODEL_PATH` and `MODEL_NAME` in the script as needed, then launch the local server with the following command:
```bash
bash scripts/run_local_model_server.sh
```
The script will launch a service for the LLaMA-2 7B chat model.
The `MODEL_NAME` in AgentVerse currently supports several models, including `llama-2-7b-chat-hf`, `llama-2-13b-chat-hf`, `llama-2-70b-chat-hf`, `vicuna-7b-v1.5`, and `vicuna-13b-v1.5`. If you wish to integrate additional models that are [compatible with FastChat](https://github.com/lm-sys/FastChat/blob/main/docs/model_support.md), you need to:
1. Add the new `MODEL_NAME` to `LOCAL_LLMS` in `agentverse/llms/__init__.py`.
2. Add a mapping from the new `MODEL_NAME` to its corresponding Hugging Face identifier to `LOCAL_LLMS_MAPPING` in the same `agentverse/llms/__init__.py` file. A rough sketch of both additions is shown below.
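For illustration only, the two additions in `agentverse/llms/__init__.py` might look roughly like the sketch below. It assumes `LOCAL_LLMS` is a list of model names and `LOCAL_LLMS_MAPPING` maps each name to a Hugging Face identifier; check the actual file before editing, since the real structures may carry additional fields.
```python
# agentverse/llms/__init__.py -- illustrative sketch only; the real file may differ.

# 1. Register the new MODEL_NAME so AgentVerse treats it as a local model.
LOCAL_LLMS = [
    "llama-2-7b-chat-hf",
    "vicuna-7b-v1.5",
    "my-new-model",  # hypothetical new entry
]

# 2. Map each MODEL_NAME to its Hugging Face identifier.
LOCAL_LLMS_MAPPING = {
    "llama-2-7b-chat-hf": "meta-llama/Llama-2-7b-chat-hf",
    "my-new-model": "your-org/my-new-model",  # hypothetical mapping
}
```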
### 3. Modify the Config File
In your config file, set the `llm_type` to `local` and `model` to the `MODEL_NAME`. For example
```yaml
llm:
  llm_type: local
  model: llama-2-7b-chat-hf
  ...
```
You can refer to `agentverse/tasks/tasksolving/commongen/llama-2-7b-chat-hf/config.yaml` for a more detailed example.
# AgentVerse Showcases
## Simulation Showcases
Refer to [simulation showcases](README_simulation_cases.md)
## Task-Solving Showcases
Refer to [tasksolving showcases](README_tasksolving_cases.md)
<!--
## 💡 Philosophy
### Environment
At the core of our framework is the environment, which plays a crucial role in enabling researchers to study the behavior of agents under different conditions. We believe that the environment should be flexible and extensible, allowing researchers to easily customize it to fit their needs. To achieve this, we have abstracted the environment into five rule components, and implementing different environments is actually implementing different rules:
- **Describer**: This component provides a description of the environment at each turn for each agent. You can customize the describer to define the specific requirements of your environment, such as the agents with whom an agent can interact.
- **Order**: This component defines the order in which agents take actions within the environment. You can customize the order to reflect the desired interaction between agents. We provide several basic order options, including `random`, `sequential`, and `concurrent` (in which all agents take an action in each turn).
- **Selector**: This component selects the valid messages generated by agents. Sometimes agents may generate invalid responses, and the selector is used to filter out unexpected results.
- **Updater**: This component updates the memory of each agent. In certain cases, the response generated by one agent should not be seen by all agents (e.g., if agents are in different rooms). For each response, the updater updates only the agents who can see it.
- **Visibility**: This component maintains the list of agents that each agent can see throughout the environment's changes. For example, when an agent moves from one room to another, the list of visible agents of each agent should be updated by `visibility`.
By abstracting the environment into these five components, we have created a highly flexible and extensible framework that enables researchers to easily build and customize their own multi-agent environments.
### Agent
Another fundamental component is the agent. Currently we provide two types of agents: **ConversationAgent** and **ToolAgent**. You can also customize your own agent by inheriting the BaseAgent class (tutorial coming soon).
-->
<!--
## ✍️ Customize Your Own Environment
We have provided several examples in the `agentverse/tasks` directory. To customize your environment, you should
1. Create a task directory in `agentverse/tasks`
2. Write the configuration file
3. Write the output parser that parses the response of your agents.
4. Add your parser in `agentverse/tasks/__init__.py`
We will use a simple example in `agentverse/tasks/nlp_classroom_3players` to illustrate the procedure.
### A Simple Example: Building a Classroom Environment
To illustrate how to customize your environment, we'll use a simple example of building a classroom environment where one agent is the professor, one is the student, and one is the teaching assistant.
##### 1. Creating a Task Directory and Configuring the Environment
First, we need to create a task directory and write our configuration file for the environment. In the `agentverse/tasks` directory, create a new directory called `nlp_classroom_3players`. Inside this directory, create a `config.yaml` file and write the following configuration:
```yaml
# config.yaml
environment:
  env_type: basic               # Use the basic environment provided in AgentVerse
  max_turns: 10                 # Specify the maximum number of dialogue turns
  rule:
    order:
      type: sequential          # Use the sequential order
    visibility:
      type: all                 # Each message can be seen by all agents
    selector:
      type: basic               # Basic selector (do not select)
    updater:
      type: basic               # Basic updater (update the message to all agents)
    describer:
      type: basic               # Basic describer (no description)
```
This configuration specifies that we will use the basic environment provided in AgentVerse, with a maximum of 10 dialogue turns. We'll use the sequential order, with all messages visible to all agents. We won't be using any selectors, our updater will update the messages to all the agents and our describer will provide no description.
##### 2. Configuring the Agents
Next, we'll configure the agents. In the `config.yaml` file, we'll add the configuration for each agent. Here's an example configuration for the professor:
```yaml
# config.yaml
agents:
  -
    agent_type: conversation
    name: Professor Micheal     # Name of the agent
    role_description: You are Prof. Micheal, ...  # Description of the agent
    memory:
      memory_type: chat_history # Will store all the chat history
    prompt_template: *professor_prompt
    llm:
      llm_type: text-davinci-003  # Will use OpenAICompletion LLM
      model: text-davinci-003     # The arguments passed to the api call
      temperature: 0.7
      max_tokens: 250
```
In this example, we'll use the `conversation` agent type. We've given the agent a name and a description, and we'll store the chat history in memory. We've also provided a prompt template with placeholders marked as ${placeholder}. These will be instantiated by the `_fill_prompt_template` method of the agent.
##### 3. Writing an Output Parser
The next step is to write a simple parser for your agent's response. Because you may have specified the output format in your prompt template, you need to provide a corresponding parser. In this example, we inform the model to output in the following format in our prompt template
```
Action: Speak
Action Input: (the content)
```
We'll write a parser to extract the content from the agent's response. Refer to the code for more details. We've decorated our parser function with `@output_parser_registry.register('classroom_parser')` to register it with our framework. Finally, we import our parser in `agentverse/tasks/__init__.py`.
With these steps, we've successfully built a simple classroom environment and customized it for our needs.
### Customization Guide for More Complex Environments
While we provide a basic framework for building environments with our five rule components, more complex environments may require further customization. A detailed documentation and tutorial is coming soon. Here we briefly introduce some steps you can take to customize your environment:
1. **Customize the five rule components**. Each rule component has an interface, allowing you to customize its behavior to suit your specific needs. It's important to note that these components are not necessarily independent and can interact through the `rule_params` dictionary in the environment. You can create your own rule components and integrate them with the existing ones to build more complex interactions between agents.
2. **Customize the environment itself**. Our `basic` environment provides a default execution order for the five rule components that is suitable for most cases, but you can inherit the `BaseEnvironment` class and write your own `run` method to implement a more sophisticated execution order.
3. **Customize the agent**. Depending on your specific use case, you may also need to inherit the `BaseAgent` class. For example, you may want to use your local LLM as your agents or create agents with specialized knowledge or skills.
-->
<!--
## 🔎 Examples
Currently, we offer some simple examples in the `agentverse/tasks` directory, each demonstrating different possibilities of our framework. While the performance of these examples may not be optimal due to limited prompt engineering, they are intended to showcase the capabilities of our framework, such as allowing the use of tools.
Here's a brief overview of each example:
1. `nlp_classroom_3players`: This example illustrates a simple case in which agents will speak in sequential order.
2. `nlp_classroom_9players`: This is an NLP class example. Here, students can raise their hand when they have a question, and the professor can call on the students to let them ask. Students are only allowed to speak after they are called on.
3. `nlp_classroom_9players_group`: This example showcases group discussions. The professor may initiate a group discussion when needed, and students can exclusively interact with fellow students within the same group during the discussion.
4. `nlp_classroom_3players_withtool`: Students in this classroom can use Bing search API when listening to the class.
5. `math_problem_2players_tools`: A simple example demonstrating how two agents can use the WolframAlpha API to play an arithmetic game.
6. `prisoner_dilema`: The Prisoner's Dilemma is a thought experiment involving two rational agents facing a choice between cooperating for mutual benefit or betraying their partner for individual gain.
7. `db_diag`: The Chief DBA (agent) monitors the database system for anomalies and alerts the memory and CPU agents if any are detected. These agents analyze the root causes and suggest optimization solutions. The Chief DBA then provides a diagnosis summary to the user, who can give instructions or evaluate the effectiveness of the proposed solutions.
8. `sde_team`: In the SDE team, code writer, code tester and code reviewer collaborate on the code generation problem.
9. `pokemon`: This example imitates the Pokémon game.
-->
# 🌟 Join Us!
AgentVerse is on a mission to revolutionize the multi-agent environment for large language models, and we're eagerly looking for passionate collaborators to join us on this exciting journey.
## Leaders
<a href="https://github.com/chenweize1998"><img src="https://avatars.githubusercontent.com/u/32613237?v=4" alt="Leader" style="width:5%; border-radius: 50%;"/></a>
<a href="https://github.com/yushengsu-thu"><img src="https://avatars.githubusercontent.com/u/11704492?v=4" alt="Leader" style="width:5%; border-radius: 50%;"/></a>
## Contributors
<a href="https://github.com/chanchimin"><img src="https://avatars.githubusercontent.com/u/75533759?v=4" alt="Contributor" style="width:5%; border-radius: 50%;"/></a>
<a href="https://github.com/libowen2121"><img src="https://avatars.githubusercontent.com/u/19970308?v=4" alt="Contributor" style="width:5%; border-radius: 50%;"/></a>
<a href="https://github.com/Xial-kotori"><img src="https://avatars.githubusercontent.com/u/53430452?v=4" alt="Contributor" style="width:5%; border-radius: 50%;"/></a>
<a href="https://github.com/Dr-Left"><img src="https://avatars.githubusercontent.com/u/39025643?v=4" alt="Contributor" style="width:5%; border-radius: 50%;"/></a>
<a href="https://github.com/minleminzui"><img src="https://avatars.githubusercontent.com/u/54172054?v=4" alt="Contributor" style="width:5%; border-radius: 50%;"/></a>
<a href="https://github.com/Tsuruko04"><img src="https://avatars.githubusercontent.com/u/129145574?v=4" alt="Contributor" style="width:5%; border-radius: 50%;"/></a>
<a href="https://github.com/kierangilliam"><img src="https://avatars.githubusercontent.com/u/18583882?v=4" alt="Contributor" style="width:5%; border-radius: 50%;"/></a>
<a href="https://github.com/zhouxh19"><img src="https://avatars.githubusercontent.com/u/17394639?v=4" alt="Contributor" style="width:5%; border-radius: 50%;"/></a>
<a href="https://github.com/tzw2698"><img src="https://avatars.githubusercontent.com/u/68865752?v=4" alt="Contributor" style="width:5%; border-radius: 50%;"/></a>
<a href="https://github.com/JetSquirrel"><img src="https://avatars.githubusercontent.com/u/20291255?v=4" alt="Contributor" style="width:5%; border-radius: 50%;"/></a>
<a href="https://github.com/Muiruriscode"><img src="https://avatars.githubusercontent.com/u/85177347?v=4" alt="Contributor" style="width:5%; border-radius: 50%;"/></a>
<a href="https://github.com/eltociear"><img src="https://avatars.githubusercontent.com/u/22633385?v=4" alt="Contributor" style="width:5%; border-radius: 50%;"/></a>
## How Can You Contribute?
- **Issue and Pull-Request**: If you encounter any problems when using AgentVerse, you can file an issue in English. Besides, you can also ask us to assign an issue to you and send a PR (please follow the [PULL_REQUEST_TEMPLATE](https://github.com/OpenBMB/AgentVerse/blob/main/PULL_REQUEST_TEMPLATE.md)) after you solve it.
- **Code Development**: If you're an engineer, help us refine, optimize, and expand the current framework. We're always looking for talented developers to enhance our existing features and develop new modules.
- **Documentation and Tutorials**: If you have a knack for writing, help us improve our documentation, create tutorials, or write blog posts to make AgentVerse more accessible to the broader community.
- **Application Exploration**: If you're intrigued by multi-agent applications and are eager to experiment using AgentVerse, we'd be thrilled to support your journey and see what you create!
- **Feedback and Suggestions**: Use AgentVerse and provide us with feedback. Your insights can lead to potential improvements and ensure that our framework remains top-notch.
Also, if you're passionate about advancing the frontiers of multi-agent applications, want to become a core AgentVerse team member, or are eager to dive deeper into agent research, please reach out to the [AgentVerse Team](mailto:[email protected]?subject=[GitHub]%AgentVerse%20Project), and CC [Weize Chen](mailto:[email protected]?subject=[GitHub]%AgentVerse%20Project) and [Yusheng Su](mailto:[email protected]?subject=[GitHub]%AgentVerse%20Project). We're keen to welcome motivated individuals like you to our team!
## Social Media and Community
- Twitter: https://twitter.com/Agentverse71134
- Discord: https://discord.gg/gDAXfjMw.
- Hugging Face: https://huggingface.co/spaces/AgentVerse/agentVerse.
# Star History
[![Star History Chart](https://api.star-history.com/svg?repos=OpenBMB/AgentVerse&type=Date)](https://star-history.com/#OpenBMB/AgentVerse&Date)
## Citation
If you find this repo helpful, feel free to cite us.
```
@article{chen2023agentverse,
title={Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors in agents},
author={Chen, Weize and Su, Yusheng and Zuo, Jingwei and Yang, Cheng and Yuan, Chenfei and Qian, Chen and Chan, Chi-Min and Qin, Yujia and Lu, Yaxi and Xie, Ruobing and others},
journal={arXiv preprint arXiv:2308.10848},
year={2023}
}
```
# Contact
AgentVerse Team:
[email protected]
Project leaders:
- Weize Chen:
[email protected]
- [Yusheng Su](https://yushengsu-thu.github.io/):
[email protected]
", Assign "at most 3 tags" to the expected json: {"id":"483","tags":[]} "only from the tags list I provide: [{"id":77,"name":"3d"},{"id":89,"name":"agent"},{"id":17,"name":"ai"},{"id":54,"name":"algorithm"},{"id":24,"name":"api"},{"id":44,"name":"authentication"},{"id":3,"name":"aws"},{"id":27,"name":"backend"},{"id":60,"name":"benchmark"},{"id":72,"name":"best-practices"},{"id":39,"name":"bitcoin"},{"id":37,"name":"blockchain"},{"id":1,"name":"blog"},{"id":45,"name":"bundler"},{"id":58,"name":"cache"},{"id":21,"name":"chat"},{"id":49,"name":"cicd"},{"id":4,"name":"cli"},{"id":64,"name":"cloud-native"},{"id":48,"name":"cms"},{"id":61,"name":"compiler"},{"id":68,"name":"containerization"},{"id":92,"name":"crm"},{"id":34,"name":"data"},{"id":47,"name":"database"},{"id":8,"name":"declarative-gui "},{"id":9,"name":"deploy-tool"},{"id":53,"name":"desktop-app"},{"id":6,"name":"dev-exp-lib"},{"id":59,"name":"dev-tool"},{"id":13,"name":"ecommerce"},{"id":26,"name":"editor"},{"id":66,"name":"emulator"},{"id":62,"name":"filesystem"},{"id":80,"name":"finance"},{"id":15,"name":"firmware"},{"id":73,"name":"for-fun"},{"id":2,"name":"framework"},{"id":11,"name":"frontend"},{"id":22,"name":"game"},{"id":81,"name":"game-engine "},{"id":23,"name":"graphql"},{"id":84,"name":"gui"},{"id":91,"name":"http"},{"id":5,"name":"http-client"},{"id":51,"name":"iac"},{"id":30,"name":"ide"},{"id":78,"name":"iot"},{"id":40,"name":"json"},{"id":83,"name":"julian"},{"id":38,"name":"k8s"},{"id":31,"name":"language"},{"id":10,"name":"learning-resource"},{"id":33,"name":"lib"},{"id":41,"name":"linter"},{"id":28,"name":"lms"},{"id":16,"name":"logging"},{"id":76,"name":"low-code"},{"id":90,"name":"message-queue"},{"id":42,"name":"mobile-app"},{"id":18,"name":"monitoring"},{"id":36,"name":"networking"},{"id":7,"name":"node-version"},{"id":55,"name":"nosql"},{"id":57,"name":"observability"},{"id":46,"name":"orm"},{"id":52,"name":"os"},{"id":14,"name":"parser"},{"id":74,"name":"react"},{"id":82,"name":"real-time"},{"id":56,"name":"robot"},{"id":65,"name":"runtime"},{"id":32,"name":"sdk"},{"id":71,"name":"search"},{"id":63,"name":"secrets"},{"id":25,"name":"security"},{"id":85,"name":"server"},{"id":86,"name":"serverless"},{"id":70,"name":"storage"},{"id":75,"name":"system-design"},{"id":79,"name":"terminal"},{"id":29,"name":"testing"},{"id":12,"name":"ui"},{"id":50,"name":"ux"},{"id":88,"name":"video"},{"id":20,"name":"web-app"},{"id":35,"name":"web-server"},{"id":43,"name":"webassembly"},{"id":69,"name":"workflow"},{"id":87,"name":"yaml"}]" returns me the "expected json"