Python SDK for AI agent monitoring, LLM cost tracking, benchmarking, and more. Integrates with most LLMs and agent frameworks like CrewAI, Langchain, and Autogen.

<div align="center">
<a href="https://agentops.ai?ref=gh">
<img src="docs/images/external/logo/github-banner.png" alt="Logo">
</a>
</div>
<div align="center">
<em>Observability and DevTool platform for AI Agents</em>
</div>
<br />
<div align="center">
<a href="https://pepy.tech/project/agentops">
<img src="https://static.pepy.tech/badge/agentops/month" alt="Downloads">
</a>
<a href="https://github.com/agentops-ai/agentops/issues">
<img src="https://img.shields.io/github/commit-activity/m/agentops-ai/agentops" alt="git commit activity">
</a>
<img src="https://img.shields.io/pypi/v/agentops?&color=3670A0" alt="PyPI - Version">
<a href="https://opensource.org/licenses/MIT">
<img src="https://img.shields.io/badge/License-MIT-yellow.svg?&color=3670A0" alt="License: MIT">
</a>
</div>
<p align="center">
<a href="https://twitter.com/agentopsai/">
<img src="https://img.shields.io/twitter/follow/agentopsai?style=social" alt="Twitter" style="height: 20px;">
</a>
<a href="https://discord.gg/FagdcwwXRR">
<img src="https://img.shields.io/badge/discord-7289da.svg?style=flat-square&logo=discord" alt="Discord" style="height: 20px;">
</a>
<a href="https://app.agentops.ai/?ref=gh">
<img src="https://img.shields.io/badge/Dashboard-blue.svg?style=flat-square" alt="Dashboard" style="height: 20px;">
</a>
<a href="https://docs.agentops.ai/introduction">
<img src="https://img.shields.io/badge/Documentation-orange.svg?style=flat-square" alt="Documentation" style="height: 20px;">
</a>
<a href="https://entelligence.ai/AgentOps-AI&agentops">
<img src="https://img.shields.io/badge/Chat%20with%20Docs-green.svg?style=flat-square" alt="Chat with Docs" style="height: 20px;">
</a>
</p>
<div style="justify-content: center">
<img src="docs/images/external/app_screenshots/dashboard-banner.png" alt="Dashboard Banner">
</div>
<br/>
AgentOps helps developers build, evaluate, and monitor AI agents, from prototype to production.
|                                       |                                                                 |
| ------------------------------------- | --------------------------------------------------------------- |
| 📊 **Replay Analytics and Debugging** | Step-by-step agent execution graphs                             |
| 💸 **LLM Cost Management**            | Track spend with LLM foundation model providers                 |
| 🧪 **Agent Benchmarking**             | Test your agents against 1,000+ evals                           |
| 🔐 **Compliance and Security**        | Detect common prompt injection and data exfiltration exploits   |
| 🤖 **Framework Integrations**         | Native Integrations with CrewAI, AutoGen, Camel AI, & LangChain |
## Quick Start ⌨️
```bash
pip install agentops
```
#### Session replays in 2 lines of code
Initialize the AgentOps client and automatically get analytics on all your LLM calls.
[Get an API key](https://app.agentops.ai/settings/projects)
```python
import agentops
# Beginning of your program (e.g. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)
...
# End of program
agentops.end_session('Success')
```
All your sessions can be viewed on the [AgentOps dashboard](https://app.agentops.ai?ref=gh)
<br/>
<details>
<summary>Agent Debugging</summary>
<a href="https://app.agentops.ai?ref=gh">
<img src="docs/images/external/app_screenshots/session-drilldown-metadata.png" style="width: 90%;" alt="Agent Metadata"/>
</a>
<a href="https://app.agentops.ai?ref=gh">
<img src="docs/images/external/app_screenshots/chat-viewer.png" style="width: 90%;" alt="Chat Viewer"/>
</a>
<a href="https://app.agentops.ai?ref=gh">
<img src="docs/images/external/app_screenshots/session-drilldown-graphs.png" style="width: 90%;" alt="Event Graphs"/>
</a>
</details>
<details>
<summary>Session Replays</summary>
<a href="https://app.agentops.ai?ref=gh">
<img src="docs/images/external/app_screenshots/session-replay.png" style="width: 90%;" alt="Session Replays"/>
</a>
</details>
<details open>
<summary>Summary Analytics</summary>
<a href="https://app.agentops.ai?ref=gh">
<img src="docs/images/external/app_screenshots/overview.png" style="width: 90%;" alt="Summary Analytics"/>
</a>
<a href="https://app.agentops.ai?ref=gh">
<img src="docs/images/external/app_screenshots/overview-charts.png" style="width: 90%;" alt="Summary Analytics Charts"/>
</a>
</details>
### First class Developer Experience
Add powerful observability to your agents, tools, and functions with as little code as possible: one line at a time.
<br/>
Refer to our [documentation](http://docs.agentops.ai)
```python
# Automatically associate all Events with the agent that originated them
from agentops import track_agent
@track_agent(name='SomeCustomName')
class MyAgent:
...
```
```python
# Automatically create ToolEvents for tools that agents will use
from agentops import record_tool
@record_tool('SampleToolName')
def sample_tool(...):
...
```
```python
# Automatically create ActionEvents for other functions.
from agentops import record_action
@record_action('sample function being recorded')
def sample_function(...):
...
```
```python
# Manually record any other Events
from agentops import record, ActionEvent
record(ActionEvent("received_user_input"))
```
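Putting the pieces together, here is a minimal sketch of how these decorators and manual events could be combined in a single session. The agent, tool, and function names below are hypothetical placeholders, not part of the AgentOps API:
```python
import agentops
from agentops import track_agent, record_tool, record_action, record, ActionEvent

# Start a session; the key can also be read from the AGENTOPS_API_KEY env var
agentops.init("<INSERT YOUR API KEY HERE>")


# Hypothetical agent: every event it emits is attributed to "ResearchAgent"
@track_agent(name="ResearchAgent")
class ResearchAgent:
    def run(self, query: str) -> str:
        return summarize(web_search(query))


# Hypothetical tool: calls are recorded as ToolEvents named "WebSearch"
@record_tool("WebSearch")
def web_search(query: str) -> str:
    return f"results for: {query}"


# Hypothetical helper: calls are recorded as ActionEvents
@record_action("summarize_results")
def summarize(results: str) -> str:
    return results.upper()


record(ActionEvent("received_user_input"))  # manually recorded event
print(ResearchAgent().run("What is AgentOps?"))

agentops.end_session("Success")
```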
## Integrations 🦾
### CrewAI 🛶
Build Crew agents with observability in just two lines of code. Simply set an `AGENTOPS_API_KEY` in your environment, and your crews will get automatic monitoring on the AgentOps dashboard. A minimal usage sketch follows the links below.
```bash
pip install 'crewai[agentops]'
```
- [AgentOps integration example](https://docs.agentops.ai/v1/integrations/crewai)
- [Official CrewAI documentation](https://docs.crewai.com/how-to/AgentOps-Observability)
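As a rough sketch (the single-agent crew below is illustrative and assumes CrewAI's default OpenAI-backed LLM, i.e. an `OPENAI_API_KEY` in your environment):
```python
import os

import agentops
from crewai import Agent, Task, Crew

# AgentOps also picks up AGENTOPS_API_KEY from the environment automatically
agentops.init(os.getenv("AGENTOPS_API_KEY"), default_tags=["crewai-example"])

researcher = Agent(
    role="Researcher",
    goal="Explain why observability matters for AI agents",
    backstory="A diligent analyst who writes concise summaries",
)
task = Task(
    description="Write a one-paragraph summary of AI agent observability.",
    expected_output="A single short paragraph",
    agent=researcher,
)
crew = Crew(agents=[researcher], tasks=[task])

print(crew.kickoff())
agentops.end_session("Success")
```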
### AutoGen 🤖
With only two lines of code, add full observability and monitoring to AutoGen agents. Set an `AGENTOPS_API_KEY` in your environment and call `agentops.init()`. A short sketch follows the links below.
- [Autogen Observability Example](https://microsoft.github.io/autogen/docs/notebooks/agentchat_agentops)
- [Autogen - AgentOps Documentation](https://microsoft.github.io/autogen/docs/ecosystem/agentops)
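A minimal sketch, assuming the `pyautogen` package and an `OPENAI_API_KEY`; the model name and two-agent setup are illustrative only:
```python
import os

import agentops
from autogen import AssistantAgent, UserProxyAgent

# Initialize AgentOps before constructing agents so their LLM calls are recorded
agentops.init(os.getenv("AGENTOPS_API_KEY"), default_tags=["autogen-example"])

llm_config = {"config_list": [{"model": "gpt-4o-mini", "api_key": os.getenv("OPENAI_API_KEY")}]}
assistant = AssistantAgent("assistant", llm_config=llm_config)
user_proxy = UserProxyAgent("user_proxy", human_input_mode="NEVER", code_execution_config=False)

# Run a short exchange; the session appears on the AgentOps dashboard
user_proxy.initiate_chat(assistant, message="Say hello in one sentence.", max_turns=1)
agentops.end_session("Success")
```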
### Camel AI 🐪
Track and analyze CAMEL agents with full observability. Set an `AGENTOPS_API_KEY` in your environment and initialize AgentOps to get started.
- [Camel AI](https://www.camel-ai.org/) - Advanced agent communication framework
- [AgentOps integration example](https://docs.agentops.ai/v1/integrations/camel)
- [Official Camel AI documentation](https://docs.camel-ai.org/cookbooks/agents_tracking.html)
<details>
<summary>Installation</summary>
```bash
pip install "camel-ai[all]==0.2.11"
pip install agentops
```
```python
import os
import agentops
from camel.agents import ChatAgent
from camel.messages import BaseMessage
from camel.models import ModelFactory
from camel.types import ModelPlatformType, ModelType
# Initialize AgentOps
agentops.init(os.getenv("AGENTOPS_API_KEY"), default_tags=["CAMEL Example"])
# Import toolkits after AgentOps init for tracking
from camel.toolkits import SearchToolkit
# Set up the agent with search tools
sys_msg = BaseMessage.make_assistant_message(
role_name='Tools calling operator',
content='You are a helpful assistant'
)
# Configure tools and model
tools = [*SearchToolkit().get_tools()]
model = ModelFactory.create(
model_platform=ModelPlatformType.OPENAI,
model_type=ModelType.GPT_4O_MINI,
)
# Create and run the agent
camel_agent = ChatAgent(
system_message=sys_msg,
model=model,
tools=tools,
)
response = camel_agent.step("What is AgentOps?")
print(response)
agentops.end_session("Success")
```
Check out our [Camel integration guide](https://docs.agentops.ai/v1/integrations/camel) for more examples including multi-agent scenarios.
</details>
### Langchain 🦜🔗
AgentOps works seamlessly with applications built using Langchain. To use the handler, install Langchain as an optional dependency:
<details>
<summary>Installation</summary>
```shell
pip install agentops[langchain]
```
To use the handler, import and set
```python
import os
from langchain.chat_models import ChatOpenAI
from langchain.agents import initialize_agent, AgentType
from agentops.partners.langchain_callback_handler import LangchainCallbackHandler
AGENTOPS_API_KEY = os.environ['AGENTOPS_API_KEY']
OPENAI_API_KEY = os.environ['OPENAI_API_KEY']

handler = LangchainCallbackHandler(api_key=AGENTOPS_API_KEY, tags=['Langchain Example'])

llm = ChatOpenAI(openai_api_key=OPENAI_API_KEY,
                 callbacks=[handler],
                 model='gpt-3.5-turbo')

tools = []  # add your LangChain tools here
agent = initialize_agent(tools,
llm,
agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
callbacks=[handler], # You must pass in a callback handler to record your agent
handle_parsing_errors=True)
```
Check out the [Langchain Examples Notebook](./examples/langchain_examples.ipynb) for more details including Async handlers.
</details>
### Cohere ⌨️
First-class support for Cohere (>=5.4.0). This is a living integration; if you need any added functionality, message us on Discord!
- [AgentOps integration example](https://docs.agentops.ai/v1/integrations/cohere)
- [Official Cohere documentation](https://docs.cohere.com/reference/about)
<details>
<summary>Installation</summary>
```bash
pip install cohere
```
```python
import cohere
import agentops

# Beginning of program's code (e.g. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

co = cohere.Client()

chat = co.chat(
    message="Is it pronounced ceaux-hear or co-hehray?"
)

print(chat)

agentops.end_session('Success')
```
```python
import cohere
import agentops

# Beginning of program's code (e.g. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

co = cohere.Client()

stream = co.chat_stream(
    message="Write me a haiku about the synergies between Cohere and AgentOps"
)

for event in stream:
    if event.event_type == "text-generation":
        print(event.text, end='')

agentops.end_session('Success')
```
</details>
### Anthropic ﹨
Track agents built with the Anthropic Python SDK (>=0.32.0).
- [AgentOps integration guide](https://docs.agentops.ai/v1/integrations/anthropic)
- [Official Anthropic documentation](https://docs.anthropic.com/en/docs/welcome)
<details>
<summary>Installation</summary>
```bash
pip install anthropic
```
```python
import os

import anthropic
import agentops

# Beginning of program's code (e.g. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

client = anthropic.Anthropic(
    # This is the default and can be omitted
    api_key=os.environ.get("ANTHROPIC_API_KEY"),
)

message = client.messages.create(
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": "Tell me a cool fact about AgentOps",
        }
    ],
    model="claude-3-opus-20240229",
)
print(message.content)

agentops.end_session('Success')
```
Streaming
```python
import os

import anthropic
import agentops

# Beginning of program's code (e.g. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

client = anthropic.Anthropic(
    # This is the default and can be omitted
    api_key=os.environ.get("ANTHROPIC_API_KEY"),
)

stream = client.messages.create(
    max_tokens=1024,
    model="claude-3-opus-20240229",
    messages=[
        {
            "role": "user",
            "content": "Tell me something cool about streaming agents",
        }
    ],
    stream=True,
)

response = ""
for event in stream:
    if event.type == "content_block_delta":
        response += event.delta.text
    elif event.type == "message_stop":
        print("\n")
        print(response)
        print("\n")
```
Async
```python
import os
import asyncio

from anthropic import AsyncAnthropic

client = AsyncAnthropic(
    # This is the default and can be omitted
    api_key=os.environ.get("ANTHROPIC_API_KEY"),
)


async def main() -> None:
    message = await client.messages.create(
        max_tokens=1024,
        messages=[
            {
                "role": "user",
                "content": "Tell me something interesting about async agents",
            }
        ],
        model="claude-3-opus-20240229",
    )
    print(message.content)


asyncio.run(main())
```
</details>
### Mistral 〽️
Track agents built with the Mistral Python SDK.
- [AgentOps integration example](./examples/mistral/mistral_example.ipynb)
- [Official Mistral documentation](https://docs.mistral.ai)
<details>
<summary>Installation</summary>
```bash
pip install mistralai
```
Sync
```python
import os

from mistralai import Mistral
import agentops

# Beginning of program's code (e.g. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

client = Mistral(
    # This is the default and can be omitted
    api_key=os.environ.get("MISTRAL_API_KEY"),
)

message = client.chat.complete(
    messages=[
        {
            "role": "user",
            "content": "Tell me a cool fact about AgentOps",
        }
    ],
    model="open-mistral-nemo",
)
print(message.choices[0].message.content)

agentops.end_session('Success')
```
Streaming
```python
import os

from mistralai import Mistral
import agentops

# Beginning of program's code (e.g. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

client = Mistral(
    # This is the default and can be omitted
    api_key=os.environ.get("MISTRAL_API_KEY"),
)

stream = client.chat.stream(
    messages=[
        {
            "role": "user",
            "content": "Tell me something cool about streaming agents",
        }
    ],
    model="open-mistral-nemo",
)

response = ""
for event in stream:
    if event.data.choices[0].finish_reason == "stop":
        print("\n")
        print(response)
        print("\n")
    else:
        response += event.data.choices[0].delta.content

agentops.end_session('Success')
```
Async
```python
import os
import asyncio

from mistralai import Mistral

client = Mistral(
    # This is the default and can be omitted
    api_key=os.environ.get("MISTRAL_API_KEY"),
)


async def main() -> None:
    message = await client.chat.complete_async(
        messages=[
            {
                "role": "user",
                "content": "Tell me something interesting about async agents",
            }
        ],
        model="open-mistral-nemo",
    )
    print(message.choices[0].message.content)


asyncio.run(main())
```
Async Streaming
```python
import os
import asyncio

from mistralai import Mistral

client = Mistral(
    # This is the default and can be omitted
    api_key=os.environ.get("MISTRAL_API_KEY"),
)


async def main() -> None:
    stream = await client.chat.stream_async(
        messages=[
            {
                "role": "user",
                "content": "Tell me something interesting about async streaming agents",
            }
        ],
        model="open-mistral-nemo",
    )
    response = ""
    async for event in stream:
        if event.data.choices[0].finish_reason == "stop":
            print("\n")
            print(response)
            print("\n")
        else:
            response += event.data.choices[0].delta.content


asyncio.run(main())
```
</details>
### CamelAI ﹨
Track agents built with the CamelAI Python SDK (>=0.32.0).
- [CamelAI integration guide](https://docs.camel-ai.org/cookbooks/agents_tracking.html#)
- [Official CamelAI documentation](https://docs.camel-ai.org/index.html)
<details>
<summary>Installation</summary>
```bash
pip install camel-ai[all]
pip install agentops
```
```python
# Import dependencies
import agentops
import os
from getpass import getpass
from dotenv import load_dotenv

# Set keys
load_dotenv()
openai_api_key = os.getenv("OPENAI_API_KEY") or "<your openai key here>"
agentops_api_key = os.getenv("AGENTOPS_API_KEY") or "<your agentops key here>"
```
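Continuing the key-loading snippet above, a minimal sketch of a tracked agent might look like the following (it reuses the same `ChatAgent`/`ModelFactory` calls shown in the Camel AI section earlier; the role name and question are placeholders):
```python
import agentops
from camel.agents import ChatAgent
from camel.messages import BaseMessage
from camel.models import ModelFactory
from camel.types import ModelPlatformType, ModelType

# agentops_api_key comes from the snippet above
agentops.init(agentops_api_key, default_tags=["camelai-example"])

sys_msg = BaseMessage.make_assistant_message(
    role_name="Assistant",
    content="You are a helpful assistant",
)
model = ModelFactory.create(
    model_platform=ModelPlatformType.OPENAI,
    model_type=ModelType.GPT_4O_MINI,
)
agent = ChatAgent(system_message=sys_msg, model=model)

print(agent.step("What is AgentOps?"))
agentops.end_session("Success")
```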
</details>
[You can find usage examples here!](examples/camelai_examples/README.md).
### LiteLLM 🚅
AgentOps provides support for LiteLLM (>=1.3.1), allowing you to call 100+ LLMs using the same input/output format.
- [AgentOps integration example](https://docs.agentops.ai/v1/integrations/litellm)
- [Official LiteLLM documentation](https://docs.litellm.ai/docs/providers)
<details>
<summary>Installation</summary>
```bash
pip install litellm
```
```python
# Do not use LiteLLM like this
# from litellm import completion
# ...
# response = completion(model="claude-3", messages=messages)
# Use LiteLLM like this
import litellm
...
response = litellm.completion(model="claude-3", messages=messages)
# or
response = await litellm.acompletion(model="claude-3", messages=messages)
```
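For a fuller sketch with AgentOps in the loop (the model name and prompt are placeholders; any LiteLLM-supported provider string works):
```python
import os

import agentops
import litellm

# Initialize AgentOps before making LiteLLM calls so they are tracked
agentops.init(os.getenv("AGENTOPS_API_KEY"), default_tags=["litellm-example"])

messages = [{"role": "user", "content": "Write a one-line haiku about observability."}]
response = litellm.completion(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)

agentops.end_session("Success")
```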
</details>
### LlamaIndex 🦙
AgentOps works seamlessly with applications built using LlamaIndex, a framework for building context-augmented generative AI applications with LLMs.
<details>
<summary>Installation</summary>
```shell
pip install llama-index-instrumentation-agentops
```
To use the handler, import and set
```python
from llama_index.core import set_global_handler
# NOTE: Feel free to set your AgentOps environment variables (e.g., 'AGENTOPS_API_KEY')
# as outlined in the AgentOps documentation, or pass the equivalent keyword arguments
# anticipated by AgentOps' AOClient as **eval_params in set_global_handler.
set_global_handler("agentops")
```
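As a minimal sketch of the handler in action (assumes the `llama-index` package and an OpenAI key for the default LLM and embeddings; the in-memory document is illustrative):
```python
from llama_index.core import Document, VectorStoreIndex, set_global_handler

# Route LlamaIndex instrumentation to AgentOps (AGENTOPS_API_KEY read from the environment)
set_global_handler("agentops")

# Build a toy index and run one query; the underlying LLM calls show up in your AgentOps session
index = VectorStoreIndex.from_documents(
    [Document(text="AgentOps adds observability to AI agents.")]
)
print(index.as_query_engine().query("What does AgentOps do?"))
```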
Check out the [LlamaIndex docs](https://docs.llamaindex.ai/en/stable/module_guides/observability/?h=agentops#agentops) for more details.
</details>
### Llama Stack 🦙🥞
AgentOps provides support for the Llama Stack Python Client (>=0.0.53), allowing you to monitor your agentic applications.
- [AgentOps integration example 1](https://github.com/AgentOps-AI/agentops/pull/530/files/65a5ab4fdcf310326f191d4b870d4f553591e3ea#diff-fdddf65549f3714f8f007ce7dfd1cde720329fe54155d54389dd50fbd81813cb)
- [AgentOps integration example 2](https://github.com/AgentOps-AI/agentops/pull/530/files/65a5ab4fdcf310326f191d4b870d4f553591e3ea#diff-6688ff4fb7ab1ce7b1cc9b8362ca27264a3060c16737fb1d850305787a6e3699)
- [Official Llama Stack Python Client](https://github.com/meta-llama/llama-stack-client-python)
## Time travel debugging 🔮
<div style="justify-content: center">
<img src="docs/images/external/app_screenshots/time_travel_banner.png" alt="Time Travel Banner">
</div>
<br />
[Try it out!](https://app.agentops.ai/timetravel)
## Agent Arena 🥊
(coming soon!)
## Evaluations Roadmap 🧭
| Platform                                                                     | Dashboard                                  | Evals                                  |
| ---------------------------------------------------------------------------- | ------------------------------------------ | -------------------------------------- |
| ✅ Python SDK                                                                 | ✅ Multi-session and Cross-session metrics  | ✅ Custom eval metrics                  |
| 🚧 Evaluation builder API                                                     | ✅ Custom event tag tracking                | 🔜 Agent scorecards                     |
| ✅ [Javascript/Typescript SDK](https://github.com/AgentOps-AI/agentops-node)  | ✅ Session replays                          | 🔜 Evaluation playground + leaderboard  |
## Debugging Roadmap 🧭
| Performance testing                       | Environments                                                                        | LLM Testing                                 | Reasoning and execution testing                   |
| ----------------------------------------- | ----------------------------------------------------------------------------------- | ------------------------------------------- | ------------------------------------------------- |
| ✅ Event latency analysis                  | 🔜 Non-stationary environment testing                                                | 🔜 LLM non-deterministic function detection  | 🚧 Infinite loops and recursive thought detection  |
| ✅ Agent workflow execution pricing        | 🔜 Multi-modal environments                                                          | 🚧 Token limit overflow flags                | 🔜 Faulty reasoning detection                      |
| 🚧 Success validators (external)           | 🔜 Execution containers                                                              | 🔜 Context limit overflow flags              | 🔜 Generative code validators                      |
| 🔜 Agent controllers/skill tests           | ✅ Honeypot and prompt injection detection ([PromptArmor](https://promptarmor.com))  | 🔜 API bill tracking                         | 🔜 Error breakpoint analysis                       |
| 🔜 Information context constraint testing  | 🔜 Anti-agent roadblocks (i.e. Captchas)                                             | 🔜 CI/CD integration checks                  |                                                   |
| 🔜 Regression testing                      | 🔜 Multi-agent framework visualization                                               |                                             |                                                   |
### Why AgentOps? 🤔
Without the right tools, AI agents are slow, expensive, and unreliable. Our mission is to bring your agent from prototype to production. Here's why AgentOps stands out:
- **Comprehensive Observability**: Track your AI agents' performance, user interactions, and API usage.
- **Real-Time Monitoring**: Get instant insights with session replays, metrics, and live monitoring tools.
- **Cost Control**: Monitor and manage your spend on LLM and API calls.
- **Failure Detection**: Quickly identify and respond to agent failures and multi-agent interaction issues.
- **Tool Usage Statistics**: Understand how your agents utilize external tools with detailed analytics.
- **Session-Wide Metrics**: Gain a holistic view of your agents' sessions with comprehensive statistics.
AgentOps is designed to make agent observability, testing, and monitoring easy.
## Star History
Check out our growth in the community:
<img src="https://api.star-history.com/svg?repos=AgentOps-AI/agentops&type=Date" style="max-width: 500px" width="50%" alt="Logo">
## Popular projects using AgentOps
| Repository | Stars |
| :-------- | -----: |
|<img class="avatar mr-2" src="https://avatars.githubusercontent.com/u/2707039?s=40&v=4" width="20" height="20" alt=""> [geekan](https://github.com/geekan) / [MetaGPT](https://github.com/geekan/MetaGPT) | 42787 |
|<img class="avatar mr-2" src="https://avatars.githubusercontent.com/u/130722866?s=40&v=4" width="20" height="20" alt=""> [run-llama](https://github.com/run-llama) / [llama_index](https://github.com/run-llama/llama_index) | 34446 |
|<img class="avatar mr-2" src="https://avatars.githubusercontent.com/u/170677839?s=40&v=4" width="20" height="20" alt=""> [crewAIInc](https://github.com/crewAIInc) / [crewAI](https://github.com/crewAIInc/crewAI) | 18287 |
|<img class="avatar mr-2" src="https://avatars.githubusercontent.com/u/134388954?s=40&v=4" width="20" height="20" alt=""> [camel-ai](https://github.com/camel-ai) / [camel](https://github.com/camel-ai/camel) | 5166 |
|<img class="avatar mr-2" src="https://avatars.githubusercontent.com/u/152537519?s=40&v=4" width="20" height="20" alt=""> [superagent-ai](https://github.com/superagent-ai) / [superagent](https://github.com/superagent-ai/superagent) | 5050 |
|<img class="avatar mr-2" src="https://avatars.githubusercontent.com/u/30197649?s=40&v=4" width="20" height="20" alt=""> [iyaja](https://github.com/iyaja) / [llama-fs](https://github.com/iyaja/llama-fs) | 4713 |
|<img class="avatar mr-2" src="https://avatars.githubusercontent.com/u/162546372?s=40&v=4" width="20" height="20" alt=""> [BasedHardware](https://github.com/BasedHardware) / [Omi](https://github.com/BasedHardware/Omi) | 2723 |
|<img class="avatar mr-2" src="https://avatars.githubusercontent.com/u/454862?s=40&v=4" width="20" height="20" alt=""> [MervinPraison](https://github.com/MervinPraison) / [PraisonAI](https://github.com/MervinPraison/PraisonAI) | 2007 |
|<img class="avatar mr-2" src="https://avatars.githubusercontent.com/u/140554352?s=40&v=4" width="20" height="20" alt=""> [AgentOps-AI](https://github.com/AgentOps-AI) / [Jaiqu](https://github.com/AgentOps-AI/Jaiqu) | 272 |
|<img class="avatar mr-2" src="https://avatars.githubusercontent.com/u/3074263?s=40&v=4" width="20" height="20" alt=""> [strnad](https://github.com/strnad) / [CrewAI-Studio](https://github.com/strnad/CrewAI-Studio) | 134 |
|<img class="avatar mr-2" src="https://avatars.githubusercontent.com/u/18406448?s=40&v=4" width="20" height="20" alt=""> [alejandro-ao](https://github.com/alejandro-ao) / [exa-crewai](https://github.com/alejandro-ao/exa-crewai) | 55 |
|<img class="avatar mr-2" src="https://avatars.githubusercontent.com/u/64493665?s=40&v=4" width="20" height="20" alt=""> [tonykipkemboi](https://github.com/tonykipkemboi) / [youtube_yapper_trapper](https://github.com/tonykipkemboi/youtube_yapper_trapper) | 47 |
|<img class="avatar mr-2" src="https://avatars.githubusercontent.com/u/17598928?s=40&v=4" width="20" height="20" alt=""> [sethcoast](https://github.com/sethcoast) / [cover-letter-builder](https://github.com/sethcoast/cover-letter-builder) | 27 |
|<img class="avatar mr-2" src="https://avatars.githubusercontent.com/u/109994880?s=40&v=4" width="20" height="20" alt=""> [bhancockio](https://github.com/bhancockio) / [chatgpt4o-analysis](https://github.com/bhancockio/chatgpt4o-analysis) | 19 |
|<img class="avatar mr-2" src="https://avatars.githubusercontent.com/u/14105911?s=40&v=4" width="20" height="20" alt=""> [breakstring](https://github.com/breakstring) / [Agentic_Story_Book_Workflow](https://github.com/breakstring/Agentic_Story_Book_Workflow) | 14 |
|<img class="avatar mr-2" src="https://avatars.githubusercontent.com/u/124134656?s=40&v=4" width="20" height="20" alt=""> [MULTI-ON](https://github.com/MULTI-ON) / [multion-python](https://github.com/MULTI-ON/multion-python) | 13 |
_Generated using [github-dependents-info](https://github.com/nvuillam/github-dependents-info), by [Nicolas Vuillamy](https://github.com/nvuillam)_
", Assign "at most 3 tags" to the expected json: {"id":"10187","tags":[]} "only from the tags list I provide: [{"id":77,"name":"3d"},{"id":89,"name":"agent"},{"id":17,"name":"ai"},{"id":54,"name":"algorithm"},{"id":24,"name":"api"},{"id":44,"name":"authentication"},{"id":3,"name":"aws"},{"id":27,"name":"backend"},{"id":60,"name":"benchmark"},{"id":72,"name":"best-practices"},{"id":39,"name":"bitcoin"},{"id":37,"name":"blockchain"},{"id":1,"name":"blog"},{"id":45,"name":"bundler"},{"id":58,"name":"cache"},{"id":21,"name":"chat"},{"id":49,"name":"cicd"},{"id":4,"name":"cli"},{"id":64,"name":"cloud-native"},{"id":48,"name":"cms"},{"id":61,"name":"compiler"},{"id":68,"name":"containerization"},{"id":92,"name":"crm"},{"id":34,"name":"data"},{"id":47,"name":"database"},{"id":8,"name":"declarative-gui "},{"id":9,"name":"deploy-tool"},{"id":53,"name":"desktop-app"},{"id":6,"name":"dev-exp-lib"},{"id":59,"name":"dev-tool"},{"id":13,"name":"ecommerce"},{"id":26,"name":"editor"},{"id":66,"name":"emulator"},{"id":62,"name":"filesystem"},{"id":80,"name":"finance"},{"id":15,"name":"firmware"},{"id":73,"name":"for-fun"},{"id":2,"name":"framework"},{"id":11,"name":"frontend"},{"id":22,"name":"game"},{"id":81,"name":"game-engine "},{"id":23,"name":"graphql"},{"id":84,"name":"gui"},{"id":91,"name":"http"},{"id":5,"name":"http-client"},{"id":51,"name":"iac"},{"id":30,"name":"ide"},{"id":78,"name":"iot"},{"id":40,"name":"json"},{"id":83,"name":"julian"},{"id":38,"name":"k8s"},{"id":31,"name":"language"},{"id":10,"name":"learning-resource"},{"id":33,"name":"lib"},{"id":41,"name":"linter"},{"id":28,"name":"lms"},{"id":16,"name":"logging"},{"id":76,"name":"low-code"},{"id":90,"name":"message-queue"},{"id":42,"name":"mobile-app"},{"id":18,"name":"monitoring"},{"id":36,"name":"networking"},{"id":7,"name":"node-version"},{"id":55,"name":"nosql"},{"id":57,"name":"observability"},{"id":46,"name":"orm"},{"id":52,"name":"os"},{"id":14,"name":"parser"},{"id":74,"name":"react"},{"id":82,"name":"real-time"},{"id":56,"name":"robot"},{"id":65,"name":"runtime"},{"id":32,"name":"sdk"},{"id":71,"name":"search"},{"id":63,"name":"secrets"},{"id":25,"name":"security"},{"id":85,"name":"server"},{"id":86,"name":"serverless"},{"id":70,"name":"storage"},{"id":75,"name":"system-design"},{"id":79,"name":"terminal"},{"id":29,"name":"testing"},{"id":12,"name":"ui"},{"id":50,"name":"ux"},{"id":88,"name":"video"},{"id":20,"name":"web-app"},{"id":35,"name":"web-server"},{"id":43,"name":"webassembly"},{"id":69,"name":"workflow"},{"id":87,"name":"yaml"}]" returns me the "expected json"