# Agent Chat UI

🦜💬 Web app for interacting with any LangGraph agent (PY & TS) via a chat interface.
Agent Chat UI is a Next.js application that provides a chat interface for interacting with any LangGraph server that exposes a `messages` key.
> [!NOTE]
> 🎥 Watch the video setup guide [here](https://youtu.be/lInrwVnZ83o).
## Setup
> [!TIP]
> Don't want to run the app locally? Use the deployed site here: [agentchat.vercel.app](https://agentchat.vercel.app)!
First, clone the repository, or run the [`npx` command](https://www.npmjs.com/package/create-agent-chat-app):
```bash
npx create-agent-chat-app
```
or
```bash
git clone https://github.com/langchain-ai/agent-chat-ui.git
cd agent-chat-ui
```
Install dependencies:
```bash
pnpm install
```
Run the app:
```bash
pnpm dev
```
The app will be available at `http://localhost:3000`.
## Usage
Once the app is running (or if using the deployed site), you'll be prompted to enter:
- **Deployment URL**: The URL of the LangGraph server you want to chat with. This can be a production or development URL.
- **Assistant/Graph ID**: The name of the graph, or the ID of the assistant, to use when fetching and submitting runs via the chat interface.
- **LangSmith API Key**: (only required for connecting to deployed LangGraph servers) Your LangSmith API key to use when authenticating requests sent to LangGraph servers.
After entering these values, click `Continue`. You'll then be redirected to a chat interface where you can start chatting with your LangGraph server.
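If you don't yet have a LangGraph server to connect to, one option (assuming you have a graph project and the [LangGraph CLI](https://langchain-ai.github.io/langgraph/cloud/reference/cli/) installed) is to run the local dev server:
```bash
# From your LangGraph project directory; serves on http://localhost:2024 by default
langgraph dev
```
You can then enter `http://localhost:2024` as the deployment URL and your graph's name (e.g. `agent`) as the Assistant/Graph ID.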
## Environment Variables
You can bypass the initial setup form by setting the following environment variables:
```bash
NEXT_PUBLIC_API_URL=http://localhost:2024
NEXT_PUBLIC_ASSISTANT_ID=agent
```
> [!TIP]
> If you want to connect to a production LangGraph server, read the [Going to Production](#going-to-production) section.
To use these variables:
1. Copy the `.env.example` file to a new file named `.env`
2. Fill in the values in the `.env` file
3. Restart the application
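For example, from the repository root:
```bash
cp .env.example .env
# then edit .env:
# NEXT_PUBLIC_API_URL=http://localhost:2024
# NEXT_PUBLIC_ASSISTANT_ID=agent
```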
When these environment variables are set, the application will use them instead of showing the setup form.
## Hiding Messages in the Chat
You can control the visibility of messages within the Agent Chat UI in two main ways:
**1. Prevent Live Streaming:**
To stop messages from being displayed _as they stream_ from an LLM call, add the `langsmith:nostream` tag to the chat model's configuration. The UI normally uses `on_chat_model_stream` events to render streaming messages; this tag prevents those events from being emitted for the tagged model.
_Python Example:_
```python
from langchain_anthropic import ChatAnthropic

# Add tags via the .with_config method
# (the model name below is just an example)
model = ChatAnthropic(model="claude-3-5-sonnet-latest").with_config(
    config={"tags": ["langsmith:nostream"]}
)
```
_TypeScript Example:_
```typescript
import { ChatAnthropic } from "@langchain/anthropic";

// Add tags via the .withConfig method
// (the model name below is just an example)
const model = new ChatAnthropic({
  model: "claude-3-5-sonnet-latest",
}).withConfig({ tags: ["langsmith:nostream"] });
```
**Note:** Even if streaming is hidden this way, the message will still appear after the LLM call completes if it's saved to the graph's state without further modification.
**2. Hide Messages Permanently:**
To ensure a message is _never_ displayed in the chat UI (neither during streaming nor after being saved to state), prefix its `id` field with `do-not-render-` _before_ adding it to the graph's state, and add the `langsmith:do-not-render` tag to the chat model's configuration. The UI explicitly filters out any message whose `id` starts with this prefix.
_Python Example:_
```python
# Inside a graph node; `messages` is the list of messages from state
result = model.invoke(messages)

# Prefix the ID before saving to state
result.id = f"do-not-render-{result.id}"
return {"messages": [result]}
```
_TypeScript Example:_
```typescript
// Inside a graph node; `messages` is the list of messages from state
const result = await model.invoke(messages);

// Prefix the ID before saving to state
result.id = `do-not-render-${result.id}`;
return { messages: [result] };
```
This approach guarantees the message remains completely hidden from the user interface.
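Putting both techniques together, a graph node that neither streams nor renders its output might look roughly like this (a sketch assuming the LangGraph JS `MessagesAnnotation` state; adapt the names to your graph):
```typescript
import { ChatAnthropic } from "@langchain/anthropic";
import { MessagesAnnotation } from "@langchain/langgraph";

// Tagged so the UI receives no streaming events for this model
const model = new ChatAnthropic({
  model: "claude-3-5-sonnet-latest",
}).withConfig({ tags: ["langsmith:nostream"] });

// The ID prefix keeps the final message out of the UI even after
// it's saved to state.
const hiddenNode = async (state: typeof MessagesAnnotation.State) => {
  const result = await model.invoke(state.messages);
  result.id = `do-not-render-${result.id}`;
  return { messages: [result] };
};
```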
## Rendering Artifacts
The Agent Chat UI supports rendering artifacts in the chat. Artifacts are rendered in a side panel to the right of the chat. To render an artifact, you can obtain the artifact context from the `thread.meta.artifact` field. Here's a sample utility hook for obtaining the artifact context:
```tsx
// Assumes `useStreamContext` (e.g. from this repo's `src/providers/Stream.tsx`)
// and the `Message` / `UIMessage` types from the LangGraph SDK are in scope.
export function useArtifact<TContext = Record<string, unknown>>() {
  type Component = (props: {
    children: React.ReactNode;
    title?: React.ReactNode;
  }) => React.ReactNode;

  type Context = TContext | undefined;

  type Bag = {
    open: boolean;
    setOpen: (value: boolean | ((prev: boolean) => boolean)) => void;
    context: Context;
    setContext: (value: Context | ((prev: Context) => Context)) => void;
  };

  const thread = useStreamContext<
    { messages: Message[]; ui: UIMessage[] },
    { MetaType: { artifact: [Component, Bag] } }
  >();

  return thread.meta?.artifact;
}
```
You can then render additional content using the `Artifact` component returned by the `useArtifact` hook:
```tsx
import { useArtifact } from "../utils/use-artifact";

export function Writer(props: {
  title?: string;
  content?: string;
  description?: string;
}) {
  const [Artifact, { open, setOpen }] = useArtifact();

  return (
    <>
      <div
        onClick={() => setOpen(!open)}
        className="cursor-pointer rounded-lg border p-4"
      >
        <p className="font-medium">{props.title}</p>
        <p className="text-sm text-gray-500">{props.description}</p>
      </div>

      <Artifact title={props.title}>
        <p className="p-4 whitespace-pre-wrap">{props.content}</p>
      </Artifact>
    </>
  );
}
```
## Going to Production
Once you're ready to go to production, you'll need to update how you connect to, and authenticate requests against, your deployment. By default, the Agent Chat UI is set up for local development and connects to your LangGraph server directly from the client. This isn't viable in production, because it would require every user to have their own LangSmith API key and to set the LangGraph configuration themselves.
### Production Setup
To productionize the Agent Chat UI, you'll need to pick one of two ways to authenticate requests to your LangGraph server. Below, I'll outline both options:
### Quickstart - API Passthrough
The quickest way to productionize the Agent Chat UI is to use the [API Passthrough](https://github.com/langchain-ai/langgraph-nextjs-api-passthrough) package. This package provides a simple way to proxy requests to your LangGraph server and handle authentication for you.
This repository already contains all of the code you need to start using this method; the only configuration required is setting the proper environment variables.
```bash
NEXT_PUBLIC_ASSISTANT_ID="agent"
# This should be the deployment URL of your LangGraph server
LANGGRAPH_API_URL="https://my-agent.default.us.langgraph.app"
# This should be the URL of your website + "/api". This is how you connect to the API proxy
NEXT_PUBLIC_API_URL="https://my-website.com/api"
# Your LangSmith API key which is injected into requests inside the API proxy
LANGSMITH_API_KEY="lsv2_..."
```
Let's cover what each of these environment variables does:
- `NEXT_PUBLIC_ASSISTANT_ID`: The ID of the assistant you want to use when fetching and submitting runs via the chat interface. This keeps the `NEXT_PUBLIC_` prefix, since it's not a secret and is used on the client when submitting requests.
- `LANGGRAPH_API_URL`: The URL of your LangGraph server. This should be the production deployment URL.
- `NEXT_PUBLIC_API_URL`: The URL of your website + `/api`. This is how you connect to the API proxy. For the [Agent Chat demo](https://agentchat.vercel.app), this would be set as `https://agentchat.vercel.app/api`. You should set this to whatever your production URL is.
- `LANGSMITH_API_KEY`: Your LangSmith API key to use when authenticating requests sent to LangGraph servers. Once again, do _not_ prefix this with `NEXT_PUBLIC_` since it's a secret, and is only used on the server when the API proxy injects it into the request to your deployed LangGraph server.
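Under the hood, the proxy is wired up in a Next.js API route. As a rough sketch (based on the passthrough package's `initApiPassthrough` helper; the file path follows this repository's layout):
```typescript
// src/app/api/[..._path]/route.ts
import { initApiPassthrough } from "langgraph-nextjs-api-passthrough";

// Re-export the generated handlers; the proxy forwards requests to
// LANGGRAPH_API_URL and injects LANGSMITH_API_KEY server-side.
export const { GET, POST, PUT, PATCH, DELETE, OPTIONS, runtime } =
  initApiPassthrough({
    apiUrl: process.env.LANGGRAPH_API_URL,
    apiKey: process.env.LANGSMITH_API_KEY,
    runtime: "edge",
  });
```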
For in-depth documentation, consult the [LangGraph Next.js API Passthrough](https://www.npmjs.com/package/langgraph-nextjs-api-passthrough) docs.
### Advanced Setup - Custom Authentication
Custom authentication in your LangGraph deployment is a more advanced and more robust way of authenticating requests to your LangGraph server. With custom authentication, you can allow requests to be made from the client without a LangSmith API key, and you can specify custom access controls on requests.
To set this up in your LangGraph deployment, please read the LangGraph custom authentication docs for [Python](https://langchain-ai.github.io/langgraph/tutorials/auth/getting_started/) and [TypeScript](https://langchain-ai.github.io/langgraphjs/how-tos/auth/custom_auth/).
Once you've set it up on your deployment, you should make the following changes to the Agent Chat UI:
1. Configure whatever additional API requests are needed to fetch an authentication token from your LangGraph deployment; this token will be used to authenticate requests from the client.
2. Set the `NEXT_PUBLIC_API_URL` environment variable to your production LangGraph deployment URL.
3. Set the `NEXT_PUBLIC_ASSISTANT_ID` environment variable to the ID of the assistant you want to use when fetching and submitting runs via the chat interface.
4. Modify the [`useTypedStream`](src/providers/Stream.tsx) (extension of `useStream`) hook to pass your authentication token through headers to the LangGraph server:
```tsx
const streamValue = useTypedStream({
  apiUrl: process.env.NEXT_PUBLIC_API_URL,
  assistantId: process.env.NEXT_PUBLIC_ASSISTANT_ID,
  // ... other fields
  defaultHeaders: {
    // Standard HTTP auth header; pass your token here
    Authorization: `Bearer ${addYourTokenHere}`,
  },
});
```
", Assign "at most 3 tags" to the expected json: {"id":"13825","tags":[]} "only from the tags list I provide: [{"id":77,"name":"3d"},{"id":89,"name":"agent"},{"id":17,"name":"ai"},{"id":54,"name":"algorithm"},{"id":24,"name":"api"},{"id":44,"name":"authentication"},{"id":3,"name":"aws"},{"id":27,"name":"backend"},{"id":60,"name":"benchmark"},{"id":72,"name":"best-practices"},{"id":39,"name":"bitcoin"},{"id":37,"name":"blockchain"},{"id":1,"name":"blog"},{"id":45,"name":"bundler"},{"id":58,"name":"cache"},{"id":21,"name":"chat"},{"id":49,"name":"cicd"},{"id":4,"name":"cli"},{"id":64,"name":"cloud-native"},{"id":48,"name":"cms"},{"id":61,"name":"compiler"},{"id":68,"name":"containerization"},{"id":92,"name":"crm"},{"id":34,"name":"data"},{"id":47,"name":"database"},{"id":8,"name":"declarative-gui "},{"id":9,"name":"deploy-tool"},{"id":53,"name":"desktop-app"},{"id":6,"name":"dev-exp-lib"},{"id":59,"name":"dev-tool"},{"id":13,"name":"ecommerce"},{"id":26,"name":"editor"},{"id":66,"name":"emulator"},{"id":62,"name":"filesystem"},{"id":80,"name":"finance"},{"id":15,"name":"firmware"},{"id":73,"name":"for-fun"},{"id":2,"name":"framework"},{"id":11,"name":"frontend"},{"id":22,"name":"game"},{"id":81,"name":"game-engine "},{"id":23,"name":"graphql"},{"id":84,"name":"gui"},{"id":91,"name":"http"},{"id":5,"name":"http-client"},{"id":51,"name":"iac"},{"id":30,"name":"ide"},{"id":78,"name":"iot"},{"id":40,"name":"json"},{"id":83,"name":"julian"},{"id":38,"name":"k8s"},{"id":31,"name":"language"},{"id":10,"name":"learning-resource"},{"id":33,"name":"lib"},{"id":41,"name":"linter"},{"id":28,"name":"lms"},{"id":16,"name":"logging"},{"id":76,"name":"low-code"},{"id":90,"name":"message-queue"},{"id":42,"name":"mobile-app"},{"id":18,"name":"monitoring"},{"id":36,"name":"networking"},{"id":7,"name":"node-version"},{"id":55,"name":"nosql"},{"id":57,"name":"observability"},{"id":46,"name":"orm"},{"id":52,"name":"os"},{"id":14,"name":"parser"},{"id":74,"name":"react"},{"id":82,"name":"real-time"},{"id":56,"name":"robot"},{"id":65,"name":"runtime"},{"id":32,"name":"sdk"},{"id":71,"name":"search"},{"id":63,"name":"secrets"},{"id":25,"name":"security"},{"id":85,"name":"server"},{"id":86,"name":"serverless"},{"id":70,"name":"storage"},{"id":75,"name":"system-design"},{"id":79,"name":"terminal"},{"id":29,"name":"testing"},{"id":12,"name":"ui"},{"id":50,"name":"ux"},{"id":88,"name":"video"},{"id":20,"name":"web-app"},{"id":35,"name":"web-server"},{"id":43,"name":"webassembly"},{"id":69,"name":"workflow"},{"id":87,"name":"yaml"}]" returns me the "expected json"