AI prompts, based on the OpenAI API Client in Java ![Maven Central](https://img.shields.io/maven-central/v/com.theokanning.openai-gpt3-java/client?color=blue)
> ⚠️ Notice: This project is no longer maintained and has been archived as of June 6th, 2024.
> Thank you to everyone who has contributed to and supported this project. While the repository will remain available in its current state, no further updates or support will be provided. Please feel free to fork and modify the code as needed.
> ⚠️OpenAI has deprecated all Engine-based APIs. See [Deprecated Endpoints](https://github.com/TheoKanning/openai-java#deprecated-endpoints) below for more info.
# OpenAI-Java
Java libraries for using OpenAI's GPT APIs. Supports GPT-3, ChatGPT, and GPT-4.
Includes the following artifacts:
- `api` : request/response POJOs for the GPT APIs.
- `client` : a basic Retrofit client for the GPT endpoints; includes the `api` module
- `service` : a basic service class that creates and calls the client. This is the easiest way to get started.

The repository also includes an example project that uses the `service` module.
## Supported APIs
- [Models](https://platform.openai.com/docs/api-reference/models)
- [Completions](https://platform.openai.com/docs/api-reference/completions)
- [Chat Completions](https://platform.openai.com/docs/api-reference/chat/create)
- [Edits](https://platform.openai.com/docs/api-reference/edits)
- [Embeddings](https://platform.openai.com/docs/api-reference/embeddings)
- [Audio](https://platform.openai.com/docs/api-reference/audio)
- [Files](https://platform.openai.com/docs/api-reference/files)
- [Fine-tuning](https://platform.openai.com/docs/api-reference/fine-tuning)
- [Images](https://platform.openai.com/docs/api-reference/images)
- [Moderations](https://platform.openai.com/docs/api-reference/moderations)
- [Assistants](https://platform.openai.com/docs/api-reference/assistants)
#### Deprecated by OpenAI
- [Engines](https://platform.openai.com/docs/api-reference/engines)
- [Legacy Fine-Tunes](https://platform.openai.com/docs/guides/legacy-fine-tuning)
## Importing
### Gradle
`implementation 'com.theokanning.openai-gpt3-java:<api|client|service>:<version>'`
### Maven
```xml
<dependency>
    <groupId>com.theokanning.openai-gpt3-java</groupId>
    <artifactId>{api|client|service}</artifactId>
    <version>version</version>
</dependency>
```
## Usage
### Data classes only
If you want to make your own client, just import the POJOs from the `api` module.
Your client will need to use snake case to work with the OpenAI API.
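For example, with Jackson you could configure a snake_case, non-null mapper like this (a minimal sketch; the mapper setup is illustrative and assumes Jackson 2.12+):
```java
import com.fasterxml.jackson.annotation.JsonInclude;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.PropertyNamingStrategies;

// Serialize POJOs with snake_case keys and skip null fields, as the OpenAI API expects
ObjectMapper mapper = new ObjectMapper()
        .setPropertyNamingStrategy(PropertyNamingStrategies.SNAKE_CASE)
        .setSerializationInclusion(JsonInclude.Include.NON_NULL);
```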
### Retrofit client
If you're using Retrofit, you can import the `client` module and use the [OpenAiApi](client/src/main/java/com/theokanning/openai/OpenAiApi.java).
You'll have to add your auth token as a header (see [AuthenticationInterceptor](client/src/main/java/com/theokanning/openai/AuthenticationInterceptor.java))
and set your converter factory to use snake case and only include non-null fields.
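A minimal sketch of that wiring, assuming the Retrofit Jackson converter and RxJava2 call adapter artifacts (`OpenAiApi`'s methods return RxJava types); the `token` value is illustrative:
```java
import com.fasterxml.jackson.annotation.JsonInclude;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.PropertyNamingStrategies;
import okhttp3.OkHttpClient;
import retrofit2.Retrofit;
import retrofit2.adapter.rxjava2.RxJava2CallAdapterFactory;
import retrofit2.converter.jackson.JacksonConverterFactory;

String token = "your_token";

// Snake-case, non-null mapper as in the previous sketch
ObjectMapper mapper = new ObjectMapper()
        .setPropertyNamingStrategy(PropertyNamingStrategies.SNAKE_CASE)
        .setSerializationInclusion(JsonInclude.Include.NON_NULL);

// Add the bearer token to every request, like AuthenticationInterceptor does
OkHttpClient client = new OkHttpClient.Builder()
        .addInterceptor(chain -> chain.proceed(chain.request().newBuilder()
                .header("Authorization", "Bearer " + token)
                .build()))
        .build();

Retrofit retrofit = new Retrofit.Builder()
        .baseUrl("https://api.openai.com/")
        .client(client)
        .addConverterFactory(JacksonConverterFactory.create(mapper))
        .addCallAdapterFactory(RxJava2CallAdapterFactory.create())
        .build();

OpenAiApi api = retrofit.create(OpenAiApi.class);
```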
### OpenAiService
If you're looking for the fastest solution, import the `service` module and use [OpenAiService](service/src/main/java/com/theokanning/openai/service/OpenAiService.java).
> ⚠️The OpenAiService in the client module is deprecated, please switch to the new version in the service module.
```java
OpenAiService service = new OpenAiService("your_token");
CompletionRequest completionRequest = CompletionRequest.builder()
        .prompt("Somebody once told me the world is gonna roll me")
        .model("babbage-002")
        .echo(true)
        .build();
service.createCompletion(completionRequest).getChoices().forEach(System.out::println);
```
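Chat models (ChatGPT, GPT-4) use the Chat Completions API instead; a minimal sketch using the same service (the model name and prompt are illustrative):
```java
List<ChatMessage> messages = new ArrayList<>();
messages.add(new ChatMessage(ChatMessageRole.USER.value(), "Write a haiku about Java."));

ChatCompletionRequest chatRequest = ChatCompletionRequest.builder()
        .model("gpt-3.5-turbo")
        .messages(messages)
        .maxTokens(64)
        .build();

// Print each returned choice
service.createChatCompletion(chatRequest).getChoices().forEach(System.out::println);
```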
### Customizing OpenAiService
If you need to customize OpenAiService, create your own Retrofit client and pass it in to the constructor.
For example, do the following to add request logging (after adding the logging gradle dependency):
```java
// defaultObjectMapper, defaultClient, and defaultRetrofit are static helpers on OpenAiService
ObjectMapper mapper = defaultObjectMapper();
OkHttpClient client = defaultClient(token, timeout)
        .newBuilder()
        .addInterceptor(new HttpLoggingInterceptor())
        .build();
Retrofit retrofit = defaultRetrofit(client, mapper);
OpenAiApi api = retrofit.create(OpenAiApi.class);
OpenAiService service = new OpenAiService(api);
```
### Adding a Proxy
To use a proxy, modify the OkHttp client as shown below:
```java
ObjectMapper mapper = defaultObjectMapper();
Proxy proxy = new Proxy(Proxy.Type.HTTP, new InetSocketAddress(host, port));
OkHttpClient client = defaultClient(token, timeout)
        .newBuilder()
        .proxy(proxy)
        .build();
Retrofit retrofit = defaultRetrofit(client, mapper);
OpenAiApi api = retrofit.create(OpenAiApi.class);
OpenAiService service = new OpenAiService(api);
```
### Functions
You can define your functions and their executors with the `ChatFunction` class, using your own classes to describe the parameters they accept. The `FunctionExecutor` helper then makes processing function calls straightforward.
First we declare our function parameters:
```java
public class Weather {
    @JsonPropertyDescription("City and state, for example: León, Guanajuato")
    public String location;

    @JsonPropertyDescription("The temperature unit, can be 'celsius' or 'fahrenheit'")
    @JsonProperty(required = true)
    public WeatherUnit unit;
}

public enum WeatherUnit {
    CELSIUS, FAHRENHEIT;
}

public static class WeatherResponse {
    public String location;
    public WeatherUnit unit;
    public int temperature;
    public String description;
    // constructor
}
```
Next, we declare the function itself and associate it with an executor. In this example, we fake a response from some API:
```java
ChatFunction.builder()
        .name("get_weather")
        .description("Get the current weather of a location")
        .executor(Weather.class, w -> new WeatherResponse(w.location, w.unit, new Random().nextInt(50), "sunny"))
        .build()
```
Then we use the `FunctionExecutor` from the `service` module to execute the call and convert the result into a `ChatMessage` that is ready for the conversation:
```java
List<ChatFunction> functionList = // list with functions
FunctionExecutor functionExecutor = new FunctionExecutor(functionList);

List<ChatMessage> messages = new ArrayList<>();
ChatMessage userMessage = new ChatMessage(ChatMessageRole.USER.value(), "Tell me the weather in Barcelona.");
messages.add(userMessage);

ChatCompletionRequest chatCompletionRequest = ChatCompletionRequest
        .builder()
        .model("gpt-3.5-turbo-0613")
        .messages(messages)
        .functions(functionExecutor.getFunctions())
        .functionCall(new ChatCompletionRequestFunctionCall("auto"))
        .maxTokens(256)
        .build();

ChatMessage responseMessage = service.createChatCompletion(chatCompletionRequest).getChoices().get(0).getMessage();
ChatFunctionCall functionCall = responseMessage.getFunctionCall(); // might be null, but in this case it is certainly a call to our 'get_weather' function
ChatMessage functionResponseMessage = functionExecutor.executeAndConvertToMessageHandlingExceptions(functionCall);
messages.add(functionResponseMessage);
```
> **Note:** The `FunctionExecutor` class is part of the 'service' module.
You can also write your own function executor; `ChatFunctionCall.getArguments()` returns a `JsonNode`, which keeps that straightforward.
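For instance, a hand-rolled executor for the `get_weather` function above might look like this (a sketch; it assumes Jackson for argument binding and a `ChatMessage` constructor that accepts a function name):
```java
ObjectMapper mapper = new ObjectMapper();
if ("get_weather".equals(functionCall.getName())) {
    try {
        // Bind the model-provided JSON arguments to our Weather parameter class
        Weather weather = mapper.treeToValue(functionCall.getArguments(), Weather.class);
        WeatherResponse result = new WeatherResponse(weather.location, weather.unit, 25, "sunny");
        // Send the serialized result back to the model as a 'function' role message
        messages.add(new ChatMessage(
                ChatMessageRole.FUNCTION.value(),
                mapper.writeValueAsString(result),
                "get_weather"));
    } catch (JsonProcessingException e) {
        throw new RuntimeException(e);
    }
}
```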
For a more in-depth look, see the conversational example that uses functions: [OpenAiApiFunctionsExample.java](example/src/main/java/example/OpenAiApiFunctionsExample.java),
or the example that combines functions with streaming: [OpenAiApiFunctionsWithStreamExample.java](example/src/main/java/example/OpenAiApiFunctionsWithStreamExample.java).
### Streaming thread shutdown
If you want to shut down your process immediately after streaming responses, call `OpenAiService.shutdownExecutor()`.
This is not necessary for non-streaming calls.
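For example, a sketch assuming the `streamChatCompletion` method from the service module, which returns an RxJava `Flowable` (the request is the chat request built above):
```java
// Stream the chunks as they arrive, then release the background threads so the JVM can exit
service.streamChatCompletion(chatCompletionRequest)
        .doOnError(Throwable::printStackTrace)
        .blockingForEach(System.out::println);
service.shutdownExecutor();
```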
## Running the example project
All the [example](example/src/main/java/example/OpenAiApiExample.java) project requires is your OpenAI API token:
```bash
export OPENAI_TOKEN="sk-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
```
You can try all the capabilities of this project using:
```bash
./gradlew runExampleOne
```
And you can also try the new capability of using functions:
```bash
./gradlew runExampleTwo
```
Or functions with 'stream' mode enabled:
```bash
./gradlew runExampleThree
```
## FAQ
### Does this support GPT-4?
Yes! GPT-4 uses the ChatCompletion Api, and you can see the latest model options [here](https://platform.openai.com/docs/models/gpt-4).
GPT-4 is currently in a limited beta (as of 4/1/23), so make sure you have access before trying to use it.
### Does this support functions?
Absolutely! It is easy to use your own functions without handling the JSON plumbing yourself.
As mentioned above, you can refer to [OpenAiApiFunctionsExample.java](example/src/main/java/example/OpenAiApiFunctionsExample.java) or
[OpenAiApiFunctionsWithStreamExample.java](example/src/main/java/example/OpenAiApiFunctionsWithStreamExample.java) for examples.
### Why am I getting connection timeouts?
Make sure that OpenAI is available in your country.
### Why doesn't OpenAiService support x configuration option?
Many projects use OpenAiService, and to support them best I've kept it extremely simple.
You can create your own `OpenAiApi` instance to customize headers, timeouts, base URLs, etc.
If you want features like retry logic or async calls, call that `OpenAiApi` instance directly instead of going through `OpenAiService`.
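For instance, a sketch that assumes the RxJava `Single` return types of the `client` module's `OpenAiApi` (the retry count is illustrative):
```java
// Build the api as shown in "Customizing OpenAiService", then call it directly
OpenAiApi api = retrofit.create(OpenAiApi.class);

// RxJava provides retry and async behavior on the returned Single
CompletionResult result = api.createCompletion(completionRequest)
        .retry(3)          // retry transient failures up to 3 times
        .blockingGet();    // or subscribe(...) for async handling
result.getChoices().forEach(System.out::println);
```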
## Deprecated Endpoints
OpenAI has deprecated engine-based endpoints in favor of model-based endpoints.
For example, instead of using `v1/engines/{engine_id}/completions`, switch to `v1/completions` and specify the model in the `CompletionRequest`.
The code includes upgrade instructions for all deprecated endpoints.
I won't remove the old endpoints from this library until OpenAI shuts them down.
## License
Published under the MIT License
", Assign "at most 3 tags" to the expected json: {"id":"3516","tags":[]} "only from the tags list I provide: [{"id":77,"name":"3d"},{"id":89,"name":"agent"},{"id":17,"name":"ai"},{"id":54,"name":"algorithm"},{"id":24,"name":"api"},{"id":44,"name":"authentication"},{"id":3,"name":"aws"},{"id":27,"name":"backend"},{"id":60,"name":"benchmark"},{"id":72,"name":"best-practices"},{"id":39,"name":"bitcoin"},{"id":37,"name":"blockchain"},{"id":1,"name":"blog"},{"id":45,"name":"bundler"},{"id":58,"name":"cache"},{"id":21,"name":"chat"},{"id":49,"name":"cicd"},{"id":4,"name":"cli"},{"id":64,"name":"cloud-native"},{"id":48,"name":"cms"},{"id":61,"name":"compiler"},{"id":68,"name":"containerization"},{"id":92,"name":"crm"},{"id":34,"name":"data"},{"id":47,"name":"database"},{"id":8,"name":"declarative-gui "},{"id":9,"name":"deploy-tool"},{"id":53,"name":"desktop-app"},{"id":6,"name":"dev-exp-lib"},{"id":59,"name":"dev-tool"},{"id":13,"name":"ecommerce"},{"id":26,"name":"editor"},{"id":66,"name":"emulator"},{"id":62,"name":"filesystem"},{"id":80,"name":"finance"},{"id":15,"name":"firmware"},{"id":73,"name":"for-fun"},{"id":2,"name":"framework"},{"id":11,"name":"frontend"},{"id":22,"name":"game"},{"id":81,"name":"game-engine "},{"id":23,"name":"graphql"},{"id":84,"name":"gui"},{"id":91,"name":"http"},{"id":5,"name":"http-client"},{"id":51,"name":"iac"},{"id":30,"name":"ide"},{"id":78,"name":"iot"},{"id":40,"name":"json"},{"id":83,"name":"julian"},{"id":38,"name":"k8s"},{"id":31,"name":"language"},{"id":10,"name":"learning-resource"},{"id":33,"name":"lib"},{"id":41,"name":"linter"},{"id":28,"name":"lms"},{"id":16,"name":"logging"},{"id":76,"name":"low-code"},{"id":90,"name":"message-queue"},{"id":42,"name":"mobile-app"},{"id":18,"name":"monitoring"},{"id":36,"name":"networking"},{"id":7,"name":"node-version"},{"id":55,"name":"nosql"},{"id":57,"name":"observability"},{"id":46,"name":"orm"},{"id":52,"name":"os"},{"id":14,"name":"parser"},{"id":74,"name":"react"},{"id":82,"name":"real-time"},{"id":56,"name":"robot"},{"id":65,"name":"runtime"},{"id":32,"name":"sdk"},{"id":71,"name":"search"},{"id":63,"name":"secrets"},{"id":25,"name":"security"},{"id":85,"name":"server"},{"id":86,"name":"serverless"},{"id":70,"name":"storage"},{"id":75,"name":"system-design"},{"id":79,"name":"terminal"},{"id":29,"name":"testing"},{"id":12,"name":"ui"},{"id":50,"name":"ux"},{"id":88,"name":"video"},{"id":20,"name":"web-app"},{"id":35,"name":"web-server"},{"id":43,"name":"webassembly"},{"id":69,"name":"workflow"},{"id":87,"name":"yaml"}]" returns me the "expected json"