# AI prompts

Based on [gpt-llm-trainer](https://github.com/mshumer/gpt-llm-trainer)
[![Twitter Follow](https://img.shields.io/twitter/follow/mattshumer_?style=social)](https://twitter.com/mattshumer_)
NEW: Claude 3 -> LLaMA 2 7B Fine-Tuning version: [![Open Claude -> LLaMA Version In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1eLe0t8Alu997w5Ewnw9mE96dtaC-qEho?usp=sharing)
LLaMA 2 7B Fine-Tuning version: [![Open LLaMA Version In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1mV9sAY4QBKLmS58dpFGHgwCXQKRASR31?usp=sharing)
GPT-3.5 Fine-Tuning version: [![Open GPT-3.5 Version In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1NLqxHHCv3kFyw45t8k_CUfNlcepMdeDW?usp=sharing)
## Overview
Training models is hard. You have to collect a dataset, clean it, get it in the right format, select a model, write the training code and train it. And that's the best-case scenario.
The goal of this project is to explore an experimental new pipeline to train a high-performing task-specific model. We try to abstract away all the complexity, so it's as easy as possible to go from idea -> performant fully-trained model.
**Simply input a description of your task, and the system will generate a dataset from scratch, parse it into the right format, and fine-tune a LLaMA 2 or GPT-3.5 model for you.**
## Features
- **Dataset Generation**: Using Claude 3 or GPT-4, `gpt-llm-trainer` will generate a variety of prompts and responses based on the provided use-case.
- **System Message Generation**: `gpt-llm-trainer` will generate an effective system prompt for your model.
- **Fine-Tuning**: After your dataset has been generated, the system will automatically split it into training and validation sets, fine-tune a model for you, and get it ready for inference.
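For illustration, here is a minimal sketch of what the generated dataset and the train/validation split might look like. The field names and the 90/10 ratio are assumptions for this sketch, not the notebook's exact values:

```python
import json
import random

# Hypothetical shape of the generated dataset: prompt/response pairs
# produced by the teacher model (Claude 3 or GPT-4).
examples = [
    {"prompt": "Example question 1", "response": "Example answer 1"},
    {"prompt": "Example question 2", "response": "Example answer 2"},
]

# Shuffle and hold out 10% for validation (the ratio here is an
# assumption; check the notebook for the exact split it uses).
random.shuffle(examples)
cut = int(len(examples) * 0.9)
train_set, valid_set = examples[:cut], examples[cut:]

with open("train.jsonl", "w") as f:
    for ex in train_set:
        f.write(json.dumps(ex) + "\n")
with open("valid.jsonl", "w") as f:
    for ex in valid_set:
        f.write(json.dumps(ex) + "\n")
```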
## Setup
1. [Open the notebook in Google Colab](https://colab.research.google.com/drive/1mV9sAY4QBKLmS58dpFGHgwCXQKRASR31?usp=sharing) or in a local Jupyter notebook.
2. If you're using Colab, switch to the best GPU available (go to Runtime -> Change runtime type).
3. Add your OpenAI API key to the line `openai.api_key = "YOUR KEY HERE"`.
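If you'd rather not paste your key directly into the notebook, a common alternative is to read it from an environment variable (a small variation on the notebook's `openai.api_key = ...` line; `OPENAI_API_KEY` is the conventional variable name):

```python
import os
import openai

# Load the key from the environment instead of hard-coding it in a cell.
openai.api_key = os.environ["OPENAI_API_KEY"]
```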
## How to Use
1. Define your `prompt`, a description of what you want the trained AI to do. The more descriptive and clear you can be, the better. Then set the `temperature` we'll use to generate your dataset (high = creative, low = precise) and the `number_of_examples` you want to generate (100 is a good starting point).
For example:
```python
prompt = "A model that takes in a puzzle-like reasoning-heavy question in English, and responds with a well-reasoned, step-by-step thought out response in Spanish."
temperature = 0.4
number_of_examples = 100
```
2. Run all the cells (stop at `Merge the model and store in Google Drive` if using the LLaMA 2 version).
*It'll take some time (from 10 minutes to a couple of hours, depending on how many examples you generate), but soon, you'll have your fine-tuned model!*
3. After your model is trained, use the `Run Inference` cell (in the LLaMA 2 version) or the `Let's try it out!` cell (in the GPT-3.5 version) to test it. In the LLaMA 2 version, the cells below that let you save the model to and load it from Google Drive for later use. If you're using the GPT-3.5 version, your fine-tuned model will be available via the API or in the OpenAI Playground.
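For the GPT-3.5 version, a minimal sketch of calling your fine-tuned model through the API might look like this (using the pre-1.0 `openai` client style the notebook uses; the model ID and messages below are placeholders):

```python
import openai

openai.api_key = "YOUR KEY HERE"

# Placeholder model ID; use the one your fine-tuning job returns
# (it typically looks like "ft:gpt-3.5-turbo-0613:org::id").
response = openai.ChatCompletion.create(
    model="ft:gpt-3.5-turbo-0613:your-org::abc123",
    messages=[
        {"role": "system", "content": "<the system prompt the notebook generated>"},
        {"role": "user", "content": "A puzzle-like reasoning question in English..."},
    ],
    temperature=0.4,
)
print(response["choices"][0]["message"]["content"])
```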
## Contributions are welcome! Some ideas:
- improve the example generation pipeline for efficiency/cost reduction (e.g., by using the API's `n` parameter to request multiple completions per call)
- add additional example generation prompts to create more diverse examples
- add example pruning, removing very similar examples to improve performance (a rough sketch follows this list)
- use GPT-4 to automatically choose the training hyperparameters (and potentially, even the model to fine-tune!) based on a few examples + high-level dataset details (e.g., number of examples)
- train multiple model variants and choose the one with the lowest eval loss
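As a starting point for the example-pruning idea above, here is a rough sketch that drops near-duplicate prompts using stdlib string similarity. The function name and threshold are assumptions; an embedding-based distance would likely work better at scale:

```python
from difflib import SequenceMatcher

def prune_near_duplicates(examples, threshold=0.9):
    """Keep only examples whose prompt is not too similar to one already kept.

    SequenceMatcher is a cheap stand-in; cosine similarity over sentence
    embeddings would be a more robust (but costlier) choice.
    """
    kept = []
    for ex in examples:
        if all(
            SequenceMatcher(None, ex["prompt"], k["prompt"]).ratio() < threshold
            for k in kept
        ):
            kept.append(ex)
    return kept
```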
## Huge shoutout to [Maxime Labonne](https://twitter.com/maximelabonne) for the training code that was used in this repo!
## License
This project is [MIT](https://github.com/mshumer/gpt-llm-trainer/blob/master/LICENSE) licensed.
## Contact
Matt Shumer - [@mattshumer_](https://twitter.com/mattshumer_)
Lastly, if you want to try something even cooler than this, sign up for [Personal Assistant](https://www.hyperwriteai.com/personal-assistant) (most of my time is spent on this). It's basically an AI that can operate your web browser to complete tasks for you.
", Assign "at most 3 tags" to the expected json: {"id":"8943","tags":[]} "only from the tags list I provide: [{"id":77,"name":"3d"},{"id":89,"name":"agent"},{"id":17,"name":"ai"},{"id":54,"name":"algorithm"},{"id":24,"name":"api"},{"id":44,"name":"authentication"},{"id":3,"name":"aws"},{"id":27,"name":"backend"},{"id":60,"name":"benchmark"},{"id":72,"name":"best-practices"},{"id":39,"name":"bitcoin"},{"id":37,"name":"blockchain"},{"id":1,"name":"blog"},{"id":45,"name":"bundler"},{"id":58,"name":"cache"},{"id":21,"name":"chat"},{"id":49,"name":"cicd"},{"id":4,"name":"cli"},{"id":64,"name":"cloud-native"},{"id":48,"name":"cms"},{"id":61,"name":"compiler"},{"id":68,"name":"containerization"},{"id":92,"name":"crm"},{"id":34,"name":"data"},{"id":47,"name":"database"},{"id":8,"name":"declarative-gui "},{"id":9,"name":"deploy-tool"},{"id":53,"name":"desktop-app"},{"id":6,"name":"dev-exp-lib"},{"id":59,"name":"dev-tool"},{"id":13,"name":"ecommerce"},{"id":26,"name":"editor"},{"id":66,"name":"emulator"},{"id":62,"name":"filesystem"},{"id":80,"name":"finance"},{"id":15,"name":"firmware"},{"id":73,"name":"for-fun"},{"id":2,"name":"framework"},{"id":11,"name":"frontend"},{"id":22,"name":"game"},{"id":81,"name":"game-engine "},{"id":23,"name":"graphql"},{"id":84,"name":"gui"},{"id":91,"name":"http"},{"id":5,"name":"http-client"},{"id":51,"name":"iac"},{"id":30,"name":"ide"},{"id":78,"name":"iot"},{"id":40,"name":"json"},{"id":83,"name":"julian"},{"id":38,"name":"k8s"},{"id":31,"name":"language"},{"id":10,"name":"learning-resource"},{"id":33,"name":"lib"},{"id":41,"name":"linter"},{"id":28,"name":"lms"},{"id":16,"name":"logging"},{"id":76,"name":"low-code"},{"id":90,"name":"message-queue"},{"id":42,"name":"mobile-app"},{"id":18,"name":"monitoring"},{"id":36,"name":"networking"},{"id":7,"name":"node-version"},{"id":55,"name":"nosql"},{"id":57,"name":"observability"},{"id":46,"name":"orm"},{"id":52,"name":"os"},{"id":14,"name":"parser"},{"id":74,"name":"react"},{"id":82,"name":"real-time"},{"id":56,"name":"robot"},{"id":65,"name":"runtime"},{"id":32,"name":"sdk"},{"id":71,"name":"search"},{"id":63,"name":"secrets"},{"id":25,"name":"security"},{"id":85,"name":"server"},{"id":86,"name":"serverless"},{"id":70,"name":"storage"},{"id":75,"name":"system-design"},{"id":79,"name":"terminal"},{"id":29,"name":"testing"},{"id":12,"name":"ui"},{"id":50,"name":"ux"},{"id":88,"name":"video"},{"id":20,"name":"web-app"},{"id":35,"name":"web-server"},{"id":43,"name":"webassembly"},{"id":69,"name":"workflow"},{"id":87,"name":"yaml"}]" returns me the "expected json"