# **MAmmoTH** 🦣
This repo contains the code, data, and models for "[MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning](https://arxiv.org/pdf/2309.05653.pdf)". Our paper was accepted to ICLR 2024 as a spotlight.
<div align="center">
🔥 🔥 🔥 Check out our <a href = "https://tiger-ai-lab.github.io/MAmmoTH/">[Project Page]</a> for more results and analysis!
</div>
<br>
<div align="center">
<img src="mammoth_github.png" width="80%" title="Introduction Figure">
</div>
### Datasets and Models
Our dataset and models are all available on Hugging Face.
🤗 [MathInstruct Dataset](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)
| | Base Model: Llama-2 | Base Model: Code Llama | Base Model: Mistral |
|----- |--------------------------------------------------------------- |--------------------------------------------------------------------------- |---------------------|
| 7B | 🦣 [MAmmoTH-7B](https://huggingface.co/TIGER-Lab/MAmmoTH-7B) | 🦣 [MAmmoTH-Coder-7B](https://huggingface.co/TIGER-Lab/MAmmoTH-Coder-7B) | 🦣 [MAmmoTH-7B-Mistral](https://huggingface.co/TIGER-Lab/MAmmoTH-7B-Mistral) |
| 13B | 🦣 [MAmmoTH-13B](https://huggingface.co/TIGER-Lab/MAmmoTH-13B) | 🦣 [MAmmoTH-Coder-13B](https://huggingface.co/TIGER-Lab/MAmmoTH-Coder-13B) | |
| 34B | - | 🦣 [MAmmoTH-Coder-34B](https://huggingface.co/TIGER-Lab/MAmmoTH-Coder-34B) | |
| 70B | 🦣 [MAmmoTH-70B](https://huggingface.co/TIGER-Lab/MAmmoTH-70B) | - | |
## **What's New?**
- [Dec. 4] We add the training and evaluation of MAmmoTH-7B-Mistral, which improves significantly over the LLaMA-2 version. We also have better support for vLLM.
- [Oct. 10] We update our decoding method to hybrid decoding: we first try PoT to generate a program and, if it is not executable, we regenerate a CoT solution as the final answer. This hybrid decoding method improves performance significantly. Check the Appendix of our updated paper for more details. A minimal sketch of the fallback logic follows below.
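For intuition, here is a hypothetical sketch of the PoT-then-CoT fallback; `generate` and `run_program` are illustrative names, not the repo's actual API:

```python
import contextlib
import io

def run_program(program: str):
    """Execute a generated PoT program and capture what it prints."""
    buffer = io.StringIO()
    try:
        with contextlib.redirect_stdout(buffer):
            exec(program, {})  # a real harness should sandbox and time-limit this
    except Exception:
        return None  # not executable: signal the CoT fallback
    answer = buffer.getvalue().strip()
    return answer or None

def hybrid_decode(generate, question: str) -> str:
    """Try PoT first; if the program fails, regenerate a CoT answer."""
    program = generate(question + " Let's write a program.")
    answer = run_program(program)
    if answer is not None:
        return answer
    return generate(question)  # plain CoT as the backup
```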
## Highlights
The results of our compact MAmmoTH-7B-Mistral are shown below:
| **Model** | **Decoding** | **GSM** | **MATH** | **MMLU-Math** |
|---------------------------|---------------|-----------|-----------|-----------|
| MAmmoTH-7B | **Hybrid** | 53.6 | 31.5 | 44.5 |
| MAmmoTH-Coder-7B | **Hybrid** | 59.4 | 33.4 | 47.2 |
| MetaMath-7B-Mistral | **CoT** | 77.7 | 28.2 | 49.3 |
| OpenChat-3.5-7B | **CoT** | 77.3 | 28.6 | 49.6 |
| ChatGLM-3-6B | **CoT** | 72.3 | 25.7 | 45.6 |
| DeepSeek-Coder-34B | **PoT** | 58.2 | 35.3 | 46.5 |
| Grok-1 | **CoT** | 62.9 | 15.7 | - |
| QWen-72B | **CoT** | 78.9 | 35.2 | - |
| DeepSeek-67B-Chat | **CoT** | **84.1** | 32.6 | - |
| MAmmoTH-7B-Mistral | **Hybrid** | 75.0 | **40.0** | **52.5** |
## **Table of Contents**
- [📌 Introduction](#introduction)
- [⚙️ Installation](#installation)
- [🛠️ Training and Inference](#training-and-inference)
- [📜 License](#license)
- [📖 Citation](#citation)
## **Introduction**
We introduce MAmmoTH š¦£, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, a meticulously curated instruction tuning dataset that is lightweight yet generalizable. MathInstruct is compiled from 13 math rationale datasets, six of which are newly curated by this work. It uniquely focuses on the hybrid use of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and ensures extensive coverage of diverse mathematical fields.
## **Installation**
Clone this repository and install the required packages:
```bash
git clone https://github.com/TIGER-AI-Lab/MAmmoTH.git
cd MAmmoTH
pip install -r requirements.txt
```
## **Training and Inference**
### **Data Loading**
Run the following code to load the MathInstruct dataset:
```python
from datasets import load_dataset
dataset = load_dataset("TIGER-Lab/MathInstruct")
```
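To sanity-check the download, you can print the split sizes and one raw example (each example pairs an instruction with a CoT or PoT rationale):

```python
print(dataset)               # splits and number of rows
print(dataset["train"][0])   # one raw instruction/response example
```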
### **Quick Start**
To play with our model, run:
```python
from transformers import pipeline

# Build a text-generation pipeline (renamed to avoid shadowing the import)
pipe = pipeline("text-generation", "TIGER-Lab/MAmmoTH-Coder-7B")

alpaca_template = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n### Instruction:\n{query}\n\n### Response:"

query = "Janet's ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers' market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers' market?"

# By default, MAmmoTH outputs a Chain-of-Thought (CoT) rationale.
rationale_prefix = ""
# To get a Program-of-Thought (PoT) rationale instead, uncomment the next line:
# rationale_prefix = " Let's write a program."

prompt = alpaca_template.format(query=query + rationale_prefix)
output = pipe(prompt)[0]["generated_text"]
print(output)
```
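When using the PoT prefix, the generated text is a Python program rather than a final number; you can execute it (for example, with the `run_program` sketch shown earlier) to extract the answer.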
### **Large-scale Evaluation**
To replicate the experimental results in our paper, run:
```bash
### For open-ended questions, the dataset should be one of
### ['gsm8k', 'svamp', 'math', 'numglue', 'deepmind', 'simuleq']
### We first try PoT and if the generated program is not executable, we shift to CoT
dataset='math'
python run_open.py \
--model "TIGER-Lab/MAmmoTH-7B-Mistral" \
--shots 0 \
--stem_flan_type "pot_prompt" \
--batch_size 8 \
--dataset $dataset \
--model_max_length 1500 \
--cot_backup \
--print \
--use_vllm
```
To run self-consistency with PoT/CoT using 10 sampled ensembles, run:
```bash
### For open-ended questions, the dataset should be one of
### ['gsm8k', 'svamp', 'math', 'numglue', 'deepmind', 'simuleq']
### We first try PoT and if the generated program is not executable, we shift to CoT
dataset='gsm8k'
python run_open_sc.py \
--model "TIGER-Lab/MAmmoTH-7B-Mistral" \
--shots 0 \
--stem_flan_type "pot_prompt" \
--batch_size 8 \
--dataset $dataset \
--model_max_length 1500 \
--num_samples 10 \
--print
```
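Under the hood, self-consistency aggregates the sampled answers by majority vote. A minimal illustrative sketch (not the exact logic of `run_open_sc.py`):

```python
from collections import Counter

def majority_vote(answers):
    """Return the most frequent final answer among the sampled ones."""
    counts = Counter(a for a in answers if a is not None)  # drop failed samples
    return counts.most_common(1)[0][0] if counts else None

# e.g. 10 samples whose extracted final answers mostly agree on "18"
print(majority_vote(["18", "18", "20", None, "18", "18", "17", "18", "18", "20"]))
```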
```bash
### For multiple-choice questions, the dataset should be one of
### ['aqua', 'sat', 'mmlu_mathematics'].
### We first try PoT and if the generated program is not executable, we shift to CoT
dataset='aqua'
python run_choice.py \
--model "TIGER-Lab/MAmmoTH-7B-Mistral" \
--shots 0 \
--stem_flan_type "pot_prompt" \
--batch_size 8 \
--dataset $dataset \
--cot_backup \
--print
```
### **Fine-tuning**
To train the 7B/13B model, run:
```bash
torchrun --nproc_per_node [$WORKER_GPU] \
--master_addr [$WORKER_0_HOST] \
--node_rank [$ROLE_INDEX] \
--master_port [$WORKER_0_PORT] \
--nnodes [$WORKER_NUM] \
train.py \
--model_name_or_path "codellama/CodeLlama-7b-hf" \
--data_path "TIGER-Lab/MathInstruct" \
--bf16 True \
--output_dir checkpoints/MAmmoTH-Coder-7B \
--num_train_epochs 3 \
--per_device_train_batch_size 2 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 8 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 2000 \
--save_total_limit 1 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--fsdp "full_shard auto_wrap" \
--fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
--tf32 True
```
To train the 34B/70B model, run:
```bash
torchrun --nproc_per_node [$WORKER_GPU] \
--master_addr [$WORKER_0_HOST] \
--node_rank [$ROLE_INDEX] \
--master_port [$WORKER_0_PORT] \
--nnodes [$WORKER_NUM] \
train.py \
--model_name_or_path "codellama/CodeLlama-34b-hf" \
--data_path "TIGER-Lab/MathInstruct" \
--bf16 True \
--output_dir checkpoints/MAmmoTH-Coder-34B \
--num_train_epochs 3 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 2 \
--evaluation_strategy "no" \
--save_strategy "epoch" \
--save_total_limit 1 \
--learning_rate 1e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--deepspeed "ds_config/ds_config_zero3.json" \
--tf32 True
```
## Prompt Format
If you want to do CoT:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Response:
```
If you want to do PoT:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{instruction} Let's write a program.
### Response:
```
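For convenience, here is a small hypothetical helper that composes both variants (`build_prompt` is not part of this repo):

```python
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "### Instruction:\n{instruction}\n\n### Response:"
)

def build_prompt(instruction: str, pot: bool = False) -> str:
    """Format a CoT prompt, or a PoT prompt when pot=True."""
    suffix = " Let's write a program." if pot else ""
    return ALPACA_TEMPLATE.format(instruction=instruction + suffix)
```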
## WebUI
We use [llama2-webui](https://github.com/liltom-eth/llama2-webui) as our UI backend. To launch the web UI for MAmmoTH, run:
```bash
pip install gradio
cd webui/llama2-webui
python3 mammoth.py --model_path your_model_path --backend_type transformers
```
## **License**
Please check out the license of each subset in our curated dataset MathInstruct.
| Dataset Name | License Type |
|-------------- |---------------- |
| GSM8K | MIT |
| GSM8K-RFT | Not listed |
| AQuA-RAT | Apache 2.0 |
| MATH | MIT |
| TheoremQA | MIT |
| Camel-Math | Attribution-NonCommercial 4.0 International |
| NumGLUE | Apache-2.0 |
| CrowdSourced (Lila) | Attribution 4.0 International |
| MathQA | Apache-2.0 |
| Our Curated | MIT |
## **Citation**
Please cite our paper if you use our data, model or code. Please also kindly cite the original dataset papers.
```
@article{yue2023mammoth,
title={MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning},
  author={Yue, Xiang and Qu, Xingwei and Zhang, Ge and Fu, Yao and Huang, Wenhao and Sun, Huan and Su, Yu and Chen, Wenhu},
journal={arXiv preprint arXiv:2309.05653},
year={2023}
}
```
", Assign "at most 3 tags" to the expected json: {"id":"2364","tags":[]} "only from the tags list I provide: [{"id":77,"name":"3d"},{"id":89,"name":"agent"},{"id":17,"name":"ai"},{"id":54,"name":"algorithm"},{"id":24,"name":"api"},{"id":44,"name":"authentication"},{"id":3,"name":"aws"},{"id":27,"name":"backend"},{"id":60,"name":"benchmark"},{"id":72,"name":"best-practices"},{"id":39,"name":"bitcoin"},{"id":37,"name":"blockchain"},{"id":1,"name":"blog"},{"id":45,"name":"bundler"},{"id":58,"name":"cache"},{"id":21,"name":"chat"},{"id":49,"name":"cicd"},{"id":4,"name":"cli"},{"id":64,"name":"cloud-native"},{"id":48,"name":"cms"},{"id":61,"name":"compiler"},{"id":68,"name":"containerization"},{"id":92,"name":"crm"},{"id":34,"name":"data"},{"id":47,"name":"database"},{"id":8,"name":"declarative-gui "},{"id":9,"name":"deploy-tool"},{"id":53,"name":"desktop-app"},{"id":6,"name":"dev-exp-lib"},{"id":59,"name":"dev-tool"},{"id":13,"name":"ecommerce"},{"id":26,"name":"editor"},{"id":66,"name":"emulator"},{"id":62,"name":"filesystem"},{"id":80,"name":"finance"},{"id":15,"name":"firmware"},{"id":73,"name":"for-fun"},{"id":2,"name":"framework"},{"id":11,"name":"frontend"},{"id":22,"name":"game"},{"id":81,"name":"game-engine "},{"id":23,"name":"graphql"},{"id":84,"name":"gui"},{"id":91,"name":"http"},{"id":5,"name":"http-client"},{"id":51,"name":"iac"},{"id":30,"name":"ide"},{"id":78,"name":"iot"},{"id":40,"name":"json"},{"id":83,"name":"julian"},{"id":38,"name":"k8s"},{"id":31,"name":"language"},{"id":10,"name":"learning-resource"},{"id":33,"name":"lib"},{"id":41,"name":"linter"},{"id":28,"name":"lms"},{"id":16,"name":"logging"},{"id":76,"name":"low-code"},{"id":90,"name":"message-queue"},{"id":42,"name":"mobile-app"},{"id":18,"name":"monitoring"},{"id":36,"name":"networking"},{"id":7,"name":"node-version"},{"id":55,"name":"nosql"},{"id":57,"name":"observability"},{"id":46,"name":"orm"},{"id":52,"name":"os"},{"id":14,"name":"parser"},{"id":74,"name":"react"},{"id":82,"name":"real-time"},{"id":56,"name":"robot"},{"id":65,"name":"runtime"},{"id":32,"name":"sdk"},{"id":71,"name":"search"},{"id":63,"name":"secrets"},{"id":25,"name":"security"},{"id":85,"name":"server"},{"id":86,"name":"serverless"},{"id":70,"name":"storage"},{"id":75,"name":"system-design"},{"id":79,"name":"terminal"},{"id":29,"name":"testing"},{"id":12,"name":"ui"},{"id":50,"name":"ux"},{"id":88,"name":"video"},{"id":20,"name":"web-app"},{"id":35,"name":"web-server"},{"id":43,"name":"webassembly"},{"id":69,"name":"workflow"},{"id":87,"name":"yaml"}]" returns me the "expected json"