# Can large language models provide useful feedback on research papers? A large-scale empirical analysis
[![Python 3.10](https://img.shields.io/badge/python-3.10-blue.svg)](https://www.python.org/downloads/release/python-3100/)
[![Black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/ambv/black)
[![arXiv](https://img.shields.io/badge/arXiv-2310.01783-b31b1b.svg)](https://arxiv.org/abs/2310.01783)
This repo provides the Python source code of our paper:
[Can large language models provide useful feedback on research papers? A large-scale empirical analysis.](https://arxiv.org/abs/2310.01783)
[[PDF]](https://arxiv.org/pdf/2310.01783.pdf) [[Twitter]](https://twitter.com/james_y_zou/status/1709608909395357946)
```bibtex
@inproceedings{LLM-Research-Feedback-2023,
  title={{Can large language models provide useful feedback on research papers? A large-scale empirical analysis}},
  author={Liang, Weixin and Zhang, Yuhui and Cao, Hancheng and Wang, Binglu and Ding, Daisy and Yang, Xinyu and Vodrahalli, Kailas and He, Siyu and Smith, Daniel and Yin, Yian and McFarland, Daniel and Zou, James},
  booktitle={arXiv preprint arXiv:2310.01783},
  year={2023}
}
```
## GPT Store Launch
🚀✨ Due to high demand, we have now launched an "AI Feedback on Research Manuscripts" GPT, available on the OpenAI GPT Store.
[![AI Feedback on Research Manuscripts](https://img.shields.io/badge/GPT%20Store-AI_Feedback_on_Research_Manuscripts-9cf)](https://chat.openai.com/g/g-rqNGmiRU9-ai-feedback-on-research-manuscripts)
## Abstract
Expert feedback lays the foundation of rigorous research. However, the rapid growth of scholarly production and intricate knowledge specialization challenge conventional scientific feedback mechanisms. High-quality peer reviews are increasingly difficult to obtain. Researchers who are more junior or from under-resourced settings have an especially hard time getting timely feedback. With the breakthrough of large language models (LLMs) such as GPT-4, there is growing interest in using LLMs to generate scientific feedback on research manuscripts. However, the utility of LLM-generated feedback has not been systematically studied. To address this gap, we created an automated pipeline using GPT-4 to provide comments on the full PDFs of scientific papers. We evaluated the quality of GPT-4's feedback through two large-scale studies. We first quantitatively compared GPT-4's generated feedback with human peer reviewer feedback in 15 _Nature_ family journals (3,096 papers in total) and the _ICLR_ machine learning conference (1,709 papers). The overlap in the points raised by GPT-4 and by human reviewers (average overlap 30.85% for _Nature_ journals, 39.23% for _ICLR_) is comparable to the overlap between two human reviewers (average overlap 28.58% for _Nature_ journals, 35.25% for _ICLR_). The overlap between GPT-4 and human reviewers is larger for the weaker papers (i.e., rejected _ICLR_ papers; average overlap 43.80%). We then conducted a prospective user study with 308 researchers from 110 US institutions in the fields of AI and computational biology to understand how researchers perceive feedback generated by our GPT-4 system on their own papers. Overall, more than half (57.4%) of the users found GPT-4-generated feedback helpful or very helpful, and 82.4% found it more beneficial than feedback from at least some human reviewers. While our findings show that LLM-generated feedback can help researchers, we also identify several limitations. For example, GPT-4 tends to focus on certain aspects of scientific feedback (e.g., "add experiments on more datasets") and often struggles to provide in-depth critique of method design. Together, our results suggest that LLM and human feedback can complement each other. While human expert review is and should continue to be the foundation of the rigorous scientific process, LLM feedback could benefit researchers, especially when timely expert feedback is unavailable and in the earlier stages of manuscript preparation, before peer review.
![1](https://github.com/Weixin-Liang/LLM-scientific-feedback/assets/32794044/8958eb56-a652-45bb-9347-e9578f432ae0)
![2](https://github.com/Weixin-Liang/LLM-scientific-feedback/assets/32794044/6228288b-9a54-4c90-8510-32bb823f1e05)
## Usage
To run the code, you need to: 1) create a PDF parsing server and run it in the background, 2) create the LLM feedback server, and 3) open the web browser and upload your paper.
### Create and Run PDF Parsing Server
⚠️⚠️⚠️ **The ScienceBeam PDF parser only supports x86 Linux. Please let us know if you find solutions for other operating systems!**
```bash
conda env create -f conda_environment.yml
conda activate ScienceBeam
python -m sciencebeam_parser.service.server --port=8080 # Make sure this is running in the background
```
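Before moving on, you can verify the parser is up by sending it a PDF. The snippet below is a minimal smoke test, not part of the repo's code; it assumes the server exposes ScienceBeam's `/api/convert` route on port 8080 and that a `paper.pdf` exists in the current directory (check the `sciencebeam_parser` documentation if your version uses a different endpoint):

```python
import requests

# Hypothetical smoke test: POST a local PDF to the ScienceBeam server
# and print the start of the parsed output. The /api/convert route is
# an assumption; consult the sciencebeam_parser docs if it differs.
with open("paper.pdf", "rb") as f:
    resp = requests.post(
        "http://localhost:8080/api/convert",
        files={"file": ("paper.pdf", f, "application/pdf")},
    )
resp.raise_for_status()
print(resp.text[:500])  # parsed document, e.g., TEI/JATS-style XML
```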
### Create and Run LLM Feedback Server
```bash
conda create -n llm python=3.10
conda activate llm
pip install -r requirements.txt
echo YOUR_OPENAI_API_KEY > key.txt # Replace YOUR_OPENAI_API_KEY with your OpenAI API key (starts with "sk-")
python main.py # Use this if you installed ScienceBeam on x86 Linux and want to generate feedback from the raw PDF file
python main_from_text.py # Use this on other operating systems, or to generate feedback from the parsed paper in text format
```
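Conceptually, the feedback server sends the parsed paper text to GPT-4 and returns the model's review-style comments. The sketch below is illustrative only, not the repo's actual code: the `get_feedback` helper, the prompt wording, and the model name are all assumptions, and it uses the current `openai` Python client, which may differ from the version pinned in `requirements.txt`. See `main.py` / `main_from_text.py` for the real pipeline.

```python
from openai import OpenAI

# Reuse the key written to key.txt above.
client = OpenAI(api_key=open("key.txt").read().strip())

def get_feedback(paper_text: str) -> str:
    """Illustrative helper (not the repo's function): ask GPT-4 for
    reviewer-style feedback on the parsed paper text."""
    response = client.chat.completions.create(
        model="gpt-4",  # model name is an assumption; see main.py for the one actually used
        messages=[
            {"role": "system", "content": "You are a scientific reviewer."},
            {"role": "user", "content": f"Provide constructive feedback on this paper:\n\n{paper_text}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(get_feedback(open("parsed_paper.txt").read()))
```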
### Open the Web Browser and Upload Your Paper
Open http://0.0.0.0:7799 in your browser and upload your paper. The feedback will be generated in around 120 seconds.
You should get the following output:
![demo](demo.png)
If you encounter any errors, please check the server log first and then open an issue.
", Assign "at most 3 tags" to the expected json: {"id":"3270","tags":[]} "only from the tags list I provide: [{"id":77,"name":"3d"},{"id":89,"name":"agent"},{"id":17,"name":"ai"},{"id":54,"name":"algorithm"},{"id":24,"name":"api"},{"id":44,"name":"authentication"},{"id":3,"name":"aws"},{"id":27,"name":"backend"},{"id":60,"name":"benchmark"},{"id":72,"name":"best-practices"},{"id":39,"name":"bitcoin"},{"id":37,"name":"blockchain"},{"id":1,"name":"blog"},{"id":45,"name":"bundler"},{"id":58,"name":"cache"},{"id":21,"name":"chat"},{"id":49,"name":"cicd"},{"id":4,"name":"cli"},{"id":64,"name":"cloud-native"},{"id":48,"name":"cms"},{"id":61,"name":"compiler"},{"id":68,"name":"containerization"},{"id":92,"name":"crm"},{"id":34,"name":"data"},{"id":47,"name":"database"},{"id":8,"name":"declarative-gui "},{"id":9,"name":"deploy-tool"},{"id":53,"name":"desktop-app"},{"id":6,"name":"dev-exp-lib"},{"id":59,"name":"dev-tool"},{"id":13,"name":"ecommerce"},{"id":26,"name":"editor"},{"id":66,"name":"emulator"},{"id":62,"name":"filesystem"},{"id":80,"name":"finance"},{"id":15,"name":"firmware"},{"id":73,"name":"for-fun"},{"id":2,"name":"framework"},{"id":11,"name":"frontend"},{"id":22,"name":"game"},{"id":81,"name":"game-engine "},{"id":23,"name":"graphql"},{"id":84,"name":"gui"},{"id":91,"name":"http"},{"id":5,"name":"http-client"},{"id":51,"name":"iac"},{"id":30,"name":"ide"},{"id":78,"name":"iot"},{"id":40,"name":"json"},{"id":83,"name":"julian"},{"id":38,"name":"k8s"},{"id":31,"name":"language"},{"id":10,"name":"learning-resource"},{"id":33,"name":"lib"},{"id":41,"name":"linter"},{"id":28,"name":"lms"},{"id":16,"name":"logging"},{"id":76,"name":"low-code"},{"id":90,"name":"message-queue"},{"id":42,"name":"mobile-app"},{"id":18,"name":"monitoring"},{"id":36,"name":"networking"},{"id":7,"name":"node-version"},{"id":55,"name":"nosql"},{"id":57,"name":"observability"},{"id":46,"name":"orm"},{"id":52,"name":"os"},{"id":14,"name":"parser"},{"id":74,"name":"react"},{"id":82,"name":"real-time"},{"id":56,"name":"robot"},{"id":65,"name":"runtime"},{"id":32,"name":"sdk"},{"id":71,"name":"search"},{"id":63,"name":"secrets"},{"id":25,"name":"security"},{"id":85,"name":"server"},{"id":86,"name":"serverless"},{"id":70,"name":"storage"},{"id":75,"name":"system-design"},{"id":79,"name":"terminal"},{"id":29,"name":"testing"},{"id":12,"name":"ui"},{"id":50,"name":"ux"},{"id":88,"name":"video"},{"id":20,"name":"web-app"},{"id":35,"name":"web-server"},{"id":43,"name":"webassembly"},{"id":69,"name":"workflow"},{"id":87,"name":"yaml"}]" returns me the "expected json"