# LLM Riddles

An open-source reproduction/demo of the LLM Riddles game.
<div align="center">
<br>
<a href="https://github.com/opendilab/LLMRiddles/blob/main/llmriddles/assets/banner.svg">
<img src="https://github.com/opendilab/LLMRiddles/blob/main/llmriddles/assets/banner.svg" width="1000" height="200" alt="Click to see the source">
</a>
<br>
</div>
<p align="center" class="trendshift">
<a href="https://trendshift.io/repositories/4774" target="_blank">
<img src="https://trendshift.io/api/badge/repositories/4774" alt="SocialSisterYi%2Fbilibili-API-collect | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/>
</a>
</p>
English | [简体中文](https://github.com/opendilab/LLMRiddles/blob/main/README_zh.md) | [日本語](https://github.com/opendilab/LLMRiddles/blob/main/README_ja.md)
## :thinking: What's This
Welcome to LLM Riddles! This is a game of wits and courage played against language models. In each level, you need to construct a question so that the model's answer meets the level's requirements. Use your ingenuity and any approach you can think of to get the model to output the required result.
## :space_invader: How to Play
We provide online versions that players can access and try out directly:
- [Hugging Face][ChatGPT + English(w/o key)](https://huggingface.co/spaces/OpenDILabCommunity/LLMRiddlesChatGPTEN)
- [Hugging Face][ChatGPT + Chinese(w/o key)](https://huggingface.co/spaces/OpenDILabCommunity/LLMRiddlesChatGPTCN)
- [Hugging Face][ChatGLM + English(w/ key)](https://huggingface.co/spaces/OpenDILabCommunity/LLMRiddlesChatGLMEN)
- [Hugging Face][ChatGLM + Chinese(w/ key)](https://huggingface.co/spaces/OpenDILabCommunity/LLMRiddlesChatGLMCN)
- [OpenXLab][ChatGPT + Chinese(w/o key)](https://openxlab.org.cn/apps/detail/OpenDILab/LLMRiddlesChatGPTCN)
- [OpenXLab][ChatGPT + English(w/o key)](https://openxlab.org.cn/apps/detail/OpenDILab/LLMRiddlesChatGPTEN)
- [OpenXLab][ChatGLM + Chinese(w/ key)](https://openxlab.org.cn/apps/detail/OpenDILab/LLMRiddlesChatGLMCN)
- [OpenXLab][ChatGLM + English(w/ key)](https://openxlab.org.cn/apps/detail/OpenDILab/LLMRiddlesChatGLMEN)
- [Private Server][ChatGPT + Chinese(w/ key)](http://llmriddles.opendilab.net/)
We also provide a technical blog with hints and solutions: [link](https://zhuanlan.zhihu.com/p/667801731)
Local deployment can be done in the following ways:
## Installation
### Use ChatGPT / ChatGLM API
```shell
pip3 install -r requirements.txt
```
### Deploy Mistral-7B-Instruct-v0.1 for local inference
```shell
pip3 install -r requirements-dev.txt
```
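If you plan to run Mistral-7B-Instruct-v0.1 locally, a quick sanity check like the one below can confirm that the model loads on your hardware. This is a minimal sketch, assuming the dev requirements provide `transformers`, `torch`, and `accelerate`; it is not the loading code used inside this repo.

```python
# Minimal local-inference sanity check (illustrative only; assumes
# transformers/torch/accelerate are installed via requirements-dev.txt).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Mistral-Instruct uses [INST] ... [/INST] chat formatting.
prompt = "[INST] Say hello in one short sentence. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```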
## Launch
### ChatGPT + Chinese
```shell
QUESTION_LANG=cn QUESTION_LLM='chatgpt' QUESTION_LLM_KEY=<your API key> python3 -u app.py
```
### ChatGPT + English
```shell
QUESTION_LANG=en QUESTION_LLM='chatgpt' QUESTION_LLM_KEY=<your API key> python3 -u app.py
```
### ChatGLM + Chinese
```shell
QUESTION_LANG=cn QUESTION_LLM='chatglm' QUESTION_LLM_KEY=<your API key> python3 -u app.py
```
### ChatGLM + English
```shell
QUESTION_LANG=en QUESTION_LLM='chatglm' QUESTION_LLM_KEY=<your API key> python3 -u app.py
```
### DeepSeek + Chinese
```shell
QUESTION_LANG=cn QUESTION_LLM='deepseek' QUESTION_LLM_KEY=<your API key> python3 -u app.py
```
### DeepSeek + English
```shell
QUESTION_LANG=en QUESTION_LLM='deepseek' QUESTION_LLM_KEY=<your API key> python3 -u app.py
```
### Mistral-7B-Instruct-v0.1 + English
```shell
QUESTION_LANG=en QUESTION_LLM='mistral-7b' python3 -u app.py
```
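All launch variants above are configured purely through environment variables. The sketch below shows how such variables are typically read; the defaults and validation are illustrative assumptions, not the actual logic in `app.py`.

```python
import os

# Variable names mirror the launch commands above; defaults are assumptions.
QUESTION_LANG = os.environ.get("QUESTION_LANG", "cn")      # 'cn' or 'en'
QUESTION_LLM = os.environ.get("QUESTION_LLM", "chatgpt")   # 'chatgpt' / 'chatglm' / 'deepseek' / 'mistral-7b'
QUESTION_LLM_KEY = os.environ.get("QUESTION_LLM_KEY")      # API key; unused for local mistral-7b

# API-backed models need a key; local Mistral inference does not.
if QUESTION_LLM != "mistral-7b" and not QUESTION_LLM_KEY:
    raise RuntimeError("QUESTION_LLM_KEY must be set when using an API-backed model.")
```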
## :technologist: Why Doing This
Our goal is to use this game to give participants a deeper understanding of the fascinating aspects of prompt engineering and natural language processing. It shows players how to cleverly construct prompts and use them to trigger surprising responses from AI systems, while also helping them better appreciate the power of deep learning and natural language processing.
## :raising_hand: How to Submit a Custom Level
If you have interesting questions or level ideas, you are welcome to submit them. You can [Initiate a Pull Request](https://github.com/opendilab/LLMRiddles/compare) to send them to us, and we will include approved submissions as new levels.
A level submission should include the following:
- A Pull Request title, for example: `feature(username): Chapter X Level Design`
- The ID you would like to be credited as
- Changes to the corresponding chapter's question files
- The corresponding modification to `__init__.py`
For a complete example, please refer to: [Submit your own level design](https://github.com/opendilab/LLMRiddles/pull/6)
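As a rough illustration, a new level usually boils down to a checker that inspects the player's question and the model's answer. The sketch below is hypothetical: the file layout, field names, and the way levels are registered are assumptions for illustration, so copy the structure from the linked example PR rather than from this snippet.

```python
# Hypothetical level file, e.g. llmriddles/questions/my_level.py
# (names and structure are illustrative; mirror an existing question file).

def _checker(question_text: str, answer_text: str, lang: str):
    # Example rule: the model's answer must contain the word "riddle"
    # and be at most 30 characters long.
    ok = "riddle" in answer_text.lower() and len(answer_text) <= 30
    reason = None if ok else "Answer must contain 'riddle' and be at most 30 characters."
    return ok, reason

MY_LEVEL = {
    "name": "Chapter X - My Level",
    "problem": "Get the model to reply with a short sentence containing 'riddle'.",
    "checker": _checker,
}
```

Remember to also reference the new module from the chapter's `__init__.py`, as listed in the submission checklist above.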
## :writing_hand: Roadmap
- [x] Support custom levels
- [x] Online trial link
- [x] Hugging Face Space link
- [x] Support Mistral-7B (English version)
- [x] Support ChatGLM (Chinese and English versions)
- [x] Solution blogs
- [ ] Support Baichuan2-7B (Chinese version)
- [ ] Support LLaMA2-7B (English version)
- [ ] LLM inference speed optimization
- [ ] More question levels
## :speech_balloon: Feedback and Contribution
- [Start an Issue](https://github.com/opendilab/CodeMorpheus/issues/new/choose) on GitHub
- Contact us by email ([email protected])
- Discuss on OpenDILab's WeChat group (i.e. add us on WeChat: ding314assist)
<img src="https://github.com/opendilab/LLMRiddles/blob/main/llmriddles/assets/wechat.jpeg" width="35%" />
## :star2: Special Thanks
- Thanks to [Haoqiang Fan](https://www.zhihu.com/people/haoqiang-fan) for his original idea and title, which provided inspiration and motivation for the development and expansion of this project.
- Thanks to [HuggingFace](https://huggingface.co) for supporting and assisting the game.
- Thanks to [ChatGLM](https://chatglm.cn) for supporting and assisting the game, especially sufficient inference token support.
- Thanks to [LLM Riddles contributors](https://github.com/opendilab/LLMRiddles/graphs/contributors) for their implementation and support.
## :label: License
All code within this repository is under [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).
<p align="right">(<a href="#top">back to top</a>)</p>
", Assign "at most 3 tags" to the expected json: {"id":"4774","tags":[]} "only from the tags list I provide: [{"id":77,"name":"3d"},{"id":89,"name":"agent"},{"id":17,"name":"ai"},{"id":54,"name":"algorithm"},{"id":24,"name":"api"},{"id":44,"name":"authentication"},{"id":3,"name":"aws"},{"id":27,"name":"backend"},{"id":60,"name":"benchmark"},{"id":72,"name":"best-practices"},{"id":39,"name":"bitcoin"},{"id":37,"name":"blockchain"},{"id":1,"name":"blog"},{"id":45,"name":"bundler"},{"id":58,"name":"cache"},{"id":21,"name":"chat"},{"id":49,"name":"cicd"},{"id":4,"name":"cli"},{"id":64,"name":"cloud-native"},{"id":48,"name":"cms"},{"id":61,"name":"compiler"},{"id":68,"name":"containerization"},{"id":92,"name":"crm"},{"id":34,"name":"data"},{"id":47,"name":"database"},{"id":8,"name":"declarative-gui "},{"id":9,"name":"deploy-tool"},{"id":53,"name":"desktop-app"},{"id":6,"name":"dev-exp-lib"},{"id":59,"name":"dev-tool"},{"id":13,"name":"ecommerce"},{"id":26,"name":"editor"},{"id":66,"name":"emulator"},{"id":62,"name":"filesystem"},{"id":80,"name":"finance"},{"id":15,"name":"firmware"},{"id":73,"name":"for-fun"},{"id":2,"name":"framework"},{"id":11,"name":"frontend"},{"id":22,"name":"game"},{"id":81,"name":"game-engine "},{"id":23,"name":"graphql"},{"id":84,"name":"gui"},{"id":91,"name":"http"},{"id":5,"name":"http-client"},{"id":51,"name":"iac"},{"id":30,"name":"ide"},{"id":78,"name":"iot"},{"id":40,"name":"json"},{"id":83,"name":"julian"},{"id":38,"name":"k8s"},{"id":31,"name":"language"},{"id":10,"name":"learning-resource"},{"id":33,"name":"lib"},{"id":41,"name":"linter"},{"id":28,"name":"lms"},{"id":16,"name":"logging"},{"id":76,"name":"low-code"},{"id":90,"name":"message-queue"},{"id":42,"name":"mobile-app"},{"id":18,"name":"monitoring"},{"id":36,"name":"networking"},{"id":7,"name":"node-version"},{"id":55,"name":"nosql"},{"id":57,"name":"observability"},{"id":46,"name":"orm"},{"id":52,"name":"os"},{"id":14,"name":"parser"},{"id":74,"name":"react"},{"id":82,"name":"real-time"},{"id":56,"name":"robot"},{"id":65,"name":"runtime"},{"id":32,"name":"sdk"},{"id":71,"name":"search"},{"id":63,"name":"secrets"},{"id":25,"name":"security"},{"id":85,"name":"server"},{"id":86,"name":"serverless"},{"id":70,"name":"storage"},{"id":75,"name":"system-design"},{"id":79,"name":"terminal"},{"id":29,"name":"testing"},{"id":12,"name":"ui"},{"id":50,"name":"ux"},{"id":88,"name":"video"},{"id":20,"name":"web-app"},{"id":35,"name":"web-server"},{"id":43,"name":"webassembly"},{"id":69,"name":"workflow"},{"id":87,"name":"yaml"}]" returns me the "expected json"