<div align="center">
<img src="src/assets/logo-cover.png" width=256></img>
<p><strong>Chat with ALL AI Bots Concurrently, Discover the Best</strong></p>
[Deutsch](README_DE-DE.md) | English | [Español](README_ES-ES.md) | [Français](README_FR-FR.md) | [Italian](README_IT-IT.md) | [日本語](README_JA-JP.md) | [한국어](README_KO-KR.md) | [Русский](README_RU-RU.md) | [Tiếng Việt](README_VI-VN.md) | [简体中文](README_ZH-CN.md)
[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/sunner/ChatALL)
</div>
## Screenshots
![Screenshot](screenshots/screenshot-1.png?raw=true)
## Features
AI bots based on Large Language Models (LLMs) are amazing. However, their behavior can be random, and different bots excel at different tasks. If you want the best experience, don't try them one by one. ChatALL (Chinese name: 齐叨) sends your prompts to several AI bots concurrently, helping you discover the best results. All you need to do is [download, install](https://github.com/sunner/ChatALL/releases) and ask.
### Is this you?
Typical users of ChatALL are:
- 🤠**Gurus of LLMs**, who want to find the best answers or creations from LLMs.
- 🤓**Researchers of LLMs**, who want to intuitively compare the strengths and weaknesses of various LLMs in different fields.
- 😎**Developers of LLM applications**, who want to quickly debug prompts and find the best-performing foundation models.
### Supported bots
| AI Bots | Web Access | API | Notes |
| ------------------------------------------------------------------------------ | ----------- | ----------- | ------------------------------------------- |
| [360 AI Brain](https://ai.360.cn/) | Yes | No API | |
| [Baidu ERNIE](https://yiyan.baidu.com/) | No | Yes | |
| [Character.AI](https://character.ai/) | Yes | No API | |
| [ChatGLM2 6B & 130B](https://chatglm.cn/) | Yes | No API | No Login required |
| [ChatGPT](https://chatgpt.com) | Yes | Yes | Web Browsing, Azure OpenAI service included |
| [Claude](https://www.anthropic.com/claude) | Yes | Yes | |
| [Code Llama](https://ai.meta.com/blog/code-llama-large-language-model-coding/) | Yes | No API | |
| [Cohere Aya 23](https://cohere.com/blog/aya23) | No | Yes | |
| [Cohere Command R Models](https://cohere.com/command) | No | Yes | |
| [Copilot](https://copilot.microsoft.com/) | Yes | No API | |
| [Dedao Learning Assistant](https://ai.dedao.cn/) | Coming soon | No API | |
| [Falcon 180B](https://huggingface.co/tiiuae/falcon-180B-chat) | Yes | No API | |
| [Gemini](https://gemini.google.com/) | Yes | Yes | |
| [Gemma 2B & 7B](https://blog.google/technology/developers/gemma-open-models/) | Yes | No API | |
| [Gradio](https://gradio.app/) | Yes | No API | For Hugging Face space/self-deployed models |
| [Groq Cloud](https://console.groq.com/docs/models) | No | Yes | |
| [HuggingChat](https://huggingface.co/chat/) | Yes | No API | |
| [iFLYTEK SPARK](http://xinghuo.xfyun.cn/) | Yes | Coming soon | |
| [Kimi](https://kimi.moonshot.cn/)                                              | Yes         | No API      |                                             |
| [Llama 2 13B & 70B](https://ai.meta.com/llama/) | Yes | No API | |
| [MOSS](https://moss.fastnlp.top/) | Yes | No API | |
| [Perplexity](https://www.perplexity.ai/) | Yes | No API | |
| [Phind](https://www.phind.com/) | Yes | No API | |
| [Pi](https://pi.ai) | Yes | No API | |
| [Poe](https://poe.com/) | Yes | Coming soon | |
| [SkyWork](https://neice.tiangong.cn/) | Yes | Coming soon | |
| [Tongyi Qianwen](http://tongyi.aliyun.com/) | Yes | Coming soon | |
| [Vicuna 13B & 33B](https://lmsys.org/blog/2023-03-30-vicuna/) | Yes | No API | No Login required |
| [WizardLM 70B](https://github.com/nlpxucan/WizardLM) | Yes | No API | |
| [YouChat](https://you.com/) | Yes | No API | |
| [You](https://you.com/) | Yes | No API | |
| [Zephyr](https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat) | Yes | No API | |
More are coming. Upvote your favorite bots in [these issues](https://github.com/sunner/ChatALL/labels/more%20LLMs).
### Other features
- Quick-prompt mode: send the next prompt without waiting for the previous request to complete
- Save chat history locally, protect your privacy
- Highlight the responses you like, delete the bad ones
- Enable/disable any bots at any time
- Switch between one, two, or three-column view
- Auto update to the latest version
- Dark mode (contributed by @tanchekwei)
- Shortcut keys. Press <kbd>Ctrl</kbd> + <kbd>/</kbd> to see all of them (contributed by @tanchekwei)
- Multiple chats (contributed by @tanchekwei)
- Proxy setting (contributed by @msaong)
- Prompt management (contributed by @tanchekwei)
- Supports multiple languages (Chinese, English, German, French, Russian, Vietnamese, Korean, Japanese, Spanish, Italian)
- Supports Windows, macOS and Linux
Planned features:
You are welcome to contribute to these features.
- [ ] Deploy front-end to GitHub Pages
## Privacy
All chat history, settings and login data are saved locally on your computer.
ChatALL collects anonymous usage data to help us improve the product, including:
- Which AI bots are prompted and how long the prompt is, but not the prompt content.
- How long the response is, and which responses are deleted or highlighted, but not the response content.
## Prerequisites
ChatALL is a client, not a proxy. Therefore, you must:
1. Have working accounts and/or API tokens for the bots.
2. Have reliable network connections to the bots.
## Download / Install
Download from https://github.com/sunner/ChatALL/releases
### On Windows
Just download the \*-win.exe file and proceed with the setup.
### On macOS
For Apple Silicon Macs (M1, M2 CPU), download the \*-mac-arm64.dmg file.
For other Macs, download the \*-mac-x64.dmg file.
If you are using [Homebrew](https://brew.sh/), you can also install it with:
```bash
brew install --cask chatall
```
### On Linux
Debian-based distributions: download the .deb file, double-click it, and install the software.
Arch-based distributions: ChatALL is available in the AUR as [chatall-bin](https://aur.archlinux.org/packages/chatall-bin). You can install it manually or with an AUR helper such as yay or paru.
Other distributions: download the .AppImage file, make it executable, and enjoy the click-to-run experience. You can also use [AppImageLauncher](https://github.com/TheAssassin/AppImageLauncher).
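For example, assuming the downloaded file is named `ChatALL-1.0.0.AppImage` (the exact filename varies by release), making it executable and launching it looks like this:

```shell
# Make the AppImage executable (replace the filename with the one you downloaded)
chmod +x ChatALL-1.0.0.AppImage

# Launch it
./ChatALL-1.0.0.AppImage
```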
## Troubleshooting
If you encounter any problems while using ChatALL, you can try the following methods to resolve them:
1. **Refresh** - press <kbd>Ctrl</kbd> + <kbd>R</kbd> or <kbd>⌘</kbd> + <kbd>R</kbd>.
2. **Restart** - exit ChatALL and run it again.
3. **Re-login** - click the settings button in the upper right corner, then click the corresponding login/logout link to log in to the website again.
4. **Create a new chat** - click the `New Chat` button and send the prompt again.
If none of the above methods work, you can try **resetting ChatALL**. Note that this will delete all your settings and message history.
You can reset ChatALL by deleting the following directories:
- Windows: `C:\Users\<user>\AppData\Roaming\chatall\`
- Linux: `/home/<user>/.config/chatall/`
- macOS: `/Users/<user>/Library/Application Support/chatall/`
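On Linux, for example, the reset can be done from a terminal; the same idea applies to the other platforms with their respective paths:

```shell
# Quit ChatALL first, then delete its data directory (Linux path shown).
# Warning: this permanently removes all settings and chat history.
rm -rf ~/.config/chatall/
```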
If the problem persists, please [submit an issue](https://github.com/sunner/ChatALL/issues).
## For developers
### Contribute a Bot
[The guide](https://github.com/sunner/ChatALL/wiki/%E5%A6%82%E4%BD%95%E6%B7%BB%E5%8A%A0%E4%B8%80%E4%B8%AA%E6%96%B0%E7%9A%84-AI-%E5%AF%B9%E8%AF%9D%E6%9C%BA%E5%99%A8%E4%BA%BA) may help you.
### Run
```bash
npm install
npm run electron:serve
```
### Build
Build for your current platform:
```bash
npm run electron:build
```
Build for all platforms:
```bash
npm run electron:build -- -wml --x64 --arm64
```
## Credits
### Contributors
<a href="https://github.com/sunner/ChatALL/graphs/contributors">
<img src="https://contrib.rocks/image?repo=sunner/ChatALL" />
</a>
### Others
- GPT-4 contributed much of the code
- ChatGPT, Copilot, and Google provided many solutions (ranked in order)
- Inspired by [ChatHub](https://github.com/chathub-dev/chathub). Respect!
## Sponsor
If you like this project, please consider supporting it:
[![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/F1F8KZJGJ)