# 🐣 GodMode - the smol AI Chat Browser

AI Chat Browser: fast, full webapp access to ChatGPT / Claude / Bard / Bing / Llama2! I use this 20 times a day.
This is a dedicated chat browser that only does one thing: help you quickly access **the full webapps** of ChatGPT, Claude 2, Perplexity, Bing and more **with a single keyboard shortcut (Cmd+Shift+G)**.
![image](https://github.com/smol-ai/GodMode/assets/6764957/90f4bab4-e406-4507-b37e-8c8d80d18f15)
([click for video](https://twitter.com/swyx/status/1692988634364871032))
Whatever you type at the bottom is entered into all **web apps** simultaneously. If you want to explore one conversation further than the others, you can do so independently, since each pane is just a webview.
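Conceptually, the broadcast works by injecting the same prompt into each provider's webview. A minimal sketch (the `buildInjection` helper, the `textarea` selector, and the `webviews` array are hypothetical; the real provider classes each handle their own input elements):

```javascript
// Hypothetical sketch of prompt broadcasting, not the actual GodMode code.
// Each provider pane is an Electron <webview>; we build a small JS snippet
// that fills that pane's input box with the prompt.
function buildInjection(prompt, selector = 'textarea') {
  // JSON.stringify safely escapes quotes, newlines, and backslashes so the
  // prompt can be embedded inside the injected script.
  return `document.querySelector(${JSON.stringify(selector)}).value = ${JSON.stringify(prompt)};`;
}

// Broadcast: run the same snippet in every enabled provider pane.
function broadcastPrompt(webviews, prompt) {
  for (const view of webviews) {
    // Electron webviews expose executeJavaScript() to run code in the page.
    view.executeJavaScript(buildInjection(prompt));
  }
}
```

Because each pane stays a full webview, anything you do after the broadcast (follow-up questions, file uploads) happens independently per provider.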
## Installation
**Install [here](https://github.com/smol-ai/GodMode/releases/latest)!** Then log in to Google on any one of the providers; refreshing logs you into most of the rest.
Google Bard has unusual auth requirements [we haven't figured out yet](https://github.com/smol-ai/GodMode/issues/201). Logging into Google via Anthropic Claude first seems to be the most reliable approach for now.
Download:
- Arm64 for Apple Silicon Macs, non-Arm64 (universal) for Intel Macs.
- We [just added Windows/Linux support](https://github.com/smol-ai/GodMode/pull/162), but it needs a lot of work. Help wanted!
You can also build from source, see instructions below.
## Mixture of Mixture of Experts
It's well discussed by now that [GPT4 is a mixture of experts model](https://twitter.com/swyx/status/1671272883379908608), which explains its great advancement over GPT3 while not sacrificing speed. It stands to reason that if you can run one chat and get results from all the top closed/open source models, you will get that much more diversity in results for what you seek. As a side benefit, we will add opt-in data submission soon so we can crowdsource statistics on win rates, niche advantages, and show them over time.
> “That's why it's always worth having a few philosophers around the place. One minute it's all is truth beauty and is beauty truth, and does a falling tree in the forest make a sound if there's no one there to hear it, and then just when you think they're going to start dribbling one of 'em says, incidentally, putting a thirty-foot parabolic reflector on a high place to shoot the rays of the sun at an enemy's ships would be a very interesting demonstration of optical principles.”
>
> — [Terry Pratchett, Small Gods](https://www.goodreads.com/work/quotes/1636629-small-gods)
## Oh so this is like nat.dev?
Yes and no:
1. SOTA functionality is often released without an API (eg: ChatGPT Code Interpreter, Bing Image Creator, Bard Multimodal Input, Claude Multifile Upload). **We insist on using webapps** so that you have full access to all functionality on launch day. We also made light/dark mode for each app, just for fun (`Cmd+Shift+L`; as of the August GodMode rewrite this is temporarily broken, fix pending).
2. This is a **secondary browser** that can be pulled up with a keyboard shortcut (`Cmd+Shift+G`, customizable). Feels a LOT faster than having it live in a browser window somewhere and is easy to pull up/dismiss during long generations.
3. Supports no-API models like Perplexity and Poe, and local models like LLaMa and Vicuna (via [OobaBooga](https://github.com/oobabooga/text-generation-webui)).
4. No paywall, build from source.
5. Fancy new features like PromptCritic (AI assisted prompt improvement)
## Supported LLM Providers
| Provider (default in **bold**) | Notes |
| --------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| **ChatGPT** | Defaults to "[GPT4.5](https://www.latent.space/p/code-interpreter#details)"! |
| **Claude 2** | Excellent, long context, multi document, fast model. |
| **Perplexity** | The login is finicky: log in to Google on any of the other chats, then reload (`Cmd+R`) and it will log in automatically. Hopefully they make it more intuitive/reliable in future. |
| **Bing** | Microsoft's best. [It's not the same as GPT-4!](https://twitter.com/jeremyphoward/status/1666593682676662272?s=20). We could use help normalizing its styling. |
| Bard | Google's best. [Bard's updates are... flaky](https://twitter.com/swyx/status/1678495067663925248) |
| Llama2 via Perplexity | Simple model host. Can run [the latest CodeLlama 34B model](https://twitter.com/swyx/status/1694870138984747449?s=20)! try it! |
| Llama2 via Lepton.ai | Simple model host. Is very [fast](https://twitter.com/rauchg/status/1692638409230094644) |
| Quora Poe | Great at answering general knowledge questions |
| Inflection Pi | Very unique long-memory clean conversation style |
| You.com Chat | great search + chat answers, one of the first |
| HuggingChat | Simple model host. Offers Llama2, OpenAssistant |
| Vercel Chat | Simple open source chat wrapper for GPT3 API |
| Local/GGML Models (via [OobaBooga](https://github.com/oobabooga/text-generation-webui)) | Requires Local Setup, see oobabooga docs |
| Phind | Developer focused chat with [finetuned CodeLlama](https://www.phind.com/blog/code-llama-beats-gpt4) |
| Stable Chat | Chat interface for [Stable Beluga](https://stability.ai/blog/stable-beluga-large-instruction-fine-tuned-models), an open LLM by Stability AI. |
| [OpenRouter](https://openrouter.ai) | Access GPT4, Claude, PaLM, and open source models |
| OpenAssistant | Coming Soon - [Submit a PR](https://github.com/smol-ai/GodMode/issues/37)! |
| Claude 1 | Requires Beta Access |
| ... What Else? | [Submit a New Issue](https://github.com/smol-ai/GodMode/issues)! |
## Features and Usage
- **Keyboard Shortcuts**:
- Use `Cmd+Shift+G` for quick open and `Cmd+Enter` to submit.
- Customize these shortcuts (thanks [@davej](https://github.com/smol-ai/GodMode/pull/85)!):
- Quick Open
- ![image](https://github.com/davej/smol-ai-menubar/assets/6764957/3a6d0a16-7f54-43e5-9060-ec7b2486d32d)
- Submit can be toggled to use `Enter` (faster for quick chat replies) vs `Cmd+Enter` (easier to enter multiline prompts)
- `Cmd+Shift+L` to toggle light/dark mode (not customizable for now)
- Remember you can customize further by building from source!
- **Pane Resizing and Rearranging**:
- Resize the panes by clicking and dragging.
- Use `Cmd+1/2/3` to pop out individual webviews
- Use `Cmd +/-` to zoom in/out globally
- open up the panel on the bottom right to reorder panes or reset them to default
  - `Cmd+P` to pin/unpin the window (Always on Top)
- **Model Toggle**:
- Enable/disable providers by accessing the context menu. The choice is saved for future sessions.
- Supported models: ChatGPT, Bing, Bard, Claude 1/2, and more (see Supported LLM Providers above)
- **Support for oobabooga/text-generation-webui**:
- Initial support for [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui) has been added.
- Users need to follow the process outlined in the text-generation-webui repository, including downloading models (e.g. [LLaMa-13B-GGML](https://huggingface.co/TheBloke/LLaMa-13B-GGML/blob/main/llama-13b.ggmlv3.q4_0.bin), or [GPT4-x-alpaca](https://www.youtube.com/watch?v=nVC9D9fRyNU)).
  - Run the model on `http://127.0.0.1:7860/` before opening it inside the smol GodMode browser.
- The UI only supports one kind of prompt template. Contributions are welcome to make the templating customizable (see the Oobabooga.js provider).
- **Starting New Conversations**:
- Use `Cmd+R` to start a new conversation with a simple window refresh.
- **Prompt Critic**: Uses Llama 2 to improve your prompting when you want it!
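For local models, the oobabooga setup above boils down to something like the following (paths, the model filename, and flags are illustrative; check the text-generation-webui docs for the current invocation):

```shell
# Example local-model setup for the Oobabooga provider (illustrative sketch;
# flag names follow the text-generation-webui docs at time of writing).
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r requirements.txt

# Download a GGML model into the models/ folder, e.g. LLaMa-13B-GGML.

# Start the web UI on the default port GodMode expects (7860):
python server.py --model llama-13b.ggmlv3.q4_0.bin
```

Once the web UI answers on `http://127.0.0.1:7860/`, enable the Oobabooga provider in GodMode and it will load that page like any other webapp.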
## video demo
- original version https://youtu.be/jrlxT1K4LEU
- Jun 1 version https://youtu.be/ThfFFgG-AzE
- https://twitter.com/swyx/status/1658403625717338112
- https://twitter.com/swyx/status/1663290955804360728?s=20
- July 11 version https://twitter.com/swyx/status/1678944036135260160
- Aug 19 godmode rewrite https://twitter.com/swyx/status/1692988634364871032
## Download and Setup
You can:
- download the precompiled binaries: https://github.com/smol-ai/GodMode/releases/latest (sometimes Apple/Windows marks these as untrusted/damaged; on macOS, move the app to Applications and right-click then Open to run it).
- for Macs, you can use the "-universal.dmg" versions and it will choose between Apple Silicon/Intel architectures. We recommend installing this, but just fyi:
- Apple Silicon M1/M2 macs use the "arm64" version
- Intel Macs use the ".dmg" versions with no "arm64"
- for Windows, use ".exe" version. It will be marked as untrusted for now as we haven't done Windows codesigning yet
- for Linux, use ".AppImage".
- for Arch Linux, there is a [third party](https://github.com/smol-ai/GodMode/issues/47) AUR package: aur.archlinux.org/packages/godmode
- Or run it from source (instructions below)
When you first run the app:
1. Log into your Google account (once you log into your Google account for ChatGPT, you'll also be logged in to Bard, Perplexity, Anthropic, etc.). Logging into Google via Anthropic Claude first seems to be the most reliable while we figure out Bard's auth.
2. For Bing, after you log in to your Microsoft account, you'll need to refresh to get into the Bing Chat screen. It's a little finicky on the first try, but it works.
Optional: You can have GodMode start up automatically on login - just go to Settings and toggle it on. Thanks [@leeknowlton](https://github.com/smol-ai/GodMode/pull/188)!
![image](https://github.com/smol-ai/GodMode/assets/6764957/99c3426f-d306-469c-98fb-88c80fb12a41)
## seeking contributors!
please see https://github.com/smol-ai/GodMode/blob/main/CONTRIBUTING.md
## build from source
If you want to build from source, you will need to clone the repo and open the project folder:
1. Clone the repository and navigate to the project folder:
```bash
git clone https://github.com/smol-ai/GodMode.git
cd GodMode
npm install --force
npm run start # to run in development, locally
```
2. Generate binaries:
```bash
npm run package # https://electron-react-boilerplate.js.org/docs/packaging
# ts-node scripts/clean.js dist clears the webpackPaths.distPath, webpackPaths.buildPath, webpackPaths.dllPath
# npm run build outputs to /release/app/dist/main
# electron-builder build --publish never builds and code signs the app.
# this is mostly for swyx to publish the official codesigned and notarized releases
```
The outputs will be located in the `/release/build` directory.
## Related project
I only later heard about https://github.com/sunner/ChatALL, which is cool, but I think defaulting to a menubar/webview experience is better: you get to use full features like Code Interpreter and Claude 2 file upload when they come out, without waiting for an API.