# vimGPT
Giving multimodal models an interface to play with.
https://github.com/ishan0102/vimGPT/assets/47067154/467be2ac-7e8d-47de-af89-5bb6f51c1c31
## Overview
Numerous startups and open-source projects are exploring LLMs as a way to browse the web. With this project, I was interested in seeing whether we could use only [GPT-4V](https://openai.com/research/gpt-4v-system-card)'s vision capabilities for web browsing.
The issue is that it's hard to determine what the model wants to click on without giving it the browser DOM as text. [Vimium](https://vimium.github.io/) is a Chrome extension that lets you navigate the web with only your keyboard. I thought it would be interesting to see if Vimium could give the model a way to interact with the web.
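As a rough illustration of the core loop this implies, the sketch below launches Chromium through Playwright with Vimium loaded as an unpacked extension, presses `f` to overlay link hints, and sends a screenshot to GPT-4V. The `./vimium` path, the example URL, and the prompt wording are assumptions for illustration, not code taken from `main.py`:
```
import base64

from openai import OpenAI
from playwright.sync_api import sync_playwright

VIMIUM_PATH = "./vimium"  # assumption: where ./setup.sh places the extension

with sync_playwright() as p:
    # Chrome extensions only load in a persistent, headed browser context.
    context = p.chromium.launch_persistent_context(
        "",
        headless=False,
        args=[
            f"--disable-extensions-except={VIMIUM_PATH}",
            f"--load-extension={VIMIUM_PATH}",
        ],
    )
    page = context.new_page()
    page.goto("https://news.ycombinator.com")
    page.keyboard.press("f")  # Vimium overlays keyboard hints on clickable elements
    b64 = base64.b64encode(page.screenshot()).decode()

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4-vision-preview",
        max_tokens=50,
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Which Vimium hint opens the top story?"},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    print(response.choices[0].message.content)
    context.close()
```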
## Usage
Install Python requirements:
```
pip install -r requirements.txt
```
Download Vimium locally (the extension has to be loaded manually when running Playwright):
```
./setup.sh
```
Run the script:
```
python main.py
```
## Voice Mode
Engage with the browser using voice commands: simply state your objective and watch vimGPT perform actions in real time.
```
python main.py --voice
```
## Ideas
Feel free to collaborate with me on this; I have a number of ideas:
- Use the [Assistant API](https://platform.openai.com/docs/assistants/overview) once it's released for automatic context retrieval. The Assistant API creates a thread that we can add messages to, keeping a history of actions, but it doesn't support the Vision API yet.
- Vimium fork for overlaying elements. A specialized version of Vimium that selectively overlays elements based on context could be useful, effectively pruning based on the user query. Might be worth testing whether different-sized boxes or colors help.
- Use higher-resolution images. Below a certain threshold the model wouldn't detect anything, so higher resolution should help, but it would require more tokens.
- Fine-tune [LLaVA](https://github.com/haotian-liu/LLaVA), [CogVLM](https://github.com/THUDM/CogVLM), or [Fuyu-8B](https://www.adept.ai/blog/fuyu-8b) to do this. Could be faster/cheaper. CogVLM can accurately specify pixel coordinates, which may be a good way to augment this.
- Use JSON mode once it's released for the Vision API. Currently the Vision API doesn't support JSON mode or function calling, so we have to rely on more primitive prompting methods.
- Have the Vision API return general instructions, formalized by another call to the JSON mode version of the API. This is a workaround for the JSON mode issue but requires another LLM call, which is slower/more expensive.
- Add speech-to-text with Whisper or another model to eliminate text input and make this more accessible.
- Make this work for your own browser instead of spinning up an artificial one. I want to be able to order food with my credit card.
- Provide the frames with and without Vimium enabled in case the model can't see what's under the yellow square.
- Pass the Chrome accessibility tree in as input in addition to the image. This provides a layout of interactive elements that can be mapped to the Vimium bindings (see the first sketch after this list).
- Have it write longer responses based on the context of the page, or return information to the user based on the query (visual question answering). Examples: replying to an email, summarizing a news article, etc.
- Make this a useful tool for blind people by adding voice mode and a shortcut that spins up an Assistant for a given page, so you can "speak to an agent" about a page's content in natural language.
- Use JavaScript to label DOM elements with colored boxes, similar to [this](https://x.com/DivGarg9/status/1659270501498523648?s=20).
- Build a graph-based retry mechanism that ensures we aren't falling into cycles, i.e. recursively clicking on the same element (see the second sketch after this list).
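A couple of these ideas are concrete enough to sketch. First, the accessibility tree: Playwright exposes a snapshot via `page.accessibility.snapshot()`, which could be flattened into a short text listing of interactive elements to pass alongside the screenshot. The set of roles kept here is an assumption about which elements matter:
```
from playwright.sync_api import Page

# Assumption: these are the roles worth surfacing to the model.
INTERACTIVE_ROLES = {"button", "link", "textbox", "checkbox", "combobox"}

def list_interactive_elements(page: Page) -> list[str]:
    """Flatten the accessibility snapshot into 'role: name' strings."""
    out: list[str] = []

    def walk(node):
        if not node:
            return
        if node.get("role") in INTERACTIVE_ROLES:
            out.append(f'{node["role"]}: {node.get("name", "")}')
        for child in node.get("children", []):
            walk(child)

    walk(page.accessibility.snapshot())
    return out
```
Second, a minimal cycle guard for the retry mechanism: count how many times the same action has been attempted on the same page and cut it off past a threshold. The names are illustrative, not from the vimGPT codebase:
```
from collections import Counter

class CycleGuard:
    """Refuse an action once it has been retried too often on the same page."""

    def __init__(self, max_repeats: int = 2):
        self.attempts: Counter = Counter()
        self.max_repeats = max_repeats

    def allow(self, url: str, action: str) -> bool:
        self.attempts[(url, action)] += 1
        return self.attempts[(url, action)] <= self.max_repeats
```
In the agent loop this would wrap each click: if `allow()` returns False, re-prompt the model for a different action instead of repeating the same one.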
## Shoutouts
- HackerNews: https://news.ycombinator.com/item?id=38200308
- VisualWebArena - Evaluating Multimodal Agents on Realistic Visual Web Tasks (page 9): https://arxiv.org/abs/2401.13649
- WIRED: https://www.wired.com/story/fast-forward-tested-next-gen-ai-assistant/
## References
- https://github.com/Globe-Engineer/globot
- https://github.com/nat/natbot
", Assign "at most 3 tags" to the expected json: {"id":"4811","tags":[]} "only from the tags list I provide: [{"id":39,"name":"3d-generation","display_name":"3D generation","slug":"3d-generation"},{"id":3,"name":"ai-agent","display_name":"AI agent","slug":"ai-agent"},{"id":8,"name":"ai-coding","display_name":"AI coding assistant","slug":"ai-coding"},{"id":5,"name":"ai-image","display_name":"AI image generation","slug":"ai-image"},{"id":9,"name":"ai-infrastructure","display_name":"AI infrastructure","slug":"ai-infrastructure"},{"id":10,"name":"ai-memory","display_name":"AI memory","slug":"ai-memory"},{"id":11,"name":"ai-skills","display_name":"AI skills","slug":"ai-skills"},{"id":12,"name":"ai-translation","display_name":"AI translation","slug":"ai-translation"},{"id":6,"name":"ai-video","display_name":"AI video generation","slug":"ai-video"},{"id":4,"name":"ai-voice","display_name":"AI voice","slug":"ai-voice"},{"id":7,"name":"ai-workflow","display_name":"AI workflow","slug":"ai-workflow"},{"id":22,"name":"audio-processing","display_name":"Audio processing","slug":"audio-processing"},{"id":29,"name":"authentication","display_name":"Authentication","slug":"authentication"},{"id":51,"name":"bundler","display_name":"Bundler","slug":"bundler"},{"id":41,"name":"chatbot","display_name":"Chatbot","slug":"chatbot"},{"id":27,"name":"cloud-native","display_name":"Cloud native","slug":"cloud-native"},{"id":1,"name":"computer-vision","display_name":"Computer vision","slug":"computer-vision"},{"id":37,"name":"crypto-trading","display_name":"Crypto trading","slug":"crypto-trading"},{"id":57,"name":"curated-list","display_name":"Curated list","slug":"curated-list"},{"id":54,"name":"data-streaming","display_name":"Data streaming","slug":"data-streaming"},{"id":35,"name":"data-visualization","display_name":"Data visualization","slug":"data-visualization"},{"id":16,"name":"database-backup","display_name":"Database backup","slug":"database-backup"},{"id":49,"name":"design-system","display_name":"Design system","slug":"design-system"},{"id":38,"name":"digital-human","display_name":"Digital human","slug":"digital-human"},{"id":34,"name":"document-processing","display_name":"Document processing","slug":"document-processing"},{"id":44,"name":"ecommerce","display_name":"E-commerce","slug":"ecommerce"},{"id":45,"name":"emulator","display_name":"Emulator","slug":"emulator"},{"id":46,"name":"file-management","display_name":"File management","slug":"file-management"},{"id":32,"name":"fintech","display_name":"Fintech","slug":"fintech"},{"id":31,"name":"game-development","display_name":"Game development","slug":"game-development"},{"id":24,"name":"headless-browser","display_name":"Headless browser","slug":"headless-browser"},{"id":52,"name":"headless-cms","display_name":"Headless CMS","slug":"headless-cms"},{"id":36,"name":"home-automation","display_name":"Home automation","slug":"home-automation"},{"id":20,"name":"image-editing","display_name":"Image editing","slug":"image-editing"},{"id":28,"name":"iot","display_name":"IoT","slug":"iot"},{"id":13,"name":"local-llm","display_name":"Local 
LLM","slug":"local-llm"},{"id":17,"name":"mcp","display_name":"MCP","slug":"mcp"},{"id":47,"name":"monitoring","display_name":"Monitoring","slug":"monitoring"},{"id":2,"name":"nlp","display_name":"NLP","slug":"nlp"},{"id":26,"name":"observability","display_name":"Observability","slug":"observability"},{"id":40,"name":"pentesting","display_name":"Pentesting","slug":"pentesting"},{"id":48,"name":"programming-examples","display_name":"Programming examples","slug":"programming-examples"},{"id":42,"name":"proxy","display_name":"Proxy","slug":"proxy"},{"id":14,"name":"rag","display_name":"RAG","slug":"rag"},{"id":56,"name":"resume-building","display_name":"Resume building","slug":"resume-building"},{"id":33,"name":"robotics","display_name":"Robotics","slug":"robotics"},{"id":30,"name":"search","display_name":"Search","slug":"search"},{"id":43,"name":"self-hosted","display_name":"Self-hosted","slug":"self-hosted"},{"id":50,"name":"static-analysis","display_name":"Static analysis","slug":"static-analysis"},{"id":18,"name":"synthetic-data","display_name":"Synthetic data","slug":"synthetic-data"},{"id":19,"name":"text-to-speech","display_name":"Text to speech","slug":"text-to-speech"},{"id":53,"name":"ui-components","display_name":"UI components","slug":"ui-components"},{"id":15,"name":"vector-database","display_name":"Vector database","slug":"vector-database"},{"id":21,"name":"video-editing","display_name":"Video editing","slug":"video-editing"},{"id":25,"name":"web-scraping","display_name":"Web scraping","slug":"web-scraping"},{"id":55,"name":"webassembly","display_name":"WebAssembly","slug":"webassembly"},{"id":23,"name":"workflow-automation","display_name":"Workflow automation","slug":"workflow-automation"}]" returns me the "expected json"