Dockerized Computer Use Agents with Production-Ready APIs - MCP Client for LangChain - GCA
<p align="center">
<a href="#">
<img src="https://github.com/user-attachments/assets/27778034-29f5-4a71-b696-4e3f70760b26" >
</a>
</p>
## What is GCA?
GCA is an open-source framework for building vertical AI agents. It supports many LLMs and newer technologies such as MCP, so you can build your own army of vertical AI agents in a few commands through its structured API (see the installation and usage sections below).
<p>
<a href="https://www.producthunt.com/posts/gpt-computer-assistant?embed=true&utm_source=badge-top-post-badge&utm_medium=badge&utm_souce=badge-gpt-computer-assistant" target="_blank"><img src="https://api.producthunt.com/widgets/embed-image/v1/top-post-badge.svg?post_id=465468&theme=dark&period=daily" alt="GPT Computer Assistant - Create intelligence for your products | Product Hunt" width="200" /></a>
.
<a href="https://discord.gg/qApFmWMt8x"><img alt="Static Badge" src="https://img.shields.io/badge/Discord-Join?style=social&logo=discord" width=120></a>
.
<a href="https://x.com/GPTCompAsst"><img alt="Static Badge" src="https://img.shields.io/badge/X_App-Join?style=social&logo=x" width=100></a>
</p>
<p>
<a href="https://www.python.org/">
<img src="https://img.shields.io/badge/Made%20with-Python-1f425f.svg" alt="Made_with_python">
</a>
.
<img src="https://static.pepy.tech/personalized-badge/gpt-computer-assistant?period=total&units=international_system&left_color=grey&right_color=blue&left_text=PyPI%20Downloads" alt="pypi_downloads">
</p>
<p align="center">
<br>
<br>
</p>
# Playground of GCA | NEW
With [playground.gca.dev](https://playground.gca.dev/) you can test GCA and create your own strategies for building a Vertical AI Agent.
- Playground sessions are limited to <b>10 minutes</b>.
<a href="https://playground.gca.dev/">
<img src="https://github.com/user-attachments/assets/125a1a15-0fee-4c7e-bfc5-1a23ef83c92d" alt="Playground" width=1000>
</a>
<p align="center">
<br>
<br>
</p>
# GPT Computer Assistant (GCA)
GCA is an AI agent framework designed to enable computer use across Windows, macOS, and Ubuntu. GCA lets you shift repetitive, small-logic tasks from human workers to an AI, and we believe this holds really important potential. Whether you’re a developer, analyst, or IT professional, GCA can empower you to accomplish more in less time.
Imagine this:
- <b>Extract the tech stacks of xxx Company</b> | Sales Development Representative
- <b>Identify relevant tables for analysis for xxx</b> | Data Analyst
- <b>Check the logs to find the root cause of this incident</b> | Technical Support Engineer
- <b>Configure Cloudflare security settings</b> | Security Specialist
These examples show how GCA realizes the concept of <b>Vertical AI Agents</b>: solutions that not only replicate human tasks, but in many cases go beyond human speed.
<p align="center">
<br>
<br>
</p>
## How Does GCA Work?
GCA is a Python-based project that runs on multiple operating systems, including Windows, macOS, and Ubuntu. It integrates external concepts, like the Model Context Protocol (MCP), along with its own modules, to interact with and control a computer efficiently. The system performs both routine and advanced tasks by mimicking human-like actions and applying computational precision.
### 1. Human-like Actions:
GCA can replicate common user actions, such as the following (a minimal sketch appears after this list):
- <b>Clicking</b>: Interact with buttons or other UI elements.
- <b>Reading</b>: Recognize and interpret text on the screen.
- <b>Scrolling</b>: Navigate through documents or web pages.
- <b>Typing</b>: Enter text into forms or other input fields.
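A single high-level request can drive several of these actions in sequence. The sketch below uses the cloud API documented later in this README; the prompt and the output-format hint are illustrative, not official examples.
```python
from gpt_computer_assistant import cloud

# Start a cloud instance (see "Using GCA.dev Cloud" below)
instance = cloud.instance()

# One natural-language request can combine clicking, reading,
# scrolling, and typing; this prompt is only an illustration.
result = instance.request(
    "Open the browser, type 'Model Context Protocol' into the search bar, "
    "scroll through the results and read back the title of the first one",
    "just the title text",
)
print(result)

instance.close()
```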
### 2. Advanced Capabilities:
Through MCP and GCA’s own modules, it achieves tasks that go beyond standard human interaction (see the sketch after this list), such as:
- <b>Updating dependencies</b> of a project in seconds.
- <b>Analyzing entire database</b> tables to locate specific data almost instantly.
- <b>Automating cloud security</b> configurations with minimal input.
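As a sketch, one of these advanced tasks could be phrased through the same `request` API against a self-hosted instance (see the Docker section below). The repository path is a hypothetical placeholder, and the model and API keys are assumed to already be configured as shown in that section.
```python
from gpt_computer_assistant import docker

# Connect to a self-hosted GCA server (see "Self-Hosted GCA Server" below)
instance = docker.instance("http://localhost:7541/")

# Hypothetical project path, used only for illustration; the model and
# provider API keys are assumed to be saved already.
result = instance.request(
    "Update the dependencies of the project in /workspace/example-project",
    "list the packages that were upgraded",
)
print(result)

instance.close()
```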
<p align="center">
<br>
<br>
<br>
</p>
## Prerequisites
- Python 3.10
<p align="center">
<br>
<br>
</p>
## Using GCA.dev Cloud
<b>Installation</b>
```console
pip install gpt-computer-assistant
```
Single Instance:
```python
from gpt_computer_assistant import cloud
# Starting instance
instance = cloud.instance()
# Show Screenshot
instance.current_screenshot()
# Asking and getting result
result = instance.request("Extract the tech stacks of gpt-computer-assistant Company", "i want a list")
print(result)
instance.close()
```
<img src="https://github.com/user-attachments/assets/3fd70530-6b86-43b4-9025-dce7853e4a38" alt="Cloud" width=1000>
<p align="center">
<br>
<br>
<br>
</p>
## Self-Hosted GCA Server
### Docker
**Pulling Image**
* If you are using an ARM computer, such as an Apple Silicon (M-series) MacBook, use the *ARM64* tag instead of *AMD64*.
```console
docker pull upsonic/gca_docker_ubuntu:dev0-AMD64
```
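On Apple Silicon the pull presumably becomes the following, assuming the ARM64 image follows the same tag naming; the `docker run` command below would then use the same tag:
```console
docker pull upsonic/gca_docker_ubuntu:dev0-ARM64
```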
**Starting container**
```console
docker run -d -p 5901:5901 -p 7541:7541 upsonic/gca_docker_ubuntu:dev0-AMD64
```
**LLM Settings & Usage**
```python
from gpt_computer_assistant import docker
# Starting instance
instance = docker.instance("http://localhost:7541/")
# Selecting the model and saving the OpenAI and Anthropic API keys
instance.client.save_models("gpt-4o")
instance.client.save_openai_api_key("sk-**")
instance.client.save_anthropic_api_key("sk-**")
# Asking and getting result
result = instance.request("Extract the tech stacks of gpt-computer-assistant Company", "i want a list")
print(result)
instance.close()
```
<p align="center">
<br>
<br>
</p>
### Local
<b>Installation</b>
```console
pip install 'gpt-computer-assistant[base]'
pip install 'gpt-computer-assistant[api]'
```
<b>LLM Settings & Usage</b>
```python
from gpt_computer_assistant import local
# Starting instance
instance = local.instance()
# Selecting the model and saving the OpenAI and Anthropic API keys
instance.client.save_models("gpt-4o")
instance.client.save_openai_api_key("sk-**")
instance.client.save_anthropic_api_key("sk-**")
# Asking and getting result
result = instance.request("Extract the tech stacks of gpt-computer-assistant Company", "i want a list")
print(result)
instance.close()
```
<img width="1000" src="https://github.com/user-attachments/assets/327cdceb-49e7-4a8a-a724-e386553f43d8">
<p align="center">
<br>
<br>
<br>
</p>
## Adding a Custom MCP Server to GCA
```python
instance.client.add_mcp_server("websearch", "npx", ["-y", "@mzxrai/mcp-webresearch"])
```
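The call above appears to take the server name, the launch command, and its arguments; here the community `@mzxrai/mcp-webresearch` server is registered under the name `websearch`. Once registered, later requests on the same `instance` can presumably draw on that capability. A sketch with an illustrative prompt:
```python
# With the "websearch" MCP server registered above, a request can lean on
# web research; the prompt and output-format hint are only illustrations.
result = instance.request(
    "Search the web for the latest LangChain release notes and summarize them",
    "a short bullet list",
)
print(result)
```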
## Roadmap
| Feature | Status | Target Release |
|---------------------------------|--------------|----------------|
| Clear Chat History | Completed | Q2 2024 |
| Long Audio Support (Split 20MB) | Completed | Q2 2024 |
| Text Inputs | Completed | Q2 2024 |
| Just Text Mode (Mute Speech) | Completed | Q2 2024 |
| Added profiles (Different Chats) | Completed | Q2 2024 |
| More Feedback About Assistant Status | Completed | Q2 2024 |
| Local Model Vision and Text (With Ollama, and vision models) | Completed | Q2 2024 |
| **Our Customizable Agent Infrastructure** | Completed | Q2 2024 |
| Supporting Groq Models | Completed | Q2 2024 |
| **Adding Custom Tools** | Completed | Q2 2024 |
| Click on something on the screen (text and icon) | Completed | Q2 2024 |
| New UI | Completed | Q2 2024 |
| Native Applications, exe, dmg | Completed | Q3 2024 |
| **Collaborative Speaking with Different Voice Models on long responses** | Completed | Q2 2024 |
| **Auto Stop Recording when you complete talking** | Completed | Q2 2024 |
| **Wakeup Word** | Completed | Q2 2024 |
| **Continuous Conversations** | Completed | Q2 2024 |
| **Adding more capabilities on device** | Completed | Q2 2024 |
| **Local TTS** | Completed | Q3 2024 |
| **Local STT** | Completed | Q3 2024 |
| Tray Menu | Completed | Q3 2024 |
| New Line (Shift + Enter) | Completed | Q4 2024 |
| Copy Pasting Text Compatibility | Completed | Q4 2024 |
| **Global Hotkey** | On the way | Q3 2024 |
| DeepFace Integration (Facial Recognition) | Planned | Q3 2024 |
## Capabilities
At this time we provide many infrastructure elements. Our aim is to offer everything that is already available in the ChatGPT app.
| Capability | Status |
|------------------------------------|----------------------------------|
| **Local LLM with Vision (Ollama)** | OK |
| Local text-to-speech | OK |
| Local speech-to-text | OK |
| **Screen Read** | OK |
| **Click on a Text or Icon on the screen** | OK |
| **Move to a Text or Icon on the screen** | OK |
| **Typing Something** | OK |
| **Pressing Any Key** | OK |
| **Scrolling** | OK |
| **Microphone** | OK |
| **System Audio** | OK |
| **Memory** | OK |
| **Open and Close App** | OK |
| **Open a URL** | OK |
| **Clipboard** | OK |
| **Search Engines** | OK |
| **Writing and running Python** | OK |
| **Writing and running SH** | OK |
| **Using your Telegram Account** | OK |
| **Knowledge Management** | OK |
| **[Add more tools](https://github.com/onuratakan/gpt-computer-assistant/blob/master/gpt_computer_assistant/standard_tools.py)** | ? |
### Predefined Agents
If you enable them, your assistant will work with these teams:
| Team Name | Status |
|------------------------------------|----------------------------------|
| **search_on_internet_and_report_team** | OK |
| **generate_code_with_aim_team_** | OK |
| **[Add your own one](https://github.com/onuratakan/gpt-computer-assistant/blob/master/gpt_computer_assistant/teams.py)** | ? |
<a href="#">
<img src="https://github.com/onuratakan/gpt-computer-assistant/assets/41792982/ba590bf8-6059-4cb6-8c4e-6d105ce4edd2" alt="Logo" >
</a>
## Contributors
<a href="https://github.com/onuratakan/gpt-computer-assistant/graphs/contributors">
<img src="https://contrib.rocks/image?repo=onuratakan/gpt-computer-assistant" />
</a>
", Assign "at most 3 tags" to the expected json: {"id":"12515","tags":[]} "only from the tags list I provide: [{"id":77,"name":"3d"},{"id":89,"name":"agent"},{"id":17,"name":"ai"},{"id":54,"name":"algorithm"},{"id":24,"name":"api"},{"id":44,"name":"authentication"},{"id":3,"name":"aws"},{"id":27,"name":"backend"},{"id":60,"name":"benchmark"},{"id":72,"name":"best-practices"},{"id":39,"name":"bitcoin"},{"id":37,"name":"blockchain"},{"id":1,"name":"blog"},{"id":45,"name":"bundler"},{"id":58,"name":"cache"},{"id":21,"name":"chat"},{"id":49,"name":"cicd"},{"id":4,"name":"cli"},{"id":64,"name":"cloud-native"},{"id":48,"name":"cms"},{"id":61,"name":"compiler"},{"id":68,"name":"containerization"},{"id":92,"name":"crm"},{"id":34,"name":"data"},{"id":47,"name":"database"},{"id":8,"name":"declarative-gui "},{"id":9,"name":"deploy-tool"},{"id":53,"name":"desktop-app"},{"id":6,"name":"dev-exp-lib"},{"id":59,"name":"dev-tool"},{"id":13,"name":"ecommerce"},{"id":26,"name":"editor"},{"id":66,"name":"emulator"},{"id":62,"name":"filesystem"},{"id":80,"name":"finance"},{"id":15,"name":"firmware"},{"id":73,"name":"for-fun"},{"id":2,"name":"framework"},{"id":11,"name":"frontend"},{"id":22,"name":"game"},{"id":81,"name":"game-engine "},{"id":23,"name":"graphql"},{"id":84,"name":"gui"},{"id":91,"name":"http"},{"id":5,"name":"http-client"},{"id":51,"name":"iac"},{"id":30,"name":"ide"},{"id":78,"name":"iot"},{"id":40,"name":"json"},{"id":83,"name":"julian"},{"id":38,"name":"k8s"},{"id":31,"name":"language"},{"id":10,"name":"learning-resource"},{"id":33,"name":"lib"},{"id":41,"name":"linter"},{"id":28,"name":"lms"},{"id":16,"name":"logging"},{"id":76,"name":"low-code"},{"id":90,"name":"message-queue"},{"id":42,"name":"mobile-app"},{"id":18,"name":"monitoring"},{"id":36,"name":"networking"},{"id":7,"name":"node-version"},{"id":55,"name":"nosql"},{"id":57,"name":"observability"},{"id":46,"name":"orm"},{"id":52,"name":"os"},{"id":14,"name":"parser"},{"id":74,"name":"react"},{"id":82,"name":"real-time"},{"id":56,"name":"robot"},{"id":65,"name":"runtime"},{"id":32,"name":"sdk"},{"id":71,"name":"search"},{"id":63,"name":"secrets"},{"id":25,"name":"security"},{"id":85,"name":"server"},{"id":86,"name":"serverless"},{"id":70,"name":"storage"},{"id":75,"name":"system-design"},{"id":79,"name":"terminal"},{"id":29,"name":"testing"},{"id":12,"name":"ui"},{"id":50,"name":"ux"},{"id":88,"name":"video"},{"id":20,"name":"web-app"},{"id":35,"name":"web-server"},{"id":43,"name":"webassembly"},{"id":69,"name":"workflow"},{"id":87,"name":"yaml"}]" returns me the "expected json"