# HumanifyJS
> Deobfuscate Javascript code using LLMs ("AI")
This tool uses large language models (like ChatGPT & llama) and other tools to
deobfuscate, unminify, transpile, decompile and unpack Javascript code. Note
that LLMs don't perform any structural changes; they only provide hints to
rename variables and functions. The heavy lifting is done by Babel on the AST
level to ensure the code stays 1-1 equivalent.
### Version 2 is out!
v2 highlights compared to v1:
* Python not required anymore!
* A lot of tests, the codebase is actually maintainable now
* Renewed CLI tool `humanify` installable via npm
### ➡️ Check out the [introduction blog post][blogpost] for an in-depth explanation!
[blogpost]: https://thejunkland.com/blog/using-llms-to-reverse-javascript-minification
## Example
Given the following minified code:
```javascript
function a(e,t){var n=[];var r=e.length;var i=0;for(;i<r;i+=t){if(i+t<r){n.push(e.substring(i,i+t))}else{n.push(e.substring(i,r))}}return n}
```
The tool will output a human-readable version:
```javascript
function splitString(inputString, chunkSize) {
  var chunks = [];
  var stringLength = inputString.length;
  var startIndex = 0;
  for (; startIndex < stringLength; startIndex += chunkSize) {
    if (startIndex + chunkSize < stringLength) {
      chunks.push(inputString.substring(startIndex, startIndex + chunkSize));
    } else {
      chunks.push(inputString.substring(startIndex, stringLength));
    }
  }
  return chunks;
}
```
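Because the renames are purely cosmetic, both versions return identical results. A quick sanity check using the two functions from the example above:

```javascript
// Minified original from the example above
function a(e,t){var n=[];var r=e.length;var i=0;for(;i<r;i+=t){if(i+t<r){n.push(e.substring(i,i+t))}else{n.push(e.substring(i,r))}}return n}

// Deobfuscated version from the example above
function splitString(inputString, chunkSize) {
  var chunks = [];
  var stringLength = inputString.length;
  var startIndex = 0;
  for (; startIndex < stringLength; startIndex += chunkSize) {
    if (startIndex + chunkSize < stringLength) {
      chunks.push(inputString.substring(startIndex, startIndex + chunkSize));
    } else {
      chunks.push(inputString.substring(startIndex, stringLength));
    }
  }
  return chunks;
}

// Both produce the same chunks
console.log(a("hello world", 4));           // [ 'hell', 'o wo', 'rld' ]
console.log(splitString("hello world", 4)); // [ 'hell', 'o wo', 'rld' ]
```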
🚨 **NOTE:** 🚨
Large files may take some time to process and use a lot of tokens if you use
ChatGPT. For a rough estimate, the tool takes about 2 tokens per character to
process a file:
```shell
echo "$((2 * $(wc -c < yourscript.min.js)))"
```
So for reference: a minified `bootstrap.min.js` would take about $0.5 to
un-minify using ChatGPT.
Using `humanify local` is of course free, but it may take more time, be less
accurate, or not be possible at all with your existing hardware.
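As a concrete sketch of that estimate (the sample file here is illustrative, and actual tokenization varies by model):

```shell
# Create a small stand-in for a minified file (17 bytes)
printf 'function a(e,t){}' > sample.min.js

# ~2 tokens per character, per the rough estimate above
TOKENS=$((2 * $(wc -c < sample.min.js)))
echo "Estimated tokens: $TOKENS"  # prints "Estimated tokens: 34"
```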
## Getting started
### Installation
Prerequisites:
* Node.js >=20
The preferred way to install the tool is via npm:
```shell
npm install -g humanifyjs
```
This installs the tool to your machine globally. After the installation is done,
you should be able to run the tool via:
```shell
humanify
```
If you want to try it out before installing, you can run it using `npx`:
```shell
npx humanifyjs
```
This will download the tool and run it locally. Note that all examples here
expect the tool to be installed globally, but they should work by replacing
`humanify` with `npx humanifyjs` as well.
### Usage
Next you'll need to decide whether to use `openai`, `gemini` or `local` mode. In a
nutshell:
* `openai` or `gemini` mode
  * Runs on someone else's computer that's specifically optimized for this kind
    of thing
  * Costs money depending on the length of your code
  * Is more accurate
* `local` mode
  * Runs locally
  * Is free
  * Is less accurate
  * Runs as fast as your GPU does (it also runs on CPU, but may be very slow)
See instructions below for each option:
### OpenAI mode
You'll need a ChatGPT API key. You can get one by signing up at
https://openai.com/.
There are several ways to provide the API key to the tool:
```shell
humanify openai --apiKey="your-token" obfuscated-file.js
```
Alternatively you can also use an environment variable `OPENAI_API_KEY`. Use
`humanify --help` to see all available options.
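For example, exporting the variable once per shell session keeps the key out of individual commands (the key value is a placeholder, and the `command -v` guard simply skips the run if `humanify` isn't installed):

```shell
# Provide the key via the environment instead of the --apiKey flag
export OPENAI_API_KEY="your-token"   # placeholder; use your real key

# Equivalent to: humanify openai --apiKey="your-token" obfuscated-file.js
if command -v humanify >/dev/null 2>&1; then
  humanify openai obfuscated-file.js
fi
```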
### Gemini mode
You'll need a Google AI Studio key. You can get one by signing up at
https://aistudio.google.com/.
You need to provide the API key to the tool:
```shell
humanify gemini --apiKey="your-token" obfuscated-file.js
```
Alternatively you can also use an environment variable `GEMINI_API_KEY`. Use
`humanify --help` to see all available options.
### Local mode
The local mode uses a pre-trained language model to deobfuscate the code. The
model is not included in the repository due to its size, but you can download it
using the following command:
```shell
humanify download 2b
```
This downloads the `2b` model to your local machine. This only needs to be done
once. You can also choose to download other models depending on your local
resources. List the available models using `humanify download`.
After downloading the model, you can run the tool with:
```shell
humanify local obfuscated-file.js
```
This uses your local GPU to deobfuscate the code. If you don't have a GPU, the
tool will automatically fall back to CPU mode. Note that using a GPU speeds up
the process significantly.
Humanify has native support for Apple's M-series chips, and can fully utilize
the GPU capabilities of your Mac.
## Features
The main features of the tool are:
* Uses ChatGPT functions/local models to get smart suggestions for renaming
  variables and functions
* Uses custom and off-the-shelf Babel plugins to perform AST-level unmangling
* Uses Webcrack to unbundle Webpack bundles
## Contributing
If you'd like to contribute, please fork the repository and use a feature
branch. Pull requests are warmly welcome.
## Licensing
The code in this project is licensed under the MIT license.