DORA (Dataflow-Oriented Robotic Architecture) is middleware designed to streamline and simplify the creation of AI-based robotic applications. It offers low-latency, composable, and distributed dataflow capabilities. Applications are modeled as directed graphs, also referred to as pipelines.
<p align="center">
<img src="https://raw.githubusercontent.com/dora-rs/dora/main/docs/src/logo.svg" width="400"/>
</p>
<h2 align="center">
<a href="https://www.dora-rs.ai">Website</a>
|
<a href="https://dora-rs.ai/docs/guides/getting-started/conversation_py/">Python API</a>
|
<a href="https://docs.rs/dora-node-api/latest/dora_node_api/">Rust API</a>
|
<a href="https://www.dora-rs.ai/docs/guides/">Guide</a>
|
<a href="https://discord.gg/6eMGGutkfE">Discord</a>
</h2>
<div align="center">
<a href="https://github.com/dora-rs/dora/actions">
<img src="https://github.com/dora-rs/dora/workflows/CI/badge.svg" alt="Build and test"/>
</a>
<a href="https://crates.io/crates/dora-rs">
<img src="https://img.shields.io/crates/v/dora_node_api.svg"/>
</a>
<a href="https://docs.rs/dora-node-api/latest/dora_node_api/">
<img src="https://docs.rs/dora-node-api/badge.svg" alt="rust docs"/>
</a>
<a href="https://pypi.org/project/dora-rs/">
<img src="https://img.shields.io/pypi/v/dora-rs.svg" alt="PyPi Latest Release"/>
</a>
</div>
<div align="center">
<a href="https://trendshift.io/repositories/9190" target="_blank"><img src="https://trendshift.io/api/badge/repositories/9190" alt="dora-rs%2Fdora | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
</div>
## Highlights
- 🚀 dora-rs is a framework to run realtime multi-AI and multi-hardware applications.
- 🦀 dora-rs internals are 100% Rust, making it extremely fast compared to alternatives: for example, it is ⚡️ [10-17x faster](https://github.com/dora-rs/dora-benchmark) than `ros2`.
- ❇️ Includes a large set of pre-packaged nodes for fast prototyping, which simplifies integration of hardware, algorithms, and AI models.
<p align="center">
<picture align="center">
<source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/dora-rs/dora/main/docs/src/bar_chart_dark.svg">
<source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/dora-rs/dora/main/docs/src/bar_chart_light.svg">
<img src="https://raw.githubusercontent.com/dora-rs/dora/main/docs/src/bar_chart_light.svg">
</picture>
</p>
<p align="center">
<a href="https://github.com/dora-rs/dora-benchmark/" >
<i>Latency benchmark using the Python API for both frameworks, sending 40MB of random bytes.</i>
</a>
</p>
## Latest News 🎉
<details open>
<summary><b>2025</b></summary>
- \[04/05\] Add support for dora-cotracker to track any point on a frame, dora-rav1e AV1 encoding up to 12-bit, and dora-dav1d AV1 decoding.
- \[03/05\] Add support for dora async Python.
- \[03/05\] Add support for Microsoft Phi4, Microsoft Magma.
- \[03/05\] dora-rs has been accepted to [**GSoC 2025 🎉**](https://summerofcode.withgoogle.com/programs/2025/organizations/dora-rs-tb), with the following [**idea list**](https://github.com/dora-rs/dora/wiki/GSoC_2025).
- \[03/04\] Add support for Zenoh for distributed dataflow.
- \[03/04\] Add support for Meta SAM2, Kokoro(TTS), Improved Qwen2.5 Performance using `llama.cpp`.
- \[02/25\] Add support for Qwen2.5 (LLM), Qwen2.5-VL (VLM), and outetts (TTS).
</details>
## Support Matrix
|                                   | dora-rs |
| --------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| **APIs**                          | Python >= 3.7 including sync ⭐✅ <br> Rust ✅ <br> C/C++ 🆗 <br> ROS2 >= Foxy 🆗 |
| **OS**                            | Linux: Arm 32 ⭐✅ Arm 64 ⭐✅ x64_86 ⭐✅ <br> MacOS: Arm 64 ⭐✅ x64_86 ✅ <br> Windows: x64_86 🆗 <br> Android: 🛠️ (Blocked by: https://github.com/elast0ny/shared_memory/issues/32) <br> iOS: 🛠️ |
| **Message Format**                | Arrow ✅ <br> Standard Specification 🛠️ |
| **Local Communication**           | Shared Memory ✅ <br> [Cuda IPC](https://arrow.apache.org/docs/python/api/cuda.html) 📐 |
| **Remote Communication**          | [Zenoh](https://zenoh.io/) 📐 |
| **Metrics, Tracing, and Logging** | Opentelemetry 📐 |
| **Configuration**                 | YAML ✅ |
| **Package Manager**               | [pip](https://pypi.org/): Python Node ✅ Rust Node ✅ C/C++ Node 🛠️ <br> [cargo](https://crates.io/): Rust Node ✅ |
> - ⭐ = Recommended
> - ✅ = First Class Support
> - 🆗 = Best Effort Support
> - 📐 = Experimental and looking for contributions
> - 🛠️ = Unsupported but hoped for through contributions
>
> Everything is open for contributions 🙋
## Node Hub
> Feel free to modify this README with your own nodes so that it benefits the community.
| Type                          | Title                                                                                                 | Support              | Description                                       | Downloads                                                                       | License                                                                      |
| ----------------------------- | ----------------------------------------------------------------------------------------------------- | -------------------- | ------------------------------------------------- | ------------------------------------------------------------------------------- | ---------------------------------------------------------------------------- |
| Camera                        | [PyOrbbeckSDK](https://github.com/dora-rs/dora/blob/main/node-hub/dora-pyorbbecksdk)                  | 📐                   | Image and depth from Orbbec camera                |  |  |
| Camera                        | [PyRealsense](https://github.com/dora-rs/dora/blob/main/node-hub/dora-pyrealsense)                    | Linux 📐 <br> Mac 🛠️ | Image and depth from Realsense                    |  |  |
| Camera                        | [OpenCV Video Capture](https://github.com/dora-rs/dora/blob/main/node-hub/opencv-video-capture)       | ✅                   | Image stream from OpenCV Camera                   |  |  |
| Peripheral                    | [Keyboard](https://github.com/dora-rs/dora/blob/main/node-hub/dora-keyboard)                          | ✅                   | Keyboard char listener                            |  |  |
| Peripheral                    | [Microphone](https://github.com/dora-rs/dora/blob/main/node-hub/dora-microphone)                      | ✅                   | Audio from microphone                             |  |  |
| Peripheral                    | [PyAudio(Speaker)](https://github.com/dora-rs/dora/blob/main/node-hub/dora-pyaudio)                   | ✅                   | Output audio to speaker                           |  |  |
| Actuator                      | [Feetech](https://github.com/dora-rs/dora-lerobot/blob/main/node-hub/feetech-client)                  | 📐                   | Feetech Client                                    | | |
| Actuator                      | [Dynamixel](https://github.com/dora-rs/dora-lerobot/blob/main/node-hub/dynamixel-client)              | 📐                   | Dynamixel Client                                  | | |
| Chassis                       | [Agilex - UGV](https://github.com/dora-rs/dora/blob/main/node-hub/dora-ugv)                           | 📐                   | Agilex UGV client                                 |  |  |
| Chassis                       | [DJI - Robomaster S1](https://huggingface.co/datasets/dora-rs/dora-robomaster)                        | 📐                   | Robomaster Client                                 | | |
| Chassis                       | [Dora Kit Car](https://github.com/dora-rs/dora/blob/main/node-hub/dora-kit-car)                       | 📐                   | Open Source Chassis                               |  |  |
| Arm                           | [Alex Koch - Low Cost Robot](https://github.com/dora-rs/dora-lerobot/blob/main/robots/alexk-lcr)      | 📐                   | Alex Koch - Low Cost Robot Client                 | | |
| Arm                           | [Lebai - LM3](https://github.com/dora-rs/dora-lerobot/blob/main/node-hub/lebai-client)                | 📐                   | Lebai client                                      | | |
| Arm                           | [Agilex - Piper](https://github.com/dora-rs/dora/blob/main/node-hub/dora-piper)                       | 📐                   | Agilex arm client                                 |  |  |
| Robot                         | [Pollen - Reachy 1](https://github.com/dora-rs/dora-lerobot/blob/main/node-hub/dora-reachy1)          | 📐                   | Reachy 1 Client                                   | | |
| Robot                         | [Pollen - Reachy 2](https://github.com/dora-rs/dora/blob/main/node-hub/dora-reachy2)                  | 📐                   | Reachy 2 client                                   |  |  |
| Robot                         | [Trossen - Aloha](https://github.com/dora-rs/dora-lerobot/blob/main/robots/aloha)                     | 📐                   | Aloha client                                      | | |
| Voice Activity Detection(VAD) | [Silero VAD](https://github.com/dora-rs/dora/blob/main/node-hub/dora-vad)                             | ✅                   | Silero voice activity detection                   |  |  |
| Speech to Text(STT)           | [Whisper](https://github.com/dora-rs/dora/blob/main/node-hub/dora-distil-whisper)                     | ✅                   | Transcribe audio to text                          |  |  |
| Object Detection              | [Yolov8](https://github.com/dora-rs/dora/blob/main/node-hub/dora-yolo)                                | ✅                   | Object detection                                  |  |  |
| Segmentation                  | [SAM2](https://github.com/dora-rs/dora/blob/main/node-hub/dora-sam2)                                  | Cuda ✅ <br> Metal 🛠️ | Segment Anything                                  |  |  |
| Large Language Model(LLM)     | [Qwen2.5](https://github.com/dora-rs/dora/blob/main/node-hub/dora-qwen)                               | ✅                   | Large Language Model using Qwen                   |  |  |
| Vision Language Model(VLM)    | [Qwen2.5-vl](https://github.com/dora-rs/dora/blob/main/node-hub/dora-qwen2-5-vl)                      | ✅                   | Vision Language Model using Qwen2.5 VL            |  |  |
| Vision Language Model(VLM)    | [InternVL](https://github.com/dora-rs/dora/blob/main/node-hub/dora-internvl)                          | 📐                   | InternVL is a vision language model               |  |  |
| Vision Language Action(VLA)   | [RDT-1B](https://github.com/dora-rs/dora/blob/main/node-hub/dora-rdt-1b)                              | 📐                   | Infer policy using Robotic Diffusion Transformer  |  |  |
| Translation                   | [ArgosTranslate](https://github.com/dora-rs/dora/blob/main/node-hub/dora-argotranslate)               | 📐                   | Open-source translation engine                    |  |  |
| Translation                   | [Opus MT](https://github.com/dora-rs/dora/blob/main/node-hub/dora-opus)                               | 📐                   | Translate text between languages                  |  |  |
| Text to Speech(TTS)           | [Kokoro TTS](https://github.com/dora-rs/dora/blob/main/node-hub/dora-kokoro-tts)                      | ✅                   | Efficient Text to Speech                          |  |  |
| Recorder                      | [Llama Factory Recorder](https://github.com/dora-rs/dora/blob/main/node-hub/llama-factory-recorder)   | 📐                   | Record data to train LLMs and VLMs                |  |  |
| Recorder                      | [LeRobot Recorder](https://github.com/dora-rs/dora-lerobot/blob/main/node-hub/lerobot-dashboard)      | 📐                   | LeRobot Recorder helper                           | | |
| Visualization                 | [Plot](https://github.com/dora-rs/dora/blob/main/node-hub/opencv-plot)                                | ✅                   | Simple OpenCV plot visualization                  |  |  |
| Visualization                 | [Rerun](https://github.com/dora-rs/dora/blob/main/node-hub/dora-rerun)                                | ✅                   | Visualization tool                                |  |  |
| Simulator                     | [Mujoco](https://github.com/dora-rs/dora-lerobot/blob/main/node-hub/mujoco-client)                    | 📐                   | Mujoco Simulator                                  | | |
| Simulator                     | [Carla](https://github.com/dora-rs/dora-drives)                                                       | 📐                   | Carla Simulator                                   | | |
| Simulator                     | [Gymnasium](https://github.com/dora-rs/dora-lerobot/blob/main/gym_dora)                               | 📐                   | Experimental OpenAI Gymnasium bridge              | | |
## Examples
| Type | Title | Description | Last Commit |
| -------------- | ------------------------------------------------------------------------------------------------------------ | --------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------ |
| Audio | [Speech to Text(STT)](https://github.com/dora-rs/dora/blob/main/examples/speech-to-text) | Transform speech to text. |  |
| Audio | [Translation](https://github.com/dora-rs/dora/blob/main/examples/translation) | Translate audio in real time. |  |
| Vision | [Vision Language Model(VLM)](https://github.com/dora-rs/dora/blob/main/examples/vlm) | Use a VLM to understand images. |  |
| Vision | [YOLO](https://github.com/dora-rs/dora/blob/main/examples/python-dataflow) | Use YOLO to detect objects within images. |  |
| Vision | [Camera](https://github.com/dora-rs/dora/blob/main/examples/camera) | Simple webcam plot example |  |
| Model Training | [Piper RDT](https://github.com/dora-rs/dora/blob/main/examples/piper) | Piper RDT Pipeline |  |
| Model Training | [LeRobot - Alexander Koch](https://raw.githubusercontent.com/dora-rs/dora-lerobot/refs/heads/main/README.md) | Training Alexander Koch Low Cost Robot with LeRobot |  |
| ROS2 | [C++ ROS2 Example](https://github.com/dora-rs/dora/blob/main/examples/c++-ros2-dataflow) | Example using C++ ROS2 |  |
| ROS2 | [Rust ROS2 Example](https://github.com/dora-rs/dora/blob/main/examples/rust-ros2-dataflow) | Example using Rust ROS2 |  |
| ROS2 | [Python ROS2 Example](https://github.com/dora-rs/dora/blob/main/examples/python-ros2-dataflow) | Example using Python ROS2 |  |
| Benchmark | [GPU Benchmark](https://github.com/dora-rs/dora/blob/main/examples/cuda-benchmark) | GPU Benchmark of dora-rs |  |
| Benchmark | [CPU Benchmark](https://github.com/dora-rs/dora-benchmark/blob/main) | CPU Benchmark of dora-rs |  |
| Tutorial | [Rust Example](https://github.com/dora-rs/dora/blob/main/examples/rust-dataflow) | Example using Rust |  |
| Tutorial | [Python Example](https://github.com/dora-rs/dora/blob/main/examples/python-dataflow) | Example using Python |  |
| Tutorial | [CMake Example](https://github.com/dora-rs/dora/blob/main/examples/cmake-dataflow) | Example using CMake |  |
| Tutorial | [C Example](https://github.com/dora-rs/dora/blob/main/examples/c-dataflow) | Example with C node |  |
| Tutorial | [CUDA Example](https://github.com/dora-rs/dora/blob/main/examples/cuda-benchmark) | Example using CUDA Zero Copy |  |
| Tutorial | [C++ Example](https://github.com/dora-rs/dora/blob/main/examples/c++-dataflow) | Example with C++ node |  |
## Getting Started
### Installation
```bash
pip install dora-rs-cli
```
<details close>
<summary><b>Additional installation methods</b></summary>
Install dora with our standalone installers, or from [crates.io](https://crates.io/crates/dora-cli):
### With cargo
```bash
cargo install dora-cli
```
### With GitHub release for macOS and Linux
```bash
curl --proto '=https' --tlsv1.2 -LsSf https://github.com/dora-rs/dora/releases/latest/download/dora-cli-installer.sh | sh
```
### With GitHub release for Windows
```powershell
powershell -ExecutionPolicy ByPass -c "irm https://github.com/dora-rs/dora/releases/latest/download/dora-cli-installer.ps1 | iex"
```
### With Source
```bash
git clone https://github.com/dora-rs/dora.git
cd dora
cargo build --release -p dora-cli
PATH=$PATH:$(pwd)/target/release
```
</details>
### Run
- Run the yolo python example:
```bash
## Create a virtual environment
uv venv --seed -p 3.11
## Install node dependencies of a remote graph
dora build https://raw.githubusercontent.com/dora-rs/dora/refs/heads/main/examples/object-detection/yolo.yml --uv
## Run yolo graph
dora run yolo.yml --uv
```
> Make sure you have a webcam connected.

To stop your dataflow, press <kbd>ctrl</kbd>+<kbd>c</kbd>.
- To understand what is happening, you can look at the dataflow with:
```bash
cat yolo.yml
```
- Resulting in:
```yaml
nodes:
  - id: camera
    build: pip install opencv-video-capture
    path: opencv-video-capture
    inputs:
      tick: dora/timer/millis/20
    outputs:
      - image
    env:
      CAPTURE_PATH: 0
      IMAGE_WIDTH: 640
      IMAGE_HEIGHT: 480

  - id: object-detection
    build: pip install dora-yolo
    path: dora-yolo
    inputs:
      image: camera/image
    outputs:
      - bbox

  - id: plot
    build: pip install dora-rerun
    path: dora-rerun
    inputs:
      image: camera/image
      boxes2d: object-detection/bbox
```
- In the above example, the camera node sends images both to the Rerun viewer and to a YOLO model; the model generates bounding boxes that are visualized in Rerun alongside the images. Writing your own node for such a graph is straightforward, as shown in the sketch below.
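A node is just a process that receives inputs and sends outputs through the dora API. Below is a minimal, hypothetical sketch of a custom Python node in the style of the dora Python API: it consumes the `camera/image` input from the graph above and emits a dummy `bbox` output. The detection logic is a placeholder, not the actual `dora-yolo` implementation.

```python
import pyarrow as pa
from dora import Node  # dora-rs Python API

node = Node()  # connects this process to the running dataflow

for event in node:
    if event["type"] == "INPUT" and event["id"] == "image":
        # event["value"] is an Apache Arrow array carrying the frame bytes.
        # A real node would run a detection model here; we emit a dummy box.
        bbox = pa.array([0.0, 0.0, 100.0, 100.0])  # [x_min, y_min, x_max, y_max]
        node.send_output("bbox", bbox, event["metadata"])
    elif event["type"] == "STOP":
        break
```

Referencing such a script from the YAML is a matter of pointing the node's `path` at it instead of an installed package.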
### Documentation
The full documentation is available on [our website](https://dora-rs.ai/).
A lot of guides are available on [this section](https://dora-rs.ai/docs/guides/) of our website.
## What is Dora? And what features does Dora offer?
**D**ataflow-**O**riented **R**obotic **A**rchitecture (`dora-rs`) is a framework that makes the creation of robotic applications fast and simple.
`dora-rs` implements a declarative dataflow paradigm where tasks are split between nodes isolated as individual processes.
The dataflow paradigm has the advantage of creating an abstraction layer that makes robotic applications modular and easily configurable.
### TCP Communication and Shared Memory
Communication between nodes is handled with shared memory on the same machine and over TCP between distributed machines. Our shared memory implementation tracks messages across processes and discards them when obsolete. Shared memory slots are cached to avoid new memory allocations.
### Arrow Message Format
Nodes communicate using the Apache Arrow data format.
[Apache Arrow](https://github.com/apache/arrow-rs) is a universal memory format for flat and hierarchical data. The Arrow memory format supports zero-copy reads for lightning-fast data access without serialization overhead. It defines a C data interface without any build-time or link-time dependency requirement, which means that `dora-rs` has **no compilation step** beyond the native compiler of your favourite language.
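As a plain `pyarrow` illustration (independent of dora itself), the snippet below shows the zero-copy property in action: an Arrow array is viewed as a NumPy buffer without duplicating the underlying memory, which is the same property dora-rs relies on when passing messages through shared memory.

```python
import numpy as np
import pyarrow as pa

# Build an Arrow array from a NumPy buffer (e.g., a raw camera frame).
frame = np.random.randint(0, 255, size=640 * 480, dtype=np.uint8)
arrow_array = pa.array(frame)

# View it as NumPy again without copying:
# zero_copy_only=True makes pyarrow raise if a copy would be needed.
view = arrow_array.to_numpy(zero_copy_only=True)
assert view[0] == frame[0]  # same values, no serialization round-trip
```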
### Opentelemetry
dora-rs uses Opentelemetry to record all your logs, metrics, and traces. This means that data and telemetry can be linked through a shared abstraction.
[Opentelemetry](https://opentelemetry.io/) is an open-source observability standard that makes dora-rs telemetry collectable by most backends, such as Elasticsearch, Prometheus, and Datadog.
Opentelemetry is language-independent and backend-agnostic, and it easily collects distributed data, making it a great fit for dora-rs applications.
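For reference, this is what wiring up the vendor-neutral OpenTelemetry SDK looks like in Python. This is generic OpenTelemetry usage, not a dora-specific API, and the exporter choice is an assumption: swap `ConsoleSpanExporter` for an OTLP exporter to ship data to your backend of choice.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Register a tracer provider that batches spans to an exporter.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("my-dora-node")  # hypothetical instrumentation name
with tracer.start_as_current_span("inference"):
    pass  # node work goes here; the span records its timing automatically
```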
### ROS2 Bridge
**Note**: this feature is marked as unstable.
- Compilation Free Message passing to ROS 2
- Automatic conversion ROS 2 Message <-> Arrow Array
```python
import pyarrow as pa
# Configuration Boilerplate...
turtle_twist_writer = ...
## Arrow Based ROS2 Twist Message
## which does not require ROS2 import
message = pa.array([{
    "linear": {
        "x": 1,
    },
    "angular": {
        "z": 1,
    },
}])
turtle_twist_writer.publish(message)
```
> You might want to use ChatGPT to write the Arrow Formatting: https://chat.openai.com/share/4eec1c6d-dbd2-46dc-b6cd-310d2895ba15
## Zenoh Integration for Distributed Dataflow (Experimental)
Zenoh is a high-performance pub/sub and query protocol that unifies data in motion and at rest. In **dora-rs**, Zenoh is used for remote communication between nodes running on different machines, enabling distributed dataflow across networks.
### What is Zenoh?
- **Definition:**
[Zenoh](https://zenoh.io) is an open-source communication middleware offering pub/sub and query capabilities.
- **Benefits in DORA:**
- Simplifies communication between distributed nodes.
- Handles NAT traversal and inter-network communication.
- Integrates with DORA to manage remote data exchange while local communication still uses efficient shared memory.
### Enabling Zenoh Support
1. **Run a Zenoh Router (`zenohd`):**
Launch a Zenoh daemon to mediate communication. For example, using Docker:
```bash
docker run -p 7447:7447 -p 8000:8000 --name zenoh-router eclipse/zenohd:latest
```
2. **Create a Zenoh Configuration File:**
Create a file (e.g., `zenoh.json5`) with the router endpoint details:
```json5
{
  connect: {
    endpoints: ["tcp/203.0.113.10:7447"],
  },
}
```
3. **Launch DORA Daemons with Zenoh Enabled:**
On each machine, export the configuration and start the daemon:
```bash
export ZENOH_CONFIG=/path/to/zenoh.json5
dora daemon --coordinator-addr <COORD_IP> --machine-id <MACHINE_NAME>
```
4. **Deploy Distributed Nodes via YAML:**
Mark nodes for remote deployment using the `_unstable_deploy` key:
```yaml
nodes:
  - id: camera_node
    outputs: [image]

  - id: processing_node
    _unstable_deploy:
      machine: robot1
      path: /home/robot/dora-nodes/processing_node
    inputs:
      image: camera_node/image
    outputs: [result]
```
5. **Start the Coordinator and Dataflow:**
Run the coordinator on a designated machine and start the dataflow:
```bash
dora coordinator
dora start dataflow.yml
```
### YAML Example for Distributed Dataflow
```yaml
communication:
  zenoh: {}

nodes:
  - id: camera_node
    custom:
      run: ./camera_driver.py
    outputs:
      - image

  - id: processing_node
    _unstable_deploy:
      machine: robot1
      path: /home/robot/dora-nodes/processing_node
    inputs:
      image: camera_node/image
    outputs:
      - result
```
## Contributing
We are passionate about supporting contributors of all levels of experience and would love to see
you get involved in the project. See the
[contributing guide](https://github.com/dora-rs/dora/blob/main/CONTRIBUTING.md) to get started.
## Discussions
Our main communication channels are:
- [Our Discord server](https://discord.gg/6eMGGutkfE)
- [Our Github Project Discussion](https://github.com/orgs/dora-rs/discussions)
Feel free to reach out on any topics, issues, or ideas.
We also have [a contributing guide](CONTRIBUTING.md).
## License
This project is licensed under Apache-2.0. Check out [NOTICE.md](NOTICE.md) for more information.
---
## Further Resources 📚
- [Zenoh Documentation](https://zenoh.io/docs/)
- [DORA Zenoh Discussion (GitHub Issue #512)](https://github.com/dora-rs/dora/issues/512)
- [Dora Autoware Localization Demo](https://github.com/dora-rs/dora-autoware-localization-demo)