Imitation learning benchmark focusing on complex locomotion tasks using MuJoCo.

<p align="center">
<img width="70%" src="https://github.com/robfiras/loco-mujoco/assets/69359729/bd2a219e-ddfd-4355-8024-d9af921fb92a">
</p>

[Documentation](https://loco-mujoco.readthedocs.io/en/latest/?badge=latest)
[License: MIT](https://opensource.org/licenses/MIT)
[Discord](https://discord.gg/gEqR3xCVdn)
[//]: # ([](https://pypi.org/project/loco-mujoco/))
> 🚀 **Latest News:**
> A **major release (v1.0)** just dropped! 🎉
> LocoMuJoCo now supports MJX and comes with new Jax algorithms. We also added many new environments and +22k datasets! 🚀
**LocoMuJoCo** is an **imitation learning benchmark** specifically designed for **whole-body control**.
It features a diverse set of environments, including **quadrupeds**, **humanoids**, and **(musculo-)skeletal human models**,
each provided with comprehensive datasets (over 22,000 samples per humanoid).
Although primarily focused on imitation learning, LocoMuJoCo also supports custom reward function classes,
making it suitable for pure reinforcement learning as well.
<div align="center">
<img src="imgs/main_lmj.gif"/>
</div>
### Key Advantages
✅ Supports **MuJoCo** (single environment) and **MJX** (parallel environments) \
✅ Includes **12 humanoid and 4 quadruped environments**, featuring 4 **biomechanical human models** \
✅ Clean single-file JAX algorithms for quick benchmarking (**PPO**, **GAIL**, **AMP**, **DeepMimic**)\
✅ Combined training and environment into one JIT‑compiled function for lightning‑fast training 🚀 \
✅ **Over 22,000 motion capture datasets** (AMASS, LAFAN1, native LocoMuJoCo) retargeted for each humanoid \
✅ **Robot-to-robot retargeting** allows retargeting any existing dataset from one robot to another \
✅ Powerful **trajectory comparison metrics** including dynamic time warping and discrete Fréchet distance, all in JAX \
✅ Interface for Gymnasium \
✅ Built-in **domain and terrain randomization** \
✅ Modular design: define, swap, and reuse components like observation types, reward functions, terminal state handlers, and domain randomization \
✅ [Documentation](https://loco-mujoco.readthedocs.io/)
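LocoMuJoCo's trajectory comparison metrics are implemented in JAX; for intuition, here is a minimal plain-NumPy sketch of one of them, the discrete Fréchet distance (the classic dynamic-programming formulation). This is an illustration of the metric itself, not LocoMuJoCo's actual implementation.

```python
import numpy as np

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between trajectories P (n, d) and Q (m, d).

    Intuitively: the shortest "leash" that lets two walkers traverse both
    trajectories monotonically from start to end.
    """
    n, m = len(P), len(Q)
    # pairwise Euclidean distances between all points of P and Q
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    ca = np.full((n, m), np.inf)
    ca[0, 0] = d[0, 0]
    for i in range(1, n):  # first column: forced to stay at Q[0]
        ca[i, 0] = max(ca[i - 1, 0], d[i, 0])
    for j in range(1, m):  # first row: forced to stay at P[0]
        ca[0, j] = max(ca[0, j - 1], d[0, j])
    for i in range(1, n):
        for j in range(1, m):
            # advance one walker, the other, or both; keep the best option
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]),
                           d[i, j])
    return ca[-1, -1]
```

For two parallel straight lines offset by 1, the distance is exactly 1.0; unlike a mean-squared error, the metric is robust to the two trajectories being sampled at different rates.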
---
## Installation
[//]: # (You have the choice to install the latest release via PyPI by running )
[//]: # ()
[//]: # ()
[//]: # (```bash)
[//]: # ()
[//]: # (pip install loco-mujoco )
[//]: # ()
[//]: # (```)
Clone this repo and do an editable installation:
```bash
git clone https://github.com/robfiras/loco-mujoco.git
cd loco-mujoco
pip install -e .
```
By default, this installs the CPU version of Jax. If you want to run Jax on the GPU, install the CUDA variant:
```bash
pip install "jax[cuda12]"
```
> [!NOTE]
> If you want to run the **MyoSkeleton** environment, you need to additionally run
> `loco-mujoco-myomodel-init` to accept the license and download the model.
### Datasets
LocoMuJoCo provides three sources of motion capture (mocap) data for humanoid environments: default (provided by us), LAFAN1, and AMASS. The first two
are available in the [LocoMuJoCo HuggingFace dataset repository](https://huggingface.co/datasets/robfiras/loco-mujoco-datasets)
and will be downloaded and cached automatically for you. AMASS needs to be downloaded and installed separately due to
its licensing. See [here](loco_mujoco/smpl) for more information about the installation.
This is how you can visualize the datasets:
```python
from loco_mujoco.task_factories import ImitationFactory, LAFAN1DatasetConf, DefaultDatasetConf, AMASSDatasetConf

# example --> you can add as many datasets as you want in the lists!
env = ImitationFactory.make("UnitreeH1",
                            default_dataset_conf=DefaultDatasetConf(["squat"]),
                            lafan1_dataset_conf=LAFAN1DatasetConf(["dance2_subject4", "walk1_subject1"]),
                            # if SMPL and AMASS are installed, you can use the following:
                            # amass_dataset_conf=AMASSDatasetConf(["DanceDB/DanceDB/20120911_TheodorosSourmelis/Capoeira_Theodoros_v2_C3D_poses"])
                            )

env.play_trajectory(n_episodes=3, n_steps_per_episode=500, render=True)
```
#### Speeding up Dataset Loading
LocoMuJoCo only stores datasets with joint positions and velocities to save memory. All other attributes are calculated
using forward kinematics upon loading. If you want to speed up the dataset loading, you can define caches for the datasets. This will
store the forward kinematics results in a cache file, which will be loaded on the next run:
```bash
loco-mujoco-set-all-caches --path <path to cache>
```
For instance, you could run:
```bash
loco-mujoco-set-all-caches --path "$HOME/.loco-mujoco-caches"
```
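Conceptually, the cache memoizes the expensive forward-kinematics pass to disk, keyed by the dataset. The following generic sketch illustrates that pattern (it is not LocoMuJoCo's actual implementation; `cached_compute` is a hypothetical helper):

```python
import hashlib
import os
import pickle

def cached_compute(cache_dir, key, compute_fn):
    """Return a cached result for `key`, computing and storing it on a miss.

    key: string identifying the dataset/config
    compute_fn: zero-argument callable doing the expensive work
    """
    os.makedirs(cache_dir, exist_ok=True)
    # hash the key so arbitrary strings map to valid file names
    fname = hashlib.sha256(key.encode()).hexdigest() + ".pkl"
    path = os.path.join(cache_dir, fname)
    if os.path.exists(path):  # cache hit: load instead of recomputing
        with open(path, "rb") as f:
            return pickle.load(f)
    result = compute_fn()     # cache miss: compute once, then persist
    with open(path, "wb") as f:
        pickle.dump(result, f)
    return result
```

On the first run the computation happens once; every later run with the same key loads the stored result, which is exactly why a second dataset load with caches enabled is much faster.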
---
## Environments
Want a quick overview of all **environments**? You can find it
[here](/loco_mujoco/environments), and in more detail in the [Documentation](https://loco-mujoco.readthedocs.io/).
<div align="center">
<img src="imgs/lmj_envs.gif"/>
</div>
And stay tuned! There are many more to come ...
---
## Tutorials
We provide a set of tutorials to help you get started with LocoMuJoCo. You can find them in the [tutorials folder](./examples/tutorials)
or with more explanation in the [documentation](https://loco-mujoco.readthedocs.io/).
If you want to check out training examples of a PPO, GAIL, AMP, or DeepMimic agent, you can find them
in the [training examples folder](./examples/training_examples). For instance, [here](./examples/training_examples/jax_rl_mimic) is an example of a DeepMimic agent
you can train to achieve human-like walking in all directions, trained in 36 min on an RTX 3080 Ti:
<div align="center">
<img src="imgs/unitree_h1_walk_anydir.gif"/>
</div>
---
## Citation
```bibtex
@inproceedings{alhafez2023b,
title={LocoMuJoCo: A Comprehensive Imitation Learning Benchmark for Locomotion},
author={Firas Al-Hafez and Guoping Zhao and Jan Peters and Davide Tateo},
booktitle={6th Robot Learning Workshop, NeurIPS},
year={2023}
}
```