# WeNet

[![License](https://img.shields.io/badge/License-Apache%202.0-brightgreen.svg)](https://opensource.org/licenses/Apache-2.0) [![Python-Version](https://img.shields.io/badge/Python-3.7%7C3.8-brightgreen)](https://github.com/wenet-e2e/wenet) [![WeChat](https://img.shields.io/badge/WeChat-07C160?style=flat&logo=wechat&logoColor=white)](#discussion--communication)

[**Roadmap**](https://github.com/wenet-e2e/wenet/issues/1683) | [**Docs**](https://wenet-e2e.github.io/wenet) | [**Papers**](https://wenet-e2e.github.io/wenet/papers.html) | [**Runtime**](https://github.com/wenet-e2e/wenet/tree/main/runtime) | [**Pretrained Models**](docs/pretrained_models.md) | [**HuggingFace**](https://huggingface.co/spaces/wenet/wenet_demo) | [**Ask WeNet Guru**](https://gurubase.io/g/wenet)

**We** share **Net** together.

WeNet is a production-first and production-ready end-to-end speech recognition toolkit.

## Highlights

* **Production first and production ready**: the core design principle. WeNet provides a full-stack production solution for speech recognition.
* **Accurate**: WeNet achieves SOTA results on many public speech datasets.
* **Lightweight**: WeNet is easy to install, easy to use, well designed, and well documented.

## Install

### Install python package

``` sh
pip install git+https://github.com/wenet-e2e/wenet.git
```

**Command-line usage** (use `-h` for parameters):

``` sh
wenet -m paraformer audio.wav
```

Set `-m` to `paraformer`, `firered`, or `wenetspeech` for Chinese, or to `whisper-large-v3` or `whisper-large-v3-turbo` for English.

**Python programming usage**:

``` python
import wenet

model = wenet.load_model('paraformer')
result = model.transcribe('audio.wav')
print(result.text)
```

Please refer to [python usage](docs/python_package.md) for more command-line and Python programming usage.
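The model ids above can also be selected programmatically. A minimal sketch, assuming only the model names listed in this section; the `pick_model` helper and `MODELS` table are illustrative, not part of the wenet package:

``` python
# Hypothetical helper mapping a language to the pretrained model ids
# listed above. Not part of the wenet API itself.
MODELS = {
    "chinese": ["paraformer", "firered", "wenetspeech"],
    "english": ["whisper-large-v3", "whisper-large-v3-turbo"],
}

def pick_model(language: str, index: int = 0) -> str:
    """Return a model id for the given language ("chinese" or "english")."""
    return MODELS[language][index]

# Could then be passed to wenet.load_model(...) or `wenet -m ...`.
print(pick_model("chinese"))   # first Chinese model in the table
print(pick_model("english", 1))
```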
### Install for training & deployment

- Clone the repo:

``` sh
git clone https://github.com/wenet-e2e/wenet.git
```

- Install Conda: see https://docs.conda.io/en/latest/miniconda.html

- Create a Conda env:

``` sh
conda create -n wenet python=3.10
conda activate wenet
conda install conda-forge::sox
```

- Install CUDA: please follow this [link](https://icefall.readthedocs.io/en/latest/installation/index.html#id1). CUDA 12.1 is recommended.

- Install torch and torchaudio. Version 2.2.2+cu121 is recommended:

``` sh
pip install torch==2.2.2+cu121 torchaudio==2.2.2+cu121 -f https://download.pytorch.org/whl/torch_stable.html
```

<details><summary><b>For Ascend NPU users:</b></summary>

- Install CANN: please follow this [link](https://ascend.github.io/docs/sources/ascend/quick_install.html) to install the CANN toolkit and kernels.

- Install WeNet with torch-npu dependencies:

``` sh
pip install -e .[torch-npu]
```

- Version compatibility table:

| Requirement | Minimum          | Recommended |
| ----------- | ---------------- | ----------- |
| CANN        | 8.0.RC2.alpha003 | latest      |
| torch       | 2.1.0            | 2.2.0       |
| torch-npu   | 2.1.0            | 2.2.0       |
| torchaudio  | 2.1.0            | 2.2.0       |
| deepspeed   | 0.13.2           | latest      |

</details>

- Install other python packages:

``` sh
pip install -r requirements.txt
pre-commit install  # for clean and tidy code
```

- Frequently Asked Questions (FAQs)

``` sh
# If you encounter this sox compatibility issue:
#   RuntimeError: set_buffer_size requires sox extension which is not available.
# ubuntu
sudo apt-get install sox libsox-dev
# centos
sudo yum install sox sox-devel
# conda env
conda install conda-forge::sox
```

**Build for deployment**

Optionally, if you want to use the x86 runtime or a language model (LM), build the runtime as follows; otherwise, you can skip this step.

``` sh
# runtime build requires cmake 3.14 or above
cd runtime/libtorch
mkdir build && cd build && cmake -DGRAPH_TOOLS=ON .. && cmake --build .
```

Please see the [doc](https://github.com/wenet-e2e/wenet/tree/main/runtime) for building the runtime on more platforms and OSes.

## Discussion & Communication

You can discuss directly on [Github Issues](https://github.com/wenet-e2e/wenet/issues).

For Chinese users, you can also scan the QR code on the left to follow the official WeNet account. We created a WeChat group for better discussion and quicker responses: scan the personal QR code on the right, and its owner will invite you to the chat group.

| <img src="https://github.com/robin1001/qr/blob/master/wenet.jpeg" width="250px"> | <img src="https://github.com/robin1001/qr/blob/master/chengdong.jpg" width="250px"> | <img src="https://github.com/robin1001/qr/blob/master/binbin.jpeg" width="250px"> |
| ---- | ---- | ---- |

## Acknowledge

1. We borrowed a lot of code from [ESPnet](https://github.com/espnet/espnet) for transformer-based modeling.
2. We borrowed a lot of code from [Kaldi](http://kaldi-asr.org/) for WFST-based decoding for LM integration.
3. We referred to [EESEN](https://github.com/srvk/eesen) for building the TLG-based graph for LM integration.
4. We referred to [OpenTransformer](https://github.com/ZhengkunTian/OpenTransformer/) for python batch inference of e2e models.

## Citations

``` bibtex
@inproceedings{yao2021wenet,
  title={WeNet: Production oriented Streaming and Non-streaming End-to-End Speech Recognition Toolkit},
  author={Yao, Zhuoyuan and Wu, Di and Wang, Xiong and Zhang, Binbin and Yu, Fan and Yang, Chao and Peng, Zhendong and Chen, Xiaoyu and Xie, Lei and Lei, Xin},
  booktitle={Proc.
Interspeech},
  year={2021},
  address={Brno, Czech Republic},
  organization={IEEE}
}

@article{zhang2022wenet,
  title={WeNet 2.0: More Productive End-to-End Speech Recognition Toolkit},
  author={Zhang, Binbin and Wu, Di and Peng, Zhendong and Song, Xingchen and Yao, Zhuoyuan and Lv, Hang and Xie, Lei and Yang, Chao and Pan, Fuping and Niu, Jianwei},
  journal={arXiv preprint arXiv:2203.15455},
  year={2022}
}
```