diff --git a/.github/actions/moshi_build/action.yml b/.github/actions/moshi_build/action.yml
index 432edf4..2be3812 100755
--- a/.github/actions/moshi_build/action.yml
+++ b/.github/actions/moshi_build/action.yml
@@ -19,9 +19,9 @@ runs:
         . env/bin/activate
         python -m pip install --upgrade pip
         pip install torch==2.4.0 --index-url https://download.pytorch.org/whl/cpu
-        pip install -e './moshi[dev]'
     - name: Setup env
       shell: bash
       run: |
         . env/bin/activate
         pre-commit install
+        pip install -e './moshi[dev]'
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index a977dab..fa0d4a3 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -6,19 +6,23 @@ repos:
         language: system
         entry: bash -c 'cd moshi && flake8'
         pass_filenames: false
+        always_run: true
      - id: pyright-moshi
        name: pyright on moshi package
        language: system
-       entry: bash -c 'cd moshi && pyright'
+       entry: scripts/run_ci_when_installed.sh moshi 'cd moshi && pyright'
        pass_filenames: false
+       always_run: true
      - id: flake8-moshi_mlx
        name: flake8 on moshi_mlx package
        language: system
        entry: bash -c 'cd moshi_mlx && flake8'
        pass_filenames: false
+       always_run: true
      - id: pyright-moshi_mlx
        name: pyright on moshi_mlx package
        language: system
-       entry: bash -c 'cd moshi_mlx && pyright'
+       entry: scripts/run_ci_when_installed.sh moshi_mlx 'cd moshi_mlx && pyright'
        pass_filenames: false
+       always_run: true
diff --git a/README.md b/README.md
index fc23376..410f984 100644
--- a/README.md
+++ b/README.md
@@ -1,9 +1,85 @@
-# moshi
+# Moshi: a speech-text foundation model for real-time dialogue
+
+![precommit badge](https://github.com/kyutai-labs/moshi/workflows/precommit/badge.svg)
+![rust ci badge](https://github.com/kyutai-labs/moshi/workflows/Rust%20CI/badge.svg)
+
+[Moshi][moshi] is a speech-text foundation model and **full-duplex** spoken dialogue framework.
+It uses [Mimi][moshi], a state-of-the-art streaming neural audio codec.
Mimi operates at 12.5 Hz and compresses
audio down to 1.1 kbps, in a fully streaming manner (latency of 80ms, the frame size),
yet performs better than existing non-streaming codecs like
[SpeechTokenizer](https://github.com/ZhangXInFD/SpeechTokenizer) (50 Hz, 4 kbps) or [SemantiCodec](https://github.com/haoheliu/SemantiCodec-inference) (50 Hz, 1 kbps).
+
+Moshi models **two streams of audio**: one corresponds to Moshi, and one to the user.
+At inference, the stream from the user is taken from the audio input,
+and the one for Moshi is sampled from the model's output. Alongside the audio, Moshi predicts text tokens corresponding to its own speech, its **inner monologue**,
+which greatly improves the quality of its generation. A small depth transformer models inter-codebook dependencies for a given step,
+while a large, 7B-parameter Transformer models the temporal dependencies. Moshi achieves a theoretical latency
+of 160ms (80ms for the frame size of Mimi + 80ms of acoustic delay), with a practical overall latency as low as 200ms.
+[Talk to Moshi](https://moshi.chat) now on our live demo.
+
+
+
+Mimi builds on previous neural audio codecs such as [SoundStream](https://arxiv.org/abs/2107.03312)
+and [EnCodec](https://github.com/facebookresearch/encodec), adding a Transformer both in the encoder and the decoder,
+and adapting the strides to match an overall frame rate of 12.5 Hz. This allows Mimi to get closer to the
+average frame rate of text tokens (~3-4 Hz), and to limit the number of auto-regressive steps in Moshi.
+Similarly to SpeechTokenizer, Mimi uses a distillation loss so that the first codebook tokens match
+a self-supervised representation from [WavLM](https://arxiv.org/abs/2110.13900). Interestingly, while
+Mimi is fully causal and streaming, it learns to match the non-causal representation from WavLM
+sufficiently well, without introducing any delay. Finally, similarly to [EBEN](https://arxiv.org/pdf/2210.14090), Mimi
+uses **only an adversarial training loss**, along with feature matching, showing strong improvements in subjective quality
+despite its low bitrate.
+
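As a quick sanity check, the numbers above are mutually consistent. Assuming 8 codebooks of 2048 entries (11 bits each) per frame, a configuration that is not spelled out in this README and should be treated as an assumption, the arithmetic works out as follows:

```shell
# Integer arithmetic only; times in ms, rates in bps.
frame_ms=$(( 1000 * 10 / 125 ))               # 12.5 Hz -> 80 ms per frame
latency_ms=$(( frame_ms + 80 ))               # + 80 ms acoustic delay = 160 ms
bits_per_frame=$(( 8 * 11 ))                  # 8 codebooks x 11 bits (assumed)
bitrate_bps=$(( bits_per_frame * 125 / 10 ))  # 88 bits x 12.5 Hz = 1100 bps = 1.1 kbps
echo "${frame_ms} ${latency_ms} ${bitrate_bps}"
```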
+
+## Organisation of the repository
 
 There are three separate versions of the moshi inference stack in this repo.
 
-- The python version using PyTorch is in the `moshi` directory.
-- The python version using MLX is in the `moshi_mlx` directory.
-- The rust version used in production is in the `rust` directory.
+- The Python version using PyTorch is in the [`moshi/`](moshi/) directory.
+- The Python version using MLX for M-series Macs is in the [`moshi_mlx/`](moshi_mlx/) directory.
+- The Rust version used in production is in the [`rust/`](rust/) directory.
+
+Finally, the code for the live demo is provided in the [`client/`](client/) directory.
+
+## Requirements
+
+You will need at least Python 3.10. For the Rust backend, you will need a recent version of
+the [Rust toolchain](https://rustup.rs/). For backend-specific requirements, please check the individual
+backend directories. You can install the PyTorch and MLX clients with the following:
+
+```bash
+pip install moshi      # moshi PyTorch, from PyPI
+pip install moshi_mlx  # moshi MLX, from PyPI
+# Or the bleeding-edge versions of Moshi and Moshi-MLX:
+pip install -e "git+https://git@github.com/kyutai-labs/moshi.git#egg=moshi&subdirectory=moshi"
+pip install -e "git+https://git@github.com/kyutai-labs/moshi.git#egg=moshi_mlx&subdirectory=moshi_mlx"
+```
+
+While we hope that the present codebase will work on Windows, we do not provide official support for it.
+We have tested the MLX version on a MacBook Pro M3. At the moment, we do not support quantization
+for the PyTorch version, so you will need a GPU with a significant amount of memory (24GB).
+
+
+## Development
+
+If you wish to install from a clone of this repository, perhaps to develop Moshi further, you can do the following:
+```bash
+# From the root of the clone of the repo
+pip install -e 'moshi[dev]'
+pip install -e 'moshi_mlx[dev]'
+pre-commit install
+```
 
 ## Python (PyTorch)
@@ -15,38 +91,37 @@ run the model, you can then use either the web UI or a command line client.
 
 Start the server with:
 ```bash
-PYTHONPATH=moshi python -m moshi.server
+python -m moshi.server [--gradio_tunnel]
 ```
 
-And then access the web UI on [localhost:8998](http://localhost:8998).
-
-If the server is running on a remote box, you may want to forward the 8998 port
-via your ssh connection so as to be able to access the web UI locally.
+And then access the web UI on [localhost:8998](http://localhost:8998). If your GPU is on a remote machine
+with no direct access, `--gradio_tunnel` will create a tunnel with a URL accessible from anywhere.
+Keep in mind that this tunnel goes through the US and can add significant latency (up to 500ms from Europe).
+Alternatively, you might want to use SSH to redirect your connection.
 
 Accessing a server that is not localhost via http may cause issues around using the microphone in the web UI (in some browsers this is only allowed using https).
 
-## Python (MLX) for local inference on macOS
-
-You can either compile and install the `rustymimi` extension or install it via
-pip.
-```bash
-# Install from pip:
-pip install rustymimi==0.1.1
-# Alternatively, if you want to compile the package run:
-maturin dev -r -m rust/mimi-pyo3/Cargo.toml
-```
+A local client is also available, with:
+```bash
+python -m moshi.client [--url URL_TO_GRADIO]
+```
+Note however that, unlike the web browser, this client is bare-bones. It doesn't do any echo cancellation,
+nor does it try to compensate for a growing lag by skipping frames.
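For the SSH route, a minimal sketch of the port forwarding follows; `user@gpu-host` is a placeholder for your own remote machine, and 8998 is the port the server listens on:

```shell
# Run from your local machine; keeps forwarding until interrupted.
# 'user@gpu-host' is a placeholder: replace it with your own server.
ssh -N -L 8998:localhost:8998 user@gpu-host
```

The web UI is then reachable on [localhost:8998](http://localhost:8998) as if the server were running locally.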
+
+## Python (MLX) for local inference on macOS
 
-Then the model can be run with:
+Once you have installed `moshi_mlx`, you can run the model with:
 ```bash
-PYTHONPATH=moshi_mlx python -m moshi_mlx.local \
-    --model ~/tmp/moshiko_mlx_301e30bf@120.q8.safetensors \
-    --mimi ~/tmp/tokenizer-e351c8d8-checkpoint125.safetensors \
-    --quantized 8
+python -m moshi_mlx.local -q 4  # weights quantized to 4 bits
+python -m moshi_mlx.local -q 8  # weights quantized to 8 bits
 ```
 
-This uses a command line interface, alternatively you can use `local_web` to use
+This uses a bare-bones command line interface. It doesn't do any echo cancellation,
+nor does it try to compensate for a growing lag by skipping frames.
+
+Alternatively you can use `python -m moshi_mlx.local_web` for
the web UI; the connection is via HTTP on [localhost:8998](http://localhost:8998).
 
 ## Rust
@@ -102,3 +177,25 @@ npm run build
 ```
 
 The web UI can then be found in the `client/dist` directory.
+
+## License
+
+The present code is provided under the MIT license for the Python parts, and the Apache license for the Rust backend.
+The web client code is provided under the MIT license.
+Note that parts of this code are based on [AudioCraft](https://github.com/facebookresearch/audiocraft), released under
+the MIT license.
+ +## Citation + +If you use either Mimi or Moshi, please cite the following paper, + +``` +@article{defossez2024moshi, + title={Moshi: a speech-text foundation model for real-time dialogue}, + author={Alexandre Défossez and Laurent Mazaré and Manu Orsini and Amélie Royer and Patrick Pérez and Hervé Jégou and Edouard Grave and Neil Zeghidour}, + journal={arXiv:TBC}, + year={2024}, +} +``` + +[moshi]: https://arxiv.org/ diff --git a/client/LICENSE b/client/LICENSE new file mode 100644 index 0000000..31aa793 --- /dev/null +++ b/client/LICENSE @@ -0,0 +1,23 @@ +Permission is hereby granted, free of charge, to any +person obtaining a copy of this software and associated +documentation files (the "Software"), to deal in the +Software without restriction, including without +limitation the rights to use, copy, modify, merge, +publish, distribute, sublicense, and/or sell copies of +the Software, and to permit persons to whom the Software +is furnished to do so, subject to the following +conditions: + +The above copyright notice and this permission notice +shall be included in all copies or substantial portions +of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF +ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED +TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A +PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT +SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY +CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR +IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER +DEALINGS IN THE SOFTWARE. diff --git a/client/README.md b/client/README.md index bc6ea00..99c611a 100644 --- a/client/README.md +++ b/client/README.md @@ -14,3 +14,7 @@ Frontend for the demo. ## Skipping the queue To skip the queue for standalone use, once the project is running go to `/?worker_addr={WORKER_ADDR}` where `WORKER_ADDR` is your worker instance address. 
For example: `https://localhost:5173/?worker_addr=0.0.0.0:8088`
+
+## License
+
+The present code is provided under the MIT license.
diff --git a/mimi.png b/mimi.png
new file mode 100644
index 0000000..7082e1f
Binary files /dev/null and b/mimi.png differ
diff --git a/moshi.png b/moshi.png
new file mode 100644
index 0000000..0e6c731
Binary files /dev/null and b/moshi.png differ
diff --git a/moshi/moshi/LICENSE.audiocraft b/moshi/LICENSE.audiocraft
similarity index 100%
rename from moshi/moshi/LICENSE.audiocraft
rename to moshi/LICENSE.audiocraft
diff --git a/moshi/MANIFEST.in b/moshi/MANIFEST.in
new file mode 100644
index 0000000..07c08ca
--- /dev/null
+++ b/moshi/MANIFEST.in
@@ -0,0 +1,5 @@
+include LICENSE*
+include *.md
+include *.cfg
+include requirements.txt
+include moshi/py.typed
diff --git a/moshi/README.md b/moshi/README.md
index 022ce8d..af1218b 100644
--- a/moshi/README.md
+++ b/moshi/README.md
@@ -1 +1,97 @@
-# moshi - pytorch
+# Moshi - PyTorch
+
+See the [top-level README.md][main_repo] for more information on Moshi.
+
+[Moshi][moshi] is a speech-text foundation model and full-duplex spoken dialogue framework.
+It uses [Mimi][moshi], a state-of-the-art streaming neural audio codec. Mimi operates at 12.5 Hz and compresses
+audio down to 1.1 kbps, in a fully streaming manner (latency of 80ms, the frame size), yet performs better than existing non-streaming codecs.
+
+This is the PyTorch implementation for Moshi and Mimi.
+
+
+## Requirements
+
+You will need at least Python 3.10. We kept a minimal set of dependencies for the current project.
+It was tested with PyTorch 2.2 and 2.4. If you need a specific CUDA version, please make sure
+to have PyTorch properly installed before installing Moshi.
+
+```bash
+pip install moshi  # moshi PyTorch, from PyPI
+# Or the bleeding-edge version of Moshi:
+pip install -e "git+https://git@github.com/kyutai-labs/moshi#egg=moshi&subdirectory=moshi"
+```
+
+While we hope that the present codebase will work on Windows, we do not provide official support for it.
+At the moment, we do not support quantization for the PyTorch version, so you will need a GPU with a significant amount of memory (24GB).
+
+
+## Usage
+
+This package provides a streaming version of the audio tokenizer (Mimi) and the language model (Moshi).
+
+In order to run in interactive mode, you need to start a server which will
+run the model; you can then use either the web UI or a command line client.
+
+Start the server with:
+```bash
+python -m moshi.server [--gradio_tunnel]
+```
+
+And then access the web UI on [localhost:8998](http://localhost:8998). If your GPU is on a remote machine
+with no direct access, `--gradio_tunnel` will create a tunnel with a URL accessible from anywhere.
+Keep in mind that this tunnel goes through the US and can add significant latency (up to 500ms from Europe).
+Alternatively, you might want to use SSH to redirect your connection.
+
+Accessing a server that is not localhost via http may cause issues around using
+the microphone in the web UI (in some browsers this is only allowed using
+https).
+
+A local client is also available, with:
+```bash
+python -m moshi.client [--url URL_TO_GRADIO]
+```
+Note however that, unlike the web browser, this client is bare-bones. It doesn't do any echo cancellation,
+nor does it try to compensate for a growing lag by skipping frames.
+
+## Development
+
+If you wish to install from a clone of this repository, perhaps to develop Moshi further, you can do the following:
+```bash
+# From the current folder (e.g. 
`moshi/`)
+pip install -e '.[dev]'
+pre-commit install
+```
+
+Once locally installed, Mimi can be tested with the following commands, run from **the root** of the repository:
+```bash
+wget https://github.com/metavoiceio/metavoice-src/raw/main/assets/bria.mp3
+python scripts/mimi_test.py
+```
+
+Similarly, Moshi can be tested (with a GPU) with:
+```bash
+python scripts/moshi_benchmark.py
+```
+
+
+## License
+
+The present code is provided under the MIT license.
+Note that parts of this code are based on [AudioCraft](https://github.com/facebookresearch/audiocraft), released under
+the MIT license.
+
+## Citation
+
+If you use either Mimi or Moshi, please cite the following paper:
+
+```
+@article{defossez2024moshi,
+  title={Moshi: a speech-text foundation model for real-time dialogue},
+  author={Alexandre Défossez and Laurent Mazaré and Manu Orsini and Amélie Royer and Patrick Pérez and Hervé Jégou and Edouard Grave and Neil Zeghidour},
+  journal={arXiv:TBC},
+  year={2024},
+}
+```
+
+[moshi]: https://arxiv.org/
diff --git a/moshi/moshi/testing.md b/moshi/moshi/testing.md
deleted file mode 100644
index 6691af8..0000000
--- a/moshi/moshi/testing.md
+++ /dev/null
@@ -1,15 +0,0 @@
-# Testing
-In order to test the audio tokenizer, you can run the following command.
-
-```bash
-wget https://github.com/metavoiceio/metavoice-src/raw/main/assets/bria.mp3
-PYTHONPATH=. python scripts/mimi_test.py --weights tokenizer-e351c8d8-checkpoint125.safetensors
-```
-
-In order to test moshi, run the following.
-```bash
-PYTHONPATH=. 
python scripts/moshi_test.py \
-    --mimi-weights tokenizer-e351c8d8-checkpoint125.safetensors \
-    --tokenizer tokenizer_spm_32k_3.model \
-    --moshi-weights moshiko_pt_301e30bf@120.safetensors
-```
diff --git a/moshi/setup.cfg b/moshi/setup.cfg
index 5bccac4..8c3f101 100644
--- a/moshi/setup.cfg
+++ b/moshi/setup.cfg
@@ -4,3 +4,7 @@ max-line-length = 120
 [flake8]
 max-line-length = 120
 ignore = E203,E704
+exclude =
+    dist
+    build
+
diff --git a/moshi_mlx/MANIFEST.in b/moshi_mlx/MANIFEST.in
new file mode 100644
index 0000000..dfce301
--- /dev/null
+++ b/moshi_mlx/MANIFEST.in
@@ -0,0 +1,5 @@
+include LICENSE*
+include *.md
+include *.cfg
+include requirements.txt
+include moshi_mlx/py.typed
diff --git a/moshi_mlx/README.md b/moshi_mlx/README.md
new file mode 100644
index 0000000..2f9fd6f
--- /dev/null
+++ b/moshi_mlx/README.md
@@ -0,0 +1,57 @@
+# Moshi - MLX
+
+See the [top-level README.md][main_repo] for more information on Moshi.
+
+[Moshi][moshi] is a speech-text foundation model and full-duplex spoken dialogue framework.
+It uses [Mimi][moshi], a state-of-the-art streaming neural audio codec. Mimi operates at 12.5 Hz and compresses
+audio down to 1.1 kbps, in a fully streaming manner (latency of 80ms, the frame size), yet performs better than existing non-streaming codecs.
+
+This is the MLX implementation for Moshi. For Mimi, this uses our Rust-based implementation through the Python binding provided in `rustymimi`, available in the [rust/](https://github.com/kyutai-labs/moshi/tree/main/rust) folder of our main repository.
+
+## Requirements
+
+You will need at least Python 3.10.
+
+```bash
+pip install moshi_mlx  # moshi MLX, from PyPI
+# Or the bleeding-edge version of Moshi-MLX:
+pip install -e "git+https://git@github.com/kyutai-labs/moshi#egg=moshi_mlx&subdirectory=moshi_mlx"
+```
+We have tested the MLX version on a MacBook Pro M3.
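As a rough guide to what quantization buys you: assuming the ~7B-parameter temporal Transformer mentioned in the top-level README dominates the weight budget (an approximation that ignores the depth Transformer, activations and the KV cache), the weights alone come to roughly:

```shell
# Back-of-the-envelope: 7e9 parameters at 0.5 byte (q4) and 1 byte (q8) each.
awk 'BEGIN { p = 7e9; printf "q4: ~%.1f GB, q8: ~%.1f GB\n", p * 0.5 / 1e9, p * 1.0 / 1e9 }'
```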
+
+
+## Usage
+
+The model can be run with:
+```bash
+python -m moshi_mlx.local -q 4  # weights quantized to 4 bits
+python -m moshi_mlx.local -q 8  # weights quantized to 8 bits
+```
+
+This uses a bare-bones command line interface. It doesn't do any echo cancellation,
+nor does it try to compensate for a growing lag by skipping frames.
+
+Alternatively you can use `python -m moshi_mlx.local_web` for the web UI;
+the connection is via HTTP on [localhost:8998](http://localhost:8998).
+
+
+## License
+
+The present code is provided under the MIT license.
+
+## Citation
+
+If you use either Mimi or Moshi, please cite the following paper:
+
+```
+@article{defossez2024moshi,
+  title={Moshi: a speech-text foundation model for real-time dialogue},
+  author={Alexandre Défossez and Laurent Mazaré and Manu Orsini and Amélie Royer and Patrick Pérez and Hervé Jégou and Edouard Grave and Neil Zeghidour},
+  journal={arXiv:TBC},
+  year={2024},
+}
+```
+
+[moshi]: https://arxiv.org/
+[main_repo]: https://github.com/kyutai-labs/moshi
diff --git a/moshi_mlx/moshi_mlx/py.typed b/moshi_mlx/moshi_mlx/py.typed
new file mode 100644
index 0000000..e69de29
diff --git a/moshi_mlx/setup.cfg b/moshi_mlx/setup.cfg
index 5bccac4..8c3f101 100644
--- a/moshi_mlx/setup.cfg
+++ b/moshi_mlx/setup.cfg
@@ -4,3 +4,7 @@ max-line-length = 120
 [flake8]
 max-line-length = 120
 ignore = E203,E704
+exclude =
+    dist
+    build
+
diff --git a/rust/LICENSE b/rust/LICENSE
new file mode 100644
index 0000000..261eeb9
--- /dev/null
+++ b/rust/LICENSE
@@ -0,0 +1,201 @@
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+ + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. 
+ + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. 
Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of 
the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. 
Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. 
+ + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/rust/README.md b/rust/README.md new file mode 100644 index 0000000..5544a96 --- /dev/null +++ b/rust/README.md @@ -0,0 +1,62 @@ +# moshi - rust + +See the [top-level README.md](../README.md) for more information. + +This provides the Rust backend (both Mimi and Moshi) and client implementation. +The Mimi implementation is available through Python bindings, through the `rustymimi` package. + +## Requirements + +You will need a recent version of the [Rust toolchain](https://rustup.rs/). + +## Rust based Mimi with Python bindings + +First, a standalone rust based implementation of Mimi is provided, along with Python bindings. +This is the one used by `moshi_mlx`. 
It is automatically installed with `moshi_mlx`, but you
+can install it separately with:
+```bash
+# Install from pip:
+pip install rustymimi==0.1.1
+# Alternatively, to compile the package, run the following from the root of the repo:
+maturin dev -r -m rust/mimi-pyo3/Cargo.toml
+```
+
+## Rust server
+
+In order to run the Rust inference server, use the following command from within
+this directory:
+
+```bash
+cargo run --features cuda --bin moshi-backend -r -- --config moshi-backend/config.json standalone
+```
+
+When using macOS, you can replace `--features cuda` with `--features metal`.
+
+Alternatively you can use `config-q8.json` rather than `config.json` to use the
+quantized q8 model.
+
+Once the server has printed 'standalone worker listening', you can use the web
+UI. By default the Rust version uses HTTPS, so it will be at
+[localhost:8998](https://localhost:8998).
+
+You will get some warnings about the site being unsafe. When using Chrome, you
+can bypass them by selecting "Details" or "Advanced", then "Visit this unsafe
+site" or "Proceed to localhost (unsafe)".
+
+## Rust client
+
+We recommend using the web UI, as it provides some echo cancellation that helps
+the overall model quality. Alternatively, we provide command line interfaces
+for the Rust and Python versions; the protocol is the same as with the web UI, so
+there is nothing to change on the server side.
+
+### Rust Command Line
+
+From within the `rust` directory, run the following:
+```bash
+cargo run --bin moshi-cli -r -- tui --host localhost
+```
+
+## License
+
+The present code is provided under the Apache license.
diff --git a/scripts/run_ci_when_installed.sh b/scripts/run_ci_when_installed.sh
new file mode 100755
index 0000000..90abbbe
--- /dev/null
+++ b/scripts/run_ci_when_installed.sh
@@ -0,0 +1,12 @@
+#!/bin/bash
+
+# This script detects whether moshi or moshi_mlx is installed, and runs
+# their CI only in that case.
+
+package=$1
+if python -c "from $package import models"; then
+    # The package is installed: run the given command.
+    eval "$2"
+else
+    echo "Package $package not installed, skipping the CI for it."
+fi
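The trick the script relies on, namely that `python -c "import pkg"` exits non-zero when the package is missing, can be illustrated in isolation with stdlib modules standing in for `moshi`/`moshi_mlx` (`python3` is used here for portability; the script itself uses `python` inside the activated virtualenv):

```shell
# 'json' ships with Python; 'not_a_real_package' is a deliberately bogus name.
check() { python3 -c "import $1" 2>/dev/null && echo present || echo absent; }
have_json=$(check json)
have_bogus=$(check not_a_real_package)
echo "json: $have_json, bogus: $have_bogus"
```

In the pre-commit configuration above, the script itself is invoked as, e.g., `scripts/run_ci_when_installed.sh moshi 'cd moshi && pyright'`.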