Add device support in TTS and Synthesizer #2855
Conversation
```python
self.manager = ModelManager(models_file=self.get_models_file_path(), progress_bar=progress_bar, verbose=False)

self.synthesizer = None
self.voice_converter = None
self.csapi = None
self.model_name = None

if gpu:
    warnings.warn("`gpu` will be deprecated. Please use `tts.to(device)` instead.")
```
Added warning. We could add specific dates or versions to better inform users about future plans, but I left it this way because I didn't have enough context on the future releases roadmap.
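If we did want the warning to mention a concrete release, one hypothetical way to structure it (the helper name and version string are placeholders, not part of this PR or any roadmap):

```python
import warnings

def warn_gpu_deprecated(removal_version="0.x"):
    # Hypothetical helper: names the release that drops the `gpu` flag.
    # The version string is a placeholder until the roadmap is known.
    warnings.warn(
        f"`gpu` will be removed in v{removal_version}. Please use `tts.to(device)` instead.",
        DeprecationWarning,
    )
```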
```diff
@@ -5,19 +5,21 @@
 from torch import nn


-def numpy_to_torch(np_array, dtype, cuda=False):
+def numpy_to_torch(np_array, dtype, cuda=False, device="cpu"):
```
Added a new `device` argument to the functions called in `Synthesizer`. To retain backwards compatibility, we keep the `cuda` argument for now; we should probably clean these up in the future and provide a single way of configuring the device/enabling CUDA.
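A future cleanup could collapse the dual `cuda`/`device` arguments into one resolver. A torch-free sketch of the precedence logic (the function name is hypothetical, not from this PR):

```python
import warnings

def resolve_device(cuda=False, device="cpu"):
    # Prefer the explicit `device` argument; treat the legacy `cuda=True`
    # flag as a request for the default CUDA device.
    if cuda and device == "cpu":
        warnings.warn("`cuda` is deprecated; pass `device` instead.", DeprecationWarning)
        return "cuda"
    return device
```

With this, `resolve_device(cuda=True)` still works for old callers while `resolve_device(device="cuda:3")` supports device-specific placement.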
```python
use_gl = self.vocoder_model is None
if not use_gl:
    vocoder_device = next(self.vocoder_model.parameters()).device
```
In some obscure use cases, the user could have placed the feature frontend and the vocoder on different devices:

```python
>>> tts.synthesizer.tts_model = tts.synthesizer.tts_model.to("cuda:0")
>>> tts.synthesizer.vocoder_model = tts.synthesizer.vocoder_model.to("cuda:1")
```

To handle this, we check the device of the vocoder, if it exists.
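The check can be sketched without torch using stand-in classes (`FakeVocoder` and friends are illustrations, not the Coqui API):

```python
class FakeParam:
    """Stand-in for a torch parameter, exposing only its device."""
    def __init__(self, device):
        self.device = device

class FakeVocoder:
    """Stand-in model whose parameters() reports where it lives."""
    def __init__(self, device="cpu"):
        self._device = device
    def parameters(self):
        yield FakeParam(self._device)

def pick_vocoder_device(vocoder_model):
    # Griffin-Lim (no neural vocoder) has no parameters to inspect.
    use_gl = vocoder_model is None
    if not use_gl:
        return next(vocoder_model.parameters()).device
    return None
```

Intermediate tensors can then be moved to `pick_vocoder_device(...)` before the vocoder forward pass, even when it differs from the frontend's device.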
Hi @erogol, curious to hear your thoughts on this implementation! The guiding philosophy was to follow the PyTorch `.to(device)` convention. I've signed the CLA, but the first few commits didn't have my GitHub email (I just got a new laptop and forgot to set up my Git user information), which is why the CLA test is marked as pending.
@jaketae thanks for the PR. I'll review it tomorrow 👍
All looks good!! Thanks for the PR. If you think it's done I can merge.
@erogol, thanks for the quick review! I think we can go ahead with the merge unless you have second thoughts. I'll maybe open a follow-up PR to improve the docs or the README where applicable. Thanks!
@jaketae awesome thanks again
Context
In #2282, we raised the possibility of implementing a `tts.to(device)` interface as a substitute for the `use_cuda` or `gpu` flags. The current flags do not allow users to specify a particular GPU device (e.g., `cuda:3`). They also do not allow users to use other accelerated backends, such as Apple Silicon GPUs (MPS), which PyTorch now supports.

Solution
We make the `TTS` and `Synthesizer` classes inherit from `nn.Module`. This gives us `.to(device)` for free for both classes. We can now run TTS on Apple Silicon (tested on an M2 Max). Not all kernels have been implemented for MPS in PyTorch yet, so we need to set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to enable CPU fallback. With this set, we can now run TTS on MPS devices.

Also tested with `make test`.
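Why inheriting from `nn.Module` gives `.to(device)` "for free" can be illustrated with a torch-free mock: the base class walks its registered submodules and moves each one. All class names below are stand-ins for illustration, not the Coqui or PyTorch API.

```python
class ModuleSketch:
    """Stand-in for torch.nn.Module: .to() recurses into submodules."""
    def __init__(self):
        self._submodules = []
        self.device = "cpu"

    def register(self, module):
        # torch does this automatically on attribute assignment;
        # we register explicitly to keep the sketch simple.
        self._submodules.append(module)
        return module

    def to(self, device):
        self.device = device
        for submodule in self._submodules:
            submodule.to(device)
        return self

class SynthesizerSketch(ModuleSketch):
    pass

class TTSSketch(ModuleSketch):
    def __init__(self):
        super().__init__()
        self.synthesizer = self.register(SynthesizerSketch())
```

A single `TTSSketch().to("mps")` call then places the top-level object and its synthesizer on the same device, which is the behavior the PR inherits from the real `nn.Module`.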