Hello, my model framework is PyTorch; I converted the model to ONNX and then to an OpenVINO IR file, but inference fails on the VPU. Is it not possible to run a TTS model on the VPU?
Hi Roxy1,
For your information, we do have public pre-trained Text-to-Speech models: ForwardTacotron and WaveRNN. According to Public Pre-Trained Models Device Support, the ForwardTacotron models are not supported on MYRIAD, while the WaveRNN models are.
Next, I tried running inference with these models using the Text-to-Speech Python Demo, which requires both the ForwardTacotron and WaveRNN models:
python text_to_speech_demo.py -i test.txt --model_duration forward-tacotron-duration-prediction.xml --model_forward forward-tacotron-regression.xml --model_rnn wavernn-rnn.xml --model_upsample wavernn-upsampler.xml -d MYRIAD
The WaveRNN models load on the MYRIAD plugin, but the ForwardTacotron model fails to load because it contains a layer that MYRIAD does not support.
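If you want to check ahead of time which of your IR models a device can actually run, a minimal sketch like the one below may help. It assumes the OpenVINO Python API (`openvino.runtime.Core`) is installed; the helper name `check_device_support` and the guarded import are illustrative, not part of the demo.

```python
# Hedged sketch: probe whether an IR model compiles on a given device
# (e.g. MYRIAD). An unsupported layer surfaces as a RuntimeError from
# compile_model, which is the same failure mode seen with ForwardTacotron.
try:
    from openvino.runtime import Core  # OpenVINO >= 2022 Python API
except ImportError:
    Core = None  # OpenVINO not installed in this environment


def check_device_support(model_xml, device="MYRIAD"):
    """Return True if the IR model compiles on `device`, False if the
    plugin rejects it (unsupported layer), or None if OpenVINO is absent."""
    if Core is None:
        return None
    core = Core()
    model = core.read_model(model_xml)
    try:
        core.compile_model(model, device)
        return True
    except RuntimeError as err:
        print(f"{model_xml} not supported on {device}: {err}")
        return False


if __name__ == "__main__":
    # Hypothetical file names matching the demo's models.
    for xml in ("wavernn-rnn.xml", "forward-tacotron-regression.xml"):
        print(xml, "->", check_device_support(xml))
```

Running a probe like this before the full demo makes it clear whether a load failure comes from the model or from the pipeline.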
Regards,
Peh
Hi Roxy1,
This thread will no longer be monitored since we have provided an answer. If you need any additional information from Intel, please submit a new question.
Regards,
Peh