I'm trying to run a converted Kaldi model in OpenVINO.
I used the LibriSpeech model provided on the documentation page.
The conversion completed successfully, but I encountered the following error when the OpenVINO app (offline_speech_recognition_app.exe) reached inference:
[ERROR] Sample supports only topologies with 1 input
Failed to initialize speech library. Status: -5
I have tried a number of fixes but still cannot resolve this.
Is there any solution to this problem?
I would be grateful for any help you can provide.
This is because the Offline Speech Recognition Demo supports only topologies with one input. I checked the inputs of the LibriSpeech nnet3 model using Netron; it has the following inputs:
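If Netron is not at hand, another way to count a converted model's inputs is to scan the IR `.xml` file directly. The snippet below is only a sketch against a hypothetical, minimal IR (real files are produced by the Model Optimizer, and the layer names here are made up):

```python
import xml.etree.ElementTree as ET

# Hypothetical stand-in for a converted IR .xml; a real file comes from
# the Model Optimizer and is much larger.
ir_xml = """
<net name="librispeech_nnet3" version="10">
  <layers>
    <layer id="0" name="input" type="Parameter"/>
    <layer id="1" name="ivector" type="Parameter"/>
    <layer id="2" name="affine0" type="MatMul"/>
  </layers>
</net>
"""

root = ET.fromstring(ir_xml)
# In IR v10 inputs appear as "Parameter" layers; older IR versions use
# type="Input", so check for both.
inputs = [layer.get("name") for layer in root.iter("layer")
          if layer.get("type") in ("Parameter", "Input")]
print(inputs)  # ['input', 'ivector'] -> two inputs, so the demo rejects it
```

A model listing more than one name here will trip the demo's single-input check.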
For your information, the Offline Speech Recognition Demo supports the lspeech_s5_ext model, an example of a pre-trained LibriSpeech DNN. You can download the lspeech_s5_ext model by running the following script:
- On Windows OS: <INSTALL_DIR>\deployment_tools\demo\demo_speech_recognition.bat
- On Linux OS: <INSTALL_DIR>/deployment_tools/demo/demo_speech_recognition.sh
On another note, the steps to run the Speech Recognition Demos with pre-trained models are available on the following page:
In fact, I face the same problem with my own model.
I really want to run inference on my model with OpenVINO.
Is it possible to provide a Kaldi recipe for the lspeech_s5_ext model, so I can learn how to retrain the model (i.e., how to concatenate the input features for the OpenVINO feature extractor), or is there another way to solve this problem?
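To illustrate what I mean by concatenating input features, here is a rough NumPy sketch; the feature types and dimensions (40-dim frames plus a 100-dim i-vector) are made up for the example:

```python
import numpy as np

# Hypothetical shapes: 8 frames of 40-dim acoustic features, and one
# 100-dim i-vector covering the whole chunk.
frames = np.random.rand(8, 40).astype(np.float32)
ivector = np.random.rand(1, 100).astype(np.float32)

# Broadcast the i-vector to every frame, then concatenate along the
# feature axis so the network sees a single 140-dim input per frame.
combined = np.concatenate(
    [frames, np.repeat(ivector, frames.shape[0], axis=0)], axis=1)
print(combined.shape)  # (8, 140)
```

That is the kind of merge of two input streams into one tensor I am asking how to reproduce for retraining.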
Thank you for considering my request.
Thanks for your patience.
From the OpenVINO perspective, we don't have a specific retraining module for Kaldi models.
From our observation, the models you used (LibriSpeech nnet3 and your custom model) have two inputs and can be run using the Automatic Speech Recognition C++ sample.
Note that the Kaldi Aspire Chain Time Delay Neural Network (TDNN) model also has two inputs, so the steps above should work for both models.
This thread will no longer be monitored since we have provided a solution.
If you need any additional information from Intel, please submit a new question.