I am trying to run inference on my Raspberry Pi 4 with Raspbian OS and an NCS2. My model is a custom-trained model made in MATLAB R2020b. There are no inference issues when testing the model in the MATLAB environment. I first converted the trained model to ONNX format using the exportONNXNetwork command in MATLAB.
The conversion from ONNX to OpenVINO IR was also successful, with no errors or warnings, using the following command (I have attached the resulting .xml file).
python mo.py --input_model nn.onnx --data_type FP16
When I try the model on the Raspberry Pi I get the following error:
terminate called after throwing an instance of 'InferenceEngine::details::InferenceEngineException'
what(): Cannot create ShapeOf layer softmax40/ShapeOf id:21
I couldn't find any help for solving this error. I am using the same version of OpenVINO (2021.1) on both my desktop and the Raspberry Pi. Any help towards solving this issue would be greatly appreciated.
Thank you for posting on the Intel® communities.
Your query will be best answered by our OpenVINO support team. We will move this post to the designated team so they can further assist you.
Have you tried running any of the OpenVINO samples or demos to check whether the same error arises? Please come back to me with the results of the samples or demos.
I have tested a couple of OpenVINO models, which work.
I have also retrained the VGG19 neural network on my data from within MATLAB and exported it to OpenVINO IR, and it works as expected. The issue seems to be related to my custom architecture:
input layer (with normalisation) > convolution2d > batch norm > relu > max pooling > convolution2d > batch norm > relu > fully connected > softmax (the error occurs on this layer) > classification output.
Please share more details about your custom model: is it an object detection or classification model, which layers are in use, and your environment details (versions of Python, CMake, etc.).
If possible, please share the trained model files for us to reproduce your issue.
I have the following installed on the Raspberry Pi:
On my desktop I have the following:
The architecture is for binary image classification.
MATLAB code for the above:
layers = [
imageInputLayer([50 275 3],"Name","imageinput","Normalization","zerocenter")
convolution2dLayer([25 138],6,"Name","conv_1","Padding","same","Stride",[13 69])
convolution2dLayer([13 69],6,"Name","conv_2","Padding","same","Stride",[7 35])
We need further information from you. What's the topology of your model (and the source repository name, if possible)?
Also, are you able to run inference on your custom model using CPU?
Yes, the model runs on the CPU (though the final confusion matrix is nowhere near the same as it is in MATLAB; it might be the pre-processing, I need to investigate this).
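If the pre-processing is the culprit, one thing worth checking is whether MATLAB's "zerocenter" normalisation is being replicated before inference outside MATLAB. A minimal Python sketch of that step (the values here are toy placeholders, not the real training-set mean):

```python
def zerocenter(pixels, mean_pixels):
    # Replicates MATLAB imageInputLayer's "zerocenter" option:
    # subtract the per-element training-set mean from the input.
    # mean_pixels is a hypothetical stand-in for the mean image
    # computed by MATLAB during training.
    return [p - m for p, m in zip(pixels, mean_pixels)]

# Toy flattened "image" and mean, for illustration only
img = [10.0, 20.0, 30.0]
mean = [12.0, 18.0, 30.0]
print(zerocenter(img, mean))  # [-2.0, 2.0, 0.0]
```

If this subtraction is applied in MATLAB but skipped in the inference pipeline (or vice versa), CPU results can drift from the MATLAB baseline in exactly this way.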
I have uploaded a minimal example here (without the MATLAB stuff):
How do I get the model to run on MYRIAD correctly?
Also, when I connect the Neural Compute Stick 2 to my desktop and use MYRIAD, the error is gone, but the inference probability is always [1, 0] (might it be failing silently?).
The topology is a convolutional neural network, specific details can be found in the .xml file (downloadable by following the link).
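For reference, with the OpenVINO 2021.1 Inference Engine Python API the target device is selected through load_network's device_name argument. A minimal sketch (the path is a placeholder for the converted nn.xml, and the snippet assumes the 2021.1 Python API is installed):

```python
def load_for_device(xml_path, device_name="MYRIAD"):
    # Sketch only: loads an IR model on the chosen device
    # ("CPU" or "MYRIAD"). The .bin weights file is assumed
    # to sit next to the .xml. The import is deferred so the
    # sketch can be read without OpenVINO installed.
    from openvino.inference_engine import IECore  # 2021.1 API
    ie = IECore()
    net = ie.read_network(model=xml_path,
                          weights=xml_path.replace(".xml", ".bin"))
    return ie.load_network(network=net, device_name=device_name)
```

Swapping device_name between "CPU" and "MYRIAD" is then the only change needed to compare results across devices.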
It’s great to know that you are able to run inference using the CPU. I’ve tested your model and I’m able to run it on my desktop by adding -d MYRIAD, and I obtain similar results. Are you still facing issues running inference on your model using the NCS2?
Yes, inference on the NCS2 using MYRIAD always results in the same class probabilities regardless of the input image, [1, 0] in every case.
Using the code supplied above I get a confusion matrix of [352, 21; 95, 504] on the CPU. Using MYRIAD I get [373, 0; 599, 0].
Printing out the prob variable, it is always [1. 0.]; using the OpenVINO Python classification_sample.py with -d MYRIAD gives exactly the same results.
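One way a constant [1, 0] output can arise (a hypothesis, not a confirmed diagnosis for the NCS2 here) is softmax saturation: if badly scaled inputs, e.g. un-normalised 0–255 pixel data, reach the final layer, the logits become so large that softmax collapses to a one-hot vector. A quick illustration in Python:

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Moderate logits give a graded probability
print(softmax([1.2, 0.3]))
# Very large logits saturate: the first entry is ~1.0 and the second
# underflows toward 0.0, matching the constant [1, 0] seen on MYRIAD
print(softmax([600.0, 2.0]))
```

If this is what is happening, the discrepancy would point at input scaling/normalisation on the MYRIAD path rather than at the model weights themselves.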
OpenVINO™ toolkit provides a set of pre-trained and public models that can be used for learning and demo purposes or for developing deep learning software. We’ve validated and tested a limited set of model topologies for these purposes. You can get more information at the following links:
Please also note that VPU devices (Intel® Movidius™ Neural Compute Stick, Intel® Neural Compute Stick 2, and Intel® Vision Accelerator Design with Intel® Movidius™ VPUs) do not support all available topologies that are supported by CPU. The link below shows the correlation between pre-trained models, demos, and supported plugins.
If your model is a generic model without any specific topology, we can’t guarantee that it’s going to work. Otherwise, if it’s based on an existing model topology, please specify your model topology or provide us with a link to where you downloaded this model.
We tested your model and your sample. We tried to change the device argument from inside your code, because your code apparently does not support the -d argument.
We share the results here.
Testing results: Custom code
Model: Custom model
CPU – [[373,0], [599,0]]
MYRIAD – 0.0
We also tested your model with Image Classification Python Sample Async.
Testing results: Image Classification Python Sample Async
Model: Custom model
CPU – probability value varies according to input images
MYRIAD – probability value is [1,0] for all input images
We used Intel® Core™ i7-10610U Processor for testing.