
query_network() and supported layers


I was trying to run inference on a customised MobileNet v1 model. I observed that the layers reported by ie.query_network(net, device) and net.layers.keys() differ. I understand this means that some layers are not supported for inference on that particular device.

My question is: despite this, net.infer() doesn't throw an error and runs successfully. If a layer is not supported on a particular device, will that layer be run on the CPU instead? Or how does the whole process work in this case?
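For reference, the comparison described above looks roughly like this. This is a minimal sketch: the layer names and the "MYRIAD" device are hypothetical, and the ie.query_network() / net.layers calls from the question appear only in comments, since they need an initialised IECore and a loaded network.

```python
# Sketch of checking which of a network's layers a device supports.
# With the Inference Engine Python API from the question it would be roughly:
#   supported = ie.query_network(net, "MYRIAD")   # mapping: layer name -> device
#   all_layers = net.layers.keys()
# Here both collections are hypothetical stand-ins so the comparison itself runs.

all_layers = ["input", "conv1", "depthwise_conv1", "custom_swish", "fc", "softmax"]
supported = {"input": "MYRIAD", "conv1": "MYRIAD", "depthwise_conv1": "MYRIAD",
             "fc": "MYRIAD", "softmax": "MYRIAD"}

# Any layer present in the network but absent from the query result is
# unsupported on that device and needs a fallback device or a custom kernel.
unsupported = [name for name in all_layers if name not in supported]
print("Unsupported on this device:", unsupported)
```

Running this prints `['custom_swish']` as the unsupported layer, mirroring the mismatch observed between the two lists in the question.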


Accepted Solutions
Moderator

Greetings,


If you are running the sample application on hardware other than the CPU, additional hardware configuration steps are required.

If your model does run on the CPU in spite of the compatibility mismatch, it might be a lucky shot. However, you might face other problems when running your application afterwards.

You can refer to the Model Optimizer section (to convert your model) here:

https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/get-started.html



Sincerely,

Iffa

