I was trying to run inference on a customised MobileNet v1 model. I noticed that the layers returned by ie.query_network(net, device) differ from those in net.layers.keys(). I understand this means that some layers are not supported for inference on that particular device.
My question is that despite this, net.infer() doesn't throw an error and runs successfully. If a layer is not supported on a particular device, will that layer be run on the CPU instead? How does the whole process work in this case?
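For context, here is a minimal sketch of the comparison described above, assuming the OpenVINO 2021 Inference Engine Python API; the model file names and the target device are placeholders, not taken from the original post:

```python
from openvino.inference_engine import IECore

ie = IECore()
# Placeholder paths: substitute the IR files of the customised MobileNet v1 model.
net = ie.read_network(model="mobilenet_v1.xml", weights="mobilenet_v1.bin")

device = "MYRIAD"  # example target device, chosen only for illustration
supported = ie.query_network(network=net, device_name=device)  # {layer name: device}

all_layers = set(net.layers.keys())
unsupported = all_layers - set(supported.keys())
print("Layers not supported on {}: {}".format(device, sorted(unsupported)))
```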
- Tags:
- OpenVino
Greetings,
If you are running the sample application on hardware other than the CPU, additional hardware configuration steps are required.
If your model does run on the CPU despite the compatibility issue, it might be a lucky shot. However, you might face other problems when running your application later on.
You can refer to the Model Optimizer (convert your model) section here:
https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/get-started.html
Sincerely,
Iffa
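As a hedged sketch of one explicit alternative (not necessarily what happens automatically in your setup): the HETERO plugin lets layers that are unsupported on the primary device fall back to the CPU. The device string and file paths below are assumptions for illustration:

```python
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="mobilenet_v1.xml", weights="mobilenet_v1.bin")  # placeholder paths

# Layers the first device cannot execute are assigned to the next device in the priority list.
exec_net = ie.load_network(network=net, device_name="HETERO:MYRIAD,CPU")
# results = exec_net.infer(inputs={...})  # fill in the model's actual input blob here
```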