Consider the following code:
    IExtensionPtr inPlaceExtension;
    inPlaceExtension = std::make_shared<YoloV3InPlaceExtension>();
    m_ie.AddExtension(inPlaceExtension, "CPU");
    m_network.AddExtension(inPlaceExtension);
    for (auto it = m_network.begin(); it != m_network.end(); it++) {
        auto& layer = *it;
        string affinity = "GPU,CPU";
        if (layer->type == CUSTOM_YOLOV3DETECTION_OUTPUT_TYPE) {
            affinity = "CPU";
        }
        m_network.getLayerByName(layer->name.c_str())->affinity = affinity;
    }
I believe this creates a custom layer and sets its affinity to CPU-only (letting the plugin select a device for the other layers). However, later on, either of the following calls causes an error:
m_executableNetwork = m_ie.LoadNetwork(m_network, "MULTI:GPU,CPU", {});
m_executableNetwork = m_ie.LoadNetwork(m_network, "HETERO:CPU,GPU", {});
The former fails with
20/02/21-12:11:48.694 E <5548> [root] Exception in loadNetwork into MULTI:CPU,GPU: Unknown Layer Type: Yolov3DetectionOutput
The latter causes the error:
20/02/21-12:07:39.237 E <3532> [root] Exception in loadNetwork into HETERO:CPU,GPU: Network passed to LoadNetwork has affinity assigned, but some layers eg:
(Name:conv0, Type: Convolution) were not assigned to any device.
It might happen if you assigned layers amnually and missed some layers or
if you used some automatic assigning mode which decided that these layers are not
supported by any plugin
So, is there a way for a network with a CPU-only custom layer to work with either the MULTI or the HETERO plugin?
Update: loading the network into the HETERO plugin works if the affinities are not set at all. That is not the case with MULTI. Any pointers on how to better understand the distinction between the two, and their respective interoperability with custom layers?
Hello, Alex A.
Please kindly check replies in the following thread - https://software.intel.com/en-us/forums/intel-distribution-of-openvino-toolkit/topic/815219
Hope this helps.
Thanks.
So, to follow up: does this mean that, in the case of MULTI, every layer of the network is expected to be supported by every specified device? So it would never work with a CPU-only extension, unless only the CPU is being used?
Hello Alex A.
No, it is not necessarily the case that every layer must be supported on every device. The MULTI plugin automatically assigns inference requests to the available computational devices.
Also, please take into account that not all layers are supported by all devices. You can find more information about this here: https://docs.openvinotoolkit.org/latest/_docs_IE_DG_supported_plugins_Supported_Devices.html#supported_layers
If a layer is supported only by the CPU, or only by a device that is not present in your system, it will be processed by the CPU.