When reading IENetwork.outputs, the resulting mapping is sorted alphabetically by output name and does not preserve the order specified with the Model Optimizer --output option (even though in Python the result is an OrderedDict...). Is there any way to get the output order preserved when loading the model?
I'm using OpenVINO 2020.2.117.
Thanks
Hi Rycharde,
Thanks for reaching out.
Reading networks using the IENetwork constructor is deprecated. More information is available here:
https://docs.openvinotoolkit.org/2020.2/ie_python_api/classie__api_1_1IENetwork.html#details
I would suggest reading the network with ie_api.IECore.read_network instead.
On a separate note, could you also share some details about the function calls you are using?
Regards,
Munesh
Apologies, I am actually already using IECore.read_network(). What more information do you need? I pass the two mandatory parameters, and reading the outputs of the network returned by read_network() gives them in alphabetical order, not the order expressed in the IR xml. I've coded around the matter, but it seems odd that the order is not respected, especially since the Python API uses an OrderedDict.
UPDATE: I've since used compile_tool with the same IR for a different application and observe that the outputs are sorted alphabetically there as well. So the sorting must be done by the Model Optimizer and not in the API. Presumably the sorting of outputs is intentional, but there is no documentation to confirm this.
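For reference, my workaround is essentially the following sketch. The output names and the desired order here are placeholders, not my actual model's names: I keep the list of names I passed to --output and re-key the alphabetically sorted outputs mapping into that order.

```python
from collections import OrderedDict


def reorder_outputs(outputs, desired_order):
    """Re-key an outputs mapping to match the order from the IR conversion.

    `outputs` stands in for net.outputs: an OrderedDict keyed by output
    name, which the API returns in alphabetical order. `desired_order`
    is the list of output names in the order passed to --output.
    """
    return OrderedDict((name, outputs[name]) for name in desired_order)


# Placeholder example: the API hands back outputs alphabetically...
sorted_outputs = OrderedDict([("box_out", 0), ("class_out", 1), ("score_out", 2)])
# ...but the model was converted with --output class_out,box_out,score_out
wanted = ["class_out", "box_out", "score_out"]
fixed = reorder_outputs(sorted_outputs, wanted)
print(list(fixed))  # ['class_out', 'box_out', 'score_out']
```

This keeps the actual output objects untouched and only restores the iteration order I expected.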
Thanks,
Rych
Hi Rycharde,
Thanks for the updates. Please share your Inference Engine application code and details about your model topology, and if possible the Intermediate Representation (IR) files, so that we can reproduce the issue.
Regards,
Munesh
Hi Rycharde,
Thank you for your question. If you need any additional information from Intel, please submit a new question as this thread is no longer being monitored.
Regards,
Munesh