Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Model Optimizer changes number of output nodes when converting MXNet model into OpenVINO IR format

Moiseev__Nikita
Beginner

Hi,
I have faced a problem when converting an MXNet model into OpenVINO IR format.
The conversion itself went OK, but the resulting IR model has a different number of outputs than the original MXNet model:
the original has 3 output nodes, while the converted model has only 1; the total number of output bytes is also different.
It looks like the output data of the IR model is the same data as the original model's 3 outputs, but interleaved like this:
0,(1 element from 1st output),(1 element from 2nd output),(4 elements from 3rd output) <repeat>

Here's an example of output data from converted model:

###########################################################################

Output blobs count = 1 (expected 3)
Output blob name = 518/DetectionOutput_
Output blob 0: data size = 11200 bytes
First 100 output data elements: 
0 0 0.998962 38.7622 28.7598 289.165 225.351 0 0 0.0160533 24.5206 86.6807 87.5874 119.69 0 0 0.0150969 49.2879 79.5475 95.7129 125.699 0 0 0.0150924 26.0119 23.7352 88.9664 56.5492 0 0 0.0143007 90.3354 50.0289 117.248 125.312 0 0 0.0141576 55.6979 55.6724 88.5047 118.517 0 0 0.0134601 -33.0526 94.1575 66.4337 197.006 0 0 0.0133488 45.1735 84.4904 74.8475 99.5169 0 0 0.0133439 52.4331 77.0465 67.5804 106.977 0 0 0.0133112 -13.6721 32.5152 45.3699 194.977 0 0 0.013197 45.1731 92.4919 74.8475 107.515 0 0 0.0131195 -10.8331 140.522 18.8482 155.481 0 0 0.0131189 -3.51254 133.088 11.5209 162.93 0 0 0.0131181 43.5862 74.9004 76.6063 109.299 0 0

###########################################################################

And here's from original MXNet model:

###########################################################################

ssd2_slice_axis3_output's output shape = (100, 1); data size = 400 bytes
First 14 output data elements: 
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]

ssd2_slice_axis4_output's output shape = (100, 1); data size = 400 bytes
First 14 output data elements: 
[0.998962 0.016053 0.015097 0.015092 0.014301 0.014158 0.01346  0.013349 0.013344 0.013311 0.013197 0.01312  0.013119 0.013118]

ssd2_slice_axis5_output's output shape = (100, 4); data size = 1600 bytes
First 56 output data elements: 
[ 38.762215  28.759842 289.16476  225.35081   24.520573  86.68068   87.58739  119.68973   49.28791   79.547455  95.7129   125.69919   26.011848  23.735195  88.966385  56.549206  90.335365  50.02889  117.248314 125.311935  55.69793   55.67238   88.50471  118.5175   -33.052597  94.1575    66.43374  197.00601   45.173553  84.49038   74.84749   99.516884  52.433056  77.04646   67.580444 106.977325 -13.672087  32.51516   45.36994  194.97662   45.173138  92.49194   74.8475   107.515015 -10.833103 140.52196   18.84819  155.48097   -3.512536 133.08763   11.520887 162.93007   43.58619   74.90039   76.60632  109.29921 ]

###########################################################################

Practically speaking, I can obtain the desired output by picking elements at specific positions, but
can I obtain the output data of only one of the original model's outputs without doing such 'de-interleaving'?
i.e. can I somehow specify that I am only interested in the data from output <N>?
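
For reference, my current 'de-interleaving' workaround with numpy looks roughly like this (a sketch only; raw_output_bytes is a placeholder for the bytes of the single output blob, and the per-row layout is my guess based on the dumps above):

import numpy as np

# Flat float32 blob from the converted IR ("518/DetectionOutput_",
# 11200 bytes = 2800 floats = 400 rows of 7 values each).
blob = np.frombuffer(raw_output_bytes, dtype=np.float32)

# Assumed row layout: [image_id, label, score, xmin, ymin, xmax, ymax]
detections = blob.reshape(-1, 7)

labels = detections[:, 1:2]  # analogue of ssd2_slice_axis3_output (100, 1)
scores = detections[:, 2:3]  # analogue of ssd2_slice_axis4_output (100, 1)
boxes  = detections[:, 3:7]  # analogue of ssd2_slice_axis5_output (100, 4)

# Note: the IR blob holds 400 rows here while the original outputs have 100,
# so only the leading rows line up with the MXNet outputs.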

To convert my model I use

python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo_mxnet.py \
    --input_symbol model-symbol.json \
    --input_model model-0001.params \
    --input_shape [1,3,300,300] \
    --enable_ssd_gluoncv \
    --output ssd2_slice_axis3,ssd2_slice_axis4,ssd2_slice_axis5
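
To double-check how many outputs the converted IR exposes, I inspect it with the Inference Engine Python API; a minimal sketch, assuming the IECore.read_network API of newer releases (older ones construct an IENetwork instead) and placeholder file names:

from openvino.inference_engine import IECore

ie = IECore()
# Placeholder paths to the IR produced by the command above.
net = ie.read_network(model="model.xml", weights="model.bin")

# For the converted model this prints a single name such as "518/DetectionOutput_".
print("Output blobs:", list(net.outputs.keys()))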

I've attached an archive with a Dockerfile that demonstrates the above-mentioned behavior.
There's a small readme.txt file inside that explains how to run the examples.

Thanks

Anna_B_Intel
Employee

Hi Nikita, 

That's the correct behavior. GluonCV splits classic SSD layers such as DetectionOutput and PriorBox into many elementary operations, and the --enable_ssd_gluoncv key fuses those operations back together. The Inference Engine doesn't support layers like _contrib_box_nms, so this approach is needed to enable GluonCV SSD support.

This means you need to handle the model output coming from the DetectionOutput layer, which is different from how you processed the output of the original model.
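
As a rough illustration, post-processing of a DetectionOutput blob usually looks something like this (a sketch only; output_blob is a placeholder and the 0.5 threshold is just an example):

import numpy as np

# DetectionOutput typically yields a [1, 1, N, 7] blob where each row is
# [image_id, label, confidence, xmin, ymin, xmax, ymax].
rows = np.asarray(output_blob).reshape(-1, 7)

for image_id, label, conf, xmin, ymin, xmax, ymax in rows:
    if image_id < 0:   # padding rows typically carry image_id = -1
        break
    if conf < 0.5:     # example confidence threshold
        continue
    print(int(label), float(conf), (xmin, ymin, xmax, ymax))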

 

Best wishes, 

Anna

 

Moiseev__Nikita
Beginner

Thank you very much for your reply!

But I still have a couple of questions:

  • does it mean that there is nothing wrong with my model from the OpenVINO perspective, and such a difference in output will happen with every model converted with --enable_ssd_gluoncv? Or is this a problem with my particular model, whose topology is not currently supported?
  • can I do something with my model to eliminate this difference in output?

Thank you
