Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Problem with converting a mxnet model to IR, using outputs from multiple layers

Chen_S_
Beginner
332 Views

Hi there,

I am trying to output both the result of intermediate layers and the result of the final layer of a network. My network is trained using mxnet, and then converted to IR using the following cmd line in Ubuntu-16.04:
python3 mo_mxnet.py --input_model simple-multi-0000.params --input_shape [1,3,128,128] --output pool1_fwd,pool2_fwd,fc_fwd

The net structure is as follows:
data -> conv1 -> relu1 -> pool1 -> conv2 -> relu2 -> pool2 -> fc
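For reference, the intermediate shapes for an input of [1,3,128,128] can be worked out by hand. A minimal sketch, assuming 3x3 convolutions with padding 1 and stride 1, and 2x2 max pooling with stride 2 (the actual kernel sizes are not stated above, so these are illustrative assumptions only):

```python
# Hypothetical shape propagation for the net:
# data -> conv1 -> relu1 -> pool1 -> conv2 -> relu2 -> pool2 -> fc
# Assumed parameters: 3x3 convs (pad 1, stride 1), 2x2 pools (stride 2).

def conv_hw(h, w, k=3, pad=1, stride=1):
    # Standard convolution output-size formula.
    return ((h + 2 * pad - k) // stride + 1,
            (w + 2 * pad - k) // stride + 1)

def pool_hw(h, w, k=2, stride=2):
    # Standard pooling output-size formula (no padding).
    return ((h - k) // stride + 1, (w - k) // stride + 1)

h, w = 128, 128
h, w = conv_hw(h, w)   # conv1: padding preserves 128x128
h, w = pool_hw(h, w)   # pool1: 64x64
print("pool1_fwd spatial size:", (h, w))
h, w = conv_hw(h, w)   # conv2: 64x64
h, w = pool_hw(h, w)   # pool2: 32x32
print("pool2_fwd spatial size:", (h, w))
```

Under these assumptions pool1_fwd and pool2_fwd are 64x64 and 32x32 respectively; the point is simply that all three requested outputs are well-defined tensors in the same graph.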

The following error happens when converting the model to IR:

[ ERROR ]  Cannot infer shapes or values for node "conv2_fwd".
[ ERROR ]  0
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <function mxnet_conv2d_infer at 0x7f60cfe771e0>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.

The model converts successfully when only one output is specified, i.e. pool1_fwd, pool2_fwd, or fc_fwd alone. It seems that mo_mxnet.py stops parsing the graph when it reaches the first specified output node (pool1_fwd in this case), so the nodes behind it are never inferred.
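The expected behaviour, by contrast, would be to keep every node that any of the requested outputs depends on, not just the nodes up to the first output. A minimal sketch of that idea, using a plain predecessor dict as a stand-in for the real Model Optimizer graph (node names taken from the structure above):

```python
from collections import deque

# Toy predecessor map mirroring the net:
# data -> conv1 -> relu1 -> pool1 -> conv2 -> relu2 -> pool2 -> fc
preds = {
    "conv1_fwd": ["data"],
    "relu1_fwd": ["conv1_fwd"],
    "pool1_fwd": ["relu1_fwd"],
    "conv2_fwd": ["pool1_fwd"],
    "relu2_fwd": ["conv2_fwd"],
    "pool2_fwd": ["relu2_fwd"],
    "fc_fwd":    ["pool2_fwd"],
}

def needed_nodes(outputs):
    """Collect every node reachable backwards from ANY requested output,
    instead of stopping at the first output node encountered."""
    seen, queue = set(), deque(outputs)
    while queue:
        node = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        queue.extend(preds.get(node, []))
    return seen

# Requesting pool1_fwd must not prune conv2_fwd when pool2_fwd and
# fc_fwd are requested as well:
kept = needed_nodes(["pool1_fwd", "pool2_fwd", "fc_fwd"])
print("conv2_fwd kept:", "conv2_fwd" in kept)
```

With this backward traversal conv2_fwd stays in the graph, so its shape inference would run with a valid input shape from pool1_fwd.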

In addition, mo/pipeline/mx.py has a bug: it cannot process the output names correctly. The function "driver" in mx.py calls "add_output_ops" from mo.front.extractor, passing the variable "outputs" as the second argument. "add_output_ops" expects a dict as its second argument, but "outputs" is a list. To fix this, the code was modified as follows:

    _outputs = output_user_data_repack(graph, outputs)
    graph, output_op_nodes = add_output_ops(graph, _outputs)

i.e. variable "outputs" is converted to a dict using function "output_user_data_repack" and then passed to "add_output_ops".
i.e. the variable "outputs" is first converted to a dict with "output_user_data_repack" and then passed to "add_output_ops".
The Model Optimizer works with a single output name after the modification above, but outputting both an intermediate layer and any later layer that depends on it is still impossible, due to the problem reported above. I hope the community can fix this soon.
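The type mismatch itself is easy to reproduce in isolation. A minimal sketch with simplified stand-ins for both functions; the dict shape used here is an assumption for illustration only, the real output_user_data_repack in MO returns a richer structure:

```python
def add_output_ops(graph, outputs):
    # Simplified stand-in: the real function iterates dict items,
    # so passing a plain list of names breaks its assumptions.
    assert isinstance(outputs, dict), "expects a dict keyed by node name"
    return graph, list(outputs.keys())

def output_user_data_repack(graph, output_names):
    # Simplified stand-in: repack the CLI list of output names into
    # the dict form that add_output_ops expects (assumed shape).
    return {name: [{"port": 0}] for name in output_names}

graph = object()  # placeholder for the MO graph object
outputs = ["pool1_fwd", "pool2_fwd", "fc_fwd"]  # as parsed from --output

# Before the fix, add_output_ops(graph, outputs) fails on the list.
# After the fix:
_outputs = output_user_data_repack(graph, outputs)
graph, output_op_nodes = add_output_ops(graph, _outputs)
print(output_op_nodes)
```

This mirrors the two-line patch above: repack first, then hand the dict to add_output_ops.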

1 Reply
Shubha_R_Intel
Employee

This question was addressed in the following GitHub issue:

dldt github
