Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all computer vision-related topics on Intel® platforms.

OpenVINO MXNet inference on FPGA: "Failure due to generic standard exception"

ZLIN5
New Contributor I

Hello,

I encountered a "Failure due to generic standard exception" error when trying to run inference on an FPGA with an MXNet model I trained on the GTSDB dataset. The deploy model runs well with the MXNet Python API and is able to detect traffic signs.

Here I attach the .params and .json files of the deploy model together with the scripts for converting MXNet models and doing inference:

https://drive.google.com/file/d/1uTjev2s8smG4hbzC_tFB96vV3dHeeI6q/view?usp=sharing
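
The inference part of the script follows the standard Inference Engine Python API flow. Roughly, it looks like the sketch below; the device string, image file name, and the exact IENetwork constructor (older releases use IENetwork.from_ir) are assumptions here, and the full script is in the archive above:

import cv2
from openvino.inference_engine import IENetwork, IEPlugin

# Load the IR produced by the Model Optimizer.
net = IENetwork(model='deploy_gtsdb_ssd_vgg16_reduced_300_510-0210.xml',
                weights='deploy_gtsdb_ssd_vgg16_reduced_300_510-0210.bin')

# Target the FPGA with CPU fallback for layers the FPGA plugin does not support.
plugin = IEPlugin(device='HETERO:FPGA,CPU')
exec_net = plugin.load(network=net)

input_blob = next(iter(net.inputs))

# One image in NCHW layout, matching the 1x3x300x510 shape given to the Model Optimizer.
image = cv2.imread('test_sign.jpg')  # placeholder file name
image = cv2.resize(image, (510, 300)).transpose((2, 0, 1)).reshape(1, 3, 300, 510)

res = exec_net.infer(inputs={input_blob: image})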

The conversion completes successfully, but it produces a warning:

/opt/intel/computer_vision_sdk_fpga_2018.1.267/deployment_tools/model_optimizer/venv/lib/python3.5/site-packages/mxnet/module/base_module.py:54: UserWarning: You created Module with Module(..., label_names=['softmax_label']) but input with name 'softmax_label' is not found in symbol.list_arguments(). Did you mean one of:

    relu4_3_scale

    data

  warnings.warn(msg)
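
For context, this warning only seems to indicate that the deploy symbol no longer contains a 'softmax_label' input, since the training loss layer was stripped when exporting for deployment. A minimal sketch of how such a deploy model is loaded with the MXNet Python API; the checkpoint prefix and epoch number are assumed from the attached file names:

import mxnet as mx

# Load the deploy symbol and weights (epoch 210 inferred from the file name).
sym, arg_params, aux_params = mx.model.load_checkpoint(
    'deploy_gtsdb_ssd_vgg16_reduced_300_510', 210)

# A deploy (inference-only) symbol has no 'softmax_label' input, so pass
# label_names=None to avoid the UserWarning emitted by mx.mod.Module.
mod = mx.mod.Module(symbol=sym, data_names=['data'], label_names=None, context=mx.cpu())
mod.bind(data_shapes=[('data', (1, 3, 300, 510))], for_training=False)
mod.set_params(arg_params, aux_params, allow_missing=True)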

 

Any ideas on how to solve this problem? Did I miss anything in converting and using the MXNet model? Thanks!

 

4 Replies
Mark_L_Intel1
Moderator

Hi Zhongyi,

This seems to be an environment problem. I tried your .params file, ran the conversion, and didn't hit any issues. You could also try the latest release, which is the one I am using.

Here is my output:

~/Downloads/mxnet$ python3 /opt/intel/computer_vision_sdk_2018.2.319/deployment_tools/model_optimizer/mo_mxnet.py --input_model deploy_gtsdb_ssd_vgg16_reduced_300_510-0210.params --mean_values [125,127,130] --input_shape [1,3,300,510]
Model Optimizer arguments:
Common parameters:
    - Path to the Input Model:     /home/aplbuild/Downloads/mxnet/deploy_gtsdb_ssd_vgg16_reduced_300_510-0210.params
    - Path for generated IR:     /home/aplbuild/Downloads/mxnet/.
    - IR output name:     deploy_gtsdb_ssd_vgg16_reduced_300_510-0210
    - Log level:     ERROR
    - Batch:     Not specified, inherited from the model
    - Input layers:     Not specified, inherited from the model
    - Output layers:     Not specified, inherited from the model
    - Input shapes:     [1,3,300,510]
    - Mean values:     [125,127,130]
    - Scale values:     Not specified
    - Scale factor:     Not specified
    - Precision of IR:     FP32
    - Enable fusing:     True
    - Enable grouped convolutions fusing:     True
    - Move mean values to preprocess section:     False
    - Reverse input channels:     False
MXNet specific parameters:
    - Load the model trained with MXNet with version lower than 1.0.0:     False
    - Prefix name for args.nd and argx.nd files:     
    - Pretrained model which will be merged with .nd files:     
    - Enable save built params file from nd files:     False
Model Optimizer version:     1.2.110.59f62983

[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: /home/aplbuild/Downloads/mxnet/./deploy_gtsdb_ssd_vgg16_reduced_300_510-0210.xml
[ SUCCESS ] BIN file: /home/aplbuild/Downloads/mxnet/./deploy_gtsdb_ssd_vgg16_reduced_300_510-0210.bin
[ SUCCESS ] Total execution time: 1.25 seconds.

 

ZLIN5
New Contributor I

Hi Mark:

Thank you for the reply! Yes, I was able to get the same result. The error I encountered actually occurs during inference; maybe I didn't make myself clear enough. Have you tried the inference script?

Thanks,

Zhongyi

ZLIN5
New Contributor I

I updated OpenVINO yesterday, and here is the exact error message of the bug:

Error: Failure due to generic standard exception => Parameter {%relu4_3_norm = fp32[1, 512, 38, 64] param(0) } should have exactly 1 user, not 2

 

Any ideas on how to solve this?

Thanks,

Zhongyi

ZLIN5
New Contributor I

The problem has been resolved after upgrading from R2 to R3.
