Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Inference Engine error: "Supported primitive descriptors list is empty for node"

Sheng_G_Intel1
Employee

Hi, I have used the Model Optimizer to convert a TensorFlow FingerNet model to OpenVINO IR.

The command is:

python3 mo_tf.py --input_model /media/data/code/fingernet/prcsnet_v1.pb --output_dir /media/data/code/fingernet --input_shape [1,152,152,1] --tensorflow_subgraph_patterns "phase_img/.*" --offload_unsupported_operations_to_tf 

Then I ran

  ./tf_call_ie_layer/build.sh

to build libtensorflow_call_layer.so. But when I call the Inference Engine, I get the error below:

./testvino -i /opt/intel/computer_vision_sdk/deployment_tools/demo/car.png -m /media/data/code/fingernet/prcsnet_v1.xml -d CPU -l /media/data/code/tensorflow/bazel-bin/tensorflow/cc/inference_engine_layer/libtensorflow_call_layer.so -pp /opt/intel/computer_vision_sdk/inference_engine/lib/ubuntu_16.04/intel64/


[ INFO ] InferenceEngine: 
    API version ............ 1.4
    Build .................. 19154
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     /opt/intel/computer_vision_sdk/deployment_tools/demo/car.png
[ INFO ] Loading plugin: FLAGS_pp=/opt/intel/computer_vision_sdk/inference_engine/lib/ubuntu_16.04/intel64/, FLAGS_d=CPU, FLAGS_l=/media/data/code/tensorflow/bazel-bin/tensorflow/cc/inference_engine_layer/libtensorflow_call_layer.so
[ INFO ] CPU (MKLDNN) extensions is loaded /media/data/code/tensorflow/bazel-bin/tensorflow/cc/inference_engine_layer/libtensorflow_call_layer.so
[ INFO ] CPU (MKLDNN) extensions is loaded /opt/intel/computer_vision_sdk/inference_engine/lib/ubuntu_16.04/intel64/
[ INFO ] 
    API version ............ 1.5
    Build .................. lnx_20181004
    Description ....... MKLDNNPlugin

[ INFO ] Loading network files
[ INFO ] Loading network files:
    /media/data/code/fingernet/prcsnet_v1.xml
    /media/data/code/fingernet/prcsnet_v1.bin
[ INFO ] Network batch size: 1, precision: FP32
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the plugin
[ INFO ] FLAGS_nthreads=0
[ ERROR ] Supported primitive descriptors list is empty for node: img_norm/Mean_1
/teamcity/work/scoring_engine_build/releases_2018_R5/src/mkldnn_plugin/mkldnn_node.cpp:232
/opt/intel/computer_vision_sdk/deployment_tools/inference_engine/include/details/ie_exception_conversion.hpp:71
 

How can I fix this error? Any suggestions?

Shubha_R_Intel
Employee

Dear Sheng:

First, I see that you are using an older release of OpenVINO. Can you kindly download 2019 R1 and try again? Many issues have been fixed in the latest release.

Thanks for using OpenVINO!

Shubha

Sheng_G_Intel1
Employee

Hi Shubha,

I have upgraded OpenVINO to 2019 R1 and still get the error:

[ INFO ] InferenceEngine: 
    API version ............ 1.6
    Build .................. custom_releases/2019/R1_c9b66a26e4d65bb986bb740e73f58c6e9e84c7c2

[Step 1/8] Parsing and validation of input args
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     /opt/intel/openvino/deployment_tools/demo/car.png
Progress: [....................] 100.00% done

[Step 2/8] Loading plugin
[ INFO ] CPU (MKLDNN) extensions is loaded /root/tensorflow/bazel-bin/tensorflow/cc/inference_engine_layer/libtensorflow_call_layer.so
[ INFO ] 
    API version ............ 1.6
    Build .................. 22443
    Description ....... MKLDNNPlugin
Progress: [....................] 100.00% done

[Step 3/8] Read IR network
[ INFO ] Loading network files
[ INFO ] Network batch size: 1, precision: FP32
Progress: [....................] 100.00% done

[Step 4/8] Configure input & output of the model
[ INFO ] Preparing output blobs
Progress: [....................] 100.00% done

[Step 5/8] Loading model to the plugin 
[ ERROR ] Supported primitive descriptors list is empty for node: img_norm/Mean
 

He__Zhenwei
Beginner

Hi, I am experiencing the same issue recently.

Is there any solution or suggestion?

I am already using the 2019 version.

My output is:

terminate called after throwing an instance of 'InferenceEngine::details::InferenceEngineException'
  what():  Supported primitive descriptors list is empty for node: Mean
/teamcity/work/scoring_engine_build/releases_2018_R5/src/mkldnn_plugin/mkldnn_node.cpp:232
/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/inference_engine/include/details/ie_exception_conversion.hpp:71
 

Shubha_R_Intel
Employee

Dear G. Sheng, it looks like you are an Intel employee.

Here is what I advise you to do:

Please clone this repo:

https://github.com/opencv/dldt

Then follow https://github.com/opencv/dldt/blob/2018/inference-engine/README.md to make a DEBUG build of the Inference Engine.

Step through your code and root-cause the problem.

It could in fact be a bug.

If you want me to do this for you I can, but you will have to wait until I have the time; external customers are the highest priority. Just look me up and email me directly.

Thanks,

Shubha

 

 

Shubha_R_Intel
Employee

Dear He, Zhenwei,

Please give me the URL where you're getting the FingerNet model. I see a FingerNet link; is that where you're downloading the model from? Are you using the same method as Sheng (above) to generate the IR?

python3 mo_tf.py --input_model /media/data/code/fingernet/prcsnet_v1.pb --output_dir /media/data/code/fingernet --input_shape [1,152,152,1] --tensorflow_subgraph_patterns "phase_img/.*" --offload_unsupported_operations_to_tf 

Please confirm and I will debug this issue for you.

Thanks,

Shubha

Sheng_G_Intel1
Employee

OK, I will follow your advice and debug this error myself. Thanks.

Sheng_G_Intel1
Employee

Hi Shubha,

I found that the error occurs at

      /media/data/code/dldt/inference-engine/src/mkldnn_plugin/mkldnn_node.cpp:382

            primitive_desc_iterator itpd = desc.createPrimitiveDescriptorIterator(engine);

  It throws a std::exception.

  Digging deeper into the function, the error is in MKLDNN:

/media/data/code/dldt/inference-engine/thirdparty/mkl-dnn/include/mkldnn.hpp:462

        error::wrap_c_api(mkldnn_primitive_attr_create(&result),
                "could not create a primitive attr");

So mkldnn_primitive_attr_create returned an error.

I don't think MKLDNN has a bug in the Pooling layer, so maybe there is something wrong in my program.

Do you have any time to debug this error? If so, I will send you the source code and the IR files.

Sheng_G_Intel1
Employee

Hi Shubha,

I found that the input data type of the Pooling layer is U8 while the output is FP32, and MKLDNN does not support this combination.

But in the XML the precision of all the layers is FP32, so the question is why the Inference Engine got the type U8. Where does it come from? (See the sketch after the IR excerpt below.)

 

                <layer id="7" name="img_norm/mul" precision="FP32" type="Power">
                        <data power="1" scale="1.0" shift="0"/>
                        <input>
                                <port id="0">
                                        <dim>1</dim>
                                        <dim>1</dim>
                                        <dim>152</dim>
                                        <dim>152</dim>
                                </port>
                        </input>
                        <output>
                                <port id="1">
                                        <dim>1</dim>
                                        <dim>1</dim>
                                        <dim>152</dim>
                                        <dim>152</dim>
                                </port>
                        </output>
                </layer>
                <layer id="8" name="img_norm/Mean_1" precision="FP32" type="Pooling">
                        <data exclude-pad="true" kernel="152,152,1" pads_begin="0,0,0" pads_end="0,0,0" pool-method="avg" rounding-type="ceil" strides="1,1,1"/>
                        <input>
                                <port id="0">
                                        <dim>1</dim>
                                        <dim>1</dim>
                                        <dim>152</dim>
                                        <dim>152</dim>
                                </port>
                        </input>
                        <output>
                                <port id="1">
                                        <dim>1</dim>
                                        <dim>1</dim>
                                        <dim>1</dim>
                                        <dim>1</dim>
                                </port>
                        </output>
                </layer>
                <layer id="9" name="img_norm/sub/negate_" precision="FP32" type="Power">
                        <data power="1" scale="-1" shift="0"/>
                        <input>
                                <port id="0">
                                        <dim>1</dim>
                                        <dim>1</dim>
                                        <dim>1</dim>
                                        <dim>1</dim>
                                </port>
                        </input>
                        <output>
                                <port id="1">
                                        <dim>1</dim>
                                        <dim>1</dim>
                                        <dim>1</dim>
                                        <dim>1</dim>
                                </port>
                        </output>
                </layer>

 

Sheng_G_Intel1
Employee

Hi Zhenwei,

I have fixed this issue. Look for "Precision::U8" in your sample code and replace it with "Precision::FP32".
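In other words, the change is roughly the following (a minimal sketch with hypothetical names, not the exact sample source):

    // Request FP32 input so the first Pooling (Mean) node sees FP32 on both
    // input and output, which MKLDNN supports.
    #include <inference_engine.hpp>

    using namespace InferenceEngine;

    void configureInputs(CNNNetwork &network) {
        for (auto &item : network.getInputsInfo()) {
            // item.second->setPrecision(Precision::U8);  // old setting: caused the U8 -> FP32 mismatch
            item.second->setPrecision(Precision::FP32);   // new setting: matches the FP32 IR
            item.second->setLayout(Layout::NCHW);
        }
    }

Note that with FP32 input precision the sample also has to fill the input blob with float values instead of raw 8-bit pixels.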

 

 


Shubha_R_Intel
Employee

Dear Sheng, 

Thank you for finding a fix and reporting it here. Carry on with OpenVINO!

Thanks, everyone.

Shubha
