Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

[2019R1] Unsupported primitive

rudakov__mikhail
Beginner
1,075 Views

I'm trying to run a slightly modified version of a U-Net converted from Keras to TensorFlow. Conversion to IR runs flawlessly, but when I try to load it in my sample program, I get this error:

[ INFO ] InferenceEngine:
        API version ............ 1.4
        Build .................. 19154
Loading plugin

        API version ............ 1.5
        Build .................. win_20181005
        Description ....... MKLDNNPlugin
[ INFO ] Loading network files:
        h:/=bin-fcnet/bin.xml
        h:/=bin-fcnet/bin.bin
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the plugin
[ ERROR ] Unsupported primitive of type: Interp name: up_sampling2d_4/ResizeBilinear
..\src\mkldnn_plugin\mkldnn_node.cpp:175

 

Indeed, I have such an operation in my model graph. But, as far as I know, both ResizeBilinear and Upsample2D should be supported by OpenVINO. What should I do to get rid of this error?

14 Replies
Shubha_R_Intel
Employee

Dear Mikhail, actually, according to the document below, ResizeBilinear and Upsample2D are supported. Search for ResizeBilinear and Upsample and you will find them. If Model Optimizer successfully generated the IR but your inference fails with an Unsupported Primitive error, then this feels like an Inference Engine bug.

https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Supported_Frameworks_Layers.html

I have PM'd you. Please send me your frozen TensorFlow model and the exact MO command you used so that I can reproduce your issue. Also send me your inference script. Can you kindly try one of our OpenVINO samples [for inference], if any of them are applicable?

Thanks,

Shubha

 

 

om77
New Contributor I

Hi,

To run this on the CPU device, an additional extension library may still be required (libcpu_extension_sse4.so, libcpu_extension_avx2.so, or libcpu_extension_avx512.so), e.g.:

-d CPU -l /opt/intel/openvino/deployment_tools/inference_engine/lib/intel64/libcpu_extension_sse4.so
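The same extension can also be loaded programmatically. A minimal sketch of picking the right library for the host CPU, assuming the 2019R1 directory layout quoted above (the commented `IECore.add_extension` call is the 2019-era Python API; adjust paths for your install):

```python
# Choose the most specific CPU extension library the host supports,
# from the three variants listed above (sse4 / avx2 / avx512).
import os

# Default 2019R1 Linux location, as in the -l example above (an assumption).
EXT_DIR = "/opt/intel/openvino/deployment_tools/inference_engine/lib/intel64"

def cpu_extension_path(has_avx512: bool, has_avx2: bool,
                       ext_dir: str = EXT_DIR) -> str:
    """Return the path of the best-matching CPU extension library."""
    if has_avx512:
        name = "libcpu_extension_avx512.so"
    elif has_avx2:
        name = "libcpu_extension_avx2.so"
    else:
        name = "libcpu_extension_sse4.so"  # SSE4 is the baseline
    return os.path.join(ext_dir, name)

# Usage with the Python Inference Engine API (sketch only):
# from openvino.inference_engine import IECore
# ie = IECore()
# ie.add_extension(cpu_extension_path(has_avx512=False, has_avx2=True), "CPU")
```

The Interp layer that MKLDNNPlugin rejects lives in exactly this extension, which is why loading it makes the "Unsupported primitive" error go away on CPU.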

rudakov__mikhail
Beginner

I've attached the frozen model. The command I used for conversion is below:

C:\Intel\computer_vision_sdk\deployment_tools\model_optimizer\mo.py --input_model "H:\=bin-fcnet\bin.pb" --output_dir H:\=bin-fcnet\ --input_shape [1,256,256,1]

I'll soon try to run one of the sample models (or the model I previously used to test OpenVINO) and tell you the results.

rudakov__mikhail
Beginner

I followed the instructions from this page: https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Slim_Library_Models.html

I managed to convert the sample Inception model into IR, but loading it results in the following error:

[ INFO ] InferenceEngine:
        API version ............ 1.4
        Build .................. 19154
Loading plugin

        API version ............ 1.5
        Build .................. win_20181005
        Description ....... MKLDNNPlugin
[ INFO ] Loading network files:
        H:\models\inception_v1_2016_08_28.tar\ir\inception_v1_inference_graph.xml
        H:\models\inception_v1_2016_08_28.tar\ir\inception_v1_inference_graph.bin
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ ERROR ] Can't find a sigmoid layer in the topology

I've attached the converted IR files to this post.

rudakov__mikhail
Beginner

Also, I'm running Windows 10 x64 if it matters.

om77
New Contributor I

I'm betting on some Model Optimizer issue on Windows.

I'm on Linux with OpenVINO 2019R1 and observed errors during inference with the attached inception_v1_inference_graph.xml.

I also tried converting the source bin.pb on my Linux machine first, and then it runs fine for inference.

om77
New Contributor I

Looking at attached file inception_v1_inference_graph.xml:

        <layer id="0" name="input" precision="FP32" type="Input">
            <output>
                <port id="0">
                    <dim>1</dim>
                    <dim>3</dim>
                    <dim>224</dim>
                    <dim>224</dim>
                </port>
            </output>
        </layer>

That's not the input_shape [1,256,256,1] that was specified during model conversion here.

So on my Linux machine it triggers the error:

ValueError: cannot reshape array of size 50176 into shape (1,3,224,224).
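The mismatch is easy to reproduce outside the Inference Engine. A minimal NumPy sketch (shapes taken from the IR and the error above; the rest is illustrative): a 224×224 single-channel buffer has 50176 values, which cannot fill the IR's declared NCHW input of 1×3×224×224 = 150528 values.

```python
# Reproduce the reshape failure from the error message above.
import numpy as np

data = np.zeros(224 * 224, dtype=np.float32)   # 50176 elements (grayscale)

try:
    data.reshape((1, 3, 224, 224))             # IR expects 150528 elements
except ValueError as e:
    # e.g. "cannot reshape array of size 50176 into shape (1,3,224,224)"
    print(e)

# A buffer matching the IR's declared input shape reshapes fine:
ok = np.zeros(1 * 3 * 224 * 224, dtype=np.float32).reshape((1, 3, 224, 224))
```

In other words, the sample fed a single-channel image into an IR generated with a 3-channel 224×224 input, which points at the two models' IRs having been mixed up rather than at the reshape itself.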

Shubha_R_Intel
Employee

Dear Mikhail, did you run one of the OpenVINO samples? How are you accomplishing inference?

Looking forward to hearing more. Thanks for your attached *.zip containing the IR, but I still cannot help you until I know how you performed inference.

Thanks a lot !

Shubha

rudakov__mikhail
Beginner

Shubha R. (Intel) wrote:

Dear Mikhail, did you run one of the OpenVINO samples? How are you accomplishing inference?

Looking forward to hearing more. Thanks for your attached *.zip containing the IR, but I still cannot help you until I know how you performed inference.

Thanks a lot !

Shubha

Yes, I'm running one of the OpenVINO samples (as far as I remember, it's the Benchmark), modified to take an arbitrary model.

Shubha_R_Intel
Employee

Dear rudakov, mikhail,

Please send me that modified sample, either via PM or attach it to this forum post. I have PM'd you.

Thanks,

Shubha

rudakov__mikhail
Beginner

Ok, here it is. Thank you in advance.

Shubha_R_Intel
Employee

Dearest rudakov, mikhail,

Thank you for attaching everything. I will reproduce your error today and report back on this forum.

Shubha

 

Shubha_R_Intel
Employee

Dearest rudakov, mikhail,

I reproduced your bug on inception_v1. Thank you kindly for all of your cooperation. I really appreciate it. I have filed a bug on your behalf and I will keep you posted here.

Sincerely,

Shubha

Delfrate__Jacques

Hi, I have a similar issue,

I want to load my own pre-trained model into the CPU plugin; however, one of my last layers is resizeBilinear, which throws an exception. Any updates?

Thank you.
