348 Views

MYRIAD loader not accepting Model Optimizer output - "unsupported layer type Resample"

I'm using OpenVINO toolkit 2018.4.420, on Ubuntu 16.04.

I downloaded the model referenced on the "Converting a TensorFlow Model" documentation page, named:
   ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03

Model Optimizer ran successfully, with command:

python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py \
    --input_model frozen_inference_graph.pb \
    --tensorflow_use_custom_operations_config /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json \
    --tensorflow_object_detection_api_pipeline_config pipeline.config \
    --data_type FP16

I used --data_type FP16 as the Neural Compute Stick 2 seems to require that, while the CPU engine requires FP32.

But then running:
  ./object_detection_demo_ssd_async -i "cam"  -m /home/rich/models_dir/frozen_inference_graph.xml    -d MYRIAD

I get this error, which seems to reject the mo_tf.py output:

 ./object_det_ssd.sh
InferenceEngine:
    API version ............ 1.4
    Build .................. 17328
[ INFO ] Parsing input parameters
[ INFO ] Reading input
[ INFO ] Loading plugin

    API version ............ 1.4
    Build .................. 17328
    Description ....... myriadPlugin
[ INFO ] Loading network files
[ INFO ] Batch size is forced to  1.
[ INFO ] Checking that the inputs are as the demo expects
[ INFO ] Checking that the outputs are as the demo expects
[ INFO ] Loading model to the plugin
[ ERROR ] Cannot convert layer "Resample_" due to unsupported layer type "Resample"


How do I use Model Optimizer to generate output from the Intel supported models that the Neural Compute Stick 2 will accept?

Thanks!
Rich

7 Replies
J__Niko
Beginner

I have the same problem. It seems that either NCS2 has a bug or OpenVINO has incomplete documentation regarding the Resample layer.

There was no problem when I converted ssd_mobilenet_v2_coco for NCS2, as that model does not include Resample layers.
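As a quick sanity check before loading a model on MYRIAD, you can count the layer types in the converted IR .xml yourself. This is a minimal sketch, not from the thread; it assumes the 2018-era IR layout (`<net><layers><layer type="..."/></layers></net>`) and uses only the Python standard library:

```python
# Minimal sketch: count layer types in an OpenVINO IR .xml so you can spot
# "Resample" (or other suspect types) before deploying to MYRIAD.
# Assumes the 2018-era IR schema: <net><layers><layer type="..."/></layers></net>.
import xml.etree.ElementTree as ET
from collections import Counter

def layer_types(ir_xml_path):
    """Return a Counter mapping layer type -> occurrence count in an IR .xml."""
    root = ET.parse(ir_xml_path).getroot()
    return Counter(layer.get("type") for layer in root.iter("layer"))

# Example usage (filename from the original post):
#   counts = layer_types("frozen_inference_graph.xml")
#   if "Resample" in counts:
#       print("Model contains Resample layers; MYRIAD may reject it")
```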

[setupvars.sh] OpenVINO environment initialized
[ INFO ] Loading network files:
    yolov3/yolov3.xml
    yolov3/yolov3.bin
[ INFO ] Preparing input blobs
[ WARNING ] Image test.jpg is resized from (482, 848) to (608, 608)
[ INFO ] Batch size is 1
[ INFO ] Loading model to the plugin
Traceback (most recent call last):
  File "object_detection_demo_yolov3.py", line 247, in <module>
    sys.exit(main() or 0)
  File "object_detection_demo_yolov3.py", line 187, in main
    exec_net = plugin.load(network=net)
  File "ie_api.pyx", line 305, in inference_engine.ie_api.IEPlugin.load
  File "ie_api.pyx", line 318, in inference_engine.ie_api.IEPlugin.load
RuntimeError: Cannot convert layer "detector/yolo-v3/ResizeNearestNeighbor" due to unsupported layer type "Resample" /teamcity/work/scoring_engine_build/releases_openvino-2018-r4/ie_bridges/python/inference_engine/ie_api_impl.cpp:260

After a bit of research I found this: https://software.intel.com/en-us/blogs/2018/05/14/how-to-port-your-application-from-intelr-computer-...

"Extension mechanism has also been changed. At the samples folder, you can find the library with "standard extensions" for CPU - layers that are not included in the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) plugin like ArgMax, Resample, PriorBox (check full standard extensions list here). To use these layers, please include ext_list.hpp and call the extension from the list: ..."

So I proceeded to try passing different libraries to object_detection_demo_yolov3.py via the -l option. Here are the libraries I have tried:

$ find /opt/intel/ -iname "*.so" -exec grep "Resample" {} \; | awk '{ print $3 }'

/opt/intel/computer_vision_sdk_2018.4.420/opencv/lib/libopencv_dnn.so
/opt/intel/computer_vision_sdk_2018.4.420/deployment_tools/inference_engine/lib/centos_7.4/intel64/libcpu_extension_avx2.so
/opt/intel/computer_vision_sdk_2018.4.420/deployment_tools/inference_engine/lib/centos_7.4/intel64/libclDNNPlugin.so
/opt/intel/computer_vision_sdk_2018.4.420/deployment_tools/inference_engine/lib/centos_7.4/intel64/libinference_engine.so
/opt/intel/computer_vision_sdk_2018.4.420/deployment_tools/inference_engine/lib/centos_7.3/intel64/libcpu_extension_avx2.so
/opt/intel/computer_vision_sdk_2018.4.420/deployment_tools/inference_engine/lib/centos_7.3/intel64/libclDNNPlugin.so
/opt/intel/computer_vision_sdk_2018.4.420/deployment_tools/inference_engine/lib/centos_7.3/intel64/libinference_engine.so
/opt/intel/computer_vision_sdk_2018.4.420/deployment_tools/inference_engine/lib/ubuntu_16.04/intel64/libcpu_extension_avx2.so
/opt/intel/computer_vision_sdk_2018.4.420/deployment_tools/inference_engine/lib/ubuntu_16.04/intel64/libclDNNPlugin.so
/opt/intel/computer_vision_sdk_2018.4.420/deployment_tools/inference_engine/lib/ubuntu_16.04/intel64/libinference_engine.so
/opt/intel/computer_vision_sdk_2018.4.420/deployment_tools/inference_engine/lib/ubuntu_16.04/intel64/libcpu_extension_sse4.so

$ find ~/inference_engine_samples/ -iname "*.so" -exec grep "Resample" {} \; | awk '{ print $3 }'

~/inference_engine_samples/intel64/Release/lib/libcpu_extension.so
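For reference, this is roughly how such a library would be passed to the demo. A sketch only (paths are the sample build locations listed above); note that -l loads *CPU* custom kernels, so it cannot add layer support to the MYRIAD plugin:

```shell
# Hedged sketch: run the demo on CPU with the CPU extension library loaded.
# Paths assume the sample build location shown above.
python3 object_detection_demo_yolov3.py \
    -m yolov3/yolov3.xml \
    -i test.jpg \
    -d CPU \
    -l ~/inference_engine_samples/intel64/Release/lib/libcpu_extension.so
# -l loads CPU custom kernels only; with -d MYRIAD it does not work around
# the "unsupported layer type Resample" error.
```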


It still throws the same error message even though "Resample" is found in those libraries (though that was only a string search).


Resample is listed as supported for Myriad and "Supported**" for CPU ("**- support is implemented via custom kernels mechanism."). I ran the script with "-d MYRIAD".


https://software.intel.com/en-us/articles/OpenVINO-ModelOptimizer#inpage-nav-5-4-5

This also seems to imply that it should work with OpenVINO.


If anyone manages to find a solution, please do not forget to share it.

Severine_H_Intel
Employee

Dear Richard, 

I could reproduce the issue, I will escalate it to our dev team. 

Best, 

Severine

Severine_H_Intel
Employee

Dear Richard and Niko, 

Our documentation is not correct regarding support for the Resample layer in the Movidius plugin. It is in fact not supported, which explains your error. I will keep you updated when we support this layer on Movidius.

Best, 

Severine

Vurc
Beginner

Hi,

I have successfully run mobilenet_v1 (which also has Resample layers) on NCS1 with OpenVINO.

How did it execute if the layer is not supported?

Thanks in advance.

wang__xudong
Beginner

Hi Habert, Severine,

Any solution to this? The documentation says YOLOv3 is supported. Hope you can provide us a solution ASAP :).


Best

om77
New Contributor I

Hi,

>>Vurc

I looked through our mobilenet_v1 and can't find a Resample layer.

>>Severine

Is there any estimate for when the Resample layer will be supported on NCS/NCS2?

om77
New Contributor I

Hm,

Finally I was able to execute our object detection network (with several Resample layers) using OpenVINO R5 on NCS2.

In my case the stick hung because of some memory limit: the network started working after I reduced the input image resolution (and the input shapes accordingly).
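The input shape reduction mentioned above is done at conversion time. A hedged sketch of the Model Optimizer invocation (the 416x416 shape is only an illustrative example, not from the thread; pick whatever your network and stick tolerate):

```shell
# Hedged sketch: re-run Model Optimizer with a smaller input shape so the
# converted network fits within NCS2 memory. 416x416 is an example value only.
python3 mo_tf.py \
    --input_model frozen_inference_graph.pb \
    --data_type FP16 \
    --input_shape [1,416,416,3]
```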
