Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

RPI4 (raspbian) RuntimeError: Cannot create ROIPooling layer CropAndResize/CropAndResize

MZhun
Novice

Hi everyone! While waiting for my NCS to be delivered, I converted (on the desktop) my faster_rcnn_inception TensorFlow model to OpenVINO IR (FP32 and FP16 data types). The conversion runs fine (success on the command line).

On the RPi (Raspbian + RPi 4 B) I installed the OpenVINO toolkit following this guide: https://www.pyimagesearch.com/2019/04/08/openvino-opencv-and-movidius-ncs-on-the-raspberry-pi/ (without installing a virtual environment).

Now I get the error message RuntimeError: Cannot create ROIPooling layer CropAndResize/CropAndResize id:303 when I run this:

python3 classification_sample.py -m /home/pi/Desktop/OpenVINO/fp16/frozen_inference_graph.xml -i /home/pi/Desktop/OpenVINO/recognize_samples/192.168.1.200_01_20200217081850641_MOTION_DETECTION.jpg -d CPU

 Full error:

[ INFO ] Creating Inference Engine
[ INFO ] Loading network files:
    /home/pi/Desktop/OpenVINO/fp16/frozen_inference_graph.xml
    /home/pi/Desktop/OpenVINO/fp16/frozen_inference_graph.bin
Traceback (most recent call last):
  File "classification_sample.py", line 135, in <module>
    sys.exit(main() or 0)
  File "classification_sample.py", line 65, in main
    net = IENetwork(model=model_xml, weights=model_bin)
  File "ie_api.pyx", line 980, in openvino.inference_engine.ie_api.IENetwork.__cinit__
RuntimeError: Cannot create ROIPooling layer CropAndResize/CropAndResize id:303

Please help... I don't know where to look: a bad model conversion, a bad OpenVINO installation, or something else...

P.S. Sorry for my poor English :)

4 Replies
MZhun
Novice

A little more info:

OpenVINO version is 2020.2.117

Here is the command used to convert the model (the model is a retrained faster_rcnn_inception):

(tensorflow_cpu) C:\Users\Ryzan\Desktop\modelconvert>python "C:\Program Files (x86)\IntelSWTools\openvino_2020.2.117\deployment_tools\model_optimizer\mo_tf.py" --input_model "C:\Users\Ryzan\Desktop\modelconvert\tf_model\frozen_inference_graph.pb" --transformations_config  "C:\Program Files (x86)\IntelSWTools\openvino_2020.2.117\deployment_tools\model_optimizer\extensions\front\tf\faster_rcnn_support_api_v1.15.json" --tensorflow_object_detection_api_pipeline_config "C:\Users\Ryzan\Desktop\modelconvert\tf_model\faster_rcnn_inception_v2_pets.config"

Run on Windows 10.

Here is the cmd output:

Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      C:\Users\Ryzan\Desktop\modelconvert\tf_model\frozen_inference_graph.pb
        - Path for generated IR:        C:\Users\Ryzan\Desktop\modelconvert\.
        - IR output name:       frozen_inference_graph
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         Not specified, inherited from the model
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       True
TensorFlow specific parameters:
        - Input model in text protobuf format:  False
        - Path to model dump for TensorBoard:   None
        - List of shared libraries with TensorFlow custom layers implementation:        None
        - Update the configuration file with input/output node names:   None
        - Use configuration file used to generate the model with Object Detection API:  C:\Users\Ryzan\Desktop\modelconvert\tf_model\faster_rcnn_inception_v2_pets.config
        - Use the config file:  None
Model Optimizer version:        2020.2.0-60-g0bc66e26ff
2020-06-03 16:03:07.083776: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_100.dll'; dlerror: cudart64_100.dll not found
2020-06-03 16:03:07.100631: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
[ WARNING ] Model Optimizer removes pre-processing block of the model which resizes image keeping aspect ratio. The Inference Engine does not support dynamic image size so the Intermediate Representation file is generated with the input image size of a fixed size.
Specify the "--input_shape" command line parameter to override the default shape which is equal to (600, 600).
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
The graph output nodes "num_detections", "detection_boxes", "detection_classes", "detection_scores" have been replaced with a single layer of type "Detection Output". Refer to IR catalogue in the documentation for information about this layer.
[ WARNING ]  Network has 2 inputs overall, but only 1 of them are suitable for input channels reversing.
Suitable for input channel reversing inputs are 4-dimensional with 3 channels
All inputs: {'image_tensor': [1, 3, 600, 600], 'image_info': [1, 3]}
Suitable inputs {'image_tensor': [1, 3, 600, 600]}

[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: C:\Users\Ryzan\Desktop\modelconvert\.\frozen_inference_graph.xml
[ SUCCESS ] BIN file: C:\Users\Ryzan\Desktop\modelconvert\.\frozen_inference_graph.bin
[ SUCCESS ] Total execution time: 37.92 seconds.
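The two-input warning in the log above matters later. A minimal sketch of what the second input usually has to contain, assuming the common Faster R-CNN convention that image_info holds [input_height, input_width, scale] (with scale 1.0 here, since the Model Optimizer removed the resize block):

```python
# Inputs reported by the Model Optimizer log above:
inputs = {'image_tensor': [1, 3, 600, 600], 'image_info': [1, 3]}

# Assumed semantics of image_info: [height, width, scale].
h, w = inputs['image_tensor'][2], inputs['image_tensor'][3]
image_info = [h, w, 1.0]  # scale 1.0: the Preprocessor resize was removed

print(image_info)  # [600, 600, 1.0]
```

Any sample that only feeds image_tensor will trip over this second input one way or another.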

 

Looks good; the .bin and .xml files were generated successfully. Today I tried to run inference on Windows. I compiled some samples to run inference:

command: hello_classification.exe "C:\Users\Ryzan\Desktop\modelconvert\fp32\frozen_inference_graph.xml" 1.jpg CPU

Output: Size of dims(2) and format(NHWC) are inconsistent. 

 Next try:

command: classification_sample_async.exe -m "C:\Users\Ryzan\Desktop\modelconvert\fp32\frozen_inference_graph.xml" -i 1.jpg

 output: 

[ INFO ] InferenceEngine:
        API version ............ 2.1
        Build .................. 42025
        Description ....... API
[ INFO ] Parsing input parameters
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     1.jpg
[ INFO ] Creating Inference Engine
        CPU
        MKLDNNPlugin version ......... 2.1
        Build ........... 42025

[ INFO ] Loading network files
[ INFO ] Preparing input blobs
[ ERROR ] Sample supports topologies with 1 input only

Next, a try in Python:

command: (tensorflow_cpu) C:\Users\Ryzan\Desktop\modelconvert>object_detection_sample_ssd.py -m "C:\Users\Ryzan\Desktop\modelconvert\fp32\frozen_inference_graph.xml" -i "C:\TensorFLow\workspace\training_demo2(colabs)\images\train\pigs_2020_6_16_189679_gvri.jpg" -d CPU 

Output:

[ INFO ] Loading Inference Engine
[ INFO ] Loading network files:
        C:\Users\Ryzan\Desktop\modelconvert\fp32\frozen_inference_graph.xml
        C:\Users\Ryzan\Desktop\modelconvert\fp32\frozen_inference_graph.bin
[ INFO ] Device info:
        CPU
        MKLDNNPlugin version ......... 2.1
        Build ........... 42025
Traceback (most recent call last):
  File "C:\Users\Ryzan\Desktop\modelconvert\object_detection_sample_ssd.py", line 188, in <module>
    sys.exit(main() or 0)
  File "C:\Users\Ryzan\Desktop\modelconvert\object_detection_sample_ssd.py", line 83, in main
    n, c, h, w = net.inputs[input_blob].shape
ValueError: not enough values to unpack (expected 4, got 2) 
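This ValueError follows directly from the two-input IR: the sample assumes a single 4-D input and unpacks its shape into four values. A sketch reproducing the failure with the shapes from the Model Optimizer log, plus a hypothetical workaround (picking the 4-D input explicitly rather than whatever blob the sample happens to select):

```python
# The converted Faster R-CNN IR has two inputs (per the Model Optimizer log):
inputs = {'image_tensor': [1, 3, 600, 600], 'image_info': [1, 3]}

# The sample does:  n, c, h, w = net.inputs[input_blob].shape
# If the blob it picks is the 2-D 'image_info', unpacking fails:
try:
    n, c, h, w = inputs['image_info']
except ValueError as err:
    print(err)  # not enough values to unpack (expected 4, got 2)

# Hypothetical workaround: select the 4-D image input explicitly.
image_blob = next(name for name, shape in inputs.items() if len(shape) == 4)
n, c, h, w = inputs[image_blob]
```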

Intel, please help me understand where the bug is. Four attempts at running inference gave me four different errors :)

RandallMan_B_Intel

Hi Maksim,

Thanks for reaching out.

You can't use the CPU device on a Raspberry Pi because it has an ARM processor, and the OpenVINO CPU plugin will not run inference on ARM. The Raspberry Pi board itself is just a host; it needs an NCS stick plugged in to do the actual inference, and the NCS is the device that performs the OpenVINO inference.

python3 classification_sample.py -m /home/pi/Desktop/OpenVINO/fp16/frozen_inference_graph.xml -i /home/pi/Desktop/OpenVINO/recognize_samples/192.168.1.200_01_20200217081850641_MOTION_DETECTION.jpg -d CPU

You need to use a Myriad device (NCS2) and specify the flag -d MYRIAD in the command. Check the official guide to Install OpenVINO toolkit for Raspbian* OS.

Also, note that Faster R-CNN Inception V2 COCO is an object detection model and is not supported by a classification sample/demo; you need to use an object detection sample/demo instead.

Did you train the model yourself or use a pre-trained model from here?

The commands require different flags depending on the sample; check this object detection demo for additional information.
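Putting both suggestions together, the Raspberry Pi invocation would look roughly like the sketch below (paths taken from the original post; the sample name and flags are assumptions based on OpenVINO's object detection SSD sample, and the command is only echoed here since actually running it requires an attached NCS2):

```shell
# Sketch combining both fixes: an object detection sample and -d MYRIAD.
MODEL=/home/pi/Desktop/OpenVINO/fp16/frozen_inference_graph.xml
IMAGE=/home/pi/Desktop/OpenVINO/recognize_samples/192.168.1.200_01_20200217081850641_MOTION_DETECTION.jpg
CMD="python3 object_detection_sample_ssd.py -m $MODEL -i $IMAGE -d MYRIAD"
echo "$CMD"
```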

Regards,

Randall.

MZhun
Novice

Hi Randall,

Thanks for your answer! I trained the model myself, and with your help I found a solution.

First, as you suggested, I tried an object detection sample. I compiled all the demos (on Windows) with "C:\Program Files (x86)\IntelSWTools\openvino_2020.2.117\deployment_tools\inference_engine\demos\build_demos_msvc.bat" and ran C:\Users\Ryzan\Documents\Intel\OpenVINO\omz_demos_build\intel64\Release\object_detection_demo_faster_rcnn.exe

but got the error: [ ERROR ] Cannot add output! Layer bbox_pred wasn't found!

The solution was to run object_detection_sample_ssd.exe, which can be compiled with "C:\Program Files (x86)\IntelSWTools\openvino_2020.2.117\deployment_tools\inference_engine\samples\cpp\build_samples_msvc.bat". It works (I found this solution in that topic).

Thank you for your help!

Now I will try to start inference on the RPi 4 B in C++ and Python.

By the way, the compiled C++ example works, but the Python example (object_detection_sample_ssd.py) still fails with: "ValueError: not enough values to unpack (expected 4, got 2)"

RandallMan_B_Intel

Hi Maksim,

Glad to hear that it works for you.

Please check the TensorFlow Object Detection API Custom Input Shape documentation and this thread.

Hope this helps!

Regards,

Randall.
