Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Failed to initialize Inference Engine backend 2019R3

es__we
Beginner
I am trying to run the MobileNet model on a Raspberry Pi 3B+, but I run into the problem below. The test script:

#test.py is the script for testing
import cv2 as cv
# Load the model.
net = cv.dnn.readNet('mobilenetv2-int8-sparse-v2-tf-0001.xml',
                     'mobilenetv2-int8-sparse-v2-tf-0001.bin')
# Specify target device.
net.setPreferableTarget(cv.dnn.DNN_TARGET_MYRIAD)
# Read an image.
frame = cv.imread('image1/car/1.jpg')
if frame is None:
    raise Exception('Image not found!')
# Prepare input blob and perform an inference.
blob = cv.dnn.blobFromImage(frame, size=(224, 224), ddepth=cv.CV_8U)
net.setInput(blob)
out = net.forward()
print(out)
# Save the frame to an image file.
cv.imwrite('out.png', frame)

The file information:
pi@raspberrypi:~/Downloads/openvino/open_model_zoo/models/intel/mobilenetv2-int8-sparse-v2-tf-0001 $ ls
description  mobilenetv2-int8-sparse-v2-tf-0001.bin  test.py
image1       mobilenetv2-int8-sparse-v2-tf-0001.xml
image1.zip   model.yml

pi@raspberrypi:~/Downloads/openvino/open_model_zoo/models/intel/mobilenetv2-int8-sparse-v2-tf-0001 $ source /opt/intel/openvino/bin/setupvars.sh
[setupvars.sh] OpenVINO environment initialized
pi@raspberrypi:~/Downloads/openvino/open_model_zoo/models/intel/mobilenetv2-int8-sparse-v2-tf-0001 $ python3 test.py 
Traceback (most recent call last):
  File "test.py", line 14, in <module>
    out = net.forward()
cv2.error: OpenCV(4.1.2-openvino) /home/jenkins/workspace/OpenCV/OpenVINO/build/opencv/modules/dnn/src/op_inf_engine.cpp:704: error: (-215:Assertion failed) Failed to initialize Inference Engine backend: AssertionFailed: node_stats_it != stats.end() in function 'initPlugin'

The version is l_openvino_toolkit_runtime_raspbian_p_2019.3.334.tgz. How can I solve this problem?

3 Replies
JesusE_Intel
Moderator

Hi we,

Thanks for reaching out! The mobilenetv2-int8-sparse-v2-tf-0001 model is not supported by the Myriad plugin (NCS); it is currently supported only on CPU/GPU. Alternatively, you could try a different MobileNetV2 model such as SSD MobileNet V2 COCO. You will need to convert the frozen TensorFlow model to IR format with the Model Optimizer as follows. This has to be done on a full version of the OpenVINO toolkit, since the Raspberry Pi package does not include the Model Optimizer.

python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py \
--input_model frozen_inference_graph.pb \
--data_type FP16 \
--reverse_input_channels \
--batch 1 \
--tensorflow_use_custom_operations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json \
--tensorflow_object_detection_api_pipeline_config pipeline.config
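
In case it is useful, below is a rough sketch (not an official sample) of how the converted model could then be loaded on the Raspberry Pi with the same cv.dnn API as in your test.py. The file names assume the Model Optimizer kept the default output name frozen_inference_graph, and the 300x300 input size is the usual one for SSD MobileNet V2 COCO:

# run_ssd.py - hypothetical example; file names and input size are assumptions
import cv2 as cv

# Load the IR produced by the Model Optimizer (default name follows the input model).
net = cv.dnn.readNet('frozen_inference_graph.xml',
                     'frozen_inference_graph.bin')
# Run the network on the Myriad device (Intel NCS).
net.setPreferableTarget(cv.dnn.DNN_TARGET_MYRIAD)

frame = cv.imread('image1/car/1.jpg')
if frame is None:
    raise Exception('Image not found!')

# SSD MobileNet V2 COCO expects a 300x300 input.
blob = cv.dnn.blobFromImage(frame, size=(300, 300), ddepth=cv.CV_8U)
net.setInput(blob)
out = net.forward()
print(out.shape)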

Regards,

Jesus

es__we
Beginner

Jesus E. (Intel) wrote:

Hi we,

Thanks for reaching out! The mobilenetv2-int8-sparse-v2-tf-0001 model is not supported by the Myriad plugin (NCS); it is currently supported only on CPU/GPU. Alternatively, you could try a different MobileNetV2 model such as SSD MobileNet V2 COCO. You will need to convert the frozen TensorFlow model to IR format with the Model Optimizer as follows. This has to be done on a full version of the OpenVINO toolkit, since the Raspberry Pi package does not include the Model Optimizer.

python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py \
--input_model frozen_inference_graph.pb \
--data_type FP16 \
--reverse_input_channels \
--batch 1 \
--tensorflow_use_custom_operations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json \
--tensorflow_object_detection_api_pipeline_config pipeline.config

Regards,

Jesus

Thanks, it works. However, I don't know what the result means; it seems to be an array or matrix. I have searched on the website and in the downloaded files. Where can I find some information about it? Thank you in advance.

JesusE_Intel
Moderator

Hi we,

Good question! The output is an array of summary detection information. You can find more details about the output of the SSD MobileNet V2 COCO model on GitHub:

https://github.com/opencv/open_model_zoo/blob/master/models/public/ssd_mobilenet_v2_coco/ssd_mobilenet_v2_coco.md

[image_id, label, conf, x_min, y_min, x_max, y_max]
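
For reference, here is a rough post-processing sketch, assuming out has the usual [1, 1, N, 7] shape of SSD-style models and frame is the image the blob was built from (the 0.5 confidence threshold is just an arbitrary choice):

# Hypothetical post-processing sketch; 'out' and 'frame' come from the earlier script.
h, w = frame.shape[:2]
for detection in out.reshape(-1, 7):
    image_id, label, conf, x_min, y_min, x_max, y_max = detection
    if conf > 0.5:  # arbitrary confidence threshold
        # Coordinates are relative [0, 1]; scale them to pixel positions.
        p1 = (int(x_min * w), int(y_min * h))
        p2 = (int(x_max * w), int(y_max * h))
        cv.rectangle(frame, p1, p2, (0, 255, 0), 2)
cv.imwrite('out.png', frame)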

Hope this helps!

Regards,

Jesus

 
