Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Unexpected CNNNetwork format with NCS2 & OpenVINO OpenCV MobileNet model prediction (forward)

Fernandez__Marcos

Hi,

I created a MobileNet network with Keras as follows:

base_model = keras.applications.mobilenet.MobileNet(input_shape=(224, 224, 3), alpha=1.0, include_top=True, weights=None, depth_multiplier=1, classes=4)

# make every layer trainable
for layer in base_model.layers:
    layer.trainable = True

# return the constructed network architecture (this sits inside my model-building function)
return base_model

Then I trained the model with the Keras Adam optimizer and ran some predictions on my PC. I saved the model as an .h5 file, and this network works fine on my PC.

I then froze the graph to convert the .h5 model to .pb and .pbtxt:

frozen_graph = FreezeSession.convert(sess,  # sess = K.get_session()
                                     output_names=[out.op.name for out in model_to_pb.outputs])
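
For reference, this call wraps the kind of helper described in the well-known TF1 freeze_session recipe (the sketch below is illustrative, not my exact FreezeSession code):

import tensorflow as tf
from tensorflow.python.framework import graph_util, graph_io
from keras import backend as K

def freeze_session(session, output_names):
    # Freeze the current Keras/TF1 session into a constant GraphDef:
    # every variable is replaced by a constant so the graph is self-contained.
    graph = session.graph
    with graph.as_default():
        input_graph_def = graph.as_graph_def()
        frozen_graph = graph_util.convert_variables_to_constants(
            session, input_graph_def, output_names or [])
    return frozen_graph

# Usage (model_to_pb is the loaded Keras model):
# frozen = freeze_session(K.get_session(),
#                         output_names=[out.op.name for out in model_to_pb.outputs])
# graph_io.write_graph(frozen, ".", "tf_model.pb", as_text=False)
# graph_io.write_graph(frozen, ".", "tf_model.pbtxt", as_text=True)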

The result is:

_________________________________________________________________
Use `tf.compat.v1.graph_util.extract_sub_graph`
conv_pw_2_bn (BatchNormaliza (None, 56, 56, 128)       512
conv_pw_2_relu (ReLU)        (None, 56, 56, 128)       0
conv_dw_3 (DepthwiseConv2D)  (None, 56, 56, 128)       1152
conv_dw_3_bn (BatchNormaliza (None, 56, 56, 128)       512
conv_dw_3_relu (ReLU)        (None, 56, 56, 128)       0
conv_pw_3 (Conv2D)           (None, 56, 56, 128)       16384
conv_pw_3_bn (BatchNormaliza (None, 56, 56, 128)       512
conv_pw_3_relu (ReLU)        (None, 56, 56, 128)       0
conv_pad_4 (ZeroPadding2D)   (None, 57, 57, 128)       0
conv_dw_4 (DepthwiseConv2D)  (None, 28, 28, 128)       1152
conv_dw_4_bn (BatchNormaliza (None, 28, 28, 128)       512
conv_dw_4_relu (ReLU)        (None, 28, 28, 128)       0
conv_pw_4 (Conv2D)           (None, 28, 28, 256)       32768
conv_pw_4_bn (BatchNormaliza (None, 28, 28, 256)       1024
conv_pw_4_relu (ReLU)        (None, 28, 28, 256)       0
conv_dw_5 (DepthwiseConv2D)  (None, 28, 28, 256)       2304
conv_dw_5_bn (BatchNormaliza (None, 28, 28, 256)       1024
conv_dw_5_relu (ReLU)        (None, 28, 28, 256)       0
conv_pw_5 (Conv2D)           (None, 28, 28, 256)       65536
conv_pw_5_bn (BatchNormaliza (None, 28, 28, 256)       1024
conv_pw_5_relu (ReLU)        (None, 28, 28, 256)       0
conv_pad_6 (ZeroPadding2D)   (None, 29, 29, 256)       0
conv_dw_6 (DepthwiseConv2D)  (None, 14, 14, 256)       2304
conv_dw_6_bn (BatchNormaliza (None, 14, 14, 256)       1024
conv_dw_6_relu (ReLU)        (None, 14, 14, 256)       0
conv_pw_6 (Conv2D)           (None, 14, 14, 512)       131072
conv_pw_6_bn (BatchNormaliza (None, 14, 14, 512)       2048
conv_pw_6_relu (ReLU)        (None, 14, 14, 512)       0
conv_dw_7 (DepthwiseConv2D)  (None, 14, 14, 512)       4608
conv_dw_7_bn (BatchNormaliza (None, 14, 14, 512)       2048
conv_dw_7_relu (ReLU)        (None, 14, 14, 512)       0
conv_pw_7 (Conv2D)           (None, 14, 14, 512)       262144
conv_pw_7_bn (BatchNormaliza (None, 14, 14, 512)       2048
conv_pw_7_relu (ReLU)        (None, 14, 14, 512)       0
conv_dw_8 (DepthwiseConv2D)  (None, 14, 14, 512)       4608
conv_dw_8_bn (BatchNormaliza (None, 14, 14, 512)       2048
conv_dw_8_relu (ReLU)        (None, 14, 14, 512)       0
conv_pw_8 (Conv2D)           (None, 14, 14, 512)       262144
conv_pw_8_bn (BatchNormaliza (None, 14, 14, 512)       2048
conv_pw_8_relu (ReLU)        (None, 14, 14, 512)       0
conv_dw_9 (DepthwiseConv2D)  (None, 14, 14, 512)       4608
conv_dw_9_bn (BatchNormaliza (None, 14, 14, 512)       2048
conv_dw_9_relu (ReLU)        (None, 14, 14, 512)       0
conv_pw_9 (Conv2D)           (None, 14, 14, 512)       262144
conv_pw_9_bn (BatchNormaliza (None, 14, 14, 512)       2048
conv_pw_9_relu (ReLU)        (None, 14, 14, 512)       0
conv_dw_10 (DepthwiseConv2D) (None, 14, 14, 512)       4608
conv_dw_10_bn (BatchNormaliz (None, 14, 14, 512)       2048
conv_dw_10_relu (ReLU)       (None, 14, 14, 512)       0
conv_pw_10 (Conv2D)          (None, 14, 14, 512)       262144
conv_pw_10_bn (BatchNormaliz (None, 14, 14, 512)       2048
conv_pw_10_relu (ReLU)       (None, 14, 14, 512)       0
conv_dw_11 (DepthwiseConv2D) (None, 14, 14, 512)       4608
conv_dw_11_bn (BatchNormaliz (None, 14, 14, 512)       2048
conv_dw_11_relu (ReLU)       (None, 14, 14, 512)       0
conv_pw_11 (Conv2D)          (None, 14, 14, 512)       262144
conv_pw_11_bn (BatchNormaliz (None, 14, 14, 512)       2048
conv_pw_11_relu (ReLU)       (None, 14, 14, 512)       0
conv_pad_12 (ZeroPadding2D)  (None, 15, 15, 512)       0
conv_dw_12 (DepthwiseConv2D) (None, 7, 7, 512)         4608
conv_dw_12_bn (BatchNormaliz (None, 7, 7, 512)         2048
conv_dw_12_relu (ReLU)       (None, 7, 7, 512)         0
conv_pw_12 (Conv2D)          (None, 7, 7, 1024)        524288
conv_pw_12_bn (BatchNormaliz (None, 7, 7, 1024)        4096
conv_pw_12_relu (ReLU)       (None, 7, 7, 1024)        0
conv_dw_13 (DepthwiseConv2D) (None, 7, 7, 1024)        9216
conv_dw_13_bn (BatchNormaliz (None, 7, 7, 1024)        4096
conv_dw_13_relu (ReLU)       (None, 7, 7, 1024)        0
conv_pw_13 (Conv2D)          (None, 7, 7, 1024)        1048576
conv_pw_13_bn (BatchNormaliz (None, 7, 7, 1024)        4096
conv_pw_13_relu (ReLU)       (None, 7, 7, 1024)        0
global_average_pooling2d_1 ( (None, 1024)              0
reshape_1 (Reshape)          (None, 1, 1, 1024)        0
dropout (Dropout)            (None, 1, 1, 1024)        0
conv_preds (Conv2D)          (None, 1, 1, 4)           4100
reshape_2 (Reshape)          (None, 4)                 0
act_softmax (Activation)     (None, 4)                 0
=================================================================
Total params: 3,232,964
Trainable params: 3,211,076
Non-trainable params: 21,888
_________________________________________________________________
None
Process finished with exit code 0

Then I used the following to optimise and convert the model into a pair of .bin and .xml files for OpenVINO inference on my Raspberry Pi:

python3 /opt/intel/openvino_2020.2.130/deployment_tools/model_optimizer/mo_tf.py --input_model tf_model.pbtxt --input_model_is_text --input_shape [1,224,224,3] --data_type FP16

Runs fine, I get:

Model Optimizer arguments:
Common parameters:
    - Path to the Input Model: /home/marcos/OpenVino/tfworkspace/tf_model_mobileNetOriginal.pbtxt
    - Path for generated IR: /home/marcos/OpenVino/tfworkspace/.
    - IR output name: tf_model_mobileNetOriginal
    - Log level: ERROR
    - Batch: Not specified, inherited from the model
    - Input layers: Not specified, inherited from the model
    - Output layers: Not specified, inherited from the model
    - Input shapes: [1,224,224,3]
    - Mean values: Not specified
    - Scale values: Not specified
    - Scale factor: Not specified
    - Precision of IR: FP16
    - Enable fusing: True
    - Enable grouped convolutions fusing: True
    - Move mean values to preprocess section: False
    - Reverse input channels: False
TensorFlow specific parameters:
    - Input model in text protobuf format: True
    - Path to model dump for TensorBoard: None
    - List of shared libraries with TensorFlow custom layers implementation: None
    - Update the configuration file with input/output node names: None
    - Use configuration file used to generate the model with Object Detection API: None
    - Use the config file: None
Model Optimizer version: 2020.2.0-60-g0bc66e26ff
[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: /home/marcos/OpenVino/tfworkspace/./tf_model_mobileNetOriginal.xml
[ SUCCESS ] BIN file: /home/marcos/OpenVino/tfworkspace/./tf_model_mobileNetOriginal.bin
[ SUCCESS ] Total execution time: 78.75 seconds.
[ SUCCESS ] Memory consumed: 2590 MB.
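
(Side note: the Model Optimizer also accepts preprocessing-related flags such as --reverse_input_channels, --mean_values and --scale_values. I did not pass any of them here, so the IR expects exactly the same preprocessing as the frozen graph. A variant of the command with those flags would look like this; the values are purely illustrative:)

python3 /opt/intel/openvino_2020.2.130/deployment_tools/model_optimizer/mo_tf.py \
    --input_model tf_model.pbtxt --input_model_is_text \
    --input_shape [1,224,224,3] --data_type FP16 \
    --reverse_input_channels \
    --mean_values [127.5,127.5,127.5] --scale_values [127.5,127.5,127.5]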

Then I moved to my Raspberry Pi, where I installed OpenVINO, loaded the .bin and .xml, and ran the prediction with OpenCV as follows:

import cv2

class OpenVinoObjectDetection:

	def __init__(self, prototxt=None, model=None, confidence=0.2, movidius=1):
		# store the accumulated weight factor
		self.prototxt = prototxt
		self.model = model
		self.confidence = confidence
		self.movidius = movidius

		# initialize the list of class labels MobileNet was trained to
		self.CLASSES = ["hammer", "screwdriver", "allen", "pliers"]

		# load our serialized model from disk
		print("[INFO] loading model...")
		self.net = cv2.dnn.readNetFromModelOptimizer(xml= prototxt, bin= model) #model is the .bin and prototxt is the .xml

		# specify the target device as the Myriad processor on the NCS
		self.net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)


	def detect(self, image):
		# grab the frame dimensions and convert it to a blob
		(h, w) = image.shape[:2]
		blob = cv2.dnn.blobFromImage(image, size=(224, 224), swapRB=True, crop=False)
		# pass the blob through the network and obtain the detections and
		# predictions
		self.net.setInput(blob)
		detections = self.net.forward()

#more, other code below
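
For context, the class above is used roughly like this (file names are placeholders):

# Rough usage of the class above; paths point to the IR generated earlier
det = OpenVinoObjectDetection(prototxt="tf_model_mobileNetOriginal.xml",
                              model="tf_model_mobileNetOriginal.bin")
frame = cv2.imread("test.jpg")
prediction = det.detect(frame)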


After executing this, it fails. This is what I get:

cv2.error: OpenCV(4.3.0-openvino) ../opencv/modules/dnn/src/ie_ngraph.cpp:600: error: (-2:Unspecified error) Failed to initialize Inference Engine backend (device = MYRIAD): Unexpected CNNNetwork format: it was converted to deprecated format prior plugin's call in function 'initPlugin'

I have tried every OpenCV and OpenVINO option and alternative I could find, but without success. OpenVINO gives no further detail on what is failing. I even tried MobileNet V2, but I get the same error.

 

Thanks

 

 

SIRIGIRI_V_Intel
Employee

Have you followed the additional steps for the Neural Compute Stick? If yes, can you run the model using the benchmark app and let us know the results?
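
For reference, an invocation along these lines (the path and the model file name below are illustrative; the exact location of the benchmark tool depends on your installation):

python3 /opt/intel/openvino/deployment_tools/tools/benchmark_tool/benchmark_app.py \
    -m tf_model_mobileNetOriginal.xml -d MYRIAD -niter 100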

Regards,

Ram prasad

Fernandez__Marcos

Thanks.

Yes, I have followed the additional steps for the NCS. I ran the following example in Python + OpenCV and it runs fine:

import cv2
weights_path = "face-reidentification-retail-0095.bin"
config_path = "face-reidentification-retail-0095.xml"
net = cv2.dnn.readNet(weights_path, config=config_path, framework="DLDT")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)
#net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
img_path = "00000014.jpg"
ref_image = cv2.imread(img_path)
blob = cv2.dnn.blobFromImage(ref_image, 1., (128, 128), True, crop=True, swapRB=True)
print(blob.shape)
net.setInput(blob)
res = net.forward()
print(res[0])

No error. So I guess the problem with my model must be in the network itself when executing forward().

SIRIGIRI_V_Intel
Employee

Could you try specifying the output layer in the API call below?

res = net.forward('<name_of_the_output_layer>')

If you are still facing the issue, can you share the model and the necessary files with us? If required, I can send you a PM.

Regards,

Ram prasad

Fernandez__Marcos

Hi,

 

Thanks. I did:

detections = self.net.forward("act_softmax/Softmax/sink_port_0")

However, it did not work either; same error, and quite consistent. I tried to attach the .bin and .xml model to this post, but the upload did not work either. Below are the detection class and function:

# import the necessary packages
import numpy as np
import imutils
import time
import cv2

class OpenVinoObjectDetection:

	def __init__(self, prototxt=None, model=None, confidence=0.2, movidius=1):
		# store the accumulated weight factor
		self.prototxt = prototxt
		self.model = model
		self.confidence = confidence
		self.movidius = movidius

		# initialize the list of class labels MobileNet was trained to
		self.CLASSES = ["hammer", "screwdriver", "allen", "pliers"]

		# load our serialized model from disk
		print("[INFO] loading model...")
		#self.net = cv2.dnn.readNetFromTensorflow(model=model, config=prototxt)
		self.net = cv2.dnn.readNetFromModelOptimizer(xml=prototxt, bin=model)
		print("[INFO] model loaded...")

		# specify the target device as the Myriad processor on the NCS
		self.net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)


	def detect(self, image):
		# pre-process the image for classification
		image = cv2.resize(image, (224, 224))
		#image = image.astype("float") / 255.0
		#image = img_to_array(image)
		#image = np.expand_dims(image, axis=0)

		# grab the frame dimensions and convert it to a blob
		#(h, w) = image.shape[:2]
		blob = cv2.dnn.blobFromImage(image, size=(224, 224), swapRB=True, crop=False)

		# pass the blob through the network and obtain the detections and
		# predictions
		self.net.setInput(blob)
		detections = self.net.forward("act_softmax/Softmax/sink_port_0")

		# sort the indexes of the probabilities in descending order (higher
                    ...

		return text_pred

 

Fernandez__Marcos

Here's the model (xml and bin)

minyiky
Beginner

Hi there,

I am getting the same issue (exactly the same error message).

Although I was able to get the network running on a Windows PC using the NCS2, I have not been able to do so on the Pi.

 

SIRIGIRI_V_Intel
Employee

Thanks for sharing the model. I have reproduced the same issue using the details provided. However, I was able to run the model on the NCS2 using the Inference Engine APIs. You may refer to object_detection_demo_ssd_async for an example of using the Inference Engine APIs.
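
For reference, a minimal sketch of the kind of Inference Engine (Python) code involved; the file names and preprocessing below are illustrative, not the exact demo code:

import cv2
import numpy as np
from openvino.inference_engine import IECore

# Placeholder paths to the IR produced by the Model Optimizer
model_xml = "tf_model_mobileNetOriginal.xml"
model_bin = "tf_model_mobileNetOriginal.bin"

ie = IECore()
net = ie.read_network(model=model_xml, weights=model_bin)
input_blob = next(iter(net.inputs))     # name of the IR input
output_blob = next(iter(net.outputs))   # name of the IR output

# Load the network onto the NCS2
exec_net = ie.load_network(network=net, device_name="MYRIAD")

# Basic preprocessing: resize, then HWC BGR -> NCHW with a batch dimension
image = cv2.imread("test.jpg")
image = cv2.resize(image, (224, 224))
image = image.transpose((2, 0, 1))[np.newaxis, ...]

res = exec_net.infer(inputs={input_blob: image})
print(res[output_blob])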

Regards,

Ram prasad

Fernandez__Marcos

Thanks. OK, so using cv2 is a dead end so far.

Then, if I use the Inference Engine API as you propose, I face another problem (also a dead end so far). When running "from openvino.inference_engine import IENetwork, IECore" in Python I get:

>>> import cv2
>>> from openvino.inference_engine import IENetwork, IECore
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/intel/openvino/python/python3.5/openvino/inference_engine/__init__.py", line 1, in <module>
    from .ie_api import *
ImportError: /opt/intel/openvino/python/python3.5/openvino/inference_engine/ie_api.so: undefined symbol: PyFPE_jbuf
>>> import fpectl
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'fpectl'

As far as I was able to analyse, this is because python3 was originally compiled without the --with-fpectl flag, which this API apparently needs. It seems Python and the NCS2 are not best friends. Is there any alternative Intel can suggest? Note that compiling with that flag is not recommended.

jgilewski
Beginner

Hi!

A similar problem occurs on a fresh Raspberry Pi 4 with Raspbian Buster.

I installed OpenVINO 2020.2.120 as described at https://docs.openvinotoolkit.org/latest/_docs_install_guides_installing_openvino_raspbian.html

I ran the C++ example as follows (the documentation on that page is not up to date):

$ mkdir -p ~/openvino/build && cd ~/openvino/build
$ cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=armv7-a" /opt/intel/openvino/deployment_tools/inference_engine/samples/cpp
$ make -j2
$ cd armv7l/Release
$ mkdir models
$ wget --no-check-certificate -P models https://download.01.org/opencv/2020/openvinotoolkit/2020.1/open_model_zoo/models_bin/1/person-detection-retail-0002/FP16/person-detection-retail-0002.bin
$ wget --no-check-certificate -P models https://download.01.org/opencv/2020/openvinotoolkit/2020.1/open_model_zoo/models_bin/1/person-detection-retail-0002/FP16/person-detection-retail-0002.xml
# Run example (download some example image with people within)
$ ./object_detection_sample_ssd -m models/person-detection-retail-0002.xml -d MYRIAD -i ~/Downloads/shopping-mall-522619_640.jpg

It is working fine.

Let's see Python 3 with OpenCV:

$ python3 - << "EOF"
import cv2 as cv
# Load the model.
net = cv.dnn_DetectionModel('models/person-detection-retail-0002.xml',
                            'models/person-detection-retail-0002.bin')
# Specify target device.
net.setPreferableTarget(cv.dnn.DNN_TARGET_MYRIAD)
# Read an image.
frame = cv.imread('/home/pi/Downloads/shopping-mall-522619_640.jpg')
if frame is None:
    raise Exception('Image not found!')
# Perform an inference.
_, confidences, boxes = net.detect(frame, confThreshold=0.5)
# Draw detected people on the frame.
for confidence, box in zip(list(confidences), boxes):
    cv.rectangle(frame, box, color=(0, 255, 0))
# Save the frame to an image file.
cv.imwrite('out.png', frame)
EOF

Traceback (most recent call last):
  File "<stdin>", line 13, in <module>
cv2.error: OpenCV(4.3.0-openvino) ../opencv/modules/dnn/src/ie_ngraph.cpp:600: error: (-2:Unspecified error) Failed to initialize Inference Engine backend (device = MYRIAD): Unexpected CNNNetwork format: it was converted to deprecated format prior plugin's call in function 'initPlugin'

The same model, the same input image, and it failed! Sad :(

What can we do about that?

SIRIGIRI_V_Intel
Employee

Hi Safer,

It seems the issue is due to the Python build. I am unable to replicate the issue. Please try to install fpectl, or install Python from the apt repository.
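
For example, on Raspbian something like this is usually enough:

$ sudo apt install python3 python3-dev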

Regards,

Ram prasad

jgilewski
Beginner

Hi,

I get a similar problem with the examples provided by OpenVINO on a Raspberry Pi with Raspbian Buster.

Here are the steps to replicate:

# Install OpenVINO
# ----------------
# Source: https://docs.openvinotoolkit.org/latest/_docs_install_guides_installing_openvino_raspbian.html
#
# Download OpenVINO
$ cd ~/Downloads
$ wget https://download.01.org/opencv/2020/openvinotoolkit/2020.2/l_openvino_toolkit_runtime_raspbian_p_2020.2.120.tgz
# Create an installation folder and unpack the archive there
$ sudo mkdir -p /opt/intel/openvino
$ sudo tar -xf  l_openvino_toolkit_runtime_raspbian_p_2020.2.120.tgz --strip 1 -C /opt/intel/openvino
#
# Install External Software Dependencies
#
# CMake* version 3.7.2 or higher is required for building the Inference Engine sample application.
$ sudo apt install cmake
# Set the Environment Variables
$ source /opt/intel/openvino/bin/setupvars.sh
# Add USB Rules
$ sudo usermod -a -G users "$(whoami)"
# Log out and log in for it to take effect.
# ...
$ source /opt/intel/openvino/bin/setupvars.sh
# Install the USB rules to perform inference on NCS
$ sh /opt/intel/openvino/install_dependencies/install_NCS_udev_rules.sh
#
# Run examples to check everything is working
# (instruction from the source page is not up to date)
#
$ mkdir -p openvino/build && cd openvino/build
$ cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=armv7-a" /opt/intel/openvino/deployment_tools/inference_engine/samples/cpp
$ make -j2
$ cd armv7l/Release
$ mkdir models
$ wget --no-check-certificate -P models https://download.01.org/opencv/2020/openvinotoolkit/2020.1/open_model_zoo/models_bin/1/person-detection-retail-0002/FP16/person-detection-retail-0002.bin
$ wget --no-check-certificate -P models https://download.01.org/opencv/2020/openvinotoolkit/2020.1/open_model_zoo/models_bin/1/person-detection-retail-0002/FP16/person-detection-retail-0002.xml
# Run example (download some example image with people within)
# Image source: https://pixabay.com/photos/shopping-mall-woman-shopping-store-522619/
$ ./object_detection_sample_ssd -m models/person-detection-retail-0002.xml -d MYRIAD -i ~/Downloads/shopping-mall-522619_640.jpg
# Works OK!
#
# Check Python 3 with OpenCV is working
$ python3 - << "EOF"
import cv2 as cv
# Load the model.
net = cv.dnn_DetectionModel('models/person-detection-retail-0002.xml',
                            'models/person-detection-retail-0002.bin')
# Specify target device.
net.setPreferableBackend(cv.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv.dnn.DNN_TARGET_MYRIAD)
# Read an image.
frame = cv.imread('/home/pi/Downloads/shopping-mall-522619_640.jpg')
if frame is None:
    raise Exception('Image not found!')
# Perform an inference.
_, confidences, boxes = net.detect(frame, confThreshold=0.5)
# Draw detected people on the frame.
for confidence, box in zip(list(confidences), boxes):
    cv.rectangle(frame, box, color=(0, 255, 0))
# Save the frame to an image file.
cv.imwrite('out.png', frame)
EOF

Traceback (most recent call last):
  File "<stdin>", line 13, in <module>
cv2.error: OpenCV(4.3.0-openvino) ../opencv/modules/dnn/src/ie_ngraph.cpp:600: error: (-2:Unspecified error) Failed to initialize Inference Engine backend (device = MYRIAD): Unexpected CNNNetwork format: it was converted to deprecated format prior plugin's call in function 'initPlugin'

The object_detection_sample_ssd example compiled from the C++ sources works fine, but the Python code with the same input and models does not.

I cross-compiled OpenCV as described in https://github.com/opencv/opencv/wiki/Intel%27s-Deep-Learning-Inference-Engine-backend, but the Python bug remains.

Regards

Jarek Gilewski

Fernandez__Marcos

Ram prasad (Intel) wrote:

Hi Safer,

It seems the issue is due to the python. I am unable to replicate the issue. Please try to install the fpectl or install the python using the apt repository.

Regards,

Ram prasad

 

I installed python3-dev with apt, but I get the same error.

Hawkes__Rycharde
New Contributor I

I don't think this is a Python issue.

I get the same error when using the C++ interactive_face_detection_demo and the FP16 models. All of them run fine when using the CPU, but only the face-detection-adas-0001 model will load using MYRIAD; all the other models required by the demo generate the same error.

I note that there is another posted issue with the same error using a different demo and model here.

I'm using 2020.2.117

 

Helli__Márton
Beginner

Hawkes, Rycharde wrote:

I don't think this is a Python issue.

I get the same error when using the C++ interactive_face_detection_demo and the FP16 models. All of them run fine when using the CPU, but only the face-detection-adas-0001 model will load using MYRIAD; all the other models required by the demo generate the same error.

I note that there is another posted issue with the same error using a different demo and model here.

I'm using 2020.2.117

 

+1, exactly as he said, when executing open_model_zoo/demos/interactive_face_detection_demo

Using the CPU is fine for all 5 networks, but MYRIAD only works for face-detection-adas-0001.

I'm using openvino_2020.3.194

 

Max_L_Intel
Moderator

Hello Rycharde, Márton.

For this error, could you please try the workaround for OpenCV provided here: https://github.com/opencv/opencv/pull/17134
