Paine__Russell
Beginner

OpenCV inference errors on Raspberry Pi with a converted faster_rcnn_inception_v2_coco_2018_01_28 model

Hello,

I have converted faster_rcnn_inception_v2_coco_2018_01_28 from the TensorFlow model zoo to .xml/.bin with this command:

./mo_tf.py \
    --input_model ~/model/frozen_inference_graph.pb \
    --tensorflow_use_custom_operations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support.json \
    --tensorflow_object_detection_api_pipeline_config ~/model/pipeline.config \
    --output_dir ~/model/ \
    --input_shape [1,600,600,3] \
    --reverse_input_channels \
    --input=image_tensor \
    --output=detection_scores,detection_boxes,num_detections \
    --data_type FP16

I am trying to use this model on a Raspberry Pi 4 with the original NCS plugged in. I'm getting an error that looks like this:

cv2.error: OpenCV(4.2.0-openvino) ../opencv/modules/dnn/src/op_inf_engine.cpp:472: error: (-213:The function/feature is not implemented) Unsupported data type 16 in function 'wrapToInfEngineDataNode'

 

The code I am using is:

import cv2

net = cv2.dnn.readNet(configPath, weightsPath)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)

frame = camera.read()

in_frame = cv2.resize(frame, (600, 600))
net.setInput(in_frame)  # note: a raw resized image is passed here, not a blob
detections = net.forward()

 

I have also attempted to do it like this:

import cv2

net = cv2.dnn.readNet(configPath, weightsPath)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)

camera = cv2.VideoCapture(0)

while True:
    ret, frame = camera.read()
    blob = cv2.dnn.blobFromImage(frame, size=(600, 600), ddepth=cv2.CV_8U)
    net.setInput(blob)
    detections = net.forward()

 

I then get this error:

terminate called after throwing an instance of 'InferenceEngine::details::InferenceEngineException'
  what():  Dims and format are inconsistent.
Aborted
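(For reference, running the same IR through the Inference Engine Python API directly avoids cv2.dnn entirely. A minimal sketch, assuming the 2020-era openvino.inference_engine API and the file names produced by the Model Optimizer step above; parse_detections is just an illustrative helper, and the paths are hypothetical:)

```python
def parse_detections(raw_rows, conf_threshold=0.5):
    """Keep only rows [image_id, label, conf, xmin, ymin, xmax, ymax]
    whose confidence clears the threshold."""
    return [row for row in raw_rows if row[2] >= conf_threshold]


def run_inference(xml_path, bin_path, image_path):
    # Lazy imports so the pure helper above works without the runtime installed.
    import cv2
    from openvino.inference_engine import IECore

    ie = IECore()
    net = ie.read_network(model=xml_path, weights=bin_path)
    input_blob = next(iter(net.input_info))
    out_blob = next(iter(net.outputs))
    n, c, h, w = net.input_info[input_blob].input_data.shape

    # NB: some Faster R-CNN IRs expose a second 'image_info' input;
    # if yours does, it has to be fed as well (typically [h, w, 1]).
    exec_net = ie.load_network(network=net, device_name="MYRIAD")

    frame = cv2.imread(image_path)
    # HWC uint8 -> NCHW, matching the network's input layout.
    blob = cv2.resize(frame, (w, h)).transpose((2, 0, 1)).reshape(n, c, h, w)
    result = exec_net.infer(inputs={input_blob: blob})

    # The detection output comes back with shape [1, 1, N, 7].
    return parse_detections(result[out_blob][0][0])


# Usage on the Pi (paths are illustrative):
#   dets = run_inference("frozen_inference_graph.xml",
#                        "frozen_inference_graph.bin", "test.jpg")
```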

I have attached the model.

Can anyone help?

Many thanks,

Russell

 

Paine__Russell

After many attempts, I tried running the inference with OpenVINO itself, which also took a while to figure out.

 

I managed to get it working with OpenVINO, but the inference time is about 2.4 seconds. Is there any way I could improve its performance?
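(One technique that usually helps on a single stick is asynchronous inference: with a pool of requests, the host can preprocess frame N+1 while the NCS is still crunching frame N. A rough sketch, assuming the 2019/2020 openvino.inference_engine API; the round-robin helper is illustrative, and Faster R-CNN is a heavy topology for this device, so gains may be modest:)

```python
def next_request_id(current_id, num_requests):
    """Round-robin over the pool of inference request slots."""
    return (current_id + 1) % num_requests


def run_async_loop(xml_path, bin_path, num_requests=2):
    import cv2
    from openvino.inference_engine import IECore

    ie = IECore()
    net = ie.read_network(model=xml_path, weights=bin_path)
    input_blob = next(iter(net.input_info))
    out_blob = next(iter(net.outputs))
    n, c, h, w = net.input_info[input_blob].input_data.shape

    # num_requests > 1 lets preprocessing overlap with device inference.
    exec_net = ie.load_network(network=net, device_name="MYRIAD",
                               num_requests=num_requests)

    camera = cv2.VideoCapture(0)
    started = [False] * num_requests
    cur_id = 0
    while True:
        ok, frame = camera.read()
        if not ok:
            break
        blob = cv2.resize(frame, (w, h)).transpose((2, 0, 1)).reshape(n, c, h, w)
        exec_net.start_async(request_id=cur_id, inputs={input_blob: blob})
        started[cur_id] = True
        wait_id = next_request_id(cur_id, num_requests)  # oldest in-flight slot
        if started[wait_id] and exec_net.requests[wait_id].wait(-1) == 0:
            detections = exec_net.requests[wait_id].outputs[out_blob]
            # ... filter/draw detections here ...
        cur_id = wait_id
    camera.release()
```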

 

Many thanks,

 

Russell

David_C_Intel
Employee

Hi Russell,

Thanks for reaching out. Could you please share the solution to your previous issue, so that community members who run into something similar can refer to it? As for your new question, you could try using multiple Intel® NCS2 devices to run the inference requests in parallel; see the documentation for the Multi-Device Plugin.
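(In the 2020-era Python API, that suggestion could look roughly like the sketch below. The actual device names are whatever IECore enumerates on your machine when several sticks are plugged in, so the helper just builds the MULTI descriptor from that list; all names here are illustrative:)

```python
def multi_device_name(available_devices):
    """Build a Multi-Device Plugin descriptor from the enumerated
    devices, keeping only the MYRIAD entries (one per stick)."""
    myriads = [d for d in available_devices if d.startswith("MYRIAD")]
    if not myriads:
        raise RuntimeError("no MYRIAD devices found")
    return "MULTI:" + ",".join(myriads)


def load_on_all_sticks(xml_path, bin_path):
    from openvino.inference_engine import IECore

    ie = IECore()
    net = ie.read_network(model=xml_path, weights=bin_path)
    # ie.available_devices lists each plugged-in stick separately,
    # e.g. ['MYRIAD.1.1-ma2450', 'MYRIAD.1.3-ma2480'] (names vary).
    return ie.load_network(network=net,
                           device_name=multi_device_name(ie.available_devices),
                           num_requests=4)  # enough requests to keep all sticks busy
```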

If you have any additional questions, let us know.

Best regards,

David

 
