Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

RuntimeError: [VPU] Unsupported network precision : FP32

Wade1
Beginner

Hi,

I'm using a Raspberry Pi 3B+ with a Myriad X NCS2 to run object_detection_demo_ssd_async.py.

Before running this, I finished the whole tutorial and everything ran successfully.

But now I'm facing a problem.

 

System information

1. Linux raspberrypi 4.14.71-v7+

2. Raspberry Pi 3B+

 

Problem Description

When I run 

python3.5 /home/pi/cwz/object_detection_demo_ssd_async.py -i cam -m /home/pi/cwz/ssd_v2/frozen_inference_graph.xml --labels /home/pi/cwz/ssd_v2/frozen_inference_graph.mapping -d MYRIAD

 

pi@raspberrypi:~ $ python3.5 /home/pi/cwz/object_detection_demo_ssd_async.py -i cam -m /home/pi/cwz/ssd_v2/frozen_inference_graph.xml --labels /home/pi/cwz/ssd_v2/frozen_inference_graph.mapping -d MYRIAD
[ INFO ] Initializing plugin for MYRIAD device...
[ INFO ] Reading IR...
[ INFO ] Loading IR to the plugin...
Traceback (most recent call last):
  File "/home/pi/cwz/object_detection_demo_ssd_async.py", line 185, in <module>
    sys.exit(main() or 0)
  File "/home/pi/cwz/object_detection_demo_ssd_async.py", line 80, in main
    exec_net = plugin.load(network=net, num_requests=2)
  File "ie_api.pyx", line 395, in openvino.inference_engine.ie_api.IEPlugin.load
  File "ie_api.pyx", line 406, in openvino.inference_engine.ie_api.IEPlugin.load
RuntimeError: [VPU] Unsupported network precision : FP32
 

I don't know what the problem is, and I couldn't find a solution by googling.

Thanks! 

 

 

7 Replies
Shubha_R_Intel
Employee

Dear Wade,

Please see the document below. You cannot use FP32 on Myriad. You must use --data_type FP16 in the Model Optimizer command that generates the IR.

https://docs.openvinotoolkit.org/latest/_docs_IE_DG_supported_plugins_Supported_Devices.html
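
If you want to double-check which precision an existing IR was generated with, you can read the precision attribute straight out of the .xml topology file. A minimal sketch, assuming a pre-2020 IR format that stores a precision attribute on each layer:

import xml.etree.ElementTree as ET

# Collect the precision attribute of every layer in the IR topology.
# MYRIAD (NCS2) only supports FP16, so if this prints {'FP32'} the IR
# must be regenerated with --data_type FP16.
root = ET.parse("/home/pi/cwz/ssd_v2/frozen_inference_graph.xml").getroot()
precisions = {layer.get("precision") for layer in root.iter("layer")}
print(precisions)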

Thanks,

Shubha

Wade1
Beginner

Dear Shubha R. (Intel),

Thanks for your reply! After adding --data_type FP16 when generating the IR, it works well.

But another problem occurs.

My model is ssd_mobilenet_v1_coco, and my script to generate the IR is:

python mo_tf.py --input_model ssd_mobilenet_v1_coco_2018_01_28\frozen_inference_graph.pb --tensorflow_use_custom_operations_config extensions/front/tf/ssd_support.json --tensorflow_object_detection_api_pipeline_config ssd_mobilenet_v1_coco_2018_01_28\pipeline.config --reverse_input_channels --data_type FP16

and my input is the camera stream.

when I run

python3.5 /home/pi/cwz/object_detection_demo_ssd_async.py -i cam -m /home/pi/cwz/ssd_v1_coco/frozen_inference_graph.xml --labels /home/pi/cwz/ssd_v1_coco/frozen_inference_graph.mapping -d MYRIAD

the detection results are confusing: it draws lots of detection boxes at the same time. (Screenshot attached.)

I don't know which part I have done wrong.

Thanks!

Wade

Wade1
Beginner

Dear Shubha R. (Intel),

I changed to another model, ssd_mobilenet_v2_coco, and it works well.

My script to generate the IR:

python mo_tf.py 
--input_model ssd_mobilenet_v2_coco_2018_03_29\frozen_inference_graph.pb 
--tensorflow_use_custom_operations_config extensions/front/tf/ssd_v2_support.json 
--tensorflow_object_detection_api_pipeline_config ssd_mobilenet_v2_coco_2018_03_29\pipeline.config 
--data_type FP16

 

My script to run object_detection_demo_ssd_async.py:

python3.5 /home/pi/cwz/object_detection_demo_ssd_async.py 
-i cam 
-m /home/pi/cwz/ssd_v2_coco/frozen_inference_graph.xml 
--labels /home/pi/cwz/ssd_v2_coco/frozen_inference_graph.mapping 
-d MYRIAD

I expected the detection boxes to show the detected class names.

In object_detection_demo_ssd_async.py, line 148:

det_label = labels_map[class_id] if labels_map else str(class_id)

if I pass a mapping file via --labels, then 'det_label' should be assigned the corresponding class name according to 'class_id'.
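
labels_map itself is built earlier in the demo from the --labels file, one line per entry, roughly like this:

# Earlier in the demo: every line of the --labels file becomes one
# entry of labels_map, which is later indexed by class_id.
if args.labels:
    with open(args.labels, 'r') as f:
        labels_map = [x.strip() for x in f]
else:
    labels_map = None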

 

But it only shows 'map' or 'mapping', which confuses me a lot. (Screenshot posted already.)

I think I might be missing some parameters when generating the IR, so I tried adding --output=detection_boxes,detection_scores,detection_classes,num_detections to my script,

but it didn't work either.

 

Thanks,

Wade

 

 

 

Shubha_R_Intel
Employee

Dear Wade,

First, I hope you are using the latest version of OpenVINO, 2019 R1. It seems that you're using the --output switch of mo_tf.py incorrectly. The document below describes it in more detail, but you should pass layer names to the --output switch.

https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_Cutting_Model.html

So unless detection_boxes, detection_scores, detection_classes, and num_detections are layer names, I don't see how your mo_tf.py command worked.

But in fact you did successfully generate the IR using a correct mo_tf.py command.

Can you post your frozen_inference_graph.mapping here?
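
One thing worth checking in the meantime: the .mapping file written by the Model Optimizer maps original framework layer names to IR layer names; it is not a class-labels file, so passing it to --labels fills labels_map with fragments of those layer names. The demo expects a plain text file with one class name per line, indexed by class_id. A hypothetical coco.labels for an SSD COCO model might start like this (first line reserved for the background class):

background
person
bicycle
car
motorcycle

and would be passed as --labels /home/pi/cwz/ssd_v2_coco/coco.labels instead of the .mapping file.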

Also, instead of -i cam, how does it work with an mp4 video? You can select one to test from here:

https://github.com/intel-iot-devkit/sample-videos
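
For example, assuming one of the clips from that repo, e.g. person-bicycle-car-detection.mp4, has been downloaded to the working directory:

python3.5 /home/pi/cwz/object_detection_demo_ssd_async.py -i person-bicycle-car-detection.mp4 -m /home/pi/cwz/ssd_v2_coco/frozen_inference_graph.xml -d MYRIAD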

Please post your results here. 

Thanks for using OpenVINO!

Shubha

 

K_J__Manu
Beginner

Hello @Wade, can you please help me?

How do I add --data_type FP16 to the Model Optimizer command that generates the IR?

Shubha_R_Intel
Employee

Dear K J Manu,

 As Wade showed above, the mo_tf.py command should be similar to this:

python mo_tf.py
--input_model ssd_mobilenet_v2_coco_2018_03_29\frozen_inference_graph.pb
--tensorflow_use_custom_operations_config extensions/front/tf/ssd_v2_support.json
--tensorflow_object_detection_api_pipeline_config ssd_mobilenet_v2_coco_2018_03_29\pipeline.config
--data_type FP16

Thanks,

Shubha

 

K_J__Manu
Beginner

Thank you @Shubha R. (Intel), it worked!

But ma'am, is there a way to get the labels file from the Model Optimizer while creating the .bin and .xml?

Because when I run object_detection_demo_ssd_async.py, I don't get the label of the detection.

 

The query image is attached below.
