Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Error in inference

yu__jia
Beginner

Hello everyone,
I used the Model Optimizer to generate the .xml file, but I got an error during inference.

>python 6d_Inference.py -m E:\yolo6d\openvino2\yolo6d_graph.xml -i E:\yolo6d\000000.jpg  -d MYRIAD
[ INFO ] Initializing plugin for MYRIAD device...
[ INFO ] Reading IR...
[ INFO ] Loading IR to the plugin...
Traceback (most recent call last):
  File "6d_Inference.py", line 223, in <module>
    sys.exit(main() or 0)
  File "6d_Inference.py", line 119, in main
    exec_net = plugin.load(network=net, num_requests=2)
  File "ie_api.pyx", line 551, in openvino.inference_engine.ie_api.IEPlugin.load
  File "ie_api.pyx", line 561, in openvino.inference_engine.ie_api.IEPlugin.load
RuntimeError: [VPU] Unsupported precision I32for data strided_slice_1/stack_1/Output_0/Data__const

What should I do?

Thank you

Shubha_R_Intel
Employee

Dear yu, jia,

You probably forgot to pass --data_type FP16 when you generated your IR with the Model Optimizer. FP32 is not supported on VPU (as your error indicates). Precision support per device is documented in Supported Devices.

Thanks,

Shubha

yu__jia
Beginner

Hi, Shubha R,

I used the command below to generate the IR.

python mo_tf.py --input_model E:\yolo6d\yolo6d_graph.pb --data_type FP16

But I still get the following error: RuntimeError: [VPU] Unsupported precision I32for data strided_slice/stack/Output_0/Data__const

I don't know what caused the error. What should I do?

Thanks,

yujia

 

 

Shubha_R_Intel
Employee

Dear yu, jia,

For YOLO (v3, I presume? v2 is also OK), what you have above is not the right command. Please step through our detailed documentation on Model Optimizer TensorFlow YOLO. If you follow those steps to the letter, you will get it working.

And yes, if you wish to run on an NCS2 you must add --data_type FP16.

Thanks,

Shubha

yu__jia
Beginner

Dear Shubha,

I read the YOLO conversion documentation in detail, and I now use the following command to generate the IR file.

>python mo_tf.py --input_model E:\yolo6d\yolo6d_graph.pb --tensorflow_use_custom_operations_config E:\yolo6d\yolo_6.json --data_type FP16

It successfully generates the IR files, but I still get the previous error at inference time. My inference command is as follows.

>python 6d_Inference.py -m E:\yolo6d\openvino2\yolo6d_graph.xml -i E:\yolo6d\000000.jpg  -d MYRIAD

RuntimeError: [VPU] Unsupported precision I32for data strided_slice/stack/Output_0/Data__const

My model is based on a modification of the YOLOv2 network. I use TensorFlow for training, and I use TensorFlow's freeze-graph code to generate the yolo6d_graph.pb file; the freezing step selects the node name of the network output. The IR file is generated successfully, but I get the above error at inference.
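
For reference, here is a minimal sketch of the relevant part of 6d_Inference.py, reconstructed from the traceback above (the exact variable names and surrounding code in my script may differ):

# Minimal sketch of the section that fails (reconstructed from the traceback)
from openvino.inference_engine import IENetwork, IEPlugin

model_xml = r"E:\yolo6d\openvino2\yolo6d_graph.xml"
model_bin = model_xml.replace(".xml", ".bin")

plugin = IEPlugin(device="MYRIAD")                    # [ INFO ] Initializing plugin for MYRIAD device...
net = IENetwork(model=model_xml, weights=model_bin)   # [ INFO ] Reading IR...
exec_net = plugin.load(network=net, num_requests=2)   # [ INFO ] Loading IR to the plugin... -> RuntimeError raised here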

Thanks, 

yujia

 

Shubha_R_Intel
Employee

Dear yu, jia,

Can you attach your model as a *.zip which also contains your short inference script? Also, I hope you are using the latest and greatest OpenVINO, which is 2019 R2.

Thanks,

Shubha

 

yu__jia
Beginner

Dear Shubha,

I have uploaded my model. My OpenVINO is 2019 R2.

Thanks,

Yujia

 

Shubha_R_Intel
Employee

Dear yu, jia,

Thank you! I promise to take a look.

Shubha

 

Shubha_R_Intel
Employee

Dear yu, jia,

I just now noticed this statement:

"My model is based on a modification of the YOLOv2 network."

May I ask what you modified and why? Did you follow the exact YOLOv2 steps we provide? If you didn't, then it's understandable why your inference has failed.

Please let me know.

Thanks!

Shubha
