Intel® Distribution of OpenVINO™ Toolkit

Problem encountered when doing inference using Neural Compute Stick 2 with the R5 OpenVINO toolkit

xue__hongfei

Hi,

I found an issue when doing inference with the R5 version of the OpenVINO toolkit on an Intel Movidius Myriad X VPU. I used the following command to generate the IR for my model:

python mo.py --input_model E:\frcnn_resnet101\frozen_inference_graph.pb --tensorflow_use_custom_operations_config C:\Intel\computer_vision_sdk_2018.5.445\deployment_tools\model_optimizer\extensions\front\tf\faster_rcnn_support_api_v1.7.json --tensorflow_object_detection_api_pipeline_config E:\models-resnet\pipeline.config --data_type FP16

After getting the IR, when doing inference on the Intel Movidius Myriad X VPU, the program gets stuck at the following step:

[ INFO ] Loading model to the plugin

More details are shown in the attachment.
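For context, the inference code follows the standard R5 Python sample flow. A minimal sketch of the step that hangs, using the 2018 R5 `IEPlugin`/`IENetwork` API (the IR file names here are placeholders, not my actual paths):

```python
from openvino.inference_engine import IENetwork, IEPlugin

# Placeholder paths to the IR produced by the mo.py command above
model_xml = "frozen_inference_graph.xml"
model_bin = "frozen_inference_graph.bin"

# Select the Myriad X / NCS2 device plugin
plugin = IEPlugin(device="MYRIAD")

# Read the IR into an IENetwork
net = IENetwork(model=model_xml, weights=model_bin)

print("[ INFO ] Loading model to the plugin")
# This is the call that never returns in my case
exec_net = plugin.load(network=net)
```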

Could anyone help me figure out what is causing this problem?

Thanks,
