Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.
6575 Discussions

Blocked when loading the model to the plugin (ie.load_network) in Python mode

fariver
Beginner
2,324 Views

Hi

I have converted my models to FP32 IR successfully and was able to do inference on the CPU device. However, the program blocks on the MYRIAD device.

environment info:

Neural Compute Stick2

MAC OS 10.13

openvino_2021.4.582

I have checked the environment by running the official MobileNet successfully on both the CPU and MYRIAD devices. There probably exist some bugs in my model definition, which is attached here (baseline.xml).

My test program:

from openvino.inference_engine import IECore

ie = IECore()

xml_path = './baseline.xml'

net = ie.read_network(model=xml_path)

exec_net = ie.load_network(network=net, device_name="MYRIAD") # !!! blocked here

 

I guess some operator may behave abnormally, but I don't know what happens here on MYRIAD. Hoping for your help.

Thanks

0 Kudos
5 Replies
fariver
Beginner
2,304 Views

Update: after a long loading time, it outputs this log:

E: [ncAPI] [ 246602] [] ncGraphAllocate:2153 Not enough memory to allocate intermediate tensors on remote device
Traceback (most recent call last):
File "stereo_inference_openvino.py", line 88, in <module>
sys.exit(main())
File "stereo_inference_openvino.py", line 52, in main
exec_net = ie.load_network(network=net, device_name=args.device)
File "ie_api.pyx", line 367, in openvino.inference_engine.ie_api.IECore.load_network
File "ie_api.pyx", line 379, in openvino.inference_engine.ie_api.IECore.load_network
RuntimeError: Failed to allocate graph: NC_OUT_OF_MEMORY

0 Kudos
Peh_Intel
Moderator
2,293 Views

Hi fariver,


Thanks for reaching out to us.


You’re getting this error because your model is too large: it exceeds the memory capacity of the Intel® Neural Compute Stick 2 (NCS2) for intermediate processing.


Such large models can be supported by the CPU plugin but not the MYRIAD plugin. Check out this page for reference.


On a separate note, I noticed that you used an FP32 IR for inferencing on the NCS2. You may want to try an FP16 IR instead.
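As a quick sanity check, you can inspect which precisions an IR .xml actually declares before loading it. This is only a sketch: the inline sample below is illustrative, not your attached baseline.xml, and it assumes an IR version that records precision as a layer attribute (newer IR versions move precision onto ports).

```python
# Sketch: collect the 'precision' attributes declared by an IR's layers.
# The sample XML is a made-up stand-in for an attached .xml file.
import xml.etree.ElementTree as ET

def ir_precisions(xml_text):
    """Return the set of precision attributes found on <layer> elements."""
    root = ET.fromstring(xml_text)
    return {layer.get("precision")
            for layer in root.iter("layer")
            if layer.get("precision") is not None}

sample_ir = """
<net name="sample" version="7">
  <layers>
    <layer id="0" name="input" type="Input" precision="FP32"/>
    <layer id="1" name="conv" type="Convolution" precision="FP32"/>
  </layers>
</net>
"""

print(ir_precisions(sample_ir))  # → {'FP32'}
```

If the IR turns out to be FP32, reconverting the original model with the Model Optimizer's --data_type FP16 option produces an FP16 IR suited to the MYRIAD plugin.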



Regards,

Peh


0 Kudos
fariver
Beginner
2,278 Views

Thanks for your reply. I got past the block by cutting some operators from the network. However, I now have two further points of confusion: 1) my new network runs slower on MYRIAD than on CPU; 2) the FP16 network runs at a speed comparable to its FP32 version on the MYRIAD device, without any speedup.

         FP16      FP32
MYRIAD   1.0966s   1.0958s
CPU      0.7722s   0.7883s

All timing results are averaged over 10 iterations. The attachment is my new network definition. Hoping for further help.
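For reference, the averaging was done along these lines. This is a self-contained sketch: run_inference here is a dummy standing in for the actual exec_net.infer(...) call, and the iteration count matches the 10 used above.

```python
# Sketch: mean wall-clock latency over N iterations, as in the table above.
import time

def average_latency(fn, iterations=10):
    """Run fn() `iterations` times and return the mean elapsed seconds."""
    total = 0.0
    for _ in range(iterations):
        start = time.perf_counter()
        fn()
        total += time.perf_counter() - start
    return total / iterations

def run_inference():
    # Placeholder for exec_net.infer(inputs={...}) on the real device.
    time.sleep(0.001)

mean_s = average_latency(run_inference, iterations=10)
print(f"mean latency: {mean_s:.4f}s")
```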

0 Kudos
Peh_Intel
Moderator
2,240 Views

Hi fariver,

 

Regarding the performance difference between the CPU and the Intel® Neural Compute Stick 2 (NCS2), you can have a look at these Benchmark Results. Across all the listed models, CPU performance is consistently better than NCS2. Also, referring to the Supported Model Formats, the VPU plugins only support FP16 models.

 

 

Regards,

Peh


0 Kudos
Peh_Intel
Moderator
2,188 Views

Hi fariver,


This thread will no longer be monitored since we have provided answers. If you need any additional information from Intel, please submit a new question. 



Regards,

Peh


0 Kudos
Reply