Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Inference Engine error on MYRIAD device

Nemes__Adam
Beginner

Hi!

I use OpenVINO on the following hardware:

https://up-shop.org/upx-edge-series.html#additional

I want to use the MYRIAD device. Actually, I have two of them (not sure why; see the enumeration sketch after the list):

MYRIAD.2.1-ma2480
MYRIAD.2.3-ma2480
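
For reference, here is a minimal sketch, assuming the OpenVINO 2020.2 C++ API, that enumerates every device the Inference Engine can see; the two MYRIAD entries most likely correspond to two separate Myriad X chips on the board:

#include <inference_engine.hpp>
#include <iostream>

int main() {
    InferenceEngine::Core ie;
    // Prints every device name the runtime can target, e.g.
    // "CPU", "MYRIAD.2.1-ma2480", "MYRIAD.2.3-ma2480".
    for (const std::string& device : ie.GetAvailableDevices()) {
        std::cout << device << std::endl;
    }
    return 0;
}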

I have a TensorFlow model that I have already optimized. I use two versions: one with batch size 1 and the other with batch size 4.

Everything works fine when I use batch size 1.

But when I try to use the model with 4 batches, I get the following error while attempting to load the network:

Unhandled exception at 0x00007FFCA6E39709 in *****exe: Microsoft C++ exception: InferenceEngine::details::InferenceEngineException at memory location 0x0000000E5B8FD658. occurred

This is the line where I get the exception:

ie.LoadNetwork(network, target);

I am using OpenVINO 2020 R2.

What can cause such a problem?

Or how can I get a more specific error message?
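
One way to surface the underlying message is to catch the exception around the load call and print what(), since InferenceEngine::details::InferenceEngineException derives from std::exception. A minimal sketch, assuming the 2020.2 C++ API; the model paths and device name are placeholders:

#include <inference_engine.hpp>
#include <iostream>

int main() {
    try {
        InferenceEngine::Core ie;
        // Placeholder IR paths - substitute the real model files.
        auto network = ie.ReadNetwork("model.xml", "model.bin");
        // The call that throws in the failing case.
        auto executable = ie.LoadNetwork(network, "MYRIAD.2.1-ma2480");
    } catch (const std::exception& e) {
        // what() carries the plugin's specific error text.
        std::cerr << "LoadNetwork failed: " << e.what() << std::endl;
        return 1;
    }
    return 0;
}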

My guess is that maybe the network is too big for the device's memory, but we are talking about a relatively small SSD object detection network.

Thanks for any advice!


Nemes__Adam
Beginner

I messed around a bit and found that if I try to load two networks onto the MYRIAD device, I get the same error. (I tested with a couple of networks, with similar results.)

So it seems to me that the device runs out of memory.

Is it possible?

How much memory do these devices have? Do they even have their own memory?
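
One way to test the out-of-memory theory, sketched below assuming the 2020.2 C++ API (the IR paths are placeholders): load each network onto a different MYRIAD instance. If both load separately but fail when loaded together onto one device, that points at a per-device memory limit.

#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core ie;
    // Placeholder IR paths for two independent networks.
    auto netA = ie.ReadNetwork("model_a.xml", "model_a.bin");
    auto netB = ie.ReadNetwork("model_b.xml", "model_b.bin");
    // Load each network onto a different VPU instance; if this
    // succeeds while loading both onto one device fails, the
    // failure is consistent with per-device memory exhaustion.
    auto exeA = ie.LoadNetwork(netA, "MYRIAD.2.1-ma2480");
    auto exeB = ie.LoadNetwork(netB, "MYRIAD.2.3-ma2480");
    return 0;
}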

Max_L_Intel
Moderator

Hi Adam.

For the VPU devices in your UP Xtreme kit, it would be more appropriate to use the HDDL plugin rather than the MYRIAD one. Please give it a try, and do not forget to follow the HDDL configuration steps - Additional Installation Steps for the Intel® Vision Accelerator Design with Intel® Movidius™ VPUs

If that still does not work, you could try to run your model with the Benchmark C++ Tool or Benchmark Python Tool and specify the batch size with the -b parameter (for example, -b 4) to verify that your model is correct.
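
For instance, a hypothetical invocation with a batch-4 IR on the HDDL plugin (the model path is a placeholder) could look like:

benchmark_app -m model_b4.xml -d HDDL -b 4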

Hope this helps.
Best regards, Max.
