Hi!
I use OpenVINO on the following hardware:
https://up-shop.org/upx-edge-series.html#additional
I want to use the MYRIAD device; actually, I have two of them (not sure why):
MYRIAD.2.1-ma2480
MYRIAD.2.3-ma2480
I have an optimized TensorFlow model. I use two versions: one with batch size 1 and the other with batch size 4.
Everything works fine when I use batch size 1.
But when I try to use the model with batch size 4, I get the following error while attempting to load the network:
Unhandled exception at 0x00007FFCA6E39709 in *****exe: Microsoft C++ exception: InferenceEngine::details::InferenceEngineException at memory location 0x0000000E5B8FD658. occurred
This is the line where I get the exception:
"ie.LoadNetwork(network, target);"
I have OpenVINO 2020R2
What could cause such a problem?
Or how can I get a more specific error message?
My guess is that the network is too big for the device's memory, but we are talking about a relatively small SSD object detection network.
Thanks for any advice
!S
I messed around a bit and found that if I try to load two networks onto the MYRIAD device, I get the same error. (I tested with a couple of networks, with similar results.)
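Roughly what I tried (a minimal sketch; the model paths are placeholders):

```cpp
#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core ie;

    // Two similar networks targeted at the same MYRIAD device (placeholder paths)
    auto net1 = ie.ReadNetwork("model_a.xml");
    auto net2 = ie.ReadNetwork("model_b.xml");

    auto exec1 = ie.LoadNetwork(net1, "MYRIAD");  // first network loads fine
    auto exec2 = ie.LoadNetwork(net2, "MYRIAD");  // second load throws the same exception
    return 0;
}
```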
So it seems to me that the device runs out of memory.
Is that possible?
How much memory do these devices have? Do they even have their own memory?
Hi Adam.
For the VPU device within your UP Xtreme kit, it would be more appropriate to use the HDDL plugin rather than the MYRIAD one. Please give it a try, and do not forget to follow the HDDL configuration steps - Additional Installation Steps for the Intel® Vision Accelerator Design with Intel® Movidius™ VPUs.
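For example, once the HDDL configuration from that guide is in place, the only change in your code should be the device name passed to LoadNetwork (a minimal sketch; the model path is a placeholder):

```cpp
#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core ie;
    auto network = ie.ReadNetwork("ssd_batch4.xml");  // placeholder path

    // "HDDL" targets the HDDL plugin, which manages the VPUs through the
    // HDDL service (hddldaemon), instead of a single "MYRIAD.x.y-ma2480" device
    auto exec = ie.LoadNetwork(network, "HDDL");
    return 0;
}
```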
If that still does not work, you could try running your model with the Benchmark C++ Tool or the Benchmark Python Tool and specify the batch size with the -b parameter (for example, -b 4) to verify that your model is correct.
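A sample invocation of the C++ benchmark tool (assuming the model IR is at model.xml; adjust the path and device as needed):

```
./benchmark_app -m model.xml -d HDDL -b 4
```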
Hope this helps.
Best regards, Max.