I am a newbie with OpenVINO (2019 R3). Here is my environment:
Description: Ubuntu 18.04.3 LTS
cmake version 3.10.2
gcc (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
I use super_resolution_demo as my template; by the way, I was able to compile and run super_resolution_demo on MYRIAD without any issues.
My trained colorization model is an ONNX model, which I converted to XML using mo.py. When I ran inference in CPU mode, it worked well, but when I ran inference on MYRIAD (Intel Neural Compute Stick 2, NCS2), I got the following errors:
myriadPlugin version ......... 2.1
Build ........... 32974
E: [xLink] [ 97778] [EventRead00Thr] eventReader:218 eventReader thread stopped (err -1)
E: [xLink] [ 97778] [Scheduler00Thr] eventSchedulerRun:576 Dispatcher received NULL event!
E: [global] [ 97779] [colorization] XLinkReadDataWithTimeOut:1494
E: [watchdog] [ 97779] [WatchdogThread] sendPingMessage:121 Failed send ping message: X_LINK_ERROR Event data is invalid
E: [ncAPI] [ 97779] [colorization] ncGraphAllocate:1947 Can't read output tensor descriptors of the graph, rc: X_LINK_ERROR
[ ERROR ] Failed to allocate graph: NC_ERROR
Any ideas what is going wrong? Also, in general, how do I debug this kind of inference error?
Any suggestions are highly welcome.
I think I have figured out what was going wrong with MYRIAD.
My input image size is 448 x 448. If I change the size to 224 x 224, I can run inference with my colorization model on MYRIAD, but the color is not nearly as good as in CPU mode, which surprised me: I thought the result from CPU mode should be the same as from MYRIAD, and if not, can we trust CPU mode? Anyway, if I further reduce the input image size to 112 x 112, I get good results in both MYRIAD and CPU modes.
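One plausible explanation for the CPU/MYRIAD difference (my assumption, not something Intel has confirmed for this model): the CPU plugin computes in FP32 while the MYRIAD plugin computes in FP16, and FP16 rounding errors can compound across the many layers of a deep colorization network. A minimal sketch of the per-value precision gap, using NumPy's half-precision type:

```python
import numpy as np

# FP32 keeps roughly 7 decimal digits; FP16 keeps roughly 3.
x = 0.1
fp32 = np.float32(x)
fp16 = np.float16(x)

print(float(fp32))  # very close to 0.1
print(float(fp16))  # visibly rounded

# A single FP16 rounding has relative error bounded by 2**-11 (~0.05%);
# across hundreds of layers these small errors can accumulate into
# visible color shifts, even though neither result is "wrong".
rel_err = abs(float(fp16) - x) / x
print(rel_err < 2 ** -11)
```

So a moderate quality gap between CPU and MYRIAD output is expected behavior rather than a sign that CPU mode is untrustworthy.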
The conclusion is that the input image size matters, but the problem is that there is absolutely no information about what happens when the image size exceeds the threshold.
Intel folks, can you please tell me how I can dig out that information?
Many many thanks
Thank you for reaching out.
Do you see the same behavior on the OpenVINO™ toolkit latest release 2020.1? If the issue persists, could you provide us your model to test it from our end?
Also, what base model did you use for your custom trained ONNX model?
Thanks for your reply.
I am installing 2020.1 now and will give it another try. If I still have the same issue, what's the best way to upload my model to your place? The size is a few hundred MB, which is too big for an email attachment, and I don't want to upload it to the public forum either; it's a commercial development.
The base model is similar to https://github.com/jantic/DeOldify; we made some tweaks to fit our application.
I've downloaded and installed 2020.1. I still can't run inference on MYRIAD, but this time, at least, I got some useful information:
E: [ncAPI] [ 83689] [colorization] ncGraphAllocate:2136 Not enough memory to allocate intermediate tensors on remote device
[ ERROR ] Failed to allocate graph: NC_OUT_OF_MEMORY
It looks like my model is too big for the MYRIAD's internal memory.
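As a rough sanity check (all layer widths below are hypothetical; I don't know the real per-layer shapes or how much of the device memory the graph may use), the intermediate-tensor footprint grows quadratically with the input side, which would explain why halving the resolution twice made the allocation succeed:

```python
# Rough estimate of intermediate (activation) memory for a U-Net-like
# colorization model. The channel counts are made-up examples; the
# point is only the quadratic growth with input resolution.

def activation_bytes(side, channels=(64, 128, 256, 512), dtype_bytes=2):
    """Sum FP16 activation sizes over a few downsampling stages."""
    total = 0
    h = w = side
    for c in channels:
        total += h * w * c * dtype_bytes  # one feature map per stage
        h //= 2                           # each stage halves H and W
        w //= 2
    return total

for side in (112, 224, 448):
    mb = activation_bytes(side) / (1024 ** 2)
    print(f"{side}x{side}: ~{mb:.0f} MB of activations")

# Doubling the input side quadruples activation memory, so 448x448
# needs 16x the memory of 112x112 under this model.
```

Under this back-of-the-envelope model, 448 x 448 input needs 16x the intermediate memory of 112 x 112, so hitting NC_OUT_OF_MEMORY only at the larger sizes is consistent.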
I've heard from another Intel source that MYRIAD only supports input image sizes up to 200 x 200; is that true? Also, the next generation, "Keem Bay", will have more memory; can you share some information about it? How much bigger will the "Keem Bay" memory be? 4X or even more?
Thanks a lot.
I will send you a PM where you can send your model privately so I can take a look at it.
More technical details about Keem Bay will be released at a later date.