I have already asked this question here but haven't received a proper response yet, so I am raising it again in the hope of getting a useful answer.
Since the model compiled with 12 SHAVEs runs on x86 but not on the TX2, I suspect it is a hardware issue.
If anyone knows the reason, please help. Even a guess is welcome if you are not sure.
Hi @Akhilesh ,
Thanks for contacting us. I've attempted to install the NCSDK on a TX2, and although I was able to work around the errors/warnings thrown by the install process, I haven't been able to get it to function properly (or at least as far as you have gotten it to work). My guess is that it may be a hardware problem, as one of the pre-reqs in the installation guide states "x86-64 with Ubuntu (64 bit) 16.04 Desktop". This might be the reason, as I don't think the install has been tested on a non-x86 board (aside from an RPi 3). That said, I am interested in getting the install further; I'd appreciate it if you could share a detailed list of the steps you followed to get the NCSDK to "work". Maybe there is something I can do to help.
Hi @Luis_at_Intel ,
I followed https://ncsforum.movidius.com/discussion/380/how-to-install-the-ncsdk-on-an-armv8-linux-kernel-ubunt... to install the SDK on the TX2.
PS: The code works properly with 5 or fewer SHAVEs.
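For reference, the SHAVE count is set at compile time via the `-s` option of `mvNCCompile`. A sketch of the invocation (the prototxt/caffemodel filenames here are placeholders, not the actual model files from this thread):

```shell
# Compile a Caffe model for the NCS, requesting 12 SHAVE cores.
# Lowering -s to 5 (or omitting it) is the setting that still works on the TX2.
mvNCCompile deploy.prototxt -w weights.caffemodel -s 12 -o graph
```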
Hi @Luis_at_Intel ,
I have some more information to share.
I tried a small network, about 2 convolution and 2 dense layers, and it works with 12 SHAVEs. But a big network like VGG16 does not work with 12 SHAVEs.
If you read the 7th point of the Errata in https://github.com/movidius/ncsdk/releases, you will see "Convolution may fail to find a solution for very large inputs." I think this may be the reason VGG16 is not working.
If you have any hints, please let me know.
If the size of the intermediate data generated is too large relative to the internal memory, it will not work well.
The internal memory is only 500 MB.
According to benchmarks I ran a while ago, it will consume roughly "(number of SHAVEs) x (model size)".
Thanks @PINTO, this is really helpful information for me. But my actual model uses VGG16 as a feature extractor (up to the flatten layer) plus 2 dense layers classifying 7 classes, so the total model size is only 35.9 MB.
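Plugging these numbers into PINTO's rule of thumb (treating the heuristic itself as an empirical guess, not an NCSDK-documented formula):

```python
# Rough memory estimate using the "(number of SHAVEs) x (model size)" heuristic.
model_size_mb = 35.9      # VGG16 feature extractor + 2 dense layers
shaves = 12
internal_memory_mb = 500  # figure quoted earlier in the thread

estimate_mb = shaves * model_size_mb
print(f"estimated usage: {estimate_mb:.1f} MB of {internal_memory_mb} MB")
# 12 x 35.9 = 430.8 MB -- already close to the limit even before
# counting intermediate activation data.
```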
My explanation was inadequate.
The intermediate data generated is larger than the model file itself.
There is no way to verify how large the intermediate data actually is.
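Although there is no official way to inspect it, the per-layer activation sizes can at least be estimated from the network architecture. A back-of-the-envelope sketch for VGG16's convolutional feature maps (batch 1, fp16, standard 224x224 input; this is my own estimate, not an NCSDK measurement):

```python
# Output shapes (H, W, C) of VGG16's conv layers for a 224x224 input.
vgg16_conv_outputs = (
    [(224, 224, 64)] * 2 +
    [(112, 112, 128)] * 2 +
    [(56, 56, 256)] * 3 +
    [(28, 28, 512)] * 3 +
    [(14, 14, 512)] * 3
)

BYTES_FP16 = 2  # the NCS computes in 16-bit floats

sizes_mb = [h * w * c * BYTES_FP16 / 2**20 for h, w, c in vgg16_conv_outputs]
print(f"largest single activation: {max(sizes_mb):.1f} MiB")  # ~6.1 MiB
print(f"sum over all conv layers:  {sum(sizes_mb):.1f} MiB")  # ~25.8 MiB
```

So per copy the activations are modest, but if each SHAVE keeps its own working buffers (as the "(number of SHAVEs) x (model size)" observation suggests), the totals multiply quickly.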