I am working on a real-time deep learning model using an Intel NCS 2 with a BeagleBone Black.
Inference runs successfully; however, loading the model to the device is quite slow: it takes about 15 minutes on the BeagleBone versus about 1 minute on a mini PC. I would like to understand why loading takes this long, and how to reduce the time.
In addition, the loading time varies with the model: for example, Faster R-CNN loads faster than Mask R-CNN.
Thanks in advance,
Loading time can vary from model to model depending on the framework, topology, and size. Could you share some additional information about the model you are using?
- What framework is the model based on?
- What topology is the model based on?
- What is the size of the model?
- Is it a pre-trained model or a custom trained model?
For your loading-time comparison, are you also loading the model to the Intel NCS 2 from the mini PC, or only to the mini PC's own CPU?
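To narrow down where the time goes, it may also help to time the load step in isolation. Below is a minimal sketch of how that could look with OpenVINO's Inference Engine Python API; the `time_load` helper is a generic timer I am introducing for illustration, and the model paths (`model.xml` / `model.bin`) are placeholders, not files from your setup:

```python
import time

def time_load(load_fn):
    """Call a zero-argument load function and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = load_fn()
    return result, time.perf_counter() - start

# Sketch of timing the NCS 2 load step with OpenVINO (paths are placeholders):
#
# from openvino.inference_engine import IECore
#
# ie = IECore()
# net = ie.read_network(model="model.xml", weights="model.bin")
# exec_net, seconds = time_load(
#     lambda: ie.load_network(network=net, device_name="MYRIAD"))
# print(f"load_network to MYRIAD took {seconds:.1f} s")
```

Timing `read_network` and `load_network` separately would show whether the bottleneck is parsing the IR files on the BeagleBone's CPU or transferring and compiling the network on the stick itself.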