Hello,
I am working on a real-time deep learning model using an Intel NCS 2 with a BeagleBone Black.
Inference itself works, but loading the model to the device is quite slow: it takes about 15 minutes on the BeagleBone and about 1 minute on a mini PC. I would like to understand why loading takes so long, and how to reduce this time.
In addition, the loading time varies with the model. For example, Faster R-CNN loads faster than Mask R-CNN.
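For reference, this is roughly how I measure the two steps. The timing helper is generic; the commented-out usage below it assumes OpenVINO's Python `IECore` API and a model converted to IR format (the file names `model.xml` / `model.bin` are placeholders, not my actual files):

```python
import time

def timed(label, fn, *args, **kwargs):
    """Call fn(*args, **kwargs), print the elapsed wall-clock time, and return the result."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    print(f"{label}: {time.perf_counter() - start:.1f} s")
    return result

# Assumed usage with OpenVINO and an NCS 2 (requires the openvino package and the device):
# from openvino.inference_engine import IECore
# ie = IECore()
# net = timed("read_network", ie.read_network,
#             model="model.xml", weights="model.bin")      # parse IR on the host
# exec_net = timed("load_network", ie.load_network,
#                  network=net, device_name="MYRIAD")      # compile and upload to NCS 2
```

Timing `read_network` and `load_network` separately shows whether the host-side parsing or the device upload/compile step dominates.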
Thanks in advance,
Best regards
Taylan
Hi Taylan,
The loading time may vary from model to model depending on the framework, topology and size. Could you share additional information about the model that you are using?
- What framework is the model based on?
- What topology is the model based on?
- What is the size of the model?
- Is it a pre-trained model or a custom-trained model?
For your loading-time comparison, are you also loading the model to the Intel NCS 2 on the mini PC?
Regards,
Jesus
