Hello dear community,
I'm a newbie to the Compute Stick 2, so please bear with me if the question sounds stupid.
I have a running application of an interesting open-source project named "monodepth2" (https://github.com/nianticlabs/monodepth2), which calculates depth maps from monocular images. The project relies on the power of a GPU and makes extensive use of PyTorch and NumPy.
Since not everybody has a compatible GPU these days, I was wondering whether it would be possible to make use of the NCS2 (https://software.intel.com/en-us/neural-compute-stick), since I have read that there are ports of PyTorch to the Myriad VPU.
Is anybody able to give me a "could work, worth a try" for this idea?
Anybody?
Thank you for reaching out.
The OpenVINO toolkit supports multiple hardware platforms, but not all frameworks/topologies are supported. For additional information about the Intel® Neural Compute Stick 2, please check the MYRIAD plugin documentation for supported frameworks/topologies.
Regards,
Mauricio R.
Thanks for your answer. It doesn't help me much, since I don't want to use OpenVINO things.
I apologize for the confusion. The model you are trying to use is not supported by the OpenVINO Toolkit.
The link in my previous response points to the MYRIAD plugin, which is used by the OpenVINO toolkit to run inference on the Intel Neural Compute Sticks. The documentation has a list of supported topologies/models.
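As an illustration only (this is not specific to monodepth2), below is a minimal sketch of how a model that has already been converted to the OpenVINO IR format would be loaded onto the stick through the MYRIAD plugin. The file names and input shape are placeholders, and the OpenVINO 2020.x Python API is assumed.

```python
# Minimal sketch: load an OpenVINO IR model on the NCS2 via the MYRIAD plugin.
# "model.xml"/"model.bin" are placeholder names for an already converted model.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")
exec_net = ie.load_network(network=net, device_name="MYRIAD")  # target the NCS2

input_name = next(iter(net.input_info))
input_shape = net.input_info[input_name].input_data.shape      # e.g. [1, 3, H, W]
dummy = np.random.rand(*input_shape).astype(np.float32)

result = exec_net.infer(inputs={input_name: dummy})             # dict of output blobs
```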
Please let me know if you have additional questions.
Regards,
Mauricio R.
:) Now it's me who is confused. Of course, the Intel Neural Compute Stick NCS2 _is_ supported by OpenVINO (or at least by the Intel version of it).
Thank you.
To clarify, the Intel® Neural Compute Stick 2 is supported by the OpenVINO toolkit. However, the OpenVINO toolkit does not support the model that you are trying to use. There is a list of supported models here.
Regards,
Mauricio R.
No, sorry, this is wrong. Anyway, thanks for your patience.
I mean, they say:
The Inference Engine MYRIAD plugin is developed for inference of neural networks on Intel® Movidius™ Neural Compute Stick and Intel® Neural Compute Stick 2.
Thank you.
Correct, the Inference Engine MYRIAD plugin is responsible for inference of neural networks on the Intel® Movidius™ Neural Compute Stick and Intel® Neural Compute Stick 2. Both versions of the Intel Neural Compute Stick are supported by the OpenVINO Toolkit.
However, the monodepth2 project you are trying to use with the Intel® Neural Compute Stick 2 is not supported.
Regards,
Mauricio R.
Yes, agreed. But I'm still looking for ways to make monodepth2 use the NCS2.
Thanks
Hello,
You could give it a try and share your progress with the community. There may be other users in the community who would be interested in collaborating on your issue too.
Regards,
Mauricio R.
You are funny. I don't even have a clue what I'm doing... I could share my starting point and my questions with the community, if there is interest.
It is really worth it. Monodepth2 is stunning, judging from the results: depth frames from a monocular camera. On a GPU in a good workstation we are reaching about 120 fps, which means better than real time.
Today I was trying to convert the PyTorch model to ONNX. It gave some results, but honestly, I have no idea...
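In case anyone wants to reproduce the attempt, this is roughly what I did. Treat it as a sketch only: the encoder class, the checkpoint filtering and the 640x192 input size are taken from the monodepth2 repo, while the file and tensor names are my own guesses.

```python
# Rough sketch: export the monodepth2 depth encoder to ONNX.
# Assumes the monodepth2 repo is on the Python path and a pretrained
# checkpoint ("encoder.pth") has been downloaded; paths are placeholders.
import torch
import networks  # from the monodepth2 repository

encoder = networks.ResnetEncoder(18, False)            # ResNet-18 backbone
state = torch.load("encoder.pth", map_location="cpu")  # also holds 'height'/'width' keys
encoder.load_state_dict({k: v for k, v in state.items()
                         if k in encoder.state_dict()})
encoder.eval()

dummy = torch.randn(1, 3, 192, 640)                    # default 640x192 input size
torch.onnx.export(encoder, dummy, "encoder.onnx",
                  opset_version=11,
                  input_names=["image"])
```

The decoder would need the same treatment, and I have no idea yet whether the resulting ONNX graph is something the Model Optimizer accepts.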
BTW: it seems I'm appearing as different persons. nyoun7 is not my Korean version; it's me under a different account.
Hello Neil,
The model that you are using is not supported and, unfortunately, there is nothing we can do from our end. Regarding the user name: go to the community web page, click on your nickname in the top right corner, then click on the "My Profile" option and edit the nickname you want to be displayed.
Regards,
Mauricio R.
Hi Mauricio,
thanks for the follow-up. No issue; in the meantime I have returned the stick. It was supposed to replace a GPU. Some benchmarking on my PC showed that the performance of both my CPU and the NCS2 was more or less identical (mid-range i5 PC). Since monodepth2 ran at 3-4 fps on my CPU, I needed a speed-up of at least a factor of 10. A GPU can do that, the NCS2 cannot. The NCS2 seems to be a perfect piece for a Raspberry Pi or similar, not for an ordinary laptop.
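For reference, my comparison was along these lines: a crude timing loop rather than a rigorous benchmark, with placeholder IR file names and the same OpenVINO Python API assumption as in the earlier sketch.

```python
# Crude latency comparison between the CPU and the NCS2 (MYRIAD) for one IR model.
# File names and the input shape are placeholders, not the real monodepth2 artifacts.
import time
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="encoder.xml", weights="encoder.bin")
input_name = next(iter(net.input_info))
dummy = np.random.rand(1, 3, 192, 640).astype(np.float32)

for device in ("CPU", "MYRIAD"):
    exec_net = ie.load_network(network=net, device_name=device)
    runs = 50
    start = time.time()
    for _ in range(runs):
        exec_net.infer(inputs={input_name: dummy})
    print(f"{device}: {runs / (time.time() - start):.1f} inferences/s")
```

Both devices landed in the same ballpark for me, which is how I arrived at the conclusion above.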
Regarding the model: I was already well on the way to converting the PyTorch model of monodepth2 to ONNX. It is not necessary anymore, because it later turned out that the stick was too weak.
Waiting on the refund now. The stick was returned a week ago.
Thanks anyway