Hi there,
So I'm hoping to do two camera connections directly to the Myriad X, to leverage the depth-estimation hardware. Are there any Intel examples on how to do this?
Thanks in advance,
Brandon
Oh, and specifically, I'd love to connect the Intel D430 RealSense module to the Myriad X.
Thanks again,
Brandon
I've used an RPi:
https://timeless.ninja/blog/the-world-s-first-ai-edge-camera-powered-by-two-intel-myriad-x-vpus
and an Up board to create two AI Edge camera prototypes.
Hope this helps
Hi Martin!
So I actually saw your blog independently; it's really neat work, and I love the use of the dummy camera for the housing! So clever.
So what I'm doing right now is actually using an Intel RealSense D435 + Raspberry Pi + MobileNet-SSD to find objects and estimate their distance in real time. In particular this has TONS of applicability for autonomous warehousing operations... and if the framerate can be ~30 FPS, then the robots don't have to be slowed down, which allows them to be a value-add rather than a hindrance. The background here is that the Movidius X is actually capable of usable-video (300x300) resolutions at ~45 FPS, and probably higher, if the right data paths are used.
Anyways, I'm using PINTO0309's codebase, thanks to Mr. Katsuya Hyodo, who's also on here.
Here's an example of it running on the Movidius 2 with Raspberry Pi:
https://photos.app.goo.gl/r5QF4bmapz61HTVeA
And Mr. Hyodo has a slew of examples on his GitHub, here.
So this works, and pulls off ~6 FPS. And with the Movidius X it pulls off ~12 FPS, both using the Pi. But without the Pi, it's more like 45 FPS; see the LattePanda demo here. The Pi is the bottleneck, and it doesn't have to be: the stereo pair is actually designed (by Intel) to hook directly to the Movidius X (also by Intel), so that you can eliminate this bottleneck. What I'm looking for is the documentation to do this... so far I haven't been able to get a response from Intel WRT documentation.
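For reference, here's roughly what this current path looks like in code (a minimal sketch, assuming pyrealsense2 and an OpenCV build with the Inference Engine backend, as in that codebase; the model filenames are placeholders):

```python
# Sketch of the current Pi-hosted path: RealSense frames pass through the
# Pi's CPU and USB to the Myriad and back. Model filenames are placeholders.
import cv2
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)
align = rs.align(rs.stream.color)  # align depth pixels to the color image

# MobileNet-SSD (OpenVINO IR), executed on the Myriad over USB.
net = cv2.dnn.readNet("mobilenet-ssd.xml", "mobilenet-ssd.bin")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)

try:
    while True:
        frames = align.process(pipeline.wait_for_frames())
        color = np.asanyarray(frames.get_color_frame().get_data())
        depth = frames.get_depth_frame()

        # 300x300 is the MobileNet-SSD input resolution mentioned above.
        blob = cv2.dnn.blobFromImage(color, 0.007843, (300, 300), 127.5)
        net.setInput(blob)
        detections = net.forward()  # shape: [1, 1, N, 7]

        h, w = color.shape[:2]
        for det in detections[0, 0]:
            if det[2] < 0.5:  # confidence threshold
                continue
            box = (det[3:7] * [w, h, w, h]).astype(int)
            x1, y1, x2, y2 = np.clip(box, 0, [w - 1, h - 1, w - 1, h - 1])
            # Distance (meters) at the box center, straight off the depth frame.
            dist = depth.get_distance(int((x1 + x2) / 2), int((y1 + y2) / 2))
            cv2.rectangle(color, (x1, y1), (x2, y2), (0, 255, 0), 2)
            cv2.putText(color, "%.2f m" % dist, (x1, y1 - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)

        cv2.imshow("detections", color)
        if cv2.waitKey(1) == 27:  # Esc to quit
            break
finally:
    pipeline.stop()
```

Every frame in that loop crosses the Pi and the USB bus twice, which is exactly the traversal I want to eliminate.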
So to be clear, the thing I'm looking for is the specifics of how to directly connect the stereo camera pair (the D430 module) to the Movidius X, so the frames don't have to traverse parts unnecessarily, as below:
Current frame path:
D430 depth module -> D4 Stereo Processor -> Raspberry Pi -> Movidius X -> Raspberry Pi -> Display/storage
Desired frame path:
D430 depth module -> Movidius X -> Raspberry Pi -> Display/storage
So there are two 'whys' here:
- The D4 stereo processor is not needed when using the Movidius X, as the Movidius X has 3x pairs of stereo-depth hardware built in.
- The Raspberry Pi limits the framerate by at least 50%, and it doesn't have to be/shouldn't be in the middle. It could be used to just show/save the final result, which would give an object detection rate of around 45 FPS instead of 12 FPS, while also eliminating a bunch of power use in the process.
Thanks in advance,
Brandon
Update: Yay!!! Intel just got back to me offline. So I'm working with them there.
Best,
Brandon
Bob T. wrote: Movidius X on the Pi? I am confused. How do you run the Movidius X on the Pi?
Thanks
Hey Bob,
Check out PINTO0309's github:
https://github.com/PINTO0309/MobileNet-SSD-RealSense
He has the full instructions there on how to do it. Intel released the OpenVINO preview for the Raspberry Pi on December 21st, and PINTO0309 had a good chunk of his repository converted by the 23rd, and now has it pretty optimized.
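If it's useful, the core of what the Pi side does boils down to a few Inference Engine calls (a minimal sketch against the 2018-era OpenVINO Python API that the Pi preview shipped; the model filenames are placeholders):

```python
# Minimal sketch: run an OpenVINO IR model on an NCS/NCS2 from a Raspberry Pi.
# Model filenames are placeholders; the IR must be generated beforehand with
# the Model Optimizer on a desktop, since the Pi package ships only the runtime.
import cv2
from openvino.inference_engine import IENetwork, IEPlugin

net = IENetwork(model="mobilenet-ssd.xml", weights="mobilenet-ssd.bin")
input_blob = next(iter(net.inputs))
n, c, h, w = net.inputs[input_blob].shape

plugin = IEPlugin(device="MYRIAD")  # target the Movidius stick
exec_net = plugin.load(network=net)

# Resize/transpose a BGR frame to the network's NCHW input and run it.
frame = cv2.imread("test.jpg")
blob = cv2.resize(frame, (w, h)).transpose((2, 0, 1)).reshape((n, c, h, w))
result = exec_net.infer(inputs={input_blob: blob})  # dict of output blobs
```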
The only latent (no pun intended) big thing to fix is that the NCS1 solution has great latency (i.e. really low), while the NCS2 latency is high. I'm certain that'll get sorted out though.
So the only annoying bit about installing everything on a Pi right now is the compilation, but you can use Docker + QEMU to do the compilation on a desktop instead. Kyle M. Douglass shows how to do that, here.
Both PINTO0309 and I are planning on setting up a Docker image like that... we just haven't gotten to it. It'll make the setup way easier/faster. And then a pre-built image can just be distributed for running on the Pi, with all the compilation already done.
Best,
Brandon
Ah, yes, sorry, I should be clear that I'm using the NCS2 right now. And yes, I'd like to get closer to the actual hardware (I'm an electrical engineer) to increase efficiency here. Ideally I want to make my own boards to eliminate the connectors/junctions/processors/etc. that are unnecessary in my application.
I am using one NCS2 and running the example provided by OpenVINO (the security camera car detection demo) as a benchmark on the older Up board (not the Up2). It has an Atom Z8350, and the results are so much better than the Pi's that I am not going back: 13 FPS for a single stick. Now I can't wait to get another NCS2 stick. The Up2 lets you plug the Myriad in directly via an M.2 slot, which can hold 2 Myriad X processors. Going to M.2 should provide a ton of bandwidth, although I have not tried it.
Just FYI
Hi Bob,
Just as a heads up, on the repository I mentioned, Mr. Hyodo is getting 24 FPS with a single NCS2 on a Raspberry Pi. So 24 FPS with a single stick. And with the direct camera connection I'm describing, 45 FPS should be achievable with a Pi host.
Best,
Brandon
Gilles, Brandon wrote: Hi Martin! [...]
Thanks Brandon, really interesting stuff! You know, I have done some work with the Myriad X on the Up board, and indeed the FPS was around 40. I hoped that the RPi with the NCS2 would give something similar, but like you say there seems to be a bigger bottleneck. I get around 20 FPS on face detection with MobileNet-SSD and around 13 FPS on our own PPE (Personal Protective Equipment) detection model. What, in your opinion, is this bottleneck? The fact that the CPU is slower and needs to orchestrate data transfers to and from the Myriad X via USB?
Good luck with your project, it does sound really cool!
Boaz, Jabulon
Tue, 01/08/2019 - 10:42
What is the command to run inference using the Raspberry Pi camera?
-i /dev/video0, -i /dev/video1, -i cam, and many other commands don't work with the Pi camera; -i /dev/video0 and -i /dev/video1 only work with USB web cameras.
E.g., this works with a USB webcam on the Pi but does not work with the Raspberry Pi camera:
./armv7l/Release/object_detection_demo_ssd_async -i /dev/video0 -m frozen_inference_graph.xml -d MYRIAD
Thanks, and sorry about the delay here. So I haven't done an investigation yet, but I think the bottleneck comes down to the extra traverses of frames: from the camera, through the Pi's Broadcom CPU, through USB, to the neural processor, out of the neural processor, through USB again, through the Broadcom CPU again, to the GPU, and then to the display.
So my hunch is this isn't a throughput thing at all... it's a timing thing. With all this back and forth, the timing of everything being ready doesn't line up well, and the added waiting (item x isn't ready for just a bit, so item y waits, and then when item x is ready, it takes some time for item y to get going) results in the lower frame rate you see.
The same thing happens when optimizing networking gear, and it's why DMA, stream processors, etc. are so important: they prevent the 'x is ready, y isn't; now y is ready, x isn't' sort of performance degradation. The current path has effectively none of these optimizations, so I think there's plenty of opportunity for these mismatches in resources being busy/ready.
The possibility exists that there's just a throughput bottleneck too, but I think it's the mismatch in readiness that's causing this.
One thing I want to do is just use the SHAVE processors to do the conversion of the input resolution to the 300x300 pixel blob that the MobileNet network needs, which would eliminate all the issues discussed above, as the data path would be camera -> SHAVE processor -> neural processor, where the latter two are internal to the Myriad X.
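In the meantime, one way to hide some of that back-and-forth on the existing path is the Inference Engine's async API (the same idea as the object_detection_demo_ssd_async sample mentioned above): keep two requests in flight so the Pi prepares frame N+1 while the stick works on frame N. A rough sketch, with placeholder model filenames:

```python
# Sketch: double-buffered async inference so Pi-side work and Myriad-side
# work overlap instead of strictly alternating. Model paths are placeholders.
import cv2
from openvino.inference_engine import IENetwork, IEPlugin

net = IENetwork(model="mobilenet-ssd.xml", weights="mobilenet-ssd.bin")
input_blob = next(iter(net.inputs))
n, c, h, w = net.inputs[input_blob].shape

plugin = IEPlugin(device="MYRIAD")
exec_net = plugin.load(network=net, num_requests=2)  # two in-flight requests

def preprocess(img):
    # BGR HWC -> NCHW blob at the network's input size
    return cv2.resize(img, (w, h)).transpose((2, 0, 1)).reshape((n, c, h, w))

cap = cv2.VideoCapture(0)
cur, nxt = 0, 1
ok, frame = cap.read()
exec_net.start_async(request_id=cur, inputs={input_blob: preprocess(frame)})

while ok:
    ok, next_frame = cap.read()
    if ok:
        # Queue the next frame before blocking on the current one.
        exec_net.start_async(request_id=nxt,
                             inputs={input_blob: preprocess(next_frame)})
    if exec_net.requests[cur].wait(-1) == 0:
        detections = exec_net.requests[cur].outputs  # dict of output blobs
        # ...draw/consume detections for `frame` here...
    frame = next_frame
    cur, nxt = nxt, cur
```

It doesn't remove the extra traversals, but it keeps both sides busy, which is exactly the readiness mismatch described above.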
Best,
Brandon
Boaz, Jabulon wrote: What is the command to run inference using the Raspberry Pi camera? [...]
Try this:
https://gist.github.com/treinberger/c63cb84979a4b3fb9b13a2d290482f4e
I changed PINTO's code to work with RPi cams.
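For anyone skimming, the capture-side change boils down to reading frames through the picamera module instead of a /dev/video* device. A minimal sketch of that loop (resolution and framerate are arbitrary):

```python
# Sketch: grab frames from the Raspberry Pi camera with picamera instead of
# a /dev/video* USB webcam, and hand them to the detection code as numpy arrays.
from picamera import PiCamera
from picamera.array import PiRGBArray

camera = PiCamera(resolution=(640, 480), framerate=30)
raw = PiRGBArray(camera, size=(640, 480))

for frame in camera.capture_continuous(raw, format="bgr", use_video_port=True):
    image = frame.array  # BGR numpy array, ready for cv2/blobFromImage
    # ...feed `image` into the detection pipeline here...
    raw.truncate(0)  # reset the buffer before the next capture
```

Alternatively, loading the V4L2 driver with `sudo modprobe bcm2835-v4l2` exposes the Pi camera as /dev/video0, so the unmodified demos' -i /dev/video0 flag works too.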
Hey everyone!
Figured I'd circle back on this thread to mention that we're getting closer in our effort to have direct connections to the Myriad X, to allow disparity depth + AI on the same platform.
More updates here:
https://hackaday.io/project/163679-luxonis-depthai
And you'll eventually be able to buy this on luxonis.com; we'll also be launching a CrowdSupply campaign for the release of Luxonis DepthAI, hopefully in September.
And if you want to read about our final goal, check out commuteguardian.com.
Best,
Brandon & The Luxonis Team!
And here's the version we're working on. It has a Raspberry Pi Compute Module and our own Myriad X module that we made (to allow easier integration into a variety of our designs):
Cheers,
Brandon
luxonis.com
Hi, did you find any examples or documentation on how to use the stereo depth blocks of the Myriad X with custom stereo cameras?
Hi Alexey,
Yes, we have stereo working, and we're experimenting with different methods for smoothing it etc. and implementing these on the SHAVEs.
https://discuss.luxonis.com/d/24-initial-disparity-depth-tuning-and-experiments
So we're working on the firmware to expose the results over a Python API call.
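For context, the disparity those stereo blocks produce maps to metric depth by the usual rectified-stereo relation depth = focal_length_px * baseline / disparity. A tiny sketch (the baseline and focal values are placeholders, not our actual calibration):

```python
# Sketch: standard disparity -> depth conversion for a rectified stereo pair.
# baseline_m and focal_px are placeholder values, not actual calibration data.
def disparity_to_depth(disparity_px, baseline_m=0.075, focal_px=860.0):
    if disparity_px <= 0:
        return float("inf")  # no match / infinitely far
    return baseline_m * focal_px / disparity_px

print(disparity_to_depth(32.0))  # ~2.02 m with these placeholder values
```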
Feel free to shoot me an email at brandon at luxonis dot com to discuss. I'm curious to hear about your end application or use case, as we're making a system that allows modular cameras, so you might be able to use it directly for your stereo needs.
Best,
Brandon
Hi Alexey and Brandon,
Have you made any progress in using the stereo depth blocks of the Myriad X for custom stereo cameras?
Best,
Paola
Hi Paola,
Sorry, I somehow missed this. You can now buy this solution here:
https://www.crowdsupply.com/luxonis/depthai
And also on our store here:
It does neural-inference object detection and depth together to give 3D object localization. Example below, with a Raspberry Pi used to visualize the results:
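For the curious, the localization step itself is just pinhole back-projection: take the detection's center pixel, read the aligned depth there, and unproject with the camera intrinsics. A minimal sketch (fx, fy, cx, cy are placeholder intrinsics):

```python
# Sketch: 2D detection + depth -> 3D position via pinhole back-projection.
# fx, fy, cx, cy are placeholder intrinsics for a 640x480 image.
def localize(bbox, depth_m, fx, fy, cx, cy):
    """bbox = (x1, y1, x2, y2) in pixels; depth_m = depth at its center."""
    u = (bbox[0] + bbox[2]) / 2.0
    v = (bbox[1] + bbox[3]) / 2.0
    x = (u - cx) * depth_m / fx  # meters, camera coordinates
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# e.g. a detection centered at (400, 240), seen 2.0 m away:
print(localize((380, 220, 420, 260), 2.0, fx=600.0, fy=600.0, cx=320.0, cy=240.0))
# -> approximately (0.267, 0.0, 2.0)
```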
