Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Direct Camera Connections to Myriad X

Gilles__Brandon
Beginner
2,371 Views

Hi there,

So I'm hoping to do two camera connections directly to the Myriad X, to leverage the depth-estimation hardware.  Are there any Intel examples on how to do this?

Thanks in advance,

Brandon

0 Kudos
21 Replies
Gilles__Brandon
Beginner
2,073 Views

Oh, and specifically, I'd love to connect the Intel D430 RealSense module to the Myriad X.

 

Thanks again,

Brandon

0 Kudos
Gilles__Brandon
Beginner
2,073 Views

Hi Martin!

 

So I actually saw your blog independently, it's really neat work and I love the use of the dummy camera for the housing!  So clever. 

So what I'm doing right now is actually using an Intel RealSense D435 + Raspberry Pi + MobileNet-SSD to find objects and estimate their distance in real time.  So in particular this has TONS of applicability for autonomous warehousing operations... and if the framerate can be ~30 FPS, then the robots don't have to be slowed down, which allows them to be a value-add rather than a hindrance.  The background here is that the Movidius X is actually capable of usable-video (300x300) resolutions at ~45 FPS, and probably higher, if the right data paths are used.

Anyways, I'm using PINTO0309's codebase, thanks to Mr. Katsuya Hyodo, who's also on here.

Here's an example of it running on the Movidius 2 with Raspberry Pi:

https://photos.app.goo.gl/r5QF4bmapz61HTVeA

And Mr. Hyodo has a slew of examples on his GitHub, here.

So this works, and pulls off ~6 FPS.  And with Movidius X it pulls off ~12FPS, both using the Pi.  But, without the Pi, it's more like 45FPS, see the LattePanda demo here.  The Pi is the bottleneck, and it doesn't have to be, as the stereo pair actually is designed (by Intel) to hook directly to the Movidius X (also by Intel), so that you can eliminate this bottleneck.  What I'm looking for is the documentation to do this... so far I haven't been able to get a response from Intel WRT documentation.
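
To make the current data path concrete, here's roughly the host-side loop in this kind of setup. This is a minimal sketch using OpenVINO's Python inference-engine API (as of the 2019 releases), not PINTO0309's actual code, and the model filenames are just placeholders:

# Rough sketch of the host-side loop this pipeline runs today -- NOT
# PINTO0309's actual code. Assumes OpenVINO's Python API (2019 releases)
# and a MobileNet-SSD IR at the paths shown (placeholder filenames).
import cv2
from openvino.inference_engine import IENetwork, IECore

ie = IECore()
net = IENetwork(model="mobilenet-ssd.xml", weights="mobilenet-ssd.bin")
exec_net = ie.load_network(network=net, device_name="MYRIAD")
input_blob = next(iter(net.inputs))
out_blob = next(iter(net.outputs))
n, c, h, w = net.inputs[input_blob].shape   # 1, 3, 300, 300 for MobileNet-SSD

cap = cv2.VideoCapture(0)                   # RealSense RGB shows up as a UVC device
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Every frame crosses USB into the Pi, gets resized on the Pi's CPU,
    # then crosses USB again to the stick -- the extra hops described above.
    blob = cv2.resize(frame, (w, h)).transpose((2, 0, 1)).reshape((n, c, h, w))
    res = exec_net.infer(inputs={input_blob: blob})
    for det in res[out_blob][0][0]:         # rows: [id, label, conf, x1, y1, x2, y2]
        if det[2] > 0.5:
            x1, y1 = int(det[3] * frame.shape[1]), int(det[4] * frame.shape[0])
            x2, y2 = int(det[5] * frame.shape[1]), int(det[6] * frame.shape[0])
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) == 27:                # Esc to quit
        break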

So to be clear, the thing I'm looking for is the specifics on how to directly connect the stereo camera pair (the D430 module) to the Movidius X, so as to eliminate the frames having to traverse parts unnecessarily, as below:

Current frame path:

D430 depth module -> D4 Stereo Processor -> Raspberry Pi -> Movidius X -> Raspberry Pi -> Display/storage

Desired frame path:

D430 depth module -> Movidius X -> Raspberry Pi -> Display/storage

So there are two 'whys' to this:

  1. The D4 stereo processor is not needed when using the Movidius X, as the Movidius X has built-in stereo-depth hardware supporting 3 camera pairs.
  2. The Raspberry Pi limits the framerate by at least 50%, and it doesn't have to be/shouldn't be in the middle.  It could be used to just show/save the final result, which would result in an object detection rate of around 45-ish FPS instead of 12 FPS, while also eliminating a bunch of power use in the process.

Thanks in advance,

Brandon

 

 

0 Kudos
RTasa
New Contributor I
2,073 Views
Movidius X on the Pi? I am confused. How do you run the Movidius X on the Pi? Thanks
0 Kudos
Gilles__Brandon
Beginner
2,073 Views

Update:  Yay!!!  Intel just got back to me offline.  So I'm working with them there.

 

Best,

Brandon

0 Kudos
Gilles__Brandon
Beginner
2,073 Views

Bob T. wrote:

Movidius X on the Pi? I am confused. How do you run the Movidius X on the Pi?
Thanks

 

Hey Bob,

Check out PINTO0309's github:

https://github.com/PINTO0309/MobileNet-SSD-RealSense

He has the full instructions there on how to do it.  Intel released OpenVINO preview for the Raspberry Pi on December 21st, and PINTO0309 had a good chunk of his repository converted by the 23rd, and now has it pretty optimized.

The only latent (no pun intended) big thing to fix is that the NCS1 solution has great latency (i.e. really low), and the NCS2 latency is high.  I'm certain that'll get sorted out though.  

So the only annoying bit about installing everything on a Pi right now is the compilation, but you can use Docker + QEMU to do the compilation on a desktop instead.  Kyle M. Douglass shows how to do that, here.

Both PINTO0309 and I are planning on setting up a Docker image like that... just haven't gotten to it.  It'll make the setup way easier/faster.  And then a pre-setup image can just be distributed for running on the Pi, with all the compilation already done.

Best,

Brandon

 

0 Kudos
RTasa
New Contributor I
2,073 Views
So when you say Movidius X, you mean the NCS2 or NCS USB stick, not the actual chip like they have for the UP board, connected via M.2?
0 Kudos
Gilles__Brandon
Beginner
2,073 Views

Ah, yes, sorry, I should be clear that I'm using the NCS2 right now.  And yes, I'd like to get closer to using the actual hardware (I'm an electrical engineer) to increase efficiency here.  Ideally I want to make my own boards to eliminate connectors/junctions/processors/etc. which are unnecessary in my application.

0 Kudos
RTasa
New Contributor I
2,073 Views

I am using one NCS2 and running the example provided by OpenVINO (the security-camera car-detection demo) as a benchmark on the older UP board (not the UP2). It has an Atom Z8350, and the results are so much better than the Pi that I am not going back: 13 FPS for a single stick. Now I can't wait to get another NCS2 stick. The UP2 allows you to plug the Myriad in directly via an M.2 slot, which can hold 2 Myriad X processors. Going through M.2 should provide a ton of bandwidth, although I have not tried it.

Just FYI

0 Kudos
Gilles__Brandon
Beginner
2,073 Views

Hi Bob,

Just a heads-up that on the repository I mentioned, Mr. Hyodo is getting 24 FPS with a single NCS2 on a Raspberry Pi.  So 24 FPS with a single stick.  And with the direct camera connection I'm mentioning, 45 FPS should be achievable with a Pi host.

Best,

Brandon

0 Kudos
Peniak__Martin
Beginner
2,073 Views

Gilles, Brandon wrote:

So what I'm doing right now is actually using an Intel RealSense D435 + Raspberry Pi + MobileNet-SSD to find objects and estimate their distance in real time. [...] The thing I'm looking for is the specifics on how to directly connect the stereo camera pair (the D430 module) to the Movidius X, so as to eliminate the frames having to traverse parts unnecessarily. [...]


Thanks Brandon, really interesting stuff! You know, I have done some work with the Myriad X on the UP board, and indeed the FPS was around 40. I hoped that the RPi with the NCS2 would give something similar, but like you say, there seems to be a bigger bottleneck. I get around 20 FPS on face-detection MobileNet-SSD and around 13 FPS on our own PPE (Personal Protective Equipment) detection model. What, in your opinion, is this bottleneck? The fact that the CPU is slower and needs to orchestrate data transfers to and from the Myriad X via USB?

Good luck with your project, it does sound really cool!

0 Kudos
Boaz__Jabulon
Beginner
2,073 Views


What is the command to run the inference using the Raspberry Pi camera?
-i /dev/video0, -i /dev/video1, -i cam, and many other options don't work with the Pi camera; -i /dev/video0 and -i /dev/video1 only work with USB web cameras.

E.g., this works with a USB webcam on the Pi but does not work with the Raspberry Pi camera:
./armv7l/Release/object_detection_demo_ssd_async -i /dev/video0 -m frozen_inference_graph.xml -d MYRIAD

0 Kudos
Gilles__Brandon
Beginner
2,073 Views

Thanks, and sorry about the delay here.  So I haven't done an investigation yet, but I think the bottleneck comes down to the extra traverses of frames from the camera, through the Pi's Broadcom CPU, through USB, to the neural processor, out of the neural processor, through USB again, through the Broadcom CPU again, to the GPU, and then the display.

So my hunch is this isn't a throughput thing at all... it's a timing thing.  With all this back and forth, the timing of everything being ready doesn't line up well, and the added 'waiting' (item x isn't ready for just a bit, so item y waits, and then when item x is ready, it takes some time for item y to get going) results in the lower frame rate you see.

So the same thing happens when optimizing networking gear, and it's why DMA, stream processors, etc. are so important: to prevent the 'x is ready, y isn't; now y is ready, x isn't' sort of performance degradation.  And the current path has effectively none of these optimizations, so I think there's plenty of opportunity for these mismatches in resources being busy/ready.

The possibility exists that there's just a throughput bottleneck too, but I think it's the mismatch in readiness that's causing this.

One thing I want to do is just use the SHAVE processors to do the conversion of the input resolution to the 300x300 pixel blob that the MobileNet network needs, which would then eliminate all the issues discussed above, as the data path would be camera -> SHAVE processor -> neural processor, where the latter two are internal to the Myriad X.
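
In the meantime, a purely host-side mitigation for exactly this readiness mismatch is OpenVINO's async request API: queue frame N+1 while frame N is still on the stick, so the USB transfers and the inference overlap. A minimal double-buffering sketch (same placeholder model filenames as before; this hides some of the waiting but doesn't remove the hops):

# Sketch: double-buffered async inference with OpenVINO's request API.
# A host-side mitigation, not the direct-MIPI fix discussed above.
import cv2
from openvino.inference_engine import IENetwork, IECore

ie = IECore()
net = IENetwork(model="mobilenet-ssd.xml", weights="mobilenet-ssd.bin")
exec_net = ie.load_network(network=net, device_name="MYRIAD", num_requests=2)
input_blob = next(iter(net.inputs))
out_blob = next(iter(net.outputs))
n, c, h, w = net.inputs[input_blob].shape

def preprocess(frame):
    # Host-side resize to the 300x300 blob -- the step that could instead
    # run on the SHAVEs, as described above.
    return cv2.resize(frame, (w, h)).transpose((2, 0, 1)).reshape((n, c, h, w))

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if not ok:
    raise RuntimeError("camera not available")
cur, nxt = 0, 1
exec_net.start_async(request_id=cur, inputs={input_blob: preprocess(frame)})
while cap.isOpened():
    ok, next_frame = cap.read()
    if not ok:
        break
    # Queue the next frame while the previous one is still on the stick,
    # so the USB link and the Pi's CPU stay busy at the same time.
    exec_net.start_async(request_id=nxt, inputs={input_blob: preprocess(next_frame)})
    if exec_net.requests[cur].wait(-1) == 0:
        dets = exec_net.requests[cur].outputs[out_blob]
        # ... draw/consume detections for `frame` here ...
    frame = next_frame
    cur, nxt = nxt, cur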

Best,

Brandon

0 Kudos
Reinberger__Thomas
2,073 Views

Boaz, Jabulon wrote:

What is the command to run the inference using the Raspberry Pi camera? [...]

Try this

https://gist.github.com/treinberger/c63cb84979a4b3fb9b13a2d290482f4e

I changed PINTO's code to work with RPi cams.
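
Two general notes: loading the bcm2835-v4l2 kernel module (sudo modprobe bcm2835-v4l2) makes the Pi camera show up as /dev/video0, so the stock demos can open it; alternatively you can grab frames with the picamera library. A minimal sketch of the latter (generic picamera usage, not my gist verbatim):

# Minimal picamera capture loop (generic usage, not the gist above).
# Frames come out as BGR numpy arrays, drop-in for cv2.VideoCapture reads.
import cv2
from picamera import PiCamera
from picamera.array import PiRGBArray

camera = PiCamera(resolution=(640, 480), framerate=30)
raw = PiRGBArray(camera, size=(640, 480))
for capture in camera.capture_continuous(raw, format="bgr", use_video_port=True):
    frame = capture.array          # numpy array, shape (480, 640, 3)
    # ... hand `frame` to the inference loop here ...
    cv2.imshow("picam", frame)
    raw.truncate(0)                # reset the buffer for the next frame
    if cv2.waitKey(1) == 27:       # Esc to quit
        break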

0 Kudos
Gilles__Brandon
Beginner
2,073 Views

Hey everyone!

 

Figured I'd circle back on this thread to mention that we're getting closer in our effort to have direct connections to the Myriad X, to allow disparity depth + AI on the same platform.

Luxonis DepthAI Stereo Depth and AI Leveraging Intel Movidius Myriad X

More updates here:

https://hackaday.io/project/163679-luxonis-depthai

And you'll eventually be able to buy this on luxonis.com, and we'll also be launching a CrowdSupply campaign for the release of Luxonis DepthAI, hopefully in September.

And if you want to read about our final goal, check out commuteguardian.com.

Best,

Brandon & The Luxonis Team!

0 Kudos
Gilles__Brandon
Beginner
2,073 Views

And here's the version we're working on.  It has a Raspberry Pi Compute Module, and our own Myriad X module that we made (to allow easier integration into a variety of our designs):

Myriad X + Raspberry Pi Compute Module for real-time neural inference (AI) and stereo depth

Coupling the Myriad X with direct MIPI image sensors and the Raspberry Pi pairs the power of the Myriad X with the Raspberry Pi's huge number of excellent GitHub repos, libraries, and ease of use.

Cheers,

Brandon
luxonis.com

0 Kudos
ABoch5
Beginner
2,073 Views

Hi, did you find any examples or documentation on how to use the stereo-depth blocks of the Myriad X with custom stereo cameras?

0 Kudos
Gilles__Brandon
Beginner
2,073 Views

Hi Alexey,

Yes, we have stereo working, and we're experimenting with different methods for smoothing it, etc., and implementing these on the SHAVEs.

https://discuss.luxonis.com/d/24-initial-disparity-depth-tuning-and-experiments

So we're working on the firmware to expose the results over a Python API call.  
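
For background, the depth math itself is the standard stereo relation depth = focal_length_px * baseline / disparity. A generic sketch (the 7.5 cm baseline is just an illustrative number, not a spec of our module):

# Standard disparity-to-depth relation (generic math, not our firmware code).
# Baseline and returned depth share units; 0.075 m is an illustrative value.
def disparity_to_depth(disparity_px, focal_px, baseline_m=0.075):
    if disparity_px <= 0:
        return float("inf")        # no match / point at infinity
    return focal_px * baseline_m / disparity_px

# e.g. focal length 880 px, disparity 22 px -> 0.075 * 880 / 22 = 3.0 m
print(disparity_to_depth(22, 880))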

 

Feel free to shoot me an email at brandon at luxonis dot com to discuss.  I'm curious to hear about your end application or use-case, as we're making a system that allows modular cameras, so you might be able to use it directly for your stereo needs.

 

Best,

Brandon

0 Kudos
Forte__Maria_Paola
2,073 Views

Hi Alexey and Brandon,

Have you made any progress in using the stereo-depth blocks of the Myriad X for custom stereo cameras?

Best,

Paola

0 Kudos
Gilles__Brandon
Beginner
1,685 Views

Hi Paola,

Sorry I somehow missed this.  You can now buy this solution here:

https://www.crowdsupply.com/luxonis/depthai

And also on our store here:

https://shop.luxonis.com/

It does neural-inference object detection and depth together to give 3D object localization.  Example below, with a Raspberry Pi used to visualize the results:

Spatial AI with Myriad X
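
And for anyone curious how the 3D localization works: given a detection's pixel center and its depth, it's just pinhole back-projection. A generic sketch with made-up intrinsics, not our actual API:

# Pinhole back-projection: pixel (u, v) at depth z -> camera-frame XYZ.
# Generic math with illustrative intrinsics, not DepthAI's actual API.
def backproject(u, v, z, fx=860.0, fy=860.0, cx=640.0, cy=360.0):
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)               # meters, camera frame

# A detection centered at pixel (900, 400) with 2.0 m median depth:
print(backproject(900, 400, 2.0))  # ~ (0.60, 0.09, 2.0)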

 

 

0 Kudos