The Myriad X VPU product brief states that "...over 20 hardware accelerators to perform tasks such as optical flow and stereo depth..." are present, and that "...the new stereo depth accelerator can simultaneously process 6 camera inputs (3 stereo pairs)...at 60Hz frame rate".
I have a trinocular vision application (3 cameras / 3 stereo pairs) for which this kind of hardware-accelerated video processing would be extremely useful. I purchased a Neural Compute Stick 2 to get my feet wet, but it appears that acceleration is limited to porting neural nets, described as static flow graphs, onto the device through OpenVINO. Is there any example code demonstrating how to access and use the Enhanced Vision Accelerators, either as stand-alone components in a pipeline, or (better yet) integrated with a neural net's input or output?
None of the linked pages contains a description, example, or workflow for the stereo depth accelerator.
They offer only the same references to TensorFlow, Caffe, etc.
This was a useless comment, and it is hardly a professional response to the question.
I have spent hours chasing this so-called accelerator, which Intel claims can handle 6 camera inputs (3 stereo pairs), each a 720p stream.