Currently the only way is to use the GStreamer backend (change the avidemux element according to your container):
VideoCapture cap("filesrc location=<filename> ! avidemux ! vaapidecodebin ! appsink", CAP_GSTREAMER);
You can read more here: https://github.com/opencv/opencv/wiki/Video-capture-and-write-benchmark
Thanks Maksim. I am wondering whether the same video-decoding acceleration in OpenCV can also be achieved through the Intel Media SDK? The following option is there in OpenCV.
Thanks. The lack of container support makes it much less useful for now, and a dead end for what I am trying to do. Is it possible to build an entirely GEN9 (GPU) accelerated pipeline to process h.264 video in an mpeg4 container along the following lines (with or without OpenCV; OpenCL is ok)?
[RTSP, h.264/mpeg4] -> [decode] -> [resize/transform] -> [low-level computer vision, e.g. VME] -> [CPU for high-level computer vision]
If that is possible, it would be great to have an idea of how to make it work.