I have recently purchased a VIZI-AI board to evaluate it for a project, in particular to see its advantages/disadvantages compared to Jetson Nano boards. I was able to install Ubuntu 20.04 LTS, as well as the Intel Distribution of OpenVINO, with ease thanks to the great documentation. Congrats on that front; much easier than with Nvidia.
I'm now trying to move as much computation as possible within my OpenCV/FFmpeg program to either the GPU (Atom with HD 500 graphics) or the VPU (Intel Myriad X), since, as expected, the CPU alone is not enough.
I have followed the instructions to compile a custom FFmpeg binary with support for the Quick Sync drivers (again, very smooth thanks to the great documentation). I have tested my Python program in its current form and I'm getting 10.6 FPS using the GPU.
As it seems the GPU is not capable of reaching the goal of at least 24 FPS, I would like to try the Myriad X VPU for hardware-assisted encoding, since the specs claim 4K@30fps. In this case, however, I have been unable to find documentation on how to do this.
Any help here please?
These might help you:
Multi-Device Plugin: automatically assigns inference requests to available computational devices to execute the requests in parallel. Potential gains are as follows:
- Improved throughput that multiple devices can deliver (compared to single-device execution)
- More consistent performance, since the devices can now share the inference burden (so that if one device is becoming too busy, another device can take more of the load)
Official Documentation: https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_supported_plugins_MULTI.html
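As a minimal sketch of how the MULTI plugin is selected in the (pre-2022) OpenVINO Python API: the plugin is chosen purely through the device-name string passed to `load_network`. The model paths and the device list below are assumptions for illustration.

```python
def multi_device(devices):
    """Build the device string the MULTI plugin expects, e.g. 'MULTI:MYRIAD,GPU'."""
    return "MULTI:" + ",".join(devices)

# Typical usage (requires OpenVINO and the listed hardware to actually run):
#     from openvino.inference_engine import IECore
#     ie = IECore()
#     net = ie.read_network(model="model.xml", weights="model.bin")
#     exec_net = ie.load_network(network=net,
#                                device_name=multi_device(["MYRIAD", "GPU"]))
```

Inference requests submitted to `exec_net` are then load-balanced across the listed devices automatically.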
Heterogeneous Plugin: enables inference of one network to be split across several devices. The purposes of executing a network in heterogeneous mode are:
- To utilize the accelerator's power by calculating the heaviest parts of the network on the accelerator, while executing unsupported layers on fallback devices like the CPU
- To utilize all available hardware more efficiently during a single inference
Official Documentation: https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_supported_plugins_HETERO.html
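Like MULTI, the HETERO plugin is selected via the device-name string; the first device takes the layers it supports and later devices act as fallbacks. A small sketch (device names are assumptions for illustration):

```python
def hetero_device(primary, *fallbacks):
    """Build the HETERO device string, e.g. 'HETERO:MYRIAD,CPU':
    heavy supported layers run on the primary device, the rest fall back."""
    return "HETERO:" + ",".join((primary,) + fallbacks)

# Usage (requires OpenVINO and the listed hardware to actually run):
#     exec_net = ie.load_network(network=net,
#                                device_name=hetero_device("MYRIAD", "CPU"))
```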
Both of them are related to inference, but at this point my goal is to accelerate video encoding.
If I'm unable to encode video at 24 FPS, then no matter what I do to speed up the "decision-making process", I won't be able to use a VIZI board for the task.
I tried with the CPU and with the Atom's HD 500 GPU (10.6 FPS now), but as they seem incapable of achieving that performance level, I'm wondering how to use FFmpeg to encode on the Myriad X VPU.
Of course, the alternative would be to tackle this problem fully within GStreamer, but it is a more complex technology for me to code the "decision-making process" in than OpenCV.
Are you using FFmpeg via OpenCV's VideoCapture/VideoWriter? That path did not support hardware acceleration (GPU) until Mar 1; see this PR: https://github.com/opencv/opencv/pull/19460. It has not been included in any release yet. Or are you using FFmpeg directly?
I recommend trying the GStreamer backend in OpenCV. You'll need to install the `gstreamer1.0-vaapi` package with the VA-API plugin and pass a GStreamer pipeline definition string instead of a filename in the VideoCapture/VideoWriter parameters:
VideoWriter("appsrc ! videoconvert n-threads=4 ! vaapih264enc ! h264parse ! qtmux ! filesink location=\"test.mp4\"", CAP_GSTREAMER, ...);
Check this sample app for more combinations: https://github.com/opencv/opencv/blob/master/samples/cpp/videocapture_gstreamer_pipeline.cpp
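In Python the same idea looks roughly like the sketch below. The resolution, framerate, and filename are assumptions; actually opening the writer requires an OpenCV build with GStreamer support and a working VA-API driver, so only the pipeline-string helper runs anywhere.

```python
def vaapi_writer_pipeline(path, threads=4):
    """Compose a GStreamer pipeline string for cv2.VideoWriter that encodes
    H.264 on the GPU via VA-API and muxes the result into an MP4 file."""
    return ("appsrc ! videoconvert n-threads=%d ! vaapih264enc ! "
            "h264parse ! qtmux ! filesink location=%s" % (threads, path))

# Usage (requires a GStreamer-enabled OpenCV build and working VA-API):
#     import cv2
#     writer = cv2.VideoWriter(vaapi_writer_pipeline("test.mp4"),
#                              cv2.CAP_GSTREAMER, 0, 25.0, (1280, 720))
#     writer.write(frame)  # frame: BGR numpy array of the declared size
```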
I'm using the vidgear library, which lets me call FFmpeg in a certain way (for example, enabling a hardware encoder) and also has the capability to send the video over RTMP to Facebook or YouTube, which is the ultimate goal.
I could try the GStreamer-enabled backend and see if I can do both of them.
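For reference, this is roughly how I pass encoder flags through vidgear to FFmpeg. The RTMP URL and the exact flag values are assumptions for illustration; the actual call requires vidgear plus a QSV-capable FFmpeg build.

```python
# FFmpeg parameters forwarded verbatim by vidgear's WriteGear:
# select the Quick Sync H.264 encoder and the FLV container RTMP expects.
output_params = {"-vcodec": "h264_qsv", "-f": "flv"}

# Usage (requires vidgear and hardware/driver support to actually run):
#     from vidgear.gears import WriteGear
#     writer = WriteGear(output="rtmp://live.example.com/stream/key",
#                        logging=True, **output_params)
#     writer.write(frame)
#     writer.close()
```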
Thank you. Will try
BTW, still using FFmpeg, here are some numbers. IMHO, they show the GPU is kicking in for video encoding (as vidgear lets me pass parameters to FFmpeg in its call), but that the Atom GPU is not powerful enough.
On my MacBook Pro (FPS, CPU vs. QSV):
1280x720:  25.0 / 38.8
1920x1080: 11.9 / 25.2
On my VIZI-AI (FPS, CPU vs. QSV):
1280x720:  10.2 / 12.1
1920x1080:  4.3 /  9.8
So clearly the QSV driver is there, in some cases even doubling the performance. But it is using the CPU's integrated GPU, and I wonder whether there is any way to use the Myriad X VPU instead for encoding on the VIZI board. I will try with GStreamer, no worries about that; I was just wondering.
The Myriad X VPU can't be used for video encoding; it can only be used by the Inference Engine.
In GStreamer, if you would like to use the VPU to perform inference, you can use this element:
gvaclassify device=MYRIAD model=...
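One possible shape of a full pipeline around that element (a sketch, not a tested command: the input filename and surrounding elements are assumptions, and gvaclassify normally operates on regions produced by a preceding gvadetect):

```
gst-launch-1.0 filesrc location=input.mp4 ! decodebin ! videoconvert ! \
  gvadetect device=MYRIAD model=... ! gvaclassify device=MYRIAD model=... ! \
  gvawatermark ! fakesink
```

The model paths are left elided as in the element above; they point to OpenVINO IR files.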
Thanks for all your help.
IMHO, it is extremely unfortunate to call a product a Video Processing Unit when it lacks the capacity to be used for video encoding, which I believe is a rather basic video function. If you are going to support inference only, just call it an Inference Processing Unit.
Either way, your help was really appreciated. It is sad to see that my VIZI-AI board is not going to be powerful enough for the project because, to be honest, working with it was VERY easy. Installing Ubuntu 20.04 and OpenVINO was extremely simple, and even getting FFmpeg with Quick Sync support was easy.
The Nvidia Jetson Nano path, by comparison, is ridiculous. I have spent several days of trial and error just to get it functional with GPU acceleration, with multiple reinstalls and SD cards burned in the process, waiting for hours for OpenCV to compile only to discover at the last moment that I had missed a cmake flag.
But, although these were just preliminary tests, the Nano seems to have enough muscle to make it work in real time thanks to the power of its GPU.
As for an Intel x86 + Myriad X alternative, I have been looking around, but costs increase quite a bit, and, for example, I'm not sure whether a Celeron J4105 with HD 600 graphics would be enough to encode such video.
Will see. The help from the forum was great, as always. Thank you
Ok, that's what I expected after some internet searching.
It is clear then that this Atom + HD 500 is not enough for my encoding needs.
I will have to either look for a more powerful x86 combo or go for an Nvidia-based board.
Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question.