We'd like to develop video conferencing software using Intel Media SDK. This software will run on an Intel server with QSV (Quick Sync Video) and act as an MCU (Multipoint Control Unit) that performs video mixing.
So the MCU needs to decode video from many peers, mix those videos onto one screen, then encode the result and send that stream back to every peer.
We don't see a sample program for this in Media SDK, and for real-time performance we'd like to use the GPU rather than the CPU for the MCU video mixing.
Could you please advise how to do this?
I understand the targeted use case, and Intel Media SDK can certainly be used for that scenario. Unfortunately, we do not currently have any samples showcasing such a use case.
However, if you intend to implement your server using the Linux version of Media SDK, we will soon (early 2014) release a surface mixing feature (with sample code) to simplify the usage you describe.
If you intend to use Windows, then you will have to rely on (efficient) surface tiling/mixing via a custom OpenCL kernel, a DirectX shader, or something similar that utilizes the GPU.
Thanks for your prompt reply.
We plan to use the Windows version of Media SDK first and write a custom OpenCL kernel for this purpose; eventually we will port it to our Linux server. However, may I ask how the surface mixing feature will work in the early-2014 release of the Linux version? Will it also apply to the Windows version?
Unfortunately there is currently no sample showcasing this feature. It is in the works and should be available soon. If you explore the Media SDK API, you will find details about the compositing feature. We will notify the community as soon as a sample or further details are available.
The plan is to expose the same API for Windows, but it will not be ready until later this year.
I'd like to correct my previous statement. It turns out the most recent release of Media SDK for Linux Servers, release 3, does showcase the VPP compositing feature as part of the "sample_vpp" sample.
In the Release Notes of Media SDK 2014 for Clients, I notice there is a new "mfxExtVPPComposite" API. Is it used to compose several raw video streams into one? Is "mfxExtVPPComposite" also included in API v1.8? I can't find any related keyword in the installed folder.
Yes, mfxExtVPPComposite is a new feature of the 1.8 API. However, as noted earlier in this thread, this feature (with a sample) is currently only available in the Linux version of the SDK. Windows support with HW acceleration will be available as part of a graphics driver update later this year.