
graphic blending with video

rshal2
New Contributor II

Hello,

I am new to the Media SDK and interested in its Linux branch.

I have read the "Intel® Media SDK 2014 Developer’s Guide".

I did not find a reference to graphic blending with video (for example, when there is a need to blend a picture or symbol with the video).

Can anyone give a hint as to how this is supported with the Media SDK?

Best Regards,

Ran

4 Replies
Sravanthi_K_Intel

Hi Ran,

The composition feature in the Media SDK allows you to do that. If you visit our samples page, please look at the sample_vpp example, which shows how to "composite" multiple video streams. You can adjust the alpha blending parameter, among others. In your case, since you want to add a picture (instead of a video) to a video, all you need to do is modify the LoadNextFrame() function to loop on your picture.

Composition accepts raw streams as input (YUV/NV12), so as long as your video and picture can be represented in these formats, you can easily enable it in the Media SDK. You will find helpful info in the readme file in the sample_vpp folder.
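
For illustration only, here is a rough, untested sketch (not taken verbatim from sample_vpp) of how the composition extended buffer can be attached to the VPP initialization parameters for two input streams - the main video plus a picture overlay with global alpha blending. The function name InitVppComposition and the geometry/alpha values are placeholders, and surface allocation plus the per-frame submission loop are omitted.

#include <cstring>
#include "mfxvideo.h"   // Media SDK C API and structures

// Sketch: configure VPP composition for two inputs (video + picture overlay).
// The ext-buffer objects are static so the pointers stored in vppParams
// remain valid after this function returns.
static mfxStatus InitVppComposition(mfxSession session, mfxVideoParam* vppParams)
{
    static mfxVPPCompInputStream inputs[2];
    std::memset(inputs, 0, sizeof(inputs));

    // Stream 0: the main video, covering the full output frame (placeholder size).
    inputs[0].DstX = 0;    inputs[0].DstY = 0;
    inputs[0].DstW = 1920; inputs[0].DstH = 1080;

    // Stream 1: the picture/logo, blended on top with ~50% global alpha.
    inputs[1].DstX = 100;  inputs[1].DstY = 100;
    inputs[1].DstW = 256;  inputs[1].DstH = 256;
    inputs[1].GlobalAlphaEnable = 1;
    inputs[1].GlobalAlpha       = 128;   // 0 (transparent) .. 255 (opaque)

    static mfxExtVPPComposite composite;
    std::memset(&composite, 0, sizeof(composite));
    composite.Header.BufferId = MFX_EXTBUFF_VPP_COMPOSITE;
    composite.Header.BufferSz = sizeof(composite);
    composite.NumInputStream  = 2;
    composite.InputStream     = inputs;

    static mfxExtBuffer* extBuffers[1] = { &composite.Header };

    // vppParams->vpp.In / .Out (NV12 FourCC, resolution, frame rate) are
    // assumed to be filled in already, the way sample_vpp does it.
    vppParams->ExtParam    = extBuffers;
    vppParams->NumExtParam = 1;

    return MFXVideoVPP_Init(session, vppParams);
}

At run time, each composed output frame is then produced by submitting the input surfaces one after another (the video frame, then the picture surface) to MFXVideoVPP_RunFrameVPPAsync(), which, if I remember correctly, is the pattern sample_vpp follows for composition.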

Here is the link to download our samples: https://software.intel.com/en-us/intel-media-server-studio-support/code-samples

Hope this helps.

rshal2
New Contributor II

Hi Sravanthi,

Thank you very much for the response and the helpful suggestion.
We will surely take a look at the examples.
But I would like to add a question before doing that.

How should the picture/graphic images be input into the system? Is it done using the Linux frame buffer (/dev/fbdev)?

Regards,

Ran

Sravanthi_K_Intel

Hi Ran - MSS (Media Server Studio) encodes, decodes, and video-processes NV12 surfaces (and RGB in some cases). So, if you do a color format conversion from your format to NV12, you can easily feed it to the encoder/decoder we provide. You can see that our samples and tutorials do this conversion when they load the raw frames for encoding. Since you want to composite, I'd convert your picture file to NV12 as well, before calling the MSDK APIs.
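
As a rough illustration (this is not code from the samples, and it assumes RGB4 input is supported by the VPP on your platform/driver), the conversion can be expressed as a VPP pass whose input format is RGB4 and whose output is NV12; the resolution, frame rate, and IOPattern values below are placeholders:

#include <cstring>
#include "mfxvideo.h"

// Sketch: VPP parameters for an RGB4 -> NV12 color conversion pass.
// Assumes the picture is available as a 1920x1080 RGB4 (BGRA) buffer;
// real code should match these values to the actual content and allocator.
static void FillColorConversionParams(mfxVideoParam* par)
{
    std::memset(par, 0, sizeof(*par));

    // Input: the picture in RGB4 layout.
    par->vpp.In.FourCC        = MFX_FOURCC_RGB4;
    par->vpp.In.ChromaFormat  = MFX_CHROMAFORMAT_YUV444;
    par->vpp.In.CropW         = 1920;
    par->vpp.In.CropH         = 1080;
    par->vpp.In.Width         = 1920;   // surface dimensions are padded
    par->vpp.In.Height        = 1088;   // to multiples of 16 (progressive)
    par->vpp.In.FrameRateExtN = 30;
    par->vpp.In.FrameRateExtD = 1;
    par->vpp.In.PicStruct     = MFX_PICSTRUCT_PROGRESSIVE;

    // Output: NV12, the format the encoder and the composition path consume.
    par->vpp.Out               = par->vpp.In;
    par->vpp.Out.FourCC        = MFX_FOURCC_NV12;
    par->vpp.Out.ChromaFormat  = MFX_CHROMAFORMAT_YUV420;

    par->IOPattern = MFX_IOPATTERN_IN_SYSTEM_MEMORY |
                     MFX_IOPATTERN_OUT_SYSTEM_MEMORY;
}

These parameters would then go through MFXVideoVPP_Init() like any other VPP configuration.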

In the sample I pointed to, we use file I/O to read in and write out the frames.

rshal2
New Contributor II

Hi Sravanthi,

Thank you for the answers on this subject.

Is it possible to blend (alpha merge) Qt application output in the same way (using LoadNextFrame())?

Regards,

Ran
