Hello,
I am new to the Media SDK and interested in its Linux branch.
I have read the "Intel® Media SDK 2014 Developer's Guide", but I did not find any reference to blending graphics with video (for example, when there is a need to blend a picture or symbol with video).
Can anyone give a hint as to how this is supported in the Media SDK?
Best Regards,
Ran
Hi Ran,
The composition feature in Media SDK allows you to do that. If you visit our samples page, please look at the sample_vpp example, which shows how to "composite" multiple video streams. Among other parameters, you can adjust the alpha blending. In your case, since you want to add a picture (instead of a video) to a video, all you need to do is modify the LoadNextFrame() function to loop on your picture.
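The "loop on your picture" idea can be sketched roughly as follows. This is an illustration only, not the actual sample_vpp code, and the function name is hypothetical: a LoadNextFrame-style reader that rewinds the raw file whenever it hits EOF, so a single still picture is delivered as an endless stream of identical frames.

```cpp
#include <cstdio>
#include <cstdint>
#include <vector>

// Hypothetical sketch (names are illustrative, not the actual sample_vpp
// code): read one raw frame from the file; on EOF, seek back to the
// start and re-read, so the same still picture repeats forever.
bool loadNextFrameLooping(FILE* f, std::vector<uint8_t>& frame) {
    size_t got = fread(frame.data(), 1, frame.size(), f);
    if (got < frame.size()) {                  // hit EOF: rewind and re-read
        if (fseek(f, 0, SEEK_SET) != 0) return false;
        got = fread(frame.data(), 1, frame.size(), f);
    }
    return got == frame.size();
}
```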
Composition accepts raw streams as input (YUV/NV12), so as long as your video and picture can be represented in these formats, you can easily enable it in Media SDK. You will find helpful info in the readme file in the sample_vpp folder.
Here is the pointer to download our samples - https://software.intel.com/en-us/intel-media-server-studio-support/code-samples
Hope this helps.
Hi Sravanthi,
Thank you very much for the response and the helpful suggestion.
We will surely take a look at the examples.
But I would like to add a question before doing that.
How should the picture/graphic images be input into the system? Is it via the Linux frame buffer (/dev/fbdev)?
Regards,
Ran
Hi Ran - MSS encodes, decodes, and video-processes NV12 surfaces (and RGB in some cases). So, if you have a color format conversion from your format to NV12, you can easily feed it to the encoder/decoder we provide. You can see our samples and tutorials do this conversion when they load the raw frames for encoding. Since you want to composite, I'd convert your picture file to NV12 as well before calling the MSDK APIs.
In the sample I pointed to, we use file I/O to read in and write out frames.
Hi Sravanthi,
Thank you for the answers on this subject.
Is it possible to blend (alpha merge) the output of a Qt application in the same way (using LoadNextFrame)?
Regards,
Ran