Ken_S_
Beginner
73 Views

Linux MSDK 2014R2 - Alpha blending composition doesn't seem to work

Hi there,

I tried the alpha blending composition feature on the latest MSDK 2014R2. Unfortunately I couldn't get it to work.

Here is the detailed information.

Hardware: Intel Xeon E3-1285 v3

OS: SLES 11 SP3, 64-bit

Media SDK version: 1.10 (the release notes state that alpha composition is supported since API 1.9)

Repro steps:

1) Prepare two NV12 videos (main video at 1920x1080, alpha-blended video at 480x200)

  • ffmpeg -ss 00:02:00 -i Sintel.mkv -sn -an -pix_fmt nv12 -frames:v 480 -s 1920x1080 -r 24 -f rawvideo sintel.yuv
  • ffmpeg -ss 00:02:00 -i TearsOfSteel.mov -sn -an -pix_fmt nv12 -frames:v 480 -s 480x200 -r 24 -f rawvideo tos.yuv
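A quick way to sanity-check the raw clips before feeding them to the VPP sample: an NV12 frame stores a full-resolution luma plane plus a half-resolution interleaved UV plane, i.e. width × height × 3/2 bytes per frame. A small Python sketch (my own addition, not part of the original commands):

```python
# Bytes per NV12 frame: Y plane (width*height) plus interleaved UV plane
# at half vertical resolution (width*height/2).
def nv12_frame_size(width: int, height: int) -> int:
    return width * height * 3 // 2

# Expected file sizes for the two 480-frame clips above:
main_clip = nv12_frame_size(1920, 1080) * 480   # sintel.yuv
overlay_clip = nv12_frame_size(480, 200) * 480  # tos.yuv
print(main_clip, overlay_clip)
```

If the actual file sizes on disk don't match these numbers exactly, the .par dimensions and the raw data disagree, which is a common cause of garbled output.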

2) Composition config file (composite.par)

stream=sintel.yuv
width=1920
height=818
cropx=0
cropy=0
cropw=1920
croph=818
dstx=0
dsty=0
dstw=1920
dsth=818
framerate=24
fourcc=nv12

stream=tos.yuv
width=480
height=200
cropx=0
cropy=0
cropw=480
croph=200
dstx=0
dsty=0
dstw=480
dsth=200
framerate=24
fourcc=nv12

3) run sample_vpp_drm

  • sample_vpp_drm -lib hw -dcc nv12 -df 24 -composite composite.par -o out.yuv

4) encode to h264 and mux to mp4

  • sample_encode_drm h264 -hw -nv12 -i out.yuv -o out.264 -w 1920 -h 818
  • ffmpeg -i out.264 -c:v copy out.mp4

5) Play back the video. The secondary video is all garbled. You can view the result here.

 

Thanks,

Ken

11 Replies

Hello Ken,

Thank you for the question. While I investigate your issue, my initial understanding is that you are passing two videos of different resolutions to be composited. In that case, take a look at this forum thread, which discusses a workaround: https://software.intel.com/en-us/forums/topic/518391

Also, I will get back to you on this soon after my investigation. Thanks.

Ken_S_
Beginner

Hi Sravanthi,

I tried your suggestion of using the same video with the same WxH and then adjusting CropW/H. See my composition.par file below:

stream=sintel.yuv

width=1920
height=818
cropx=0
cropy=0
cropw=1920
croph=818
dstx=0
dsty=0
dstw=1920
dsth=818
framerate=24
fourcc=nv12


stream=sintel_2.yuv
width=1920
height=818
cropx=100
cropy=100
cropw=480
croph=204
dstx=0
dsty=0
dstw=480
dsth=204
framerate=24
fourcc=nv12

The result is the same. The second channel flickers, similar to the YouTube video I posted earlier. I also tried different combinations of dstx/dsty/cropx/cropy settings on the second video (sintel_2.yuv) and wasn't able to get anything to work. If you have an example composition configuration, would you mind sharing it with me? I can certainly try it out to see how it goes.

Another question: if the two videos have different durations, the VPP sample app seems to stop when the shorter one ends. Is there any way to get around that?


Thanks,

Ken



Hey Ken,

I tried composition using sample_vpp (as you did) and also by modifying simple_4_vpp_resize_denoise_vmem - my output is not garbled and looks good. Your parameters look alright to me as well. I will next test whether there are any limitations with the system you have.

In the meantime, try using the foreman video (352x288 YUV stream) with sample_vpp and the .par file, and see if it works. Or, if you could share your streams, that would be useful too. One thing that messed up my own output was the value of the pitch: the composition itself worked fine, but my LoadFrame and WriteFrame functions were not handling the pitch correctly - you may want to re-check those functions.
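To illustrate the pitch point: a surface's pitch (bytes per row in memory) can be larger than the visible width, so a frame reader must copy row by row rather than as one contiguous block. A hypothetical Python analogue of a LoadFrame-style function, assuming a tightly packed input file:

```python
# Copy one tightly-packed NV12 frame into a surface whose pitch may
# exceed the visible width. Rows are copied individually; the padding
# bytes between width and pitch are left untouched.
def load_nv12_frame(raw: bytes, width: int, height: int, pitch: int) -> bytearray:
    surface = bytearray(pitch * height * 3 // 2)
    src = 0
    # Luma plane: `height` rows of `width` bytes each.
    for row in range(height):
        surface[row * pitch : row * pitch + width] = raw[src : src + width]
        src += width
    # Interleaved UV plane: height/2 rows of `width` bytes each.
    uv_base = pitch * height
    for row in range(height // 2):
        dst = uv_base + row * pitch
        surface[dst : dst + width] = raw[src : src + width]
        src += width
    return surface
```

Copying the whole frame with a single memcpy-style assignment (ignoring pitch) is exactly the kind of bug that produces a sheared or garbled picture even when the composition itself is correct.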

Will get back to you when I have more information.

Yabo_W_
Beginner

Hi all,

I encountered this problem too. What was the final solution?

Below is my .par file:

stream=./input_nv12.yuv
width=1920
height=1080
cropx=0
cropy=0
cropw=1920
croph=1080
dstx=0
dsty=0
dstw=1920
dsth=1080
framerate=25
fourcc=nv12

stream=./blend_nv12.yuv
width=320
height=240
cropx=0
cropy=0
cropw=320
croph=240
dstx=500
dsty=0
dstw=320
dsth=240
framerate=25
fourcc=nv12
AlphaEnable=1
GlobalAlpha=128

Of course, the input_nv12.yuv resolution is 1920x1080, and blend_nv12.yuv is 320x240.

I want to blend blend_nv12.yuv onto input_nv12.yuv.
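For reference on what GlobalAlpha=128 is expected to look like: global-alpha blending weights each overlay sample against the background, out = (alpha·fg + (255−alpha)·bg) / 255. A minimal CPU sketch of that formula (an illustration, not Media SDK code; the hardware's exact rounding may differ):

```python
# Blend overlay samples over background samples with one global alpha
# (0 = fully transparent overlay, 255 = fully opaque overlay).
def blend_global_alpha(fg: bytes, bg: bytes, alpha: int) -> bytes:
    return bytes((alpha * f + (255 - alpha) * b) // 255 for f, b in zip(fg, bg))

# With GlobalAlpha=128 the overlay ends up roughly half-transparent:
print(blend_global_alpha(bytes([255, 0]), bytes([0, 255]), 128))
```

So with GlobalAlpha=128 the 320x240 overlay at dstx=500 should appear as a semi-transparent picture-in-picture, with the background still visible through it.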


Hello Yabo,

Video composition does work on Linux; I have tested it. In case you haven't seen it, there is an article on composition here: https://software.intel.com/en-us/articles/video-composition-using-intel-media-sdk

In the meantime, I will check the issue you are observing and get back to you on alpha blending specifically.

Yabo_W_
Beginner

Hi Sravanthi,

A nice article on video composition.

But I want to use sample_vpp_drm to do alpha blending - what should I do? Is my .par file incorrect?


Hello Yabo,

Your par file looks fine. Regarding alpha blending - I am trying out some workarounds for the problem you observe. This issue was brought to our attention only recently, so please stay tuned. I will get back to you.

Ken_S_
Beginner

Hi Sravanthi,

I got alpha blending to work now. It was my mistake in preparing the NV12 sample.
Instead of 1920x1080, I should have created the sample clip at 1920x818 (to match my composite.par parameters):
- ffmpeg -ss 00:02:00 -i Sintel.mkv -sn -an -pix_fmt nv12 -frames:v 480 -s 1920x818 -r 24 -f rawvideo sintel.yuv

Tested on 2015R3 release.

Thanks,
Ken S.


Thanks for the update Ken.

Hamza_U_1
Beginner

Hi,

How can I do alpha blending with per-pixel alpha, using the Intel Media SDK?


Hi Hamza,

Alpha blending with per-pixel alpha can be done by setting the PixelAlphaEnable member of the mfxExtVPPComposite structure (Media SDK manual, page 112: https://software.intel.com/sites/default/files/managed/47/49/mediasdk-man.pdf), provided the stream is in an RGB color format. Take a look at the sample_vpp application (https://software.intel.com/sites/default/files/MediaSamples_Windows_6.0.0.68.msi), which accepts PixelAlphaEnable as an input via the par file. So, as long as your input is in an RGB colorspace and you set PixelAlphaEnable, your use case should work.
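To make the difference from global alpha concrete: with per-pixel alpha, each overlay pixel carries its own alpha channel (e.g. an RGBA/RGB4 overlay), and the same weighted-average formula is applied per pixel. A hypothetical Python illustration of that semantics (not Media SDK code):

```python
# Blend one RGBA foreground pixel over an RGB background pixel using the
# foreground pixel's own alpha: out = (a*fg + (255-a)*bg) / 255 per channel.
def blend_pixel_alpha(fg_rgba, bg_rgb):
    r, g, b, a = fg_rgba
    return tuple((a * f + (255 - a) * bgc) // 255 for f, bgc in zip((r, g, b), bg_rgb))

# A fully opaque pixel replaces the background; a=0 leaves it untouched:
print(blend_pixel_alpha((255, 0, 0, 255), (0, 0, 255)))  # (255, 0, 0)
print(blend_pixel_alpha((255, 0, 0, 0), (0, 0, 255)))    # (0, 0, 255)
```

This is why the overlay stream needs to be in an RGB format for PixelAlphaEnable: NV12 has no alpha channel to carry the per-pixel weights.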

Please start a new thread on the media forum for further questions, as it will be easier for us to track and respond.

Thanks,
