Media (Intel® Video Processing Library, Intel Media SDK)
Access community support for transcoding, decoding, and encoding in applications using media tools such as Intel® oneAPI Video Processing Library and Intel® Media SDK.
Announcements
The Intel Media SDK project is no longer active. For continued support and access to new features, Intel Media SDK users are encouraged to read the transition guide on upgrading from Intel® Media SDK to Intel® Video Processing Library (VPL), and to move to VPL as soon as possible.
For more information, see the VPL website.

VPP HW-based handling for the I420 format

mmulh
Beginner

I'm looking at encoding raw video coming in as either YUY2 or I420. The plan on Windows is to copy the raw video frames from CPU memory into mapped staging textures, copy those to the underlying D3D11 textures, and then use VPP to create NV12 HW frames for subsequent encoding.

For YUY2 this seems straightforward, using DXGI_FORMAT_YUY2 for the staging and intermediate textures. Roughly what I have in mind is sketched below.
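A minimal sketch of the staging path, assuming a tightly packed YUY2 buffer and existing device/context (all names are placeholders; error handling omitted):

#include <d3d11.h>
#include <wrl/client.h>
#include <cstdint>
#include <cstring>

// Sketch: upload one tightly packed YUY2 frame (2 bytes per pixel)
// from CPU memory through a staging texture.
void UploadYUY2Frame(ID3D11Device* device, ID3D11DeviceContext* context,
                     const uint8_t* cpuFrame, UINT width, UINT height)
{
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width = width;
    desc.Height = height;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_YUY2;
    desc.SampleDesc.Count = 1;
    desc.Usage = D3D11_USAGE_STAGING;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;

    Microsoft::WRL::ComPtr<ID3D11Texture2D> staging;
    device->CreateTexture2D(&desc, nullptr, &staging);

    D3D11_MAPPED_SUBRESOURCE mapped = {};
    context->Map(staging.Get(), 0, D3D11_MAP_WRITE, 0, &mapped);
    uint8_t* dst = static_cast<uint8_t*>(mapped.pData);
    for (UINT y = 0; y < height; ++y)
        memcpy(dst + y * mapped.RowPitch, cpuFrame + y * width * 2, width * 2);
    context->Unmap(staging.Get(), 0);

    // Next step: context->CopyResource(defaultYUY2Texture, staging.Get())
    // into a DEFAULT-usage texture that VPP takes as its input surface.
}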

I'm assuming this is supported for the raw I420 format, since MFX_FOURCC_I420 is defined, but I'm not sure about the underlying HW format. The closest D3D11 format seems to be DXGI_FORMAT_420_OPAQUE; however, that format is not supposed to be writable or have a known layout. The docs state:

This format differs from DXGI_FORMAT_NV12 in that the layout of the data within the resource is completely opaque to applications. Applications cannot use the CPU to map the resource and then access the data within the resource.

So that seems to make this DXGI format useless for taking an I420 video frame in CPU memory and copying it to a staging texture.

So are I420 HW buffers supported on Windows? Which DXGI format should be used for staging raw I420 video frames?

AthiraM_Intel
Moderator

Hi,

Thank you for posting in Intel Communities.

Could you please let us know whether you are using Media SDK or oneVPL, and which version?

Please also share the OS and hardware details of the machine you are using.

Thanks


mmulh
Beginner

I'm using oneVPL; on my particular machine the version is 1.35. I'm running Windows 10 on an i5-6500.

I'm using the legacy interface, since I need to support pre-Gen11 hardware, together with D3D11 hardware buffers/acceleration. The session setup is sketched below.
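A minimal sketch of that setup using the oneVPL dispatcher (the device parameter is a placeholder for my existing ID3D11Device; error handling abbreviated):

#include <d3d11.h>
#include <mfxvideo.h>
#include <mfxdispatcher.h>

// Sketch: create a oneVPL session on the hardware implementation and
// bind an existing ID3D11Device so VPL video memory lives on that device.
mfxSession CreateD3D11Session(ID3D11Device* device)
{
    mfxLoader loader = MFXLoad();
    mfxConfig cfg = MFXCreateConfig(loader);

    // Filter for a hardware implementation.
    mfxVariant impl = {};
    impl.Type     = MFX_VARIANT_TYPE_U32;
    impl.Data.U32 = MFX_IMPL_TYPE_HARDWARE;
    MFXSetConfigFilterProperty(
        cfg, reinterpret_cast<const mfxU8*>("mfxImplDescription.Impl"), impl);

    mfxSession session = nullptr;
    if (MFXCreateSession(loader, 0, &session) != MFX_ERR_NONE)
        return nullptr;

    // Hand the application's D3D11 device to the runtime.
    MFXVideoCORE_SetHandle(session, MFX_HANDLE_D3D11_DEVICE, device);
    return session;
}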

Also, I'm looking at my corresponding Linux code base and have the same question about how to support I420 video with VA surfaces.

AthiraM_Intel
Moderator

Hi,


Thank you for sharing the details. We are checking on this internally and will get back to you soon with an update.



Thanks


AthiraM_Intel
Moderator

Hi,


MFX_FOURCC_I420 is only supported in the VPL CPU implementation; the I420 input format is not supported by the VPL GPU runtime. For the VPL GPU runtime, the best-supported formats for DirectX and libva surfaces are NV12 and P010. Supported video processing formats for different hardware can be found here.
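If useful, one way to confirm this at runtime with the legacy interface is a VPP query. A rough sketch, where the session and the 1080p geometry are assumptions:

#include <mfxvideo.h>

// Sketch: ask the runtime whether VPP accepts I420 (a.k.a. IYUV) input.
// `session` must already be bound to the D3D11 HW implementation.
mfxStatus QueryI420ToNV12(mfxSession session)
{
    mfxVideoParam in = {}, out = {};
    in.IOPattern = MFX_IOPATTERN_IN_VIDEO_MEMORY |
                   MFX_IOPATTERN_OUT_VIDEO_MEMORY;

    in.vpp.In.FourCC       = MFX_FOURCC_I420;
    in.vpp.In.ChromaFormat = MFX_CHROMAFORMAT_YUV420;
    in.vpp.In.PicStruct    = MFX_PICSTRUCT_PROGRESSIVE;
    in.vpp.In.Width  = 1920;   // aligned to 16
    in.vpp.In.Height = 1088;   // aligned to 16 for progressive content
    in.vpp.In.CropW  = 1920;
    in.vpp.In.CropH  = 1080;
    in.vpp.In.FrameRateExtN = 30;
    in.vpp.In.FrameRateExtD = 1;

    in.vpp.Out = in.vpp.In;
    in.vpp.Out.FourCC = MFX_FOURCC_NV12;

    // MFX_ERR_NONE means supported as requested; MFX_ERR_UNSUPPORTED
    // or a corrected `out` indicates the GPU runtime rejects I420 input.
    return MFXVideoVPP_Query(session, &in, &out);
}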


Hope this helps. If you have any further issues, please let us know.



Thanks


AthiraM_Intel
Moderator

Hi,


We have not heard back from you. Could you please give us an update?



Thanks


mmulh
Beginner

Okay, that's fine. Currently I'm mapping an NV12 staging surface and manually converting/copying the I420 frame into it, roughly as sketched below.
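The conversion itself is simple plane shuffling. A rough sketch (function and variable names are illustrative; rowPitch comes from the D3D11 Map call):

#include <cstdint>
#include <cstring>

// Sketch: copy a tightly packed I420 frame into a mapped NV12 staging
// texture. For NV12 the interleaved UV plane starts rowPitch * height
// bytes into the mapped data (height rows of Y, then height/2 rows of UV).
void I420ToNV12(const uint8_t* src, uint8_t* dst,
                uint32_t width, uint32_t height, uint32_t rowPitch)
{
    const uint8_t* srcY = src;
    const uint8_t* srcU = srcY + width * height;
    const uint8_t* srcV = srcU + (width / 2) * (height / 2);

    // Y plane: straight row-by-row copy, honoring the texture pitch.
    for (uint32_t y = 0; y < height; ++y)
        memcpy(dst + y * rowPitch, srcY + y * width, width);

    // UV plane: interleave the planar U and V samples.
    uint8_t* dstUV = dst + rowPitch * height;
    for (uint32_t y = 0; y < height / 2; ++y) {
        uint8_t* row = dstUV + y * rowPitch;
        for (uint32_t x = 0; x < width / 2; ++x) {
            row[2 * x]     = srcU[y * (width / 2) + x];
            row[2 * x + 1] = srcV[y * (width / 2) + x];
        }
    }
}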

I did some experimentation with the YUY2 format: copying the YUY2 frame to a YUY2 staging texture, then to a YUY2 hardware texture, and then using VPP to convert to NV12. I also tried wrapping my YUY2 frame in a system-memory frame and then using VPP to convert/copy it to a hardware NV12 buffer. Both methods were significantly slower than just manually converting/copying the YUY2 system frame into the mapped NV12 staging texture and then copying the staging texture to the hardware buffer.

I was just hoping to improve performance, since my current scenario is inputting 4K video over a USB interface as YUY2 and then encoding it. It seems the bottleneck is just the sheer amount of data being moved.

Thanks, Mike.

AthiraM_Intel
Moderator

Hi,


Most USB cameras offer the option of delivering compressed frames (MJPEG, H.264, etc.). It may not be intuitive, but it is faster to move less data and decode -> encode on the GPU than to just encode. A rough sketch of the decode side follows.
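A minimal sketch with the legacy interface (the session, compressed-frame buffer, and its size are placeholders; error handling abbreviated):

#include <mfxvideo.h>

// Sketch: initialize a HW MJPEG decoder so compressed camera frames
// can be decoded on the GPU and fed straight to the encoder.
mfxStatus InitMjpegDecode(mfxSession session,
                          mfxU8* compressedFrame, mfxU32 compressedFrameSize)
{
    mfxVideoParam decParam = {};
    decParam.mfx.CodecId = MFX_CODEC_JPEG;
    decParam.IOPattern   = MFX_IOPATTERN_OUT_VIDEO_MEMORY;

    // Let the runtime read the frame geometry from the first frame.
    mfxBitstream bs = {};
    bs.Data       = compressedFrame;
    bs.DataLength = compressedFrameSize;
    bs.MaxLength  = compressedFrameSize;

    mfxStatus sts = MFXVideoDECODE_DecodeHeader(session, &bs, &decParam);
    if (sts != MFX_ERR_NONE)
        return sts;
    return MFXVideoDECODE_Init(session, &decParam);
    // Surfaces from MFXVideoDECODE_DecodeFrameAsync can then go to
    // MFXVideoENCODE_EncodeFrameAsync without a round trip through the CPU.
}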


If you have any further issues, please let us know.


If this resolves your issue, make sure to accept this as a solution. This will help others with similar issues.



Thank you




AthiraM_Intel
Moderator

Hi,


Thanks for accepting our solution. If you need any additional information, please post a new question, as this thread will no longer be monitored by Intel.



Thanks



