Media (Intel® Video Processing Library, Intel Media SDK)
Get community support for transcoding, decoding, and encoding in applications that use media tools such as Intel® oneAPI Video Processing Library and Intel® Media SDK
Announcements
The Intel Media SDK project is no longer active. For continued support and access to new features, Intel Media SDK users are encouraged to read the transition guide on upgrading from Intel® Media SDK to Intel® Video Processing Library (VPL), and to move to VPL as soon as possible.
For more information, see the VPL website.

How does MFXVideoDECODE calculate the number of surfaces required for the decoder?

jigong
Beginner

Our application currently creates 33 surfaces, as reported by QueryIOSurf. I tried to reduce this by modifying Request.NumFrameMin, but that failed (the decoder does not output any frames at all). How can I lower this number? How does MFXVideoDECODE calculate the number of surfaces required by the decoder, and what are these surfaces used for?

Thanks in advance

Petter_L_Intel
Employee
Hi Jiuxiang,

QueryIOSurf tells you the minimum number of surfaces the decoder or encoder needs. You cannot use fewer surfaces than reported by Request.NumFrameMin. However, by explicitly setting the AsyncDepth parameter when configuring the decoder component, you can reduce the number of surfaces needed. AsyncDepth = 1 results in the smallest number of surfaces, but note that this setting will likely also affect performance due to limited internal caching and multi-tasking.

The actual number of surfaces needed depends on the stream that is being decoded. Besides the memory needed for each surface, the decoder/encoder also requires some additional memory for scratch buffers.
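
For example, a minimal sketch along these lines, assuming the session is already initialized and the stream header has already been parsed into the mfxVideoParam (for example via MFXVideoDECODE_DecodeHeader); the same parameters, including AsyncDepth, should then be passed to MFXVideoDECODE_Init so the decoder is configured consistently:

// Illustrative sketch: query the minimum surface count for a given AsyncDepth.
// Assumes 'session' is initialized and 'par' was filled by
// MFXVideoDECODE_DecodeHeader for the stream being decoded.
#include <cstdio>
#include "mfxvideo.h"

mfxStatus QueryMinSurfaces(mfxSession session, mfxVideoParam par)
{
    par.AsyncDepth = 1;  // fewer in-flight tasks -> fewer surfaces required

    mfxFrameAllocRequest request = {};
    mfxStatus sts = MFXVideoDECODE_QueryIOSurf(session, &par, &request);
    if (sts >= MFX_ERR_NONE)  // MFX_ERR_NONE or a warning
        std::printf("NumFrameMin = %u, NumFrameSuggested = %u\n",
                    (unsigned)request.NumFrameMin,
                    (unsigned)request.NumFrameSuggested);
    return sts;
}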

Regards,
Petter
jigong
Beginner
Hi Petter,
I used AsyncDepth = 1: Request.NumFrameMin becomes 17, memory usage is lower, FPS gains 2~4, and the video looks the same as before. Thank you very much.
Once I change to AsyncDepth = 2, Request.NumFrameMin becomes 30. I am still new to Media SDK. Could you explain AsyncDepth in more detail?
Thanks.
Jiuxiang
Petter_L_Intel
Employee
Jiuxiang,

In general, a greater AsyncDepth means that the codec component will schedule more tasks to the HW before requiring explicit synchronization. With more asynchronous tasks in flight, the HW can work more efficiently (greater throughput) than with an approach where only one task is processed at a time.

Note that if you set AsyncDepth to 0, Media SDK will select an appropriate value to achieve good performance. Also note that if low latency is more important than optimal performance, it is suggested to set AsyncDepth to 1, as explained in the SDK manual appendix covering the video conferencing use case.
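
For example, a low-latency decode step with AsyncDepth = 1 could look roughly like the sketch below (handling of return codes such as MFX_ERR_MORE_DATA and MFX_ERR_MORE_SURFACE is omitted for brevity); each DecodeFrameAsync call is synchronized immediately, so no extra frames are queued inside the library:

// Illustrative sketch of a low-latency decode step, assuming AsyncDepth = 1
// was set in the mfxVideoParam passed to MFXVideoDECODE_Init.
#include "mfxvideo.h"

mfxStatus DecodeOneFrameLowLatency(mfxSession session,
                                   mfxBitstream* bs,         // input bitstream
                                   mfxFrameSurface1* work,   // free working surface
                                   mfxFrameSurface1** out)   // decoded frame (output)
{
    mfxSyncPoint syncp = nullptr;
    mfxStatus sts = MFXVideoDECODE_DecodeFrameAsync(session, bs, work, out, &syncp);

    // With only one task in flight, wait for the frame right away instead of
    // queuing further work; this minimizes latency at some throughput cost.
    if (sts == MFX_ERR_NONE && syncp)
        sts = MFXVideoCORE_SyncOperation(session, syncp, MFX_INFINITE);

    return sts;
}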

Regards,
Petter
jigong
Beginner
Petter,
Thanks, I got it. That explains the (AsyncDepth, number of surfaces) pairs I see: (0, 33), (1, 17), and (2, 30).
If I configure for low latency (that is, a low AsyncDepth), does this lower the video quality (for example, dropped frames or larger mosaic artifacts)?
Thanks.
Jiuxiang
Petter_L_Intel
Employee
Modifying AsyncDepth does not impact quality or cause dropped frames. The only impact is on overall frame processing performance.

If you are interested in low latency usages please refer to the Media SDK 2012 sample_decode and sample_videoconf samples.

Regards,
Petter
jigong
Beginner
Hi Petter,
Thanks. I have already referred to the Media SDK 2012 sample_decode sample.
Jiuxiang
Vassili_Yakovlev
Beginner
Petter,

My experience with the Media SDK shows that if there is more than one session running at the same time (joined or not), it's better to set AsyncDepth to a value that depends on the frame rate and resolution. QueryIOSurf then returns an appropriate number of surfaces, so all sessions run smoothly and no frames have to be dropped (I am speaking about live encoding, where timing is very important). I've tested a total frame rate (across all sessions) of 825 fps with frame loss below 0.02%. A rough sketch of this approach is shown below.
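
The sketch below is only illustrative; the PickAsyncDepth heuristic and its thresholds are my own assumptions for the example, not values prescribed by the SDK.

// Illustrative sketch: per-session AsyncDepth chosen from the expected load,
// with the encode session joined to a parent session so they share one
// task scheduler. PickAsyncDepth and its thresholds are example assumptions.
#include "mfxvideo.h"

static mfxU16 PickAsyncDepth(mfxU32 width, mfxU32 height, mfxU32 fps)
{
    // Give high-rate / high-resolution sessions more in-flight tasks so
    // short stalls do not turn into dropped frames during live encoding.
    if (width * height >= 1920u * 1080u || fps >= 60)
        return 4;
    return 2;
}

mfxStatus SetupJoinedEncodeSession(mfxSession parent, mfxSession child,
                                   mfxVideoParam* encPar,
                                   mfxU32 width, mfxU32 height, mfxU32 fps)
{
    encPar->AsyncDepth = PickAsyncDepth(width, height, fps);

    // Joined sessions share the parent session's scheduler and threads.
    mfxStatus sts = MFXJoinSession(parent, child);
    if (sts != MFX_ERR_NONE)
        return sts;

    // QueryIOSurf now reports a surface count consistent with AsyncDepth.
    mfxFrameAllocRequest request = {};
    return MFXVideoENCODE_QueryIOSurf(child, encPar, &request);
}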

Regards,
Vassili