AaronL
Beginner

Useful information for anyone using VBR with HW HEVC encoding on Skylake

In the software that I develop, I've been using HW HEVC encoding and decoding on 6th-generation Intel processors since the technology became available at the end of 2015.  In addition, I set up the encoder to use VBR and, at appropriate times, adjust the bit rate.  Early on, I realized that the documented approach for changing to a different bit rate wasn't going to work for my purposes.  The documented approach is to drain the encoder and then reset it with the new bit rate.  However, with this approach, after draining and resetting, the encoder doesn't produce any new encoded frames until it has been fed at least 4 more video frames, and this happens on every bit rate change.  Needless to say, this is a showstopper for live video.  The same sequence for H.264 doesn't exhibit this problem.

I worked around this issue by skipping the draining step.  Instead, I just reset the encoder.  This actually works in practice, even though all of the Intel documentation says to drain the encoder first before changing the bit rate.  And when I say it works in practice, I mean it works without any loss of video frame data, and the bit rate change takes effect correctly as well.
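As a rough sketch of what this reset-without-drain workaround looks like (the struct definitions below are simplified stand-ins for the real ones in mfxvideo.h/mfxstructures.h, and the session handling is omitted), the bit rate change boils down to:

```cpp
// Stand-ins for the Media SDK types; the real definitions live in
// mfxvideo.h / mfxstructures.h. Only the fields used below are modeled.
struct mfxInfoMFX { unsigned short RateControlMethod, TargetKbps, MaxKbps; };
struct mfxVideoParam { mfxInfoMFX mfx; };
enum { MFX_RATECONTROL_VBR = 2 };  // value taken from mfxstructures.h

// Sketch of the workaround: update the VBR target in the existing
// mfxVideoParam and call MFXVideoENCODE_Reset directly, WITHOUT the
// documented drain loop beforehand.
void ChangeBitrateWithoutDrain(mfxVideoParam* par, unsigned short newKbps) {
    par->mfx.RateControlMethod = MFX_RATECONTROL_VBR;
    par->mfx.TargetKbps = newKbps;          // new average bit rate
    par->mfx.MaxKbps    = newKbps * 3 / 2;  // illustrative peak; tune as needed
    // Real code would now call: MFXVideoENCODE_Reset(session, par);
    // with no drain (EncodeFrameAsync with a NULL surface) before it.
}
```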

This technique worked up through driver build 4380 (available at https://downloadcenter.intel.com/download/25818/Intel-Graphics-Driver-for-Windows-10-and-Windows-7-8....).  That is, with any driver package released after 4380, as I discovered when I finally got around to investigating it today, the same four-frame phenomenon that I described above occurs even when I just reset the encoder without draining it.  With 4380 and earlier, resetting the encoder works fine for bit rate changes.

However, as I also discovered today, there is a simple solution: turn HRD conformance off using the NalHrdConformance member of mfxExtCodingOption.  Here's what I think happened.  In driver packages with versions 4380 or earlier, HRD conformance was probably turned off by default.  After 4380, someone fixed this "bug", and as a result the encoder notices that I'm resetting it without first doing a drain, so it implicitly drains for me before switching to the new bit rate.  By turning HRD conformance off, I get the original 4380-and-earlier behavior.  At least, that's my suspicion--the release notes for the various Intel Graphics driver packages are entirely unhelpful when it comes to QuickSync and the Intel Media SDK; they never mention anything having to do with either, even when it is clear that something related to them has changed.
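For anyone who wants to apply the same fix, here's a minimal sketch of turning NalHrdConformance off.  The struct definitions are simplified stand-ins for the real ones in mfxstructures.h, and attaching the buffer to the encoder parameters is shown in the trailing comment:

```cpp
// Simplified stand-ins for the SDK types in mfxstructures.h; only the
// fields needed for this option are modeled.
struct mfxExtBuffer { unsigned int BufferId, BufferSz; };
struct mfxExtCodingOption {
    mfxExtBuffer Header;
    unsigned short NalHrdConformance;
};
enum { MFX_CODINGOPTION_OFF = 0x20 };  // value taken from mfxdefs.h
// FOURCC 'CDOP', i.e. MFX_EXTBUFF_CODING_OPTION in the real headers
const unsigned int MFX_EXTBUFF_CODING_OPTION =
    'C' | ('D' << 8) | ('O' << 16) | ('P' << 24);

// Build an ext buffer that disables HRD conformance, so that a Reset with
// a new bit rate no longer triggers the implicit drain.
mfxExtCodingOption MakeHrdOffOption() {
    mfxExtCodingOption co = {};
    co.Header.BufferId = MFX_EXTBUFF_CODING_OPTION;
    co.Header.BufferSz = sizeof(co);
    co.NalHrdConformance = MFX_CODINGOPTION_OFF;
    return co;
}
// Real code would then attach it before Init/Reset:
//   mfxExtBuffer* ext[] = { &co.Header };
//   par.ExtParam = ext;  par.NumExtParam = 1;
//   MFXVideoENCODE_Reset(session, &par);
```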

Anyway, hopefully this information will be helpful to someone else who wants to use VBR with HW HEVC encoding on Skylake.

Jeffrey_M_Intel1
Employee

Hi Aaron,

Glad you found something that works. If HRD conformance is not needed, there can be many advantages to turning it off.

Another option to consider is for your application to take more direct control of quantization.  This will mean you're less affected by implementation changes between driver versions.

The mfxEncodeCtrl structure allows additional parameters to be passed for each frame encode.  These include frame QP. This can allow much more flexibility than the automatic bitrate control options, including adjustments for each frame.
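To make this concrete, here is a hedged sketch of per-frame QP control.  The mfxEncodeCtrl stand-in below models only the fields used (the real definition is in mfxstructures.h), the QP values and the congestion-based policy are purely illustrative, and note that per-frame QP applies when the encoder is using CQP rate control:

```cpp
// Simplified stand-in for the SDK's mfxEncodeCtrl (mfxstructures.h);
// only the fields used here are modeled. Per-frame QP takes effect when
// the encoder was initialized with MFX_RATECONTROL_CQP.
struct mfxEncodeCtrl {
    unsigned short SkipFrame;
    unsigned short QP;         // quantization parameter for this frame (0..51 for HEVC)
    unsigned short FrameType;
};

// Hypothetical policy: raise QP (coarser quantization, fewer bits) when the
// network looks congested, lower it when there is headroom.
mfxEncodeCtrl MakeFrameCtrl(bool congested) {
    mfxEncodeCtrl ctrl = {};
    ctrl.QP = congested ? 40 : 28;  // illustrative values only
    // Real code would pass it with each frame:
    //   MFXVideoENCODE_EncodeFrameAsync(session, &ctrl, surface, &bs, &syncp);
    return ctrl;
}
```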

In any case, thanks for letting us know what you've found.  

Regards, Jeff

AaronL
Beginner

Jeffrey M. (Intel) wrote:
Another option to consider is for your application to take more direct control of quantization.  This will mean you're less affected by implementation changes between driver versions.

The mfxEncodeCtrl structure allows additional parameters to be passed for each frame encode.  These include frame QP. This can allow much more flexibility than the automatic bitrate control options, including adjustments for each frame.

Using the constant quantization parameter (CQP) algorithm is a nice thought in theory, but in practice I don't see any way to realistically make use of it for live video without somehow analyzing multiple frames prior to encoding and making choices based on that analysis.  Doing so requires maintaining a buffer of raw frames, which would delay encoding--a problem for streaming applications that want to minimize the end-to-end delay between client and server as much as possible.  Besides, VBR is more appropriate in general for a streaming application: you don't necessarily know whether the network can support a particular bit rate without trying it, and even if you think you know the maximum throughput the network can sustain at any given time, the situation may change over time.
