VBR: encqsvInit: GopRefDist 3 GopPicSize 256 NumRefFrame 2
CQP: encqsvInit: GopRefDist 4 GopPicSize 32 NumRefFrame 2
The SDK seems to use different GOP values for CQP and VBR. Is there a specific reason why CQP uses a much smaller GOP? A lower GopPicSize should result in lower video quality, since more frames are spent on intra coding. I wonder why they differ.
The default GOP values chosen by the SDK are Intel's recommendation based on the requested target usage and the quality output of each encoding algorithm. The values can be set by the application if different quality is desired. You may also find that default values can vary for different hardware implementations.
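Since the defaults can be overridden by the application, a sketch of how that might look with the Media SDK C API follows. This is illustrative, not a complete encoder setup: it only shows the relevant `mfxVideoParam` fields (the QP values 26 and the GOP numbers are arbitrary examples, and error handling around `MFXVideoENCODE_Init` is omitted).

```c
#include <string.h>
#include "mfxvideo.h"   /* Intel Media SDK dispatcher header */

/* Populate encoder parameters, forcing the VBR-style GOP layout
 * from the log above even though CQP rate control is selected. */
static void fill_encode_params(mfxVideoParam *par)
{
    memset(par, 0, sizeof(*par));

    par->mfx.CodecId           = MFX_CODEC_AVC;
    par->mfx.TargetUsage       = MFX_TARGETUSAGE_BALANCED;
    par->mfx.RateControlMethod = MFX_RATECONTROL_CQP;

    /* Explicit GOP settings override the SDK's per-mode defaults. */
    par->mfx.GopPicSize  = 256;  /* keyframe interval, as in the VBR log line */
    par->mfx.GopRefDist  = 3;    /* distance between I/P frames (B-frame count + 1) */
    par->mfx.NumRefFrame = 2;

    /* CQP mode requires fixed QP values per frame type. */
    par->mfx.QPI = 26;
    par->mfx.QPP = 26;
    par->mfx.QPB = 26;
}
```

Any field left at zero is filled in with a default by the SDK during initialization, which is where the differing per-mode values in the log come from; setting `GopPicSize` and `GopRefDist` explicitly prevents that substitution.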
Yes, the algorithm and hardware path for each bitrate control mode can vary, and the chosen values are designed to meet the requested target usage.
For example, if you ask for 'fastest' target usage, the settings that deliver the fastest encoding with CQP may differ from those that deliver it with VBR.
Of course, but why is it different in CQP mode? The point is that with a GopPicSize of just 32, the quality is worse than with GopPicSize 256. There must be a reason why Intel chose a value that lowers the quality.