I use your VPP deinterlacer before performing H.264 QSV compression. For the H.264 SW version with Intel Media SDK 2014 I see we can choose between BOB and ADVANCED.
I am wondering if you can explain how the deinterlacing is performed in QSV?
I also want to know: in SW with Media SDK 2014, if we don't specify the deinterlacing mode, which one is used by default?
FYI my setup:
Intel Media SDK System Analyzer (64 bit)
The following versions of Media SDK API are supported by platform/driver:
Version Target Supported Dec Enc
1.0 HW Yes X X
1.0 SW Yes X X
1.1 HW Yes X X
1.1 SW Yes X X
1.3 HW Yes X X
1.3 SW Yes X X
1.4 HW Yes X X
1.4 SW Yes X X
1.5 HW Yes X X
1.5 SW Yes X X
1.6 HW Yes X X
1.6 SW Yes X X
1.7 HW Yes X X
1.7 SW Yes X X
1.8 HW No
1.8 SW Yes X X
Name Version State
Intel(R) HD Graphics 4000 10.18.10.3345 Active
NVIDIA GeForce GT 545 22.214.171.1246 Active
CPU: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz
OS: Microsoft Windows 8 Enterprise
According to the Linux release notes, adaptive deinterlacing (ADI) is the default (higher quality), but BOB is available for better performance by explicitly setting the deinterlacing parameters. When I've tried this in the past, BOB was much faster, but the quality improvement from ADI was very noticeable.
Unfortunately, we aren't at liberty to disclose internal implementation details. However, if you need full control over (and visibility into) deinterlacing, it is certainly possible to write your own with OpenCL.
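To illustrate what "writing your own" involves at its simplest, here is a minimal CPU sketch of bob deinterlacing on a single luma plane: each field is expanded to a full progressive frame by duplicating its lines. This is my own illustration, not the driver's implementation; real bob filters usually interpolate between field lines, and ADI-style deinterlacers additionally blend the two fields adaptively based on detected motion.

```c
#include <stdint.h>
#include <string.h>

/* Bob deinterlace (sketch): build a full progressive frame from one field
 * of an interlaced frame by duplicating each field line. top_field != 0
 * selects the even lines, otherwise the odd lines are used. */
static void bob_field(const uint8_t *src, uint8_t *dst,
                      int width, int height, int top_field)
{
    for (int y = 0; y < height; ++y) {
        /* nearest source line belonging to the requested field */
        int field_line = (y & ~1) | (top_field ? 0 : 1);
        if (field_line >= height)
            field_line -= 2;
        memcpy(dst + (size_t)y * width,
               src + (size_t)field_line * width, (size_t)width);
    }
}
```

Running this once per field doubles the frame rate (two progressive frames per interlaced frame), which is the usual bob output mode; the same loop structure ports directly to an OpenCL kernel with one work-item per output line or pixel.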
Thanks for the answer. I'm working on Windows, but I assume your answer applies there too.
What I would like to know is the difference between the 1.7 and 1.8 versions of the SDK. From what I understand, version 1.8 gives you the choice between ADI and BOB, but what does version 1.7 use (for QSV and for SW)? Is it ADI, BOB, or something else?
I have to check on the SW implementation, but for QSV it is up to the graphics driver and hardware capability. There are user and OEM methods to set the general/overall system behavior for playback using "Intel(R) Clear Video Technology". For Media SDK applications prior to API 1.8, the deinterlacing method chosen is whatever that user or OEM felt was best (some prefer 'speed', others prefer 'quality'). Also, I believe there were some platforms that did not support ADI. The new API 1.8 option allows a Media SDK application to override the default (if supported by the hardware).
Sorry it's not a simple answer, but Media SDK has been supported on many platforms with a variety of configurations.
By "the deinterlacing method chosen is whatever that user or OEM felt was best (some prefer 'speed', others prefer 'quality')", do you mean that for the same hardware and driver the SDK can choose a different deinterlacing algorithm based on the "TargetUsage" value?