Based on various forum topics about performance and usage of the sample_encode application, Intel stated that the application had not been tested on Skylake and that some problems should be expected. Is this still the case?
My CPU is an i7-6670HQ with Iris Pro 580 graphics.
I'm using the most recent samples downloaded from GitHub (188.8.131.52), built in both 32-bit Debug and Release configurations.
I'm getting the same H.264 performance numbers (FPS) regardless of the Target Usage setting I pass to the application, and regardless of the number of simultaneous encodes. I would expect higher FPS (with lower quality) at the speed setting (TU7), but that isn't what I'm seeing. The application dutifully reports the TU setting changing when the encode starts. What am I doing wrong?
These are the command line options that I'm using:
sample_encode h264 -i testcam.yuv -o testcam-tu1-2000.h264 -w 1920 -h 1080 -b 2000 -f 30 -u quality -d3d -hw
sample_encode h264 -i testcam.yuv -o testcam-tu4-2000.h264 -w 1920 -h 1080 -b 2000 -f 30 -u balanced -d3d -hw
sample_encode h264 -i testcam.yuv -o testcam-tu7-2000.h264 -w 1920 -h 1080 -b 2000 -f 30 -u speed -d3d -hw
Let me know if you need more information.