Hello!
I use the QSV h264 encoder with ffmpeg (latest Zeranoe build) on Windows 7 with Media Studio 2017 installed, and I'm curious about a few things (actually, problems):
1) Are there any noticeable speed or quality improvements in h264 encoding between Ivy Bridge, Haswell, Kaby Lake, etc. processors?
1.1) Should I expect different encoding times on a Core i3 versus a Core i7? Does it depend on the number of physical cores? Is the approximate framerate for each model specified anywhere?
I'm asking because of the strange results I get when encoding 720p on the youngest Haswell i3 CPU (4000M, 2 cores): it gives me 120 fps, ~55% GPU load and, surprisingly, 100% CPU load from libmfx.dll. Meanwhile, encoding on a mature Ivy Bridge i7 (3770, 4 cores) produces 650 fps with only one core engaged (80% idle). How do you explain this? Is it really OK to have 100% CPU usage? :) Any advice? Could older Media Studio / OpenCL SDK versions help?
2) The next question is about encoding quality and parameters. I couldn't find any complete manual for h264_qsv, only some random examples. Is there one?
2.1) Having no idea what to write, I started with just "-c:v h264_qsv", and it worked with Haswell, but not with Ivy Bridge. For the latter I had to add "-look_ahead 0"; it refuses to encode without this parameter. What does it mean, and why is it important for Ivy Bridge? Where can I read about it? (I'll put the full command below, under 2.2.)
2.2) And the quality. I doubt there is a way to improve it with any parameters, but the current video is quite different from the original. I can forgive the noise and the loss of tiny details, but the flattening of colours is quite visible even without magnification: very light yellow became white, and bluish purple became plain purple. Here are the screenshots; mine is 8 Mbps, and the original was x264 at 8 Mbps too. https://yadi.sk/d/gOu7D3pb3JvYQ2
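For reference, the whole thing boils down to roughly this command (a sketch; file names are placeholders, and "-look_ahead 0" is the Ivy Bridge workaround from 2.1):

# input.mp4/output.mp4 are placeholders; -b:v 8M matches the x264 original's bitrate
ffmpeg -i input.mp4 -c:v h264_qsv -look_ahead 0 -b:v 8M output.mp4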
QSV encoding speed does not depend on the number of CPU cores; what matters most are the GPU frequency and the number of GPU slices. Each generation differs in feature set, but in general (in theory) a later generation should be faster, or produce better picture quality at a similar speed, compared to the previous one.
As for ffmpeg documentation, the best documents to read are the source files.
Thanks, ViCue. It seems I was wrong about 720p; it was actually 960x540. And after days of googling I ended up at a comment from someone who had noticed the same thing... Why didn't Intel write somewhere that dimensions must be a multiple of 16 for hardware acceleration?? So I tried with real 1280x720, and it's okay now.
They did, on page 136 of the manual:
https://software.intel.com/sites/default/files/managed/47/49/mediasdk-man.pdf
"Width must be a multiple of 16. Height must be a multiple of 16 for progressive frame sequence and a multiple of 32 otherwise. "
But when a picture has an arbitrary size, you can make the surface bigger (a multiple of 16) and apply Crop to select only the visible part of the picture.
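From the ffmpeg command line you don't manage surfaces yourself, but if alignment were really the blocker, a rough equivalent is to pad the picture up to the next multiple of 16 before the encoder (just a sketch; note that, unlike SDK-level Crop, this padding gets encoded and stays visible as black bars):

# round width/height up to the next multiple of 16 before h264_qsv
# unlike mfxFrameInfo's Crop, the padded bars remain visible in the output
ffmpeg -i input.mp4 -vf "pad=ceil(iw/16)*16:ceil(ih/16)*16" -c:v h264_qsv output.mp4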
Anyway, nothing here explains the 100% CPU usage in your test yet; something is going wrong. A single hardware encode should not consume more than 15% of the CPU with a decent GPU load.
Yeah, now that's what I call a manual! ))
I suppose you mean QSV's crop function. But ffmpeg doesn't expose any interface to it, nor can it tune all the cool things I see in the manual you linked. For example, I'd like to adjust the keyframe interval, which is huge and dumb (a constant 250 frames, i.e. every 5 seconds). The PDF says a host program can do it, but it's not possible with ffmpeg's h264_qsv encoder, am I right?
Anyway, I tried ffmpeg's crop filter to crop my 720x576 (720/16 = 45) down to 708x564 (708/16 = 44.25). And guess what? It encoded flawlessly too, just like the uncropped original. Magic?..
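For the record, the test was roughly this (file names are placeholders):

# neither 708 nor 564 is a multiple of 16, yet it encodes fine;
# my guess (unverified) is that the qsv wrapper pads the surface and sets
# crops internally, exactly the technique you described
ffmpeg -i input.mpg -vf "crop=708:564" -c:v h264_qsv -look_ahead 0 output.mp4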
Hm, surprisingly it turned out to be simple: the "-g 125" option works. Well, that's not scene-change detection, but better than nothing :)
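So the working command is roughly this (file names are placeholders again):

# -g 125 halves the default 250-frame keyframe interval
ffmpeg -i input.mp4 -c:v h264_qsv -look_ahead 0 -g 125 output.mp4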