Hi all, I was looking at the documentation of the UMC::H264Decoder class (IPP version 7.0.7) and found a new function that I had never noticed in the old versions:
ChangeVideoDecodingSpeed
H.264 decoding is sometimes very heavy, especially on Full HD or higher-resolution streams. So I found that function and would like to use it to make the decoding process consume less CPU time, especially when I have to decode many streams at the same time (for example, many IP cameras): that way I can build a more scalable system.
I think the function was written for exactly that purpose. The problem is that I ran a test on my PC (a fairly old Intel Q6600) and found no difference between using 0 and 7 as the decoding speed: I only noticed poorer quality (as expected) at speed 7, but the CPU consumed by the decoding process is the same at 0 and 7 (the decoding load was around 10% in every case).
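For reference, this is roughly how the test drives the decoder (a stripped-down sketch in the style of the UMC samples; I'm writing the decoder class as UMC::H264VideoDecoder here, and the header names, params fields and output-buffer handling are simplified and may differ between IPP versions):

#include "umc_h264_dec.h"        // UMC H.264 decoder class
#include "umc_video_decoder.h"   // UMC::VideoDecoderParams

// Decode one stream at a given "decoding speed" and watch the CPU load.
UMC::Status DecodeAtSpeed(UMC::MediaData *pIn, UMC::VideoData *pOut, Ipp32s speed)
{
    UMC::H264VideoDecoder decoder;
    UMC::VideoDecoderParams params;

    params.m_pData = pIn;                    // compressed H.264 elementary stream
    params.lFlags  = UMC::FLAG_VDEC_REORDER; // output frames in display order

    UMC::Status sts = decoder.Init(&params);
    if (sts != UMC::UMC_OK)
        return sts;

    // 0 = decode everything, 7 = skip as much as the decoder allows
    decoder.ChangeVideoDecodingSpeed(speed);

    // pOut is assumed to be already initialized for the stream resolution
    while ((sts = decoder.GetFrame(pIn, pOut)) == UMC::UMC_OK)
    {
        // display or discard the frame; measure CPU load here
    }

    decoder.Close();
    return sts;
}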
Have you done any tests with that function at different speeds? Did you notice any difference? On what kind of CPU?
Anyone from Intel to comment on this?
I set ChangeVideoDecodingSpeed(7) and GetSkipInfo() returns 6 once the decoding process starts, after which the counter increments only by two for each frame the decoder emits, so I get 6, 8, 10, 12, etc. The playback speed is therefore around 2x instead of 8x. Can you give us some pointers on what to try to make this work? I should add that CPU usage is around 50-60%, so there is still plenty of headroom for the decoder to decode faster.
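For context, this is roughly the shape of the loop (a minimal sketch; LogSkipCount() is just a hypothetical helper standing in for however you pull the skipped-frame counter out of whatever GetSkipInfo() returns):

// Playback loop after requesting maximum frame skipping.
void PlayAtMaxSpeed(UMC::H264VideoDecoder &decoder,
                    UMC::MediaData *pIn, UMC::VideoData *pOut)
{
    Ipp32s speed = 7;                        // ask for maximum skipping
    decoder.ChangeVideoDecodingSpeed(speed);

    while (decoder.GetFrame(pIn, pOut) == UMC::UMC_OK)
    {
        // Counter observed here: 6 after the first frame, then 8, 10, 12, ...
        // i.e. only two more skipped frames per emitted frame -> about 2x, not 8x.
        LogSkipCount(decoder.GetSkipInfo()); // LogSkipCount is a placeholder helper
    }
}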
I also noticed "dbPlaybackRate" in the Params struct. What is it for?
Thanks in advance.