Graphics
Intel® graphics drivers and software, compatibility, troubleshooting, performance, and optimization
22722 Discussions

Benchmark performance of Intel Quicksync iGPU on 13900K or 14900K versus NVidia 4080 GPU

stressedout-tek
1,663 Views

Some video editing software suppliers have suddenly switched the priority of H264 and H265 decoding from the iGPU to an NVidia GPU in a user's system.

Do any benchmarking tests exist for H264 and H265 4:2:0 video decoding that show the performance of Intel Quicksync versus an NVidia discrete GPU?

 

I have always found the Intel iGPU faster, but now I have to disable the NVidia 4080 GPU to force decoding onto the Intel iGPU instead of my NVidia GPU.

 

It would be interesting to see whether any actual tests are available in the public domain from Intel showing the benchmarked performance of the two decoding methods for H264 and H265 on the iGPU versus, say, an NVidia 4070 or 4080 GPU.

 

If anyone knows of such valid manufacturer tests, please post here. Thanks.
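In case it helps anyone reproduce a comparison themselves, one rough way to compare the two decode paths outside any editing application is to time a straight hardware decode of the same clip through each path with ffmpeg. The sketch below is only illustrative: it assumes an ffmpeg build with both Quick Sync (QSV) and NVDEC/CUVID decoders enabled, and sample_h264.mp4 is a placeholder for your own test clip.

# Rough decode-throughput comparison: Intel Quicksync (QSV) vs NVidia NVDEC.
# Assumes an ffmpeg build with both decoders enabled; sample_h264.mp4 is a
# placeholder H264 4:2:0 test clip.
import subprocess
import time

CLIP = "sample_h264.mp4"

for dec in ("h264_qsv", "h264_cuvid"):  # iGPU path vs NVidia path
    start = time.perf_counter()
    subprocess.run(
        ["ffmpeg", "-v", "error", "-c:v", dec, "-i", CLIP, "-f", "null", "-"],
        check=True,
    )
    print(f"{dec}: decoded in {time.perf_counter() - start:.1f} s")

For H265 clips the decoder names would simply become hevc_qsv and hevc_cuvid. This is obviously not the same as a timeline playback test, but it gives a repeatable baseline for raw decode speed on a given machine.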

 

 

0 Kudos
9 Replies
Earl_Intel
Moderator
1,605 Views

Hi stressedout-tek,


Thank you for posting in the communities!


Thank you for providing your thoughts and observations about the benchmarking performance of our Intel Quicksync Video.


No worries. I'll look into this internally and will provide you with an update on this thread as soon as I can.


Best regards,

Earl E.

Intel Customer Support Technician


0 Kudos
stressedout-tek
1,576 Views

Many thanks for looking into this for me.

To put this in perspective: many video editors are always looking to get the maximum potential and performance from their hardware investment, and I know I am one of those individuals.

Recently we have been struggling against many variables - application software changes to video decode/encode priorities, Intel driver updates, NVidia driver updates, and the poor reputation of the upgrade from Windows 11 23H2 to 24H2.

This has all been a real headache, and many users are still reporting performance dropping through the floor.

I am just trying to establish a benchmark starting point to see how Quicksync performs versus a discrete GPU for media that is hardware decode/encode capable, i.e. H264 and H265 4:2:0, 8- or 10-bit.
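As a side note, a quick way to confirm that a given clip actually falls inside that hardware-decodable profile is to read its codec and pixel format with ffprobe. The snippet below is just a convenience sketch: it assumes ffprobe is installed, and my_footage.mp4 is a placeholder file name.

# Quick check that a clip is H264/H265 and 4:2:0, 8- or 10-bit.
# my_footage.mp4 is a placeholder for the clip being checked.
import json
import subprocess

CLIP = "my_footage.mp4"

out = subprocess.run(
    ["ffprobe", "-v", "error", "-select_streams", "v:0",
     "-show_entries", "stream=codec_name,pix_fmt", "-of", "json", CLIP],
    capture_output=True, text=True, check=True,
)
stream = json.loads(out.stdout)["streams"][0]
print(stream["codec_name"], stream["pix_fmt"])
# e.g. "h264 yuv420p" (8-bit 4:2:0) or "hevc yuv420p10le" (10-bit 4:2:0)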

I know there are many variables that affect this, some of them listed above, but as a chartered engineer before retirement, one thing we always regarded as paramount was establishing benchmark test criteria and a test environment, so that improvements and performance drops could be demonstrated more accurately and with some confidence.

 

Information for the iGPU media engines in Raptor Lake, Raptor Lake Refresh, and the more recent Core Ultra CPUs would be good.

 

I hope you can find some useful information on this topic for us; it would help users ensure they have the best settings for their hardware platforms.

0 Kudos
Mike_Intel
Moderator
1,481 Views

Hi stressedout-tek,


Thank you for patiently waiting for our update.


We conducted some research; however, we couldn't find the specific information related to your inquiry. We did find this link:

https://edc.intel.com/content/www/us/en/products/performance/benchmarks/computex-2021/

In this link, it mentions an "8x lead in 10-bit video encoding due to Intel Quick Sync Video acceleration vs competition (45.9 seconds vs. 369.9 seconds)." Typically, comparisons are made between CPUs and competitors in productivity tasks without focusing on specific features like this one.


If you have questions, please let us know. Thank you.


Best regards,

Michael L.

Intel Customer Support Technician


0 Kudos
stressedout-tek
1,453 Views

Thank you for your efforts to locate some articles on this. Much appreciated.

Unfortunately, the tests are from circa 2021, and technology has moved on since then on both the Intel and NVidia sides.

As explained in my initial question, the context of interest is to help determine the effect of 'turning off' the Intel iGPU (in, say, a 13900K, 14900K or Core Ultra CPU) in favour of the NVidia GPU (4000 series) when decoding H264 and H265 video footage in video editors such as Adobe Premiere Pro and DaVinci Resolve.

 

Many users, including myself, have conducted their own tests and see deterioration in multi-layer video decoding if the iGPU is not engaged from the outset alongside the NVidia GPU.

I believe Adobe is looking at this area to determine the balance between iGPU use and reliance on the discrete GPU in these types of scenarios.

The initial explanation was that a discrete GPU such as an NVidia 4080 gives better overall performance than having the iGPU working alongside it; the iGPU would only be brought into decode use when the NVidia GPU neared its limit.

I do not find that mechanism works as well as the previous behaviour (with the iGPU always active from the start of decode), and neither do some other users.
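For clarity, the hand-off as it was described to us amounts to something like the toy sketch below. This is purely illustrative and hypothetical: it is not Adobe's actual code, and the 90% threshold is my own placeholder figure.

# Toy illustration of the described hand-off (NOT Adobe's actual logic).
# The 0.9 threshold is a made-up placeholder.
def pick_decoder(dgpu_decode_load: float) -> str:
    """Send a new decode stream to the discrete GPU until its video-decode
    engine nears saturation, then spill over to the Intel iGPU (Quicksync)."""
    if dgpu_decode_load < 0.9:
        return "nvdec"       # NVidia GPU handles the stream
    return "quicksync"       # iGPU is only engaged near the dGPU's limit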

 

Having benchmark tests done under repeatable conditions, to give some confidence about the best use of the iGPU, would have been useful as a reference point.

In the meantime, we will continue to see how the software evolves to possibly make better use of Intel iGPU capabilities once more.

 

It's important that video editors make the best use of decode/encode GPU resources to speed things up - that's it in a nutshell!

 

If you do come up with any other relevant articles, that would be useful.

Thank you.

0 Kudos
Earl_Intel
Moderator
1,410 Views

Hello stressedout-tek,


I appreciate you sharing your findings and the analysis that you performed on your end about the performance of both iGPU and discrete GPU.


I'll check further on this internally and see what I can do.


I will share an update on this post as soon as possible.


Best regards,

Earl E.

Intel Customer Support Technician


0 Kudos
stressedout-tek
1,384 Views

Hi @Earl_Intel 

Tests were done by some of the Adobe Premiere Pro end-user video editing community after a priority change was made in version 25.1, so that the iGPU was no longer active all the time for H264 footage but only came in to support the discrete NVidia GPU when it was nearing its maximum load.

A user in the community suggested playing back multiple layers of H264 video simultaneously on the timeline as a stress test, to compare 25.1 against the older 25.0 version of Premiere for H264 4:2:0 footage.

With iGPU priority off, and running a 9-layer 4K UHD H264 video playback on version 25.1, several users saw dropped frames in 4K H264 playback (in my case well over 650 frames).

The PC was a 13900K with an NVidia 4080 Super, running Windows 11 23H2.

H265 had not been affected, as no software changes were made that affected H265; the iGPU remained active.

 

With the Intel iGPU behaving as before, in a permanently-on state (using version 25.0 of Premiere Pro), the multilayer video playback test revealed only 1 or 2 dropped frames. I even tested this on my old Intel 9900K / NVidia 2070 Super Windows 10 Pro PC!

So my old PC was better than my current PC with the Intel 13900K, but that was the Adobe software affecting the result, because the iGPU was active as it always had been up to version 25.0.

 

This is NOT a real benchmark test as such, but it has raised questions about the mechanism for de-prioritising the Intel iGPU amongst the Adobe user community. If you have an AMD CPU you would not notice any difference, as there is no iGPU!
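For anyone wanting to approximate that multi-layer decode load more reproducibly outside Premiere, a crude option is to run several hardware decodes of the same clip at once and time the whole batch, along the lines of the simple timing loop sketched in my first post. Again this is only a rough sketch: it assumes an ffmpeg build with QSV and NVDEC/CUVID support, and test_4k_h264.mp4 is a placeholder clip name.

# Crude approximation of a multi-layer timeline: decode LAYERS copies of the
# same clip simultaneously through one hardware path and time the whole batch.
# test_4k_h264.mp4 is a placeholder for a 4K UHD H264 4:2:0 test clip.
import subprocess
import time

CLIP = "test_4k_h264.mp4"
LAYERS = 9

def time_layers(decoder: str) -> float:
    cmd = ["ffmpeg", "-v", "error", "-c:v", decoder, "-i", CLIP, "-f", "null", "-"]
    start = time.perf_counter()
    procs = [subprocess.Popen(cmd) for _ in range(LAYERS)]
    for p in procs:
        p.wait()
    return time.perf_counter() - start

for dec in ("h264_qsv", "h264_cuvid"):  # iGPU path vs NVidia path
    print(f"{dec}: {time_layers(dec):.1f} s for {LAYERS} simultaneous decodes")

It will not reproduce Premiere's scheduling, but it does show how each decode engine copes when nine 4K streams hit it at once.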

Adobe claim the new mechanism in 25.1 onwards is more efficient, but no benchmark method/test they use has been described.

 

This is why I have asked whether Intel has some vanilla benchmark tests of H264 and H265 4:2:0 decoding versus a discrete GPU like a 4080, 4070, etc.

 

In fairness, Adobe continue to investigate the mechanism and are looking to optimise the performance.

Version 25.2.1 (the current version) is better in that respect, but investigations and debate continue.

 

That is the full background to this thread and the reason for the initial question.

My conclusion would be that as discrete GPUs continue to improve and their performance increases, the iGPU becomes less important; however, for H264/H265 footage that is eligible for hardware acceleration, it would seem to make logical sense to use the iGPU resource where the capability exists. Is my assumption correct or not?

 

I hope all this makes some sense to you 🙂

 

0 Kudos
Earl_Intel
Moderator
1,326 Views

Hello stressedout-tek,


Thanks for patiently waiting.


I appreciate you sharing this useful information and the testing that you performed on your end.


Unfortunately, we don't have such performance comparison data available at the moment. But if you still need assistance or have concerns regarding the performance of our products, we're always here, ready to assist.


We recommend contacting the software developer to get an explanation of the encoding software's behaviour if you're wondering why it gives the NVidia GPU priority.


Best regards,

Earl E.

Intel Customer Support Technician



0 Kudos
stressedout-tek
1,307 Views

Hello @Earl_Intel 

 

Thank you for coming back on this.

Unfortunately, there has been no full explanation of the priority switching from the software vendor, which is why I posed my original question: to establish a starting point via some simple benchmark tests.

Let's close this discussion thread now, as it appears to have reached a 'no solution found' conclusion.

 

Thanks again for your efforts

 

 

0 Kudos
Earl_Intel
Moderator
1,262 Views

Hi stressedout-tek,


I'm glad that I was able to assist you on this.


If you need further assistance, please submit a new question as this thread will no longer be monitored.

 

Best regards,

Earl E.

Intel Customer Support Technician


0 Kudos
Reply