Hi,
I want to do decoding performance comparisons between the MSDK and ffmpeg.
Has anyone done this sort of thing?
My very initial tests, using sample_decode from the MSDK demo project vs. the ffmpeg demuxing and filtering_video sample projects, show that for a single HD video stream, MSDK is between 3 and 6 times slower than ffmpeg.
I know these are slightly different implementations - I haven't integrated ffmpeg decoding directly into an MSDK project yet - but this seems very strange.
Has anyone seen similar behavior?
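As a rough sanity check that excludes file writes, one way to time pure decode on the ffmpeg side is the ffmpeg CLI with a null sink (the input file name below is only a placeholder):

    ffmpeg -benchmark -i input.mp4 -an -f null -

The -benchmark flag prints CPU and wall-clock time after the run, and the null muxer discards the decoded frames instead of writing them to disk.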
Thanks,
Adi
Hi Adi,
There can be many reasons you are seeing slower performance, and it is hard for me to comment without knowing more details. For example, which sample_decode command-line options are you setting? The default parameters do not use any Intel hardware acceleration; you need the -hw option, and ideally -d3d or -d3d11 as well. Also, the application was designed to show how to use the MSDK API, not to serve as a 'performance' tool (your experiments might be including file read/write time, for example).
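For reference, a hardware-accelerated run might look something like the line below (assuming an H.264 elementary stream; the file name is a placeholder, and on Windows the binary is typically sample_decode.exe):

    sample_decode h264 -i input.h264 -hw -d3d11

Leaving out the -o option should also avoid writing raw YUV frames to disk, which keeps the measurement closer to pure decode time.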
Can you provide more information? THANKS
Thanks, Tony.
I'll run some more tests.
Warm regards,
Adi