I have created a Visual C++ project to use the H.264 decoding class from the UMC samples and have successfully decoded H.264 images. However, when I measure the time it takes to decode frames from a video clip, my implementation takes twice as long as the simple_player.exe implementation. I built my wrapper library using Visual Studio 2005. Are there any important compiler optimization parameters I need to set to optimize the generated code?
Thanks.
I saw a 30% performance boost using ICL 9.1 versus MSVC 8.
I'm not sure exactly where the gain came from, but bitstream decoding seems to be the part that MSVC 8 does not optimize well; for example, the inline directive is not honored.
The cost was a doubling of the code size: the binary went from 500 KB to 1 MB.
Stephan
Thanks Stephan, I will keep this in mind.
I am using the H264VideoDecoder class on its own; that is, I don't use any splitter class or the umc_pipeline. I suspect this is why my version of the decoder runs so slowly. I will try to use the remaining parts of the decoding pipeline to improve speed.
Yigithan
Hello,
if you use the IPP static libraries, please make sure you call the ippStaticInit function at the beginning of your application.
This call initializes the IPP static dispatcher so that the best-optimized code path for the current CPU is selected at run time; without it, the statically linked libraries fall back to the generic (slowest) code path.
Regards,
Vladimir
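To illustrate Vladimir's suggestion, here is a minimal sketch of the call sequence, assuming an IPP 5.x-era installation with the legacy static libraries (in later IPP releases ippStaticInit was superseded by ippInit). It only shows the dispatcher initialization, not the UMC decoder setup:

```c
#include <stdio.h>
#include <ipp.h>   /* IPP core header: ippStaticInit, ippGetLibVersion */

int main(void)
{
    /* Initialize the static dispatcher so the statically linked IPP
     * libraries select the code path optimized for this CPU
     * (e.g. SSE2/SSE3) instead of the generic fallback. */
    IppStatus st = ippStaticInit();
    if (st != ippStsNoErr)
        fprintf(stderr, "ippStaticInit returned status %d\n", (int)st);

    /* Optional sanity check: report which IPP library was dispatched. */
    const IppLibraryVersion *v = ippGetLibVersion();
    printf("Using %s %s\n", v->Name, v->Version);

    /* ... create and run the UMC H264VideoDecoder here ... */
    return 0;
}
```

This must run before the first IPP (or UMC decoder) call; calling it later has no effect on code paths already dispatched.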