Our software is behaving differently on the preliminary i7-4770K system we have than on the Ivy Bridge ones that have been our platform, and we'd be grateful if anyone could help identify what the relevant differences are.
For decoding, this application of ours doesn't use a single contiguous bitstream buffer with encoded frames appended to it, as in the samples. Rather, to better manage encoded-frame lifetime for our use case, we maintain a queue of separate buffers, one per frame. Before each call to DecodeFrameAsync(), we update our mfxBitstream's Data pointer to the buffer containing the frame to decode, set DataOffset to 0 and DataLength to the frame size, and set the MFX_BITSTREAM_COMPLETE_FRAME flag.
This has been working great for us, but on the Haswell system there are ghostly distortions as if I-frames were being missed: semi-random extreme pixelation, jerkiness, and possibly some frames from a tiny bit backwards in time (though that may be an illusion). The video output is overall barely discernible.
If I switch to a contiguous static bitstream buffer and let the calls to DecodeFrameAsync() advance DataOffset automatically, all frames decode flawlessly, even though everything else is exactly the same.
I noticed that on calls to DecodeFrameAsync(), even when MFX_ERR_MORE_SURFACE is returned, DataOffset is often still advanced to the end of the frame. So when this occurs, I tried resetting DataOffset to its original position and calling DecodeFrameAsync() again on the same buffer, and the decoded output improved dramatically. There are still artifacts, but they are less frequent and less severe.
I haven't yet found anything else that restores decoding to its previous flawless state while keeping our multi-buffer model. So any insight into how the Haswell decode stack handles its operations differently, in a way that might bear on our situation, would be most appreciated!
Thanks,
James
Hi James,
We cannot discuss the features or capabilities of future yet to be released Intel platforms on this forum.
Please connect with your Intel representative, who provided the test system to you, for support.
Regards,
Petter
Oh, well I suppose that makes sense. Thanks, I'll track him down and go through the proper channels.
James
OK, it looks like with the latest Intel driver (3071), we're seeing the same or similar behavior on Ivy Bridge. Should I create another forum topic and copy the above information over, or is this topic an adequate place for discussion?
Hi James,
We can continue to track this topic via this forum post. Since you are observing the issue on a current-generation Core processor with an official driver release, we are certainly here to assist you.
Could you share some more details about your bitstream buffer handling scheme? If you could provide a code sample or a small reproducer that illustrates the issue, that would be preferred.
Regards,
Petter