Media (Intel® Video Processing Library, Intel Media SDK)

Decoding Isolated Frames on Haswell

jnickerson
Beginner

Our software behaves differently on the preliminary Core i7-4770K system we have than on the Ivy Bridge systems that have been our platform, and we'd be grateful if anyone could help identify the relevant differences.

For decoding, this application of ours doesn't use a single contiguous bitstream buffer with encoded frames appended to it, as in the samples. Instead, to better manage encoded frame lifetime for our use case, we maintain a queue of separate buffers, one per frame. Before each call to DecodeFrameAsync(), we point our mfxBitstream's Data at the buffer containing the frame to decode, reset DataOffset to 0, set DataLength to the frame's size, and set the MFX_BITSTREAM_COMPLETE_FRAME flag.
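
To make the scheme concrete, here is a minimal sketch (simplified, not our actual code) of how one frame from the queue is submitted, using the Media SDK C++ wrapper from mfxvideo++.h; EncodedFrame and the surrounding session/surface setup are placeholders:

#include <mfxvideo++.h>
#include <queue>
#include <vector>

struct EncodedFrame {
    std::vector<mfxU8> data;   // one complete encoded frame per buffer
};

// Submit the frame at the head of the queue to the decoder.
void SubmitQueuedFrame(MFXVideoDECODE& decoder,
                       std::queue<EncodedFrame>& frameQueue,
                       mfxFrameSurface1* workSurface,
                       mfxFrameSurface1** outSurface,
                       mfxSyncPoint* syncp)
{
    EncodedFrame& frame = frameQueue.front();

    mfxBitstream bs = {};
    bs.Data       = frame.data.data();            // this frame's own buffer
    bs.DataOffset = 0;                            // always start at the beginning
    bs.DataLength = (mfxU32)frame.data.size();    // exactly one encoded frame
    bs.MaxLength  = (mfxU32)frame.data.size();
    bs.DataFlag   = MFX_BITSTREAM_COMPLETE_FRAME; // buffer holds a complete frame

    mfxStatus sts = decoder.DecodeFrameAsync(&bs, workSurface, outSurface, syncp);
    // ... status handling, sync, and surface management omitted ...
    (void)sts;
}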

This has been working great for us, but on the Haswell system there are ghostly distortions as if I-frames were being missed, semi-random extreme pixelation, jerkiness, and possibly some frames from slightly earlier in time (though this may be an illusion). Overall, the video output is barely discernible.

If I switch to using a contiguous static bitstream buffer and letting the calls to DecodeFrameAsync() advance the DataOffset automatically, then the frames are all decoded flawlessly, even though everything else is exactly the same. 
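
For contrast, the contiguous variant that works is essentially the sample-style pattern: one persistent mfxBitstream that we append encoded frames to, with DecodeFrameAsync() advancing DataOffset itself. A simplified sketch (AppendFrame() is a placeholder; the caller must ensure MaxLength is large enough):

#include <mfxvideo++.h>
#include <cstring>

// Append one encoded frame to a persistent bitstream buffer; the decoder
// advances bs.DataOffset itself as it consumes data on DecodeFrameAsync().
void AppendFrame(mfxBitstream& bs, const mfxU8* frameData, mfxU32 frameSize)
{
    // compact any unconsumed data to the front of the buffer
    std::memmove(bs.Data, bs.Data + bs.DataOffset, bs.DataLength);
    bs.DataOffset = 0;

    // append the new frame (assumes bs.MaxLength >= bs.DataLength + frameSize)
    std::memcpy(bs.Data + bs.DataLength, frameData, frameSize);
    bs.DataLength += frameSize;
}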

I noticed that on calls to DecodeFrameAsync(), even when MFX_ERR_MORE_SURFACE is returned, DataOffset is often advanced to the end of the frame anyway. So when this occurs, I tried resetting DataOffset to its original position and calling DecodeFrameAsync() again on the same buffer, and the decoded output improved dramatically. There are still artifacts, but they are not as frequent or as severe.
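
In case it's useful, the retry logic is roughly the following sketch (simplified; GetFreeSurface() is a placeholder for our surface-pool lookup):

#include <mfxvideo++.h>

mfxFrameSurface1* GetFreeSurface();   // placeholder: returns an unlocked surface

// Decode one single-frame bitstream, rewinding DataOffset/DataLength if the
// decoder returns MFX_ERR_MORE_SURFACE after consuming the frame's bytes.
mfxStatus DecodeOneFrameWithRewind(MFXVideoDECODE& decoder, mfxBitstream& bs,
                                   mfxFrameSurface1** outSurface,
                                   mfxSyncPoint* syncp)
{
    const mfxU32 savedOffset = bs.DataOffset;   // where this frame starts
    const mfxU32 savedLength = bs.DataLength;   // how much of it is valid

    mfxStatus sts;
    do {
        sts = decoder.DecodeFrameAsync(&bs, GetFreeSurface(), outSurface, syncp);
        if (sts == MFX_ERR_MORE_SURFACE && bs.DataOffset != savedOffset) {
            // the call consumed the frame's data anyway; rewind so the retry
            // presents the complete frame again
            bs.DataOffset = savedOffset;
            bs.DataLength = savedLength;
        }
    } while (sts == MFX_ERR_MORE_SURFACE);
    return sts;
}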

I haven't yet been able to find anything else that restores decoding to its previous flawless state while keeping our multi-buffer model. Any insight into how the Haswell decode stack handles its operations differently, in a way that might bear on our situation, would be most appreciated!

Thanks,

James

Petter_L_Intel
Employee

Hi James,

We cannot discuss the features or capabilities of future, not-yet-released Intel platforms on this forum.

Please contact the Intel representative who provided the test system to you for support.

Regards,
Petter 

jnickerson
Beginner

Oh, well I suppose that makes sense. Thanks, I'll track him down and go through the proper channels.

James

jnickerson
Beginner

OK, it looks like with the latest Intel driver (3071) we're seeing the same or similar behavior on Ivy Bridge. Should I create another forum topic and copy the above information into it, or is this topic an adequate place for the discussion?

Petter_L_Intel
Employee

Hi James,

We can continue to track this topic via this forum post. Since you are observing the issue with a current-generation Core processor and an official driver release, we are certainly here to assist you.

Could you share some more details about your bitstream buffer handling scheme? A code sample or small reproducer that illustrates the issue would be preferred.

Regards,
Petter 
