Is there a way to keep the encoded bitstream output in video memory w/o copying to system memory? It seems that the current API uses a copy to mfxBitstream's data field when Sync() is called. I am hoping to save on the unnecessary (in my case) extra copy from video to system memory. Thanks for the help.
There is currently no support for storing Media SDK bitstream data in GPU memory.
The encoded bitstream is relatively small, so the overhead of that copy is not large.
Can you expand a bit on why this is important to you? For example, what would you do with the bitstream after encode if it were stored in GPU memory?
Thanks for the response. I am trying to implement a pipeline where each frame rendered by DirectX on the GPU is sent directly to the QuickSync encoder, and the encoded results are then consumed by the GPU again.
A related question: is it possible to feed the encoder input from video memory without first copying the input to system memory (and would doing so be more efficient from a performance standpoint)? Specifically, I have data in a DirectX ID3D11Texture2D object and would like to use it as input to RunFrameVPPAsync() (which is then chained to EncodeFrameAsync()). I didn't find a clear explanation in the Media SDK manual for how to do this. Could you confirm whether this procedure would work?
Is there some example code that is doing something like this already?
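To make the question concrete, here is roughly the call sequence I have in mind. This is just a sketch: `session`, `vpp`, `encode`, the surfaces, and the bitstream are placeholders for objects I would initialize the same way the official samples do, with a D3D11 frame allocator and video-memory IOPattern.

```cpp
// Sketch: chain VPP and Encode entirely on D3D11 surfaces. Assumes the
// MFXVideoSession, MFXVideoVPP, and MFXVideoENCODE components were
// initialized with a D3D11 allocator and
// IOPattern = MFX_IOPATTERN_IN_VIDEO_MEMORY | MFX_IOPATTERN_OUT_VIDEO_MEMORY.
// Error handling (MFX_ERR_MORE_DATA, MFX_WRN_DEVICE_BUSY, etc.) is omitted.
mfxSyncPoint syncVpp = nullptr, syncEnc = nullptr;

// pInSurface wraps the rendered D3D11 texture; pVppOut is a free surface
// from the video-memory VPP output pool.
mfxStatus sts = vpp.RunFrameVPPAsync(pInSurface, pVppOut, nullptr, &syncVpp);

if (MFX_ERR_NONE == sts) {
    // Feed the VPP output straight into the encoder; no system-memory
    // copy should be needed since both stages share the D3D11 allocator.
    sts = encode.EncodeFrameAsync(nullptr, pVppOut, &bitstream, &syncEnc);
}

if (MFX_ERR_NONE == sts) {
    // Waiting on the final sync point; Media SDK resolves the
    // intermediate VPP -> Encode dependency internally.
    sts = session.SyncOperation(syncEnc, 60000 /* ms timeout */);
}
```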
When using HW acceleration the preferred approach for efficiency is to use D3D surfaces.
Both the Media SDK sample_encode and the "encode with pre-processing" sample, part of the Media SDK tutorial ( http://software.intel.com/en-us/articles/intel-media-sdk-tutorial-simple-6-encode-d3d-vpp-preproc , http://software.intel.com/en-us/articles/intel-media-sdk-tutorial ), showcase how to perform VPP + Encode in the way you describe.
I did see those examples, but was unsure how to actually load D3D surfaces for use with the QuickSync API (since the example works with uninitialized D3D11 surfaces). I did see that in the _simple_alloc() function, mfxFrameSurface1's Data.MemId is effectively cast to an ID3D11Texture2D. I am guessing I can do a similar assignment of MemId with my own surface. If I am on the wrong track, please let me know. Thanks.
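Concretely, I was thinking of something like this. It is only a sketch of my guess: `myTexture` and `vppParams` are my own objects, and I am assuming the custom allocator stores a raw texture pointer (or a small struct wrapping it) in MemId, as the tutorial's simple allocator appears to do.

```cpp
// Guess: wrap an existing ID3D11Texture2D in an mfxFrameSurface1 by
// placing it where the custom frame allocator expects to find it.
// What MemId actually holds depends on the allocator implementation.
mfxFrameSurface1 surface = {};

// The surface description must match the texture (width, height, FourCC,
// chroma format), here taken from the VPP input configuration.
surface.Info = vppParams.vpp.In;

// myTexture is an ID3D11Texture2D* I created with usage/bind flags
// compatible with the Media SDK pipeline.
surface.Data.MemId = (mfxMemId)myTexture;
```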