When using the Media SDK encoder with a YUV source, we can create surfaces in video memory and use the hardware encoder to encode the YUV pictures. I wonder how the copy operation is done.
Is DMA used, so the CPU is free to do other things during the operation? Or are CPU cycles used to copy the YUV pictures into video memory?
In some benchmark results, the density of the encoder using YUV sources seems lower than the density of the transcoder when elementary bitstreams are used. I wonder if this is related to the copy operation.
I'm not sure I completely understand your question. The hardware encoder operates on NV12 (YUV) surfaces that reside in video memory. If the YUV data to be encoded is not already in video memory, the Media SDK library implementation can copy the data from system memory to video memory. Intel is continuously optimizing this operation for each platform, so you will often notice that newer graphics drivers improve performance. The hardware used for the copy will depend on the platform and implementation.
When transcoding, the 'decode' and 'processing' operations may all occur in video memory, and there is no need to copy to or from system memory.
Thanks for your reply.
My question is how the YUV picture copy from system memory to video memory is done. For example, is direct memory access (DMA) used, so that the CPU is free to do other jobs? Or is the CPU actually used to perform the copy, making the operation CPU-intensive?
I understand the implementation details may differ between driver versions. I just want to understand the operation from a high-level point of view.
The answer is actually "both", as it depends on the platform's capabilities and the driver implementation. You may see significant CPU usage for this operation on some platforms.