I implemented an H.264 compression solution using the Intel Media SDK.
In the case of interlaced video, I interleave the two uncompressed fields into one frame at the input of the encoder and set PicStruct = MFX_PICSTRUCT_FIELD_TFF to generate an interlaced H.264 bitstream. This solution works pretty well, but I want to remove the interlace process.
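For reference, my encoder setup looks roughly like this (a sketch only, assuming the standard Media SDK headers; session creation, surface allocation, and error handling are omitted, and the alignment values follow my reading of the SDK manual):

```c
#include <string.h>
#include <mfxvideo.h>   /* Intel Media SDK */

/* Sketch: parameters for the current "weave the two fields, submit one
   frame" approach, using NTSC dimensions (720x486 active picture). */
static void init_interlaced_params(mfxVideoParam *par)
{
    memset(par, 0, sizeof(*par));
    par->mfx.CodecId                = MFX_CODEC_AVC;
    par->mfx.FrameInfo.FourCC       = MFX_FOURCC_NV12;
    par->mfx.FrameInfo.ChromaFormat = MFX_CHROMAFORMAT_YUV420;
    par->mfx.FrameInfo.PicStruct    = MFX_PICSTRUCT_FIELD_TFF; /* top field first */
    par->mfx.FrameInfo.CropW        = 720;
    par->mfx.FrameInfo.CropH        = 486;  /* full frame = both fields interleaved */
    par->mfx.FrameInfo.Width        = 720;  /* 16-aligned */
    par->mfx.FrameInfo.Height       = 512;  /* 32-aligned for interlaced content */
    par->IOPattern = MFX_IOPATTERN_IN_SYSTEM_MEMORY;
}
```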
Can I call EncodeFrameAsync() twice (once for each field) and get a single mfxSyncPoint for the output, signaling that the interlaced frame is encoded?
Is there any way to do this ?
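In other words, the call pattern I am hoping for is roughly this (hypothetical; I do not know whether the SDK accepts per-field surfaces like this, and `session`, the two field surfaces, and `bs` are assumed to be set up already):

```c
/* Hypothetical per-field submission (NOT known to be supported):
   two EncodeFrameAsync() calls for one frame, one sync point to wait on. */
mfxSyncPoint syncp = NULL;
mfxStatus    sts;

sts = MFXVideoENCODE_EncodeFrameAsync(session, NULL, topFieldSurf,    &bs, &syncp);
sts = MFXVideoENCODE_EncodeFrameAsync(session, NULL, bottomFieldSurf, &bs, &syncp);

/* wait once for the complete interlaced frame */
sts = MFXVideoENCODE_SyncOperation(session, syncp, MFX_INFINITE);
```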
What I meant by "remove the interlace process" is that I want to give the encoder two fields and receive a compressed bitstream with field macroblocks.
For example, in NTSC I tried to send two 720x243 fields with PicStruct = MFX_PICSTRUCT_FIELD_TFF, and I received a compressed bitstream with field macroblocks, but the encoded picture size is 720x243 instead of 720x486.
Is it possible to do that with your encoder?
Am I right to think that we lose some processing power by interleaving the input, since the encoder will have to de-interleave it again to compress in field macroblocks?
Is it true that for interlaced content your encoder does MBAFF (Macroblock-Adaptive Frame-Field) and not PAFF (Picture-Adaptive Frame-Field)?
I found this link that explains what MBAFF and PAFF are: http://forum.doom9.org/archive/index.php/t-120317.html
It could explain why you accept only whole frames as input.
I would like to know what you expect me to do when I have two NV12 field surfaces at the input of your encoder and I want a field-coded compressed bitstream composed of those two fields.
Sorry for the slow response.
You are correct, the encoder does not support PAFF, but we are looking at this "feature request".
I'm still not exactly sure I understand the desire for a "compressed bitstream field composed of those 2 fields", but I believe you may be looking for the field encoding mode that is available if you use the FieldOutput flag of the mfxExtCodingOption structure when encoding.
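Enabling it looks roughly like this (a sketch only; the exact semantics of FieldOutput are described in the SDK reference manual, and the extended buffer must be attached before initializing the encoder):

```c
#include <string.h>
#include <mfxvideo.h>   /* Intel Media SDK */

/* Sketch: attach mfxExtCodingOption with FieldOutput = ON to the encoder
   parameters, so the encoder can emit output on a per-field basis. */
static mfxExtCodingOption codingOpt;
static mfxExtBuffer      *extParams[1];

static void enable_field_output(mfxVideoParam *par)
{
    memset(&codingOpt, 0, sizeof(codingOpt));
    codingOpt.Header.BufferId = MFX_EXTBUFF_CODING_OPTION;
    codingOpt.Header.BufferSz = sizeof(codingOpt);
    codingOpt.FieldOutput     = MFX_CODINGOPTION_ON;

    extParams[0]     = &codingOpt.Header;
    par->ExtParam    = extParams;
    par->NumExtParam = 1;
}
```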