We may need some more information on "using the advanced features such as I frame/B frame/P frame". How do you want to use them?
Could you also clarify what you mean by "state information of Motion Estimation parameters"? The encoder parameters provide some settings for the ME method and the search ranges (search_x, search_y).
My question on the state information of the Motion Estimation parameters is described with an example below.
Suppose the encoding has to happen for 100 frames. The client wants one I frame at the start of each set of 20 frames: an I frame, then 19 other frames, then another I frame for the next 20 frames, and so on. Once the application sets the parameters and calls Init() on the encoder, the UMC codec is fully responsible for deciding which frames become P, I, etc. within each set of 20 frames. From the application's point of view, it only has to call GetFrame(), nothing else. Is this understanding correct?
Could you please also guide me on the space efficiency of the encoding process? When I encode and check the "DataSize" of the DataOut, it is far smaller than the buffer size. Any rough guess as to what could have gone wrong?
>>the UMC codec is all responsible for identifying the frames as P/I etc within each set of 20 frames.
That is correct. You can use GetFrameType() on the output media data to find out each frame's type.
As for the bit rate, it is controlled by Params.info.bitrate. Note that it is in bits per second (not bytes). It is also an average across multiple frames; it does not mean each frame will have exactly that size.