I would like some clarification please.
From reading different posts on this site I have found the following:
In umc_h264_config.h uncomment #define SLICE_CHECK_LIMIT and comment out #undef SLICE_CHECK_LIMIT
Someone suggested setting m_MaxSliceSize manually. How does one calculate this size? And is this what Intel intended?
When initializing a UMC::H264VideoEncoder, set the UMC::H264EncoderParams member variable num_slices to the number of slices the frame will have. Does this have to do with NAL units? And will the number provided produce that many NAL units per frame?
The compressed frame is returned in a byte array, with markers separating each NAL unit: a big-endian 1. Use this marker to break the frame into individual packets to be sent across the network. Should the frame, rebuilt from those packets, include the big-endian 1 when passed to the decompressor?
The receiving side receives the individual NAL units and rebuilds the frame, including the big-endian 1 separators?
I am unsure about this and would very much appreciate not only confirmation of my understanding where it is correct, but also some dialog on how to implement this properly. If there is clear documentation for this, please tell me where to find it; I have been unable to find any.
SLICE_CHECK_LIMIT ensures that the slices generated do not exceed a certain size.
When SLICE_CHECK_LIMIT is defined, you can specify that size through num_slices by passing a negative value (if I am not mistaken).
The value you provide is usually the maximum packet size you would want when using H.264 over RTP: you specify MTU - RTP header - UDP header.
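As a rough sketch of that budget (the 1500-byte Ethernet MTU is an assumption; header sizes shown are the fixed/minimal ones):

```cpp
// Maximum slice size derived from the MTU budget described above.
const int kMTU = 1500;        // typical Ethernet MTU (assumption)
const int kUDPHeader = 8;     // fixed UDP header size
const int kRTPHeader = 12;    // minimal RTP header, no CSRC list or extensions
// Note: in practice the 20-byte IPv4 header is often subtracted as well.
const int kMaxSliceSize = kMTU - kUDPHeader - kRTPHeader; // 1480 bytes
```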
Once the frame is encoded, you break out the NALs (each NAL contains one slice) by looking for 00 00 01 or 00 00 00 01 (the latter is used for the first NAL of a frame). You can look at Annex B of the H.264 spec for details on the byte stream format.
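A minimal sketch of that start-code scan (function name is mine, not from UMC); it accepts both 3- and 4-byte start codes and returns the NAL payloads without the start codes:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Split an Annex B byte stream into NAL units by scanning for the
// 00 00 01 / 00 00 00 01 start codes. The returned NALs do NOT
// include the start codes themselves.
std::vector<std::vector<uint8_t>> split_nals(const uint8_t* data, size_t size) {
    std::vector<std::vector<uint8_t>> nals;
    size_t i = 0;
    size_t start = SIZE_MAX;  // offset of the current NAL's first byte
    while (i + 2 < size) {
        if (data[i] == 0 && data[i + 1] == 0 &&
            (data[i + 2] == 1 ||
             (i + 3 < size && data[i + 2] == 0 && data[i + 3] == 1))) {
            size_t sc_len = (data[i + 2] == 1) ? 3 : 4;
            if (start != SIZE_MAX)
                nals.emplace_back(data + start, data + i);  // close previous NAL
            i += sc_len;
            start = i;  // next NAL begins right after the start code
        } else {
            ++i;
        }
    }
    if (start != SIZE_MAX)
        nals.emplace_back(data + start, data + size);  // last NAL runs to the end
    return nals;
}
```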
On the receiving side you can rebuild the frame by concatenating all the NALs with the separators in between.
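The reassembly step can be sketched like this (a 4-byte start code is prepended to every NAL here for simplicity; decoders generally accept either form, but this is my assumption rather than something the UMC sample mandates):

```cpp
#include <cstdint>
#include <vector>

// Rebuild an Annex B frame from individual NAL units: prepend a
// 00 00 00 01 start code to each NAL and concatenate them.
std::vector<uint8_t> rebuild_frame(const std::vector<std::vector<uint8_t>>& nals) {
    static const uint8_t start_code[4] = {0, 0, 0, 1};
    std::vector<uint8_t> frame;
    for (const auto& nal : nals) {
        frame.insert(frame.end(), start_code, start_code + 4);
        frame.insert(frame.end(), nal.begin(), nal.end());
    }
    return frame;
}
```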