I am looking to decode real-time streaming H.264 video that arrives in packets. My thought was to use the umc_video_dec_con example as a framework for implementing this functionality. Here are the points I'm currently stuck on and would like some help with:
1) I was going to use the MemoryReader as my DataReader class, but it seems to be set up on the assumption that you have all the data you want to decode in advance. There are no methods for adding data to the reader's memory, nor is there a method for telling the MemoryReader that more data is available. What method should I be using to add received video packets so they can be decoded?
2) The splitter's initialization again seems to need the whole video file available to the reader, since it searches the video stream for information at initialization time. Looking at the InitSplitter function of the umc_dec_con example, the Init function for the MP4Splitter will return errors because it might not have all the data it needs to initialize properly. My thought was to keep adding data to my DataReader and continually call Init until there is enough data for the Splitter to initialize. Is this the proper way to set up the splitter?
3) I think once everything is initialized, I should be able to call GetNextData on the splitter until it stops returning UMC_ERR_NOT_ENOUGH_DATA. Once that happens, I would have enough data for a frame and could call GetFrame on the decoder to get the decoded frame. I would just keep repeating this process as is done in the umc_video_dec_con example. Does this sound correct? (I've sketched the loop I have in mind right after this list.)
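To make points 1) through 3) concrete, here is roughly the loop I'm imagining, pieced together from my reading of the umc_video_dec_con sources. The class names, member names, and signatures are my best guesses from the IPP 7.1 sample headers, so please treat this as a sketch of intent rather than working code:

    // Sketch only -- header and member names are from my reading of the IPP 7.1
    // samples and may not match the real headers exactly.
    #include "umc_memory_reader.h"   // UMC::MemoryReader / MemoryReaderParams
    #include "umc_mp4_spl.h"         // UMC::MP4Splitter
    #include "umc_h264_dec.h"        // UMC::H264VideoDecoder
    #include "umc_video_data.h"

    UMC::Status DecodeLoopSketch(Ipp8u *pBytesSoFar, size_t nBytesSoFar)
    {
        // (1) MemoryReader appears to want one fixed buffer up front -- this is
        // exactly what I don't know how to "append" to as packets arrive.
        UMC::MemoryReaderParams readerParams;
        readerParams.m_pBuffer      = pBytesSoFar;   // member names may differ -- check umc_memory_reader.h
        readerParams.m_uiBufferSize = nBytesSoFar;

        UMC::MemoryReader reader;
        if (reader.Init(&readerParams) != UMC::UMC_OK)
            return UMC::UMC_ERR_FAILED;

        // (2) Retry splitter Init until enough of the stream has arrived?
        UMC::SplitterParams splParams;
        splParams.m_pDataReader = &reader;
        splParams.m_lFlags      = UMC::VIDEO_SPLITTER;

        UMC::MP4Splitter splitter;
        UMC::Status sts = splitter.Init(splParams);
        if (sts == UMC::UMC_ERR_NOT_ENOUGH_DATA)
            return sts;              // caller appends more packets and calls this again -- is that right?
        if (sts != UMC::UMC_OK)
            return sts;
        splitter.Run();

        // (Real code would pull SplitterInfo here to find the video track and
        // the decoder-specific info for the decoder params.)
        UMC::H264VideoDecoder decoder;
        UMC::VideoDecoderParams decParams;
        if (decoder.Init(&decParams) != UMC::UMC_OK)
            return UMC::UMC_ERR_FAILED;

        // (3) Pull compressed data until the splitter runs dry, decoding as we go.
        UMC::MediaData compressed;
        UMC::VideoData decoded;      // would need Init()/Alloc() with the stream dimensions first
        for (;;)
        {
            sts = splitter.GetNextData(&compressed, 0 /* video track */);
            if (sts == UMC::UMC_ERR_NOT_ENOUGH_DATA)
                break;               // wait for more packets from the network
            if (sts != UMC::UMC_OK)
                return sts;

            if (decoder.GetFrame(&compressed, &decoded) == UMC::UMC_OK)
            {
                // hand the decoded frame to the renderer / consumer here
            }
        }
        return UMC::UMC_OK;
    }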
Is the approach outlined above the correct way to decode real-time H.264 packet data using UMC? It seems like all the example code is geared toward opening files where you already have all the data and the stream is properly formed. I'm unsure how to use UMC when the data comes in chunks, some of it may be lost over a bad Ethernet connection, etc. How would UMC handle such scenarios? Any help would be greatly appreciated.
I am using the Linux IPP 7.1.0.011 samples.
I have run into the same problem. Did you ever get this resolved? What approach did you take? Thanks.
I never really got it resolved. My solution ended up being to buffer up a large number of received NAL packets and then send them to the GetFrame function. I don't use the splitter; I found the crashing problem when using the decoder by itself. I know the video type being decoded and its general size, so through testing I determined the smallest amount of data I needed to buffer to ensure I would always have enough to extract a single good frame. The obvious downside to this approach is that it adds latency. In my case we were able to live with the delay, but we were not happy about it. Hope that helps you, and if you find a way to decode the streaming H.264 data without buffering, please let me know!
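For what it's worth, the shape of what I ended up with looks roughly like the sketch below. The buffer threshold is something I had to tune for my own stream (the number here is just a placeholder), and the UMC class and method names are from the IPP 7.1 headers as I remember them, so double-check the signatures against your tree:

    // Rough shape of the no-splitter, buffer-then-decode approach described above.
    #include <vector>
    #include "umc_h264_dec.h"     // UMC::H264VideoDecoder -- header name from memory
    #include "umc_video_data.h"

    class StreamingH264Decoder
    {
    public:
        UMC::Status Open()
        {
            // For a raw NAL stream the SPS/PPS arrive in-band, so a mostly
            // default VideoDecoderParams worked for me; your stream may need more.
            UMC::VideoDecoderParams params;
            params.numThreads = 1;
            return m_decoder.Init(&params);
        }

        // Append raw NAL bytes as they arrive from the network.
        void PushPacket(const Ipp8u *pData, size_t nSize)
        {
            m_buffer.insert(m_buffer.end(), pData, pData + nSize);
        }

        // Once enough bytes are queued, hand them to the decoder in one go.
        // 'out' is expected to be set up for the stream dimensions by the caller,
        // as umc_video_dec_con does for its output VideoData.
        UMC::Status TryDecode(UMC::VideoData &out)
        {
            if (m_buffer.size() < kMinBytesForFrame)
                return UMC::UMC_ERR_NOT_ENOUGH_DATA;

            UMC::MediaData in;
            in.SetBufferPointer(&m_buffer[0], m_buffer.size());
            in.SetDataSize(m_buffer.size());

            UMC::Status sts = m_decoder.GetFrame(&in, &out);

            // As I read the sample, GetFrame advances the input past whatever it
            // consumed, so drop that prefix and keep the tail for next time.
            size_t consumed = m_buffer.size() - in.GetDataSize();
            m_buffer.erase(m_buffer.begin(), m_buffer.begin() + consumed);
            return sts;
        }

    private:
        // Placeholder threshold -- I arrived at my own value by experiment.
        static const size_t kMinBytesForFrame = 256 * 1024;
        UMC::H264VideoDecoder m_decoder;
        std::vector<Ipp8u>    m_buffer;
    };

The threshold is where all of the added latency comes from, so it's worth pushing it as low as your particular stream allows.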