Hi folks, I am studying the tutorial example simple_decode_d3d, which demonstrates how to decode an elementary stream (ES) file. In that scenario we do not need to worry about the SPS/PPS, because as I understand it they are always at the beginning of the file. But now I need to adapt this example to work with a real-time stream from a remote server. How could I modify the sample code to do this job?
There is a Media SDK + FFmpeg example which has been used as a base for real-time encoding projects, under the previous version of the tutorials package at the Media Solutions Portal. Please note it is from the previous version of the package: you are free to use it, but it is far from a complete turnkey solution, is not in active development, and is no longer supported. This white paper may also help: https://software.intel.com/sites/default/files/article/326585/msdk-ffmpeg-white-paper.pdf
If you're looking for a complete streaming starting point, there are many options; Wowza is one example. The new Elecard demultiplexer plugin may also be of interest. Please watch for more plugins and partnerships as more solutions incorporate the hardware acceleration capabilities of Media SDK.
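For the original question of feeding the decoder from a network source rather than a file, the key change is in how the bitstream buffer is filled: with a live stream, the SPS/PPS are not guaranteed to be the first bytes you receive, so you keep appending incoming data and retry header parsing until it succeeds. Below is a minimal C++ sketch of that pattern against the Media SDK API; `recv_from_server()` is a hypothetical stand-in for your socket/RTP code, and the `session` and `bitstream` objects are assumed to be set up as in simple_decode_d3d.

```cpp
#include <cstring>
#include "mfxvideo.h"

// Hypothetical helper (not part of Media SDK): fills 'buf' with up to 'cap'
// bytes of raw H.264 elementary-stream data from the remote server and
// returns the byte count, or 0 when the stream ends.
extern size_t recv_from_server(mfxU8* buf, size_t cap);

// Append freshly received bytes to the decoder's bitstream buffer,
// compacting already-consumed data first. The buffer (bs.Data/bs.MaxLength)
// is assumed large enough to hold at least one coded frame.
mfxStatus AppendFromNetwork(mfxBitstream& bs)
{
    memmove(bs.Data, bs.Data + bs.DataOffset, bs.DataLength);
    bs.DataOffset = 0;

    size_t free_space = bs.MaxLength - bs.DataLength;
    size_t got = recv_from_server(bs.Data + bs.DataLength, free_space);
    if (got == 0)
        return MFX_ERR_MORE_DATA;   // connection closed / stream ended
    bs.DataLength += (mfxU32)got;
    return MFX_ERR_NONE;
}

// Keep feeding data until DecodeHeader finds the SPS/PPS in the stream.
// 'par' is assumed zero-initialized by the caller; DecodeHeader fills it.
mfxStatus WaitForSequenceHeader(mfxSession session, mfxBitstream& bitstream,
                                mfxVideoParam& par)
{
    par.mfx.CodecId = MFX_CODEC_AVC;
    mfxStatus sts;
    do {
        sts = AppendFromNetwork(bitstream);
        if (sts != MFX_ERR_NONE)
            return sts;
        sts = MFXVideoDECODE_DecodeHeader(session, &bitstream, &par);
    } while (sts == MFX_ERR_MORE_DATA);
    return sts;
}
```

After the header is parsed, decoder Init() and the DecodeFrameAsync loop proceed exactly as in the tutorial, except that MFX_ERR_MORE_DATA from DecodeFrameAsync is answered with another AppendFromNetwork() call instead of a file read.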
Hi Jeffrey,
Actually, I studied that project a month ago. It does not help me with decoding a stream; rather, it shows how to encode a stream into a file via FFmpeg. Anyway, thanks a lot. When I dug into the code of FFMPEGWriter, I came up with a question related to FFmpeg muxing. As I read the code, the video PTS is based on the count of processed video frames: its value is the number of video frames that have been encoded and written to the file. The audio PTS, meanwhile, is the number of audio frames, where one audio frame contains a block of audio samples whose size is determined by the audio encoder and indicated by c->frame_size. In that case, within the same AV file, e.g. an MP4 file, there is no relationship between the video PTS and the audio PTS. How can the decoder synchronize them when reading them back from that file?
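On the PTS question, the missing link is each stream's time_base. As a sketch of the idea with illustrative numbers (not values from FFMPEGWriter): video PTS counts in units of the video time_base (e.g. 1/25 s per frame), and audio PTS counts in units of the audio time_base (e.g. 1/44100 s, advancing by c->frame_size samples per packet), so video pts = 25 and audio pts = 44100 both mean the 1.0-second mark. The muxer stores each PTS together with its stream's time_base, which is how the demuxer and player line the two streams up. The hedged C++ sketch below shows the usual rescaling step before writing a packet; the function name write_packet is illustrative, not taken from the sample.

```cpp
extern "C" {
#include <libavformat/avformat.h>
}

// Illustrative helper: rescale a packet's PTS/DTS from codec units
// (frame index for video, sample count for audio) into the muxer
// stream's time_base, then hand it to the interleaving muxer, which
// orders video and audio packets by their rescaled timestamps.
static int write_packet(AVFormatContext* fmt_ctx, AVStream* stream,
                        AVRational codec_time_base, AVPacket* pkt)
{
    av_packet_rescale_ts(pkt, codec_time_base, stream->time_base);
    pkt->stream_index = stream->index;
    return av_interleaved_write_frame(fmt_ctx, pkt);
}
```

So even though the raw counters are unrelated, pts multiplied by time_base yields comparable seconds for both streams, and that product is what the decoder uses to synchronize them.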
Jeffrey Mcallister (Intel) wrote:
There is a Media SDK + FFmpeg example which has been used as a base for real-time encoding projects, under the previous version of the tutorials package at the Media Solutions Portal. Please note it is from the previous version of the package: you are free to use it, but it is far from a complete turnkey solution, is not in active development, and is no longer supported. This white paper may also help: https://software.intel.com/sites/default/files/article/326585/msdk-ffmpeg-white-paper.pdf
If you're looking for a complete streaming starting point, there are many options; Wowza is one example. The new Elecard demultiplexer plugin may also be of interest. Please watch for more plugins and partnerships as more solutions incorporate the hardware acceleration capabilities of Media SDK.
Hi Jeffrey!
This sample works perfectly only in Debug mode; when I build it in Release mode, it crashes. Is it just my bad luck, or a general trend?
Best regards,
Roman