<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>streaming with Media SDK (Media: Intel® Video Processing Library, Intel Media SDK)</title>
    <link>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919777#M487</link>
    <description>Hi,&lt;BR /&gt;&lt;BR /&gt;I have also observed the following.&lt;BR /&gt;&lt;B&gt;A&lt;/B&gt;. Suppose I set libtype=MFX_IMPL_SOFTWARE.&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN style="text-decoration: underline;"&gt;&lt;I&gt;1st call to av_read_frame()&lt;/I&gt;&lt;/SPAN&gt; sets&lt;BR /&gt;m_Bitstream.DataLength= 97501 bytes&lt;BR /&gt;m_Bitstream.TimeStamp = 60508&lt;BR /&gt;m_Bitstream.DataOffset =0&lt;BR /&gt;The bitstream is sent into the transcode function, which returns MFX_ERR_MORE_DATA_UVA, and the parameters are set to&lt;BR /&gt;&lt;BR /&gt;m_Bitstream.DataLength= 3&lt;BR /&gt;
m_Bitstream.TimeStamp = 60508&lt;BR /&gt;
m_Bitstream.DataOffset =97498&lt;BR /&gt;stOutBitstream-&amp;gt;DataLength = 0 &lt;BR /&gt;stOutBitstream-&amp;gt;TimeStamp = 0 &lt;BR /&gt;&lt;BR /&gt;&lt;SPAN style="text-decoration: underline;"&gt;&lt;I&gt;2nd call to av_read_frame()&lt;/I&gt;&lt;/SPAN&gt; sets&lt;BR /&gt;m_Bitstream.DataLength= 97501 +6794 = 104295&lt;BR /&gt;
m_Bitstream.TimeStamp = 60508&lt;BR /&gt;
m_Bitstream.DataOffset =0&lt;BR /&gt;&lt;BR /&gt;It is sent to transcoder(), which returns MFX_OUTPUT_BUFFER_ENABLE (meaning that transcoded output is available), and the parameters are set to&lt;BR /&gt;&lt;BR /&gt;m_Bitstream.DataLength= 6794&lt;BR /&gt;
m_Bitstream.TimeStamp = 60508&lt;BR /&gt;
m_Bitstream.DataOffset =97501&lt;BR /&gt;
stOutBitstream-&amp;gt;DataLength = 49610&lt;BR /&gt;
stOutBitstream-&amp;gt;TimeStamp =60508&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN style="text-decoration: underline;"&gt;&lt;I&gt;3rd call to av_read_frame()&lt;/I&gt;&lt;/SPAN&gt; sets&lt;BR /&gt;m_Bitstream.DataLength= 6794 + 7162 = 13956&lt;BR /&gt;
m_Bitstream.TimeStamp = 64809&lt;BR /&gt;
m_Bitstream.DataOffset =0&lt;BR /&gt;&lt;BR /&gt;After Transcode() is called, the parameters are set to&lt;BR /&gt;&lt;BR /&gt;m_Bitstream.DataLength= 7162&lt;BR /&gt;

m_Bitstream.TimeStamp = 64809&lt;BR /&gt;

m_Bitstream.DataOffset =6794&lt;BR /&gt;

stOutBitstream-&amp;gt;DataLength = 6515&lt;BR /&gt;

stOutBitstream-&amp;gt;TimeStamp =64809&lt;BR /&gt;&lt;BR /&gt;You can observe that the time stamps come out correctly here.&lt;BR /&gt;&lt;BR /&gt;&lt;B&gt;B.&lt;/B&gt; Now, keeping the same code, I set libtype=MFX_IMPL_HARDWARE.&lt;BR /&gt;&lt;BR /&gt;The following is observed:&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN style="text-decoration: underline;"&gt;&lt;I&gt;1st call to av_read_frame() &lt;/I&gt;&lt;/SPAN&gt;sets&lt;BR /&gt;m_Bitstream.DataLength= 97501 bytes&lt;BR /&gt;m_Bitstream.TimeStamp = 60508&lt;BR /&gt;m_Bitstream.DataOffset =0&lt;BR /&gt;&lt;BR /&gt;After Transcode() is called, the parameters are set to&lt;BR /&gt;m_Bitstream.DataLength= 3&lt;BR /&gt;
m_Bitstream.TimeStamp = 60508&lt;BR /&gt;
m_Bitstream.DataOffset =97498&lt;BR /&gt;
stOutBitstream-&amp;gt;DataLength = 0 &lt;BR /&gt;
stOutBitstream-&amp;gt;TimeStamp = 0&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN style="text-decoration: underline;"&gt;&lt;I&gt;2nd call to av_read_frame()&lt;/I&gt;&lt;/SPAN&gt; sets&lt;BR /&gt;m_Bitstream.DataLength= 97501 +6794 = 104295&lt;BR /&gt;
m_Bitstream.TimeStamp = 60508&lt;BR /&gt;
m_Bitstream.DataOffset =0&lt;BR /&gt;&lt;BR /&gt;After Transcode() is called, it returns MFX_ERR_MORE_DATA_UVA and the parameters are set to&lt;BR /&gt;&lt;BR /&gt;m_Bitstream.DataLength= 3&lt;BR /&gt;

m_Bitstream.TimeStamp = 60508&lt;BR /&gt;

m_Bitstream.DataOffset =104292&lt;BR /&gt;

stOutBitstream-&amp;gt;DataLength = 0&lt;BR /&gt;

stOutBitstream-&amp;gt;TimeStamp =0&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN style="text-decoration: underline;"&gt;&lt;I&gt;3rd call to av_read_frame()&lt;/I&gt;&lt;/SPAN&gt; sets&lt;BR /&gt;
m_Bitstream.DataLength= 104295 + 7162 = 111457&lt;BR /&gt;

m_Bitstream.TimeStamp = 60508&lt;BR /&gt;

m_Bitstream.DataOffset =0&lt;BR /&gt;
&lt;BR /&gt;
After Transcode() is called, it returns MFX_OUTPUT_BUFFER_ENABLE and the parameters are set to&lt;BR /&gt;
&lt;BR /&gt;
m_Bitstream.DataLength= 7162&lt;BR /&gt;


m_Bitstream.TimeStamp = 60508&lt;BR /&gt;


m_Bitstream.DataOffset =104295&lt;BR /&gt;


stOutBitstream-&amp;gt;DataLength = 16528&lt;BR /&gt;


stOutBitstream-&amp;gt;TimeStamp =60508&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN style="text-decoration: underline;"&gt;&lt;I&gt;4th call to av_read_frame()&lt;/I&gt;&lt;/SPAN&gt; sets&lt;BR /&gt;
m_Bitstream.DataLength= 7162 + 16541= 23703&lt;BR /&gt;

m_Bitstream.TimeStamp = 64809&lt;BR /&gt;

m_Bitstream.DataOffset =0&lt;BR /&gt;
&lt;BR /&gt;
After Transcode() is called, it returns MFX_OUTPUT_BUFFER_ENABLE and the parameters are set to&lt;BR /&gt;
&lt;BR /&gt;
m_Bitstream.DataLength= 16541&lt;BR /&gt;


m_Bitstream.TimeStamp = 60508&lt;BR /&gt;


m_Bitstream.DataOffset =7162&lt;BR /&gt;


stOutBitstream-&amp;gt;DataLength = 1463&lt;BR /&gt;


stOutBitstream-&amp;gt;TimeStamp =60508&lt;BR /&gt;&lt;BR /&gt;As you can see, the SW and HW settings behave differently, and with HW the time stamps also do not come out correctly.&lt;BR /&gt;&lt;BR /&gt;Please help me fix this issue; I need to use the HW accelerator in my application.</description>
    <pubDate>Fri, 04 Nov 2011 11:35:16 GMT</pubDate>
    <dc:creator>ujarijam</dc:creator>
    <dc:date>2011-11-04T11:35:16Z</dc:date>
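The DataLength/DataOffset numbers in the post above follow the usual mfxBitstream bookkeeping: the decoder consumes bytes by advancing DataOffset and shrinking DataLength, and before appending a new packet the application moves the unconsumed residue to the front of the buffer. The sketch below simulates that bookkeeping in plain Python (the Bitstream class and helper names are illustrative stand-ins, not Media SDK API):

```python
class Bitstream:
    """Toy stand-in for mfxBitstream: a byte buffer plus offset/length."""
    def __init__(self, capacity):
        self.data = bytearray(capacity)
        self.offset = 0   # DataOffset: start of unconsumed data
        self.length = 0   # DataLength: number of unconsumed bytes

def consume(bs, n):
    """Decoder side: n bytes were parsed, advance past them."""
    bs.offset += n
    bs.length -= n

def append_packet(bs, packet):
    """App side: move the residue to the front, then append new data."""
    if bs.offset > 0:
        bs.data[0:bs.length] = bs.data[bs.offset:bs.offset + bs.length]
        bs.offset = 0
    bs.data[bs.length:bs.length + len(packet)] = packet
    bs.length += len(packet)
```

With the numbers from the post: after feeding 97501 bytes the decoder leaves 3 bytes of residue (offset 97498); appending the next 6794-byte packet should then give offset 0 and length 6797, rather than re-feeding the whole 104295-byte buffer from offset 0.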
    <item>
      <title>streaming with Media SDK</title>
      <link>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919762#M472</link>
      <description>I am using the Intel Media SDK to transcode an RTP stream. The source carries TS packets over RTP, which I read with the FFmpeg av_read_frame(pFormatCtx, &amp;amp;packet) API. At the moment I read only the video frames and their corresponding timestamps. The video data is MPEG-2, and I want to convert it to H.264 using the Media SDK transcode API. After transcoding, TS and RTP packets are rebuilt and re-streamed to the destination. The idea is to convert the stream to H.264 data using the Media SDK.&lt;BR /&gt;&lt;BR /&gt;Now, the problem is that the Media SDK does not transcode on a frame basis.&lt;BR /&gt;&lt;BR /&gt;When I give it input (MPEG-2 video data of 97501 bytes with timestamp ts1), it asks for more data and m_Bitstream.DataOffset is set to about 97k. I then give it 6794 more bytes with timestamp ts2 from another av_read_frame call, after resetting m_Bitstream.DataOffset to 0. The transcoder API then produces transcoded output. Whenever the transcoder asks for more data, I feed it more input data and timestamps.&lt;BR /&gt;&lt;BR /&gt;The problem I have found is that the timestamps at the output do not come out correctly, and as a result I cannot stream properly. If the Media SDK could transcode frame by frame, I believe the timestamps would not be a problem.&lt;BR /&gt;&lt;BR /&gt;Can anyone help me out?&lt;BR /&gt;</description>
      <pubDate>Mon, 17 Oct 2011 08:14:23 GMT</pubDate>
      <guid>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919762#M472</guid>
      <dc:creator>ujarijam</dc:creator>
      <dc:date>2011-10-17T08:14:23Z</dc:date>
    </item>
    <item>
      <title>streaming with Media SDK</title>
      <link>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919763#M473</link>
      <description>Hi ujarijam,&lt;BR /&gt;&lt;BR /&gt;Media SDK does work on a frame-by-frame basis. However, since the recommended behavior, for best performance, of the Media SDK encoder (and decoder) is to use it in an asynchronous fashion, it is common that the call to EncodeFrameAsync will return -10 (MORE_DATA) so that additional frame encode requests can be queued up. This approach is used to gain best performance and is also showcased in the Media SDK samples.&lt;BR /&gt;&lt;BR /&gt;If you require EncodeFrameAsync to always return a complete frame, please use it in a synchronous fashion by calling SyncOperation directly after the call.&lt;BR /&gt;&lt;BR /&gt;Note that, if you want to use Media SDK for streaming or video conferencing purposes, then you should use the most recent beta release of Media SDK 3.0. Media SDK 2.0 is not suitable for that kind of workload due to high latency (among other things). For info on how to configure Media SDK for low latency please see the following forum item:&lt;BR /&gt;&lt;A href="http://software.intel.com/en-us/forums/showthread.php?t=86910&amp;amp;o=a&amp;amp;s=lr"&gt;http://software.intel.com/en-us/forums/showthread.php?t=86910&amp;amp;o=a&amp;amp;s=lr&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Regarding time stamps: if you use Media SDK encode in an asynchronous fashion, please make sure to write the time stamp to the surface (Data.TimeStamp) before calling EncodeFrameAsync. When the frame has been fully encoded, the time stamp can be found in the bitstream after calling SyncOperation. If you are using the synchronous approach you can keep track of the timestamps yourself (no need to write to the surface).&lt;BR /&gt;&lt;BR /&gt;As for the transcode scenario you describe (decoding an MPEG-2 bitstream), the same holds for timestamps. For the async approach, please set the timestamp in the bitstream structure before calling DecodeFrameAsync. For the synchronous approach, timestamps can be handled manually as in the encode case.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;DIV&gt;&lt;SPAN style="font-family: Verdana, Arial, Helvetica, sans-serif;"&gt;Petter&lt;BR /&gt;&lt;/SPAN&gt;&lt;/DIV&gt;</description>
      <pubDate>Mon, 17 Oct 2011 21:32:45 GMT</pubDate>
      <guid>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919763#M473</guid>
      <dc:creator>IDZ_A_Intel</dc:creator>
      <dc:date>2011-10-17T21:32:45Z</dc:date>
    </item>
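The MORE_DATA behaviour described in the reply above can be pictured as an encoder that queues a few frames before emitting anything. The toy model below is not Media SDK API, only an illustration of that queueing: it returns None, standing in for MFX_ERR_MORE_DATA, until its pipeline depth is filled, after which every newly submitted frame pushes one finished frame out, and a drain call flushes the tail:

```python
from collections import deque

class ToyAsyncEncoder:
    """Buffers `depth` frames before producing output, like an
    asynchronous encoder that answers MORE_DATA while filling up."""
    def __init__(self, depth):
        self.depth = depth
        self.queue = deque()

    def encode_frame_async(self, frame):
        self.queue.append(frame)
        if len(self.queue) > self.depth:
            return self.queue.popleft()   # output available
        return None                       # stand-in for MFX_ERR_MORE_DATA

    def drain(self):
        """Flush whatever is still queued at end of stream."""
        while self.queue:
            yield self.queue.popleft()
```

With depth 2, the first two submissions yield nothing, then output begins to flow one frame behind the input, which is exactly the pattern of MORE_DATA returns the samples show.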
    <item>
      <title>streaming with Media SDK</title>
      <link>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919764#M474</link>
      <description>Thanks for the prompt reply.&lt;DIV&gt;Let me explain clearly what I am doing.&lt;/DIV&gt;&lt;DIV&gt;I am using Version 3.0.442.32245 of the Media SDK. I am using sample_multi_transcode for my transcoding (from MPEG-2 video to H.264 video) and streaming, and I call the Transcode() function to transcode.&lt;/DIV&gt;&lt;DIV&gt;I set the following:&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;1. m_mfxDecParams.mfx.DecodedOrder = 0; in the&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;mfxStatus CTranscodingPipeline::InitDecMfxParams(sInputParams *pInParams, mfxBitstream* m_pmfxBS1) function.&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;2. m_mfxEncParams.mfx.GopRefDist = 1; for no B-frames, in the&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;mfxStatus CTranscodingPipeline::InitEncMfxParams(sInputParams *pInParams) function.&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;My input time stamps, supplied through the m_pmfxBS structure, are:&lt;/DIV&gt;&lt;DIV&gt;&lt;DIV id="_mcePaste"&gt;PTSin= 60508&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;PTSin= 64809&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;PTSin= 66969&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;PTSin= 71289&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;PTSin= 75609&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;PTSin= 79929&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;PTSin= 82089&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;PTSin= 86396&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;PTSin= 90716&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;PTSin= 92876&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;PTSin= 97196&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;PTSin= 
99356&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;PTSin= 103676&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;PTSin= 107996&lt;/DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;My output time stamps, obtained through pBitstreamEx-&amp;gt;Bitstream.TimeStamp, are:&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;DIV&gt;PTSout= 60508&lt;/DIV&gt;&lt;DIV&gt;PTSout= 64809&lt;/DIV&gt;&lt;DIV&gt;PTSout= 66969&lt;/DIV&gt;&lt;DIV&gt;PTSout= 60508&lt;/DIV&gt;&lt;DIV&gt;PTSout= 75609&lt;/DIV&gt;&lt;DIV&gt;PTSout= 79929&lt;/DIV&gt;&lt;DIV&gt;PTSout= 71289&lt;/DIV&gt;&lt;DIV&gt;PTSout= 86396&lt;/DIV&gt;&lt;DIV&gt;PTSout= 90716&lt;/DIV&gt;&lt;DIV&gt;PTSout= 82089&lt;/DIV&gt;&lt;DIV&gt;PTSout= 97196&lt;/DIV&gt;&lt;DIV&gt;PTSout= 99356&lt;/DIV&gt;&lt;DIV&gt;PTSout= 92876&lt;/DIV&gt;&lt;DIV&gt;PTSout= 107996&lt;/DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;If you look at the time stamps, after roughly every two time stamps a smaller (earlier) time stamp appears. Why are the time stamps out of order? As a result, the streaming does not come out correctly.&lt;/DIV&gt;&lt;DIV&gt;Can you please help me?&lt;/DIV&gt;</description>
      <pubDate>Tue, 18 Oct 2011 14:23:52 GMT</pubDate>
      <guid>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919764#M474</guid>
      <dc:creator>ujarijam</dc:creator>
      <dc:date>2011-10-18T14:23:52Z</dc:date>
    </item>
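One common way to restore monotonically increasing time stamps when frames arrive slightly out of order, as in the PTSout list above, is a small reorder buffer: hold the most recent output time stamps in a min-heap and only release the smallest once the buffer exceeds the maximum expected displacement. A sketch of that idea, assuming the displacement is bounded (here by 3 frames); this is a generic technique, not something the Media SDK samples provide:

```python
import heapq

def reorder(pts_stream, lookahead=3):
    """Yield time stamps in increasing order, assuming no frame
    arrives more than `lookahead` positions away from its slot."""
    heap = []
    for pts in pts_stream:
        heapq.heappush(heap, pts)
        if len(heap) > lookahead:
            yield heapq.heappop(heap)   # smallest pending PTS
    # end of stream: flush whatever is still buffered, in order
    while heap:
        yield heapq.heappop(heap)
```

Note this only fixes ordering; it cannot repair duplicated or overwritten time stamps, which have to be fixed at the point where they are written.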
    <item>
      <title>streaming with Media SDK</title>
      <link>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919765#M475</link>
      <description>Hi ujarijam,&lt;BR /&gt;&lt;BR /&gt;The transcode sample was written to illustrate a transcode pipeline with optimal throughput. To achieve this, the encode, VPP and decode operations in the pipeline are made as asynchronous as possible, which also leads to out-of-order delivery of encoded frames in the bit stream.&lt;BR /&gt;&lt;BR /&gt;In streaming use cases throughput is likely not the central concern; latency and robustness matter more. If you require the encoder to deliver frames in an ordered manner, you have to make some changes to the sample: either introduce additional sync points, or reorder the bit stream chunks delivered after SyncOperation is called.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;DIV&gt;Petter&lt;/DIV&gt;</description>
      <pubDate>Tue, 18 Oct 2011 17:47:33 GMT</pubDate>
      <guid>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919765#M475</guid>
      <dc:creator>IDZ_A_Intel</dc:creator>
      <dc:date>2011-10-18T17:47:33Z</dc:date>
    </item>
    <item>
      <title>streaming with Media SDK</title>
      <link>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919766#M476</link>
      <description>Thanks once again for the prompt answer.&lt;DIV&gt;What changes do I need to make to get the time stamps in increasing order after calling the encoder? Can you give me one example, please?&lt;DIV&gt;Thank you in advance.&lt;/DIV&gt;&lt;/DIV&gt;</description>
      <pubDate>Tue, 18 Oct 2011 19:02:15 GMT</pubDate>
      <guid>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919766#M476</guid>
      <dc:creator>ujarijam</dc:creator>
      <dc:date>2011-10-18T19:02:15Z</dc:date>
    </item>
    <item>
      <title>streaming with Media SDK</title>
      <link>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919767#M477</link>
      <description>Hi,&lt;DIV&gt;Since we do not have sample code available for this scenario, I unfortunately do not have any code I can clip out and send to you. Could you please explore the two suggestions I gave in my previous post? If you are still not able to progress, let us know and we may be able to find some time to create some code examples.&lt;/DIV&gt;&lt;DIV&gt;Regards,&lt;/DIV&gt;&lt;DIV&gt;Petter&lt;/DIV&gt;</description>
      <pubDate>Tue, 18 Oct 2011 19:24:43 GMT</pubDate>
      <guid>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919767#M477</guid>
      <dc:creator>IDZ_A_Intel</dc:creator>
      <dc:date>2011-10-18T19:24:43Z</dc:date>
    </item>
    <item>
      <title>streaming with Media SDK</title>
      <link>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919768#M478</link>
      <description>Hi,&lt;DIV&gt;Thanks for the answer.&lt;/DIV&gt;&lt;DIV&gt;I have some more questions about your answers above.&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;1. You said that "...Media SDK does work on a frame by frame basis...".&lt;/DIV&gt;&lt;DIV&gt;What changes must be made to the present code to make it work frame by frame, given that the Media SDK transcoding function is a multi-frame function? (Note that in transcode, decode is called first and encode later, if I do not want to use the VPP function.) I have observed that even when I give input data as large as 300k bytes, the decoder still asks for more data!&lt;/DIV&gt;&lt;DIV&gt;2. You also said "...If you are requiring encoder to deliver frames in a ordered manner you have to make some changes to the sample to introduce either additional sync points or...".&lt;/DIV&gt;&lt;DIV&gt;How and where do I add sync points between the decoder and the encoder?&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;3. If you look at my second question above, in the lists of input and output time stamps the first and fourth output time stamps are the same; that is, the first time stamp is duplicated once. Can you comment on this?&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;Thank you in advance.&lt;/DIV&gt;</description>
      <pubDate>Wed, 19 Oct 2011 18:25:53 GMT</pubDate>
      <guid>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919768#M478</guid>
      <dc:creator>ujarijam</dc:creator>
      <dc:date>2011-10-19T18:25:53Z</dc:date>
    </item>
    <item>
      <title>streaming with Media SDK</title>
      <link>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919769#M479</link>
      <description>Hi ujarijam,&lt;BR /&gt;&lt;BR /&gt;1. What I meant by "Media SDK works on a frame basis" was that the API does not expose the ability to handle decode or encode at the slice level.&lt;DIV&gt;sample_multi_transcode is written to showcase good performance, which is achieved by processing frames in an asynchronous manner. As a result, frames will likely be delivered out of order as soon as they have been processed.&lt;BR /&gt;&lt;BR /&gt;Both the encoder and decoder sample implementations also work this way. For instance, during decode, DecodeFrameAsync is called repeatedly (this is the reason you are asked to input more data) until all buffers have been allocated. At that point we are forced to sync to retrieve the decoded frame.&lt;BR /&gt;&lt;BR /&gt;To ensure frames are delivered in order, and if your application does not depend on optimal throughput, you can certainly use the API in a synchronous manner. For instance:&lt;/DIV&gt;&lt;DIV&gt;DecodeFrameAsync()&lt;/DIV&gt;&lt;DIV&gt;SyncOperation()&lt;/DIV&gt;&lt;DIV&gt;EncodeFrameAsync()&lt;/DIV&gt;&lt;DIV&gt;SyncOperation()&lt;BR /&gt;&lt;BR /&gt;Note that none of the available samples are written this way. That said, you should be able to reuse the common code parts.&lt;BR /&gt;&lt;BR /&gt;2. The SyncOperation call is the call I refer to. When SyncOperation is called, Media SDK will wait until the scheduled operation is ready.&lt;BR /&gt;&lt;BR /&gt;3. I cannot reproduce duplicate timestamps here. Can you please check your implementation, or the place where you print the timestamp data, to make sure it is correct?&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;DIV&gt;&lt;SPAN style="font-family: Verdana, Arial, Helvetica, sans-serif;"&gt;Petter&lt;BR /&gt;&lt;/SPAN&gt;&lt;/DIV&gt;&lt;/DIV&gt;</description>
      <pubDate>Wed, 19 Oct 2011 19:14:20 GMT</pubDate>
      <guid>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919769#M479</guid>
      <dc:creator>IDZ_A_Intel</dc:creator>
      <dc:date>2011-10-19T19:14:20Z</dc:date>
    </item>
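The synchronous pattern listed above (DecodeFrameAsync, SyncOperation, EncodeFrameAsync, SyncOperation) trades throughput for strict ordering: each operation is forced to finish before the next frame is submitted, so output order matches input order. A toy simulation of that control flow follows; the ToySession, decode and encode callables are illustrative stand-ins, not the Media SDK classes:

```python
class ToySession:
    """Mimics async submit plus SyncOperation: submitting returns a
    sync point (a thunk); sync_operation runs it and returns the result."""
    def submit(self, func, arg):
        return lambda: func(arg)

    def sync_operation(self, syncpoint):
        return syncpoint()

def transcode_sync(session, frames, decode, encode):
    """Fully synchronous loop: decode, sync, encode, sync, per frame."""
    out = []
    for frame in frames:
        sp_dec = session.submit(decode, frame)
        surface = session.sync_operation(sp_dec)    # wait for decode
        sp_enc = session.submit(encode, surface)
        out.append(session.sync_operation(sp_enc))  # wait for encode
    return out
```

Because every sync point is resolved before the next submission, no reorder buffer is needed; the cost is that the decoder and encoder never run concurrently.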
    <item>
      <title>streaming with Media SDK</title>
      <link>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919770#M480</link>
      <description>Hi Petter,&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;Thanks for the answers.&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;1. I am in the process of changing the current code so that it delivers output frame by frame, as per your answers.&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;2. You asked "...Can you please check your implementation or the place where you print the timestamp data to make sure it is correct?"&lt;/DIV&gt;&lt;DIV&gt;I use pBitstreamEx-&amp;gt;Bitstream.TimeStamp, printed in the mfxStatus CTranscodingPipeline::PutBS(mfxBitstream *stOutPutBitStream) function in the pipeline_transcode.cpp file.&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;3. One more question: does the Media SDK H.264 encoder emit multiple SPS and PPS headers? If so, what settings are needed? Repeated SPS and PPS may be required for streaming.&lt;/DIV&gt;</description>
      <pubDate>Thu, 20 Oct 2011 09:59:33 GMT</pubDate>
      <guid>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919770#M480</guid>
      <dc:creator>ujarijam</dc:creator>
      <dc:date>2011-10-20T09:59:33Z</dc:date>
    </item>
    <item>
      <title>streaming with Media SDK</title>
      <link>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919771#M481</link>
      <description>Hi Petter,&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;I am reproducing the&lt;/SPAN&gt;DecodeOneFrame&lt;SPAN&gt;function here where is syncoperation function is added immediate after whenever sdk asks for more data&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;DIV id="_mcePaste"&gt;mfxStatus CTranscodingPipeline::DecodeOneFrame(ExtendedSurface *pExtSurface)&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;{&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;  MSDK_CHECK_POINTER(pExtSurface, MFX_ERR_NULL_PTR);&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;  mfxStatus sts = MFX_ERR_MORE_SURFACE;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;  mfxFrameSurface1  *pmfxSurface = NULL;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;  pExtSurface-&amp;gt;pSurface = NULL;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;  mfxU32 i = 0; &lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;  while (MFX_ERR_MORE_DATA == sts || MFX_ERR_MORE_SURFACE == sts || MFX_ERR_NONE &amp;lt; sts)     &lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;  {    &lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;    if (MFX_WRN_DEVICE_BUSY == sts)&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;    {&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;      Sleep(TIME_TO_SLEEP); // just wait and then repeat the same call to DecodeFrameAsync&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;    }&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;    else if (MFX_ERR_MORE_DATA == sts)&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;    {&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;      sts = m_pBSProcessor-&amp;gt;GetInputBitstream(&amp;amp;m_pmfxBS); // read more data to input bit stream      &lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;      MSDK_BREAK_ON_ERROR(sts);      &lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;    }&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;    else if (MFX_ERR_MORE_SURFACE == sts)&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;    {&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;      // find new working 
surface &lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;      for (i = 0; i &amp;lt; MSDK_DEC_WAIT_INTERVAL; i += 5)&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;      {&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;        pmfxSurface = GetFreeSurface(true);      &lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;        if (pmfxSurface)&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;        {&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;          break;          &lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;        }&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;        else&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;        {&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;          Sleep(TIME_TO_SLEEP);        &lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;        }&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;      }    &lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;      MSDK_CHECK_POINTER(pmfxSurface, MFX_ERR_MEMORY_ALLOC); // return an error if a free surface wasn't found&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;    }&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;    sts = m_pmfxDEC-&amp;gt;DecodeFrameAsync(m_pmfxBS, pmfxSurface, &amp;amp;pExtSurface-&amp;gt;pSurface, &amp;amp;pExtSurface-&amp;gt;Syncp);&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;// code added here&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;		if(MFX_ERR_MORE_DATA == sts)//ujar&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;		{&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;			sts =m_pmfxSession-&amp;gt;SyncOperation(pExtSurface-&amp;gt;Syncp, MSDK_WAIT_INTERVAL);			&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;		}&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;    // ignore warnings if output is available,    &lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;    if (MFX_ERR_NONE &amp;lt; sts &amp;amp;&amp;amp; pExtSurface-&amp;gt;Syncp)&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;    
{&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;      sts = MFX_ERR_NONE;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;    }&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;    &lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;  } //while processing &lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;  return sts;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;}&lt;/DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;when one frame of data is given, DecodeFrameAsync() asks for more data. if a Sync operation is called like above it returnsMFX_ERR_NULL_PTR. That means there is no output, and exits the code.&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;Help me to how to proceed to make it work as frame by frame?&lt;/DIV&gt;</description>
      <pubDate>Fri, 21 Oct 2011 07:19:56 GMT</pubDate>
      <guid>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919771#M481</guid>
      <dc:creator>ujarijam</dc:creator>
      <dc:date>2011-10-21T07:19:56Z</dc:date>
    </item>
    <item>
      <title>streaming with Media SDK</title>
      <link>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919772#M482</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Let me first comment on your previous post (Q3). You can use the IdrInterval parameter to control how often IDRs should be generated. Media SDK generates one SPS/PPS per IDR.&lt;DIV&gt;&lt;BR /&gt;Regarding your code snippet:&lt;/DIV&gt;&lt;DIV&gt;MFX_ERR_MORE_DATA means that the decoder does not have enough data to decode one frame. The application must insert more data into the bit stream buffer so that a complete frame becomes available to the decoder. As you can see, the sample code has logic to read more data into the bit stream.&lt;BR /&gt;So, regarding your code: do not call SyncOperation until DecodeFrameAsync returns MFX_ERR_NONE.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;/DIV&gt;&lt;DIV&gt;Petter&lt;/DIV&gt;</description>
      <pubDate>Fri, 21 Oct 2011 22:47:52 GMT</pubDate>
      <guid>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919772#M482</guid>
      <dc:creator>IDZ_A_Intel</dc:creator>
      <dc:date>2011-10-21T22:47:52Z</dc:date>
    </item>
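The fix described above amounts to gating SyncOperation on the status code: on MORE_DATA keep feeding the bitstream, and only when DecodeFrameAsync reports success is there a valid sync point to wait on. A minimal sketch of that control flow, using a fake decoder that needs several chunks per frame; the two status constants mirror real mfxStatus values, everything else is illustrative:

```python
MFX_ERR_NONE = 0
MFX_ERR_MORE_DATA = -10   # decoder needs more input before it can emit a frame

class FakeDecoder:
    """Emits one frame for every `chunks_per_frame` chunks it is fed."""
    def __init__(self, chunks_per_frame):
        self.chunks_per_frame = chunks_per_frame
        self.buffered = 0

    def decode_frame_async(self, chunk):
        self.buffered += 1
        if self.buffered == self.chunks_per_frame:
            self.buffered = 0
            return MFX_ERR_NONE, object()   # (status, sync point)
        return MFX_ERR_MORE_DATA, None

def decode_stream(decoder, chunks):
    frames = 0
    for chunk in chunks:
        sts, syncp = decoder.decode_frame_async(chunk)
        if sts == MFX_ERR_NONE:
            # only now is it valid to wait on syncp (SyncOperation)
            frames += 1
    return frames
```

Calling the sync step on a MORE_DATA status, as in the posted DecodeOneFrame, hands the session a null sync point, which is exactly why it reports MFX_ERR_NULL_PTR.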
    <item>
      <title>streaming with Media SDK</title>
      <link>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919773#M483</link>
      <description>Hi Petter,&lt;DIV&gt;&lt;SPAN style="font-family: Verdana, Arial, Helvetica, sans-serif;"&gt;Thanks for your reply once again.&lt;BR /&gt;&lt;/SPAN&gt;&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;I did whatever you asked me to do to get the right timestamps, and I am now getting correct time stamps and can stream nicely. As long as stInputParam-&amp;gt;libType=UV_MFX_IMPL_SOFTWARE is set, RTP streaming works fine. I am using an i7 machine with a Quick Sync processor. But when I set stInputParam-&amp;gt;libType=UV_MFX_IMPL_HARDWARE or UV_MFX_IMPL_AUTO_ANY, I get the return message MFX_ERR_DEVICE_FAILED from sts, as shown in the following code. As you know, I am using the transcode part of the code to transcode from MPEG-2 to H.264.&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;DIV id="_mcePaste"&gt;// Init encode&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;  if (m_pmfxENC.get())&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;  {  &lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;    sts = m_pmfxENC-&amp;gt;Init(&amp;amp;m_mfxEncParams);    &lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;    MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, sts);&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;  }&lt;/DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;Can you please help me? Why this error message, and how do I resolve it? I want to use the Quick Sync GPU to reduce the load on the CPU.&lt;/DIV&gt;&lt;/DIV&gt;</description>
      <pubDate>Tue, 01 Nov 2011 19:06:14 GMT</pubDate>
      <guid>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919773#M483</guid>
      <dc:creator>ujarijam</dc:creator>
      <dc:date>2011-11-01T19:06:14Z</dc:date>
    </item>
    <item>
      <title>streaming with Media SDK</title>
      <link>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919774#M484</link>
      <description>Hi ujarijam,&lt;BR /&gt;&lt;BR /&gt;Can you please confirm whether the default Media SDK transcode sample (or the decode and encode samples) works with HW acceleration?&lt;BR /&gt;&lt;BR /&gt;If it does, then the issue likely has to do with the code changes you have made to the transcode sample.&lt;BR /&gt;&lt;BR /&gt;MFX_ERR_DEVICE_FAILED unfortunately does not give much info about what is going on. To help you better, can you please supply more details of the code changes you have made, plus some further details about your machine, such as CPU name, driver version, laptop or desktop, and whether another discrete graphics card is installed?&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;DIV&gt;Petter&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;</description>
      <pubDate>Tue, 01 Nov 2011 23:06:20 GMT</pubDate>
      <guid>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919774#M484</guid>
      <dc:creator>IDZ_A_Intel</dc:creator>
      <dc:date>2011-11-01T23:06:20Z</dc:date>
    </item>
    <item>
      <title>streaming with Media SDK</title>
      <link>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919775#M485</link>
      <description>&lt;DIV&gt;Hi Petter,&lt;/DIV&gt;&lt;DIV&gt;Thanks for the answer.&lt;/DIV&gt;The Media SDK sample works with HW acceleration.&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;After debugging I found that Media SDK was not getting the sequence_header_code from the input. Once that was fixed I was able to get transcoded output, but the time stamps are not correct.&lt;/DIV&gt;&lt;DIV&gt;I have a few questions:&lt;/DIV&gt;&lt;DIV&gt;1. This did not happen when UV_MFX_IMPL_SOFTWARE was set. Why?&lt;/DIV&gt;&lt;DIV&gt;2. The code was working well with libType=UV_MFX_IMPL_SOFTWARE, streaming nicely with correct time stamps, but when it was changed to libType=UV_MFX_IMPL_HARDWARE the following changes were observed:&lt;/DIV&gt;&lt;DIV&gt; a) the time stamps were no longer correct;&lt;/DIV&gt;&lt;DIV&gt; b) the H.264 output was also different. An access unit delimiter NALU (00 00 00 01 09...) was followed by SPS, PPS, SEI and an IDR frame, then again an access unit delimiter, SEI, PPS, SEI and a P frame, and so on.&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt; Why the access unit delimiter here, whereas there was none when UV_MFX_IMPL_SOFTWARE was set?&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;Please help me to fix this issue.&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;</description>
      <pubDate>Thu, 03 Nov 2011 18:00:04 GMT</pubDate>
      <guid>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919775#M485</guid>
      <dc:creator>ujarijam</dc:creator>
      <dc:date>2011-11-03T18:00:04Z</dc:date>
    </item>
    <item>
      <title>streaming with Media SDK</title>
      <link>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919776#M486</link>
      <description>Hi Petter,&lt;BR /&gt;I am sorry I have not answered all your questions in your #12.&lt;BR /&gt;My desktop PC details:&lt;BR /&gt;Processor : Intel Core i7-2600K CPU @ 3.4GHz&lt;BR /&gt;Installed memory (RAM): 8.00 GB (3.16 GB usable)&lt;BR /&gt;System type : 32-bit Operating System (Windows 7 Ultimate)&lt;BR /&gt;Display adapters : Intel HD Graphics Family &lt;BR /&gt; Driver version: 8.15.10.2372&lt;BR /&gt; Driver date : 4/15/2011&lt;BR /&gt;&lt;BR /&gt;Motherboard : ASUS P8Z68-V PRO&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;The following changes were made in the code. &lt;BR /&gt;I call the FFMPEG RTP source API and my own network RTP output dump APIs. &lt;BR /&gt;&lt;BR /&gt;mfxStatus CTranscodingPipeline::InitDecMfxParams(sInputParams *pInParams,mfxBitstream* m_pmfxBS1)&lt;BR /&gt;{&lt;BR /&gt;&lt;BR /&gt; ----------&lt;BR /&gt; m_mfxDecParams.mfx.DecodedOrder   = 0;// we get in display order&lt;BR /&gt; ----------&lt;BR /&gt;&lt;BR /&gt;}//mfxStatus CTranscodingPipeline::InitDecMfxParams(sInputParams *pInParams,mfxBitstream* m_pmfxBS1)&lt;BR /&gt;mfxStatus CTranscodingPipeline::InitEncMfxParams(sInputParams *pInParams)&lt;BR /&gt;{&lt;BR /&gt; MSDK_CHECK_POINTER(pInParams, MFX_ERR_NULL_PTR); &lt;BR /&gt; m_mfxEncParams.mfx.CodecId = pInParams-&amp;gt;EncodeId;&lt;BR /&gt; m_mfxEncParams.mfx.TargetUsage = pInParams-&amp;gt;nTargetUsage; // trade-off between quality and speed&lt;BR /&gt; m_mfxEncParams.mfx.RateControlMethod = MFX_RATECONTROL_CBR; &lt;BR /&gt; &lt;BR /&gt; m_mfxEncParams.mfx.EncodedOrder = 0; // binary flag, 0 signals encoder to take frames in display order&lt;BR /&gt; m_mfxEncParams.mfx.NumSlice = pInParams-&amp;gt;nSlices; &lt;BR /&gt; // added&lt;BR /&gt; m_mfxEncParams.mfx.GopRefDist    = 1; //no B frames &lt;BR /&gt; //m_mfxEncParams.mfx.IdrInterval    = 0; &lt;BR /&gt; m_mfxEncParams.AsyncDepth     = 1;&lt;BR /&gt; m_mfxEncParams.Protected     = 0;&lt;BR /&gt; m_mfxEncParams.mfx.CodecProfile    = 
MFX_PROFILE_AVC_BASELINE; &lt;BR /&gt; m_mfxEncParams.mfx.GopPicSize    = 30;&lt;BR /&gt; m_mfxEncParams.mfx.MaxKbps     = 2000;&lt;BR /&gt; //m_mfxEncParams.mfx.TargetKbps    = 1500;&lt;BR /&gt; m_mfxEncParams.mfx.InitialDelayInKB   = 0;&lt;BR /&gt; m_mfxEncParams.mfx.NumRefFrame    = 1;&lt;BR /&gt; m_mfxEncParams.mfx.FrameInfo.PicStruct  = MFX_PICSTRUCT_PROGRESSIVE;&lt;BR /&gt; ---------------&lt;BR /&gt; &lt;BR /&gt;}//mfxStatus CTranscodingPipeline::InitEncMfxParams(sInputParams *pInParams)&lt;BR /&gt;&lt;BR /&gt;int main(mfxI32 argc,mfxI8 **argv)&lt;BR /&gt;{&lt;BR /&gt; sInputParams stInputParams;&lt;BR /&gt; -------&lt;BR /&gt; stInputParam.DecodeId = UV_MFX_CODEC_MPEG2;&lt;BR /&gt; stInputParam.EncodeId = UV_MFX_CODEC_AVC;&lt;BR /&gt; stInputParam.nTargetUsage=MFX_TARGETUSAGE_BALANCED;&lt;BR /&gt; //stInputParam-&amp;gt;nAsyncDepth=0;&lt;BR /&gt; stInputParam.nBitRate = 1500;&lt;BR /&gt; stInputParam.libType =UV_MFX_IMPL_HARDWARE;//MFX_IMPL_SOFTWARE;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt; while(av_read_frame(pFormatCtx, &amp;amp;packet)&amp;gt;=0)//FFMPEG API to read data from input port&lt;BR /&gt;  {   &lt;BR /&gt;   if(packet.stream_index==videoStream)&lt;BR /&gt;   {&lt;BR /&gt;    if(packet.data[3]!=0xb3)//searching for sequence_header_code&lt;BR /&gt;     continue;    &lt;BR /&gt;    m_PTS = streams-&amp;gt;parser-&amp;gt;pts - streams-&amp;gt;start_time;//timestamp&lt;BR /&gt;    if(m_PTS==prevTS2)//skip if same Time stamp&lt;BR /&gt;     continue;&lt;BR /&gt;    prevTS2=m_PTS;&lt;BR /&gt;    memcpy(ubFileBuffer,packet.data,packet.size);&lt;BR /&gt;    ulTotalBytesInBuffer = packet.size;&lt;BR /&gt;&lt;BR /&gt;    //construct input Bitstream structure      &lt;BR /&gt;    m_Bitstream.Data=ubFileBuffer;&lt;BR /&gt;    m_Bitstream.DataLength =ulTotalBytesInBuffer;&lt;BR /&gt;    m_Bitstream.DataOffset =0;&lt;BR /&gt;    m_Bitstream.MaxLength = ullStreamLength;    &lt;BR /&gt;    m_Bitstream.TimeStamp=m_PTS;&lt;BR /&gt;    stInputData.ulInputDataSize[0] = 
packet.size;&lt;BR /&gt;    stInputData.ullTimeStamp[0] = (mfxU64)m_PTS;&lt;BR /&gt;    prevTS1=m_PTS;&lt;BR /&gt;    m_pmfxBS = &amp;amp;m_Bitstream; //m_pmfxBS is passed into lib&lt;BR /&gt;&lt;BR /&gt;    // Init Transcode-Lib&lt;BR /&gt;    sts = UvTranscode.Init(stInputParams,m_SessionArray,&amp;amp;m_pSessionArray,m_pmfxBS); &lt;BR /&gt;    if(MFX_ERR_MORE_DATA_UVA==sts)&lt;BR /&gt;     continue;&lt;BR /&gt;    else&lt;BR /&gt;     break;&lt;BR /&gt;   }&lt;BR /&gt;   else&lt;BR /&gt;    continue;&lt;BR /&gt; &lt;BR /&gt;  }&lt;BR /&gt; &lt;BR /&gt; &lt;BR /&gt; MSDK_CHECK_PARSE_RESULT(sts, MFX_ERR_NONE, 1);&lt;BR /&gt;&lt;BR /&gt;ThreadTranscodeContext *pContext = m_pSessionArray;&lt;BR /&gt; pContext-&amp;gt;transcodingSts = MFX_ERR_NONE;&lt;BR /&gt; //Get output Buffer&lt;BR /&gt; sts = GetOutputBuffPointer(&amp;amp;stOBS);&lt;BR /&gt; stOutBitstream = &amp;amp;stOBS;&lt;BR /&gt;iTotalFrames = *stNwkRendParam.lNumbFrames;&lt;BR /&gt;//start transcoding&lt;BR /&gt; while(1)&lt;BR /&gt; {    &lt;BR /&gt;  if (FrameNumber&amp;lt;iTotalFrames)  &lt;BR /&gt;  {&lt;BR /&gt;   // if push model and need more data - try to get bitstream&lt;BR /&gt;   if (MFX_ERR_NONE == pContext-&amp;gt;transcodingSts)&lt;BR /&gt;   {&lt;BR /&gt;    pContext-&amp;gt;transcodingSts = pContext-&amp;gt;pPipeline-&amp;gt;Transcode(stOutBitstream);&lt;BR /&gt;    &lt;BR /&gt;   }&lt;BR /&gt;   if(m_Bitstream.DataOffset)&lt;BR /&gt;    if(MFX_OUTPUT_BUFFER_ENABLE == pContext-&amp;gt;transcodingSts)&lt;BR /&gt;    {&lt;BR /&gt;     &lt;BR /&gt;&lt;BR /&gt;     //correcting corrupt time stamps (work-around); lib is supposed to give correct time stamps     &lt;BR /&gt;     if(stOutBitstream-&amp;gt;TimeStamp&amp;lt;ulPrevTSOut)     {&lt;BR /&gt;      if(m_Bitstream.DataOffset == stInputData.ulInputDataSize[0])&lt;BR /&gt;       stOutBitstream-&amp;gt;TimeStamp=stInputData.ullTimeStamp[0];   &lt;BR /&gt;     &lt;BR /&gt;     }&lt;BR /&gt;     //send it to 
make TS and RTP packets and send to output port&lt;BR /&gt;      SendToPort(stOutBitstream);&lt;BR /&gt;      ulPrevTSOut = stOutBitstream-&amp;gt;TimeStamp;&lt;BR /&gt;      &lt;BR /&gt;      //keep in queue&lt;BR /&gt;      for(int j=0;j&amp;lt;ulFilledInIndices-1;j++)      {&lt;BR /&gt;       stInputData.ulInputDataSize[j] = stInputData.ulInputDataSize[j+1];&lt;BR /&gt;       stInputData.ullTimeStamp[j] = stInputData.ullTimeStamp[j+1];      &lt;BR /&gt;      &lt;BR /&gt;      }&lt;BR /&gt;      ulInputIndex = ulFilledInIndices-2;&lt;BR /&gt;&lt;BR /&gt;     memset(stOutBitstream-&amp;gt;Data + stOutBitstream-&amp;gt;DataOffset,0,stOutBitstream-&amp;gt;DataLength);&lt;BR /&gt;     stOutBitstream-&amp;gt;DataLength =0;&lt;BR /&gt;     stOutBitstream-&amp;gt;DataOffset =0;&lt;BR /&gt;     pContext-&amp;gt;transcodingSts = MFX_ERR_NONE;&lt;BR /&gt;     FrameNumber++;&lt;BR /&gt;     bFiledump=true;    &lt;BR /&gt;    }&lt;BR /&gt;   if(m_Bitstream.DataOffset)&lt;BR /&gt;   {&lt;BR /&gt;    if ((MFX_ERR_MORE_DATA_UVA == pContext-&amp;gt;transcodingSts)||(MFX_ERR_NONE == pContext-&amp;gt;transcodingSts))&lt;BR /&gt;    {&lt;BR /&gt;     //Read more data&lt;BR /&gt; REPEAT:  if(av_read_frame(pFormatCtx, &amp;amp;packet)&amp;gt;=0)//FFMPEG API to read data from input port&lt;BR /&gt;     {      &lt;BR /&gt;      if(packet.stream_index==videoStream)&lt;BR /&gt;      {       &lt;BR /&gt;       if(bFiledump==false)&lt;BR /&gt;       {&lt;BR /&gt;        m_PTS = streams-&amp;gt;parser-&amp;gt;pts - streams-&amp;gt;start_time;//timestamp&lt;BR /&gt;        if(m_PTS==prevTS2)&lt;BR /&gt;         goto REPEAT;&lt;BR /&gt;        prevTS2=m_PTS;&lt;BR /&gt;        memcpy(m_Bitstream.Data + ulTotalBytesInBuffer,packet.data,packet.size);&lt;BR /&gt;        ulTotalBytesInBuffer += packet.size;&lt;BR /&gt;        m_Bitstream.DataLength=ulTotalBytesInBuffer;&lt;BR /&gt;        m_Bitstream.DataOffset=0;        &lt;BR /&gt;        
m_Bitstream.TimeStamp=stInputData.ullTimeStamp[0];&lt;BR /&gt;        ulInputIndex++;&lt;BR /&gt;        stInputData.ulInputDataSize[ulInputIndex] = packet.size;&lt;BR /&gt;        stInputData.ullTimeStamp[ulInputIndex] = (mfxU64)m_PTS;&lt;BR /&gt;       }&lt;BR /&gt;       else if(bFiledump==true)&lt;BR /&gt;       {        &lt;BR /&gt;        m_PTS = streams-&amp;gt;parser-&amp;gt;pts - streams-&amp;gt;start_time;//timestamp&lt;BR /&gt;        if(m_PTS==prevTS2)&lt;BR /&gt;         goto REPEAT;&lt;BR /&gt;        prevTS2=m_PTS;&lt;BR /&gt;        memcpy(m_Bitstream.Data,m_Bitstream.Data + m_Bitstream.DataOffset,m_Bitstream.DataLength);&lt;BR /&gt;        memset(m_Bitstream.Data+m_Bitstream.DataLength,0,m_Bitstream.DataOffset);&lt;BR /&gt;        m_Bitstream.DataOffset=0;&lt;BR /&gt;        memcpy(m_Bitstream.Data + m_Bitstream.DataLength,packet.data,packet.size);&lt;BR /&gt;        m_Bitstream.DataLength +=packet.size;&lt;BR /&gt;        ulTotalBytesInBuffer=m_Bitstream.DataLength;                &lt;BR /&gt;        m_Bitstream.TimeStamp=stInputData.ullTimeStamp[0];&lt;BR /&gt;        ulInputIndex++;        &lt;BR /&gt;        stInputData.ulInputDataSize[ulInputIndex] = packet.size;&lt;BR /&gt;        stInputData.ullTimeStamp[ulInputIndex] = (mfxU64)m_PTS;&lt;BR /&gt;        bFiledump=false;&lt;BR /&gt;      &lt;BR /&gt;       }      &lt;BR /&gt;       m_pmfxBS = &amp;amp;m_Bitstream; //m_pmfxBS is passed into lib&lt;BR /&gt;       ulFilledInIndices = ulInputIndex +1;&lt;BR /&gt;&lt;BR /&gt;       prevTS1 = stInputData.ullTimeStamp[0];&lt;BR /&gt;       &lt;BR /&gt;      }     &lt;BR /&gt;      else &lt;BR /&gt;       goto REPEAT;    &lt;BR /&gt;     }&lt;BR /&gt;     else&lt;BR /&gt;     {     &lt;BR /&gt;      if(!packet.size)//in case input streaming is stopped and likely to restart&lt;BR /&gt;      {&lt;BR /&gt;       Sleep(1000);&lt;BR /&gt;       goto REPEAT;&lt;BR /&gt;      }&lt;BR /&gt;      break;//file is over&lt;BR /&gt;     }&lt;BR /&gt;     
pContext-&amp;gt;transcodingSts = MFX_ERR_NONE;&lt;BR /&gt;     continue;&lt;BR /&gt;    }//if (MFX_ERR_MORE_DATA_UVA == pContext-&amp;gt;transcodingSts)&lt;BR /&gt;   }&lt;BR /&gt;   else&lt;BR /&gt;    pContext-&amp;gt;transcodingSts = MFX_ERR_NONE;&lt;BR /&gt;&lt;BR /&gt;   &lt;BR /&gt; }//if (FrameNumber&amp;lt;1000)  &lt;BR /&gt;  else &lt;BR /&gt;  {&lt;BR /&gt;   //cleanup&lt;BR /&gt;   if(stOutBitstream-&amp;gt;Data )&lt;BR /&gt;   {&lt;BR /&gt;    delete stOutBitstream-&amp;gt;Data;&lt;BR /&gt;    stOutBitstream-&amp;gt;Data=NULL;    &lt;BR /&gt;   }&lt;BR /&gt;   if(ubFileBuffer)&lt;BR /&gt;   {&lt;BR /&gt;    delete ubFileBuffer;&lt;BR /&gt;    ubFileBuffer=NULL;    &lt;BR /&gt;   }    &lt;BR /&gt;    break;&lt;BR /&gt;  }&lt;BR /&gt;  &lt;BR /&gt;  // Free the packet that was allocated by av_read_frame&lt;BR /&gt;  av_free_packet(&amp;amp;packet);&lt;BR /&gt; }&lt;BR /&gt;---------&lt;BR /&gt;//display process result&lt;BR /&gt; sts = UvTranscode.ProcessResult(m_StartTime);&lt;BR /&gt;MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, 1);&lt;BR /&gt; &lt;BR /&gt; exit(0);&lt;BR /&gt;}//main&lt;BR /&gt;&lt;BR /&gt;mfxStatus CTranscodingPipeline::Transcode(mfxBitstream *stOutBS)&lt;BR /&gt;{&lt;BR /&gt; static bool OutEnable =false;&lt;BR /&gt; -------------&lt;BR /&gt; ---------------&lt;BR /&gt;while (MFX_ERR_NONE == sts )&lt;BR /&gt; {&lt;BR /&gt;&lt;BR /&gt;   if (bNeedDecodedFrames)&lt;BR /&gt; {&lt;BR /&gt; if (!bEndOfFile)&lt;BR /&gt; {&lt;BR /&gt; sts = DecodeOneFrame(&amp;amp;DecExtSurface);&lt;BR /&gt; if (MFX_ERR_MORE_DATA == sts)&lt;BR /&gt; {&lt;BR /&gt; sts = DecodeLastFrame(&amp;amp;DecExtSurface); &lt;BR /&gt; bEndOfFile = true;&lt;BR /&gt;     &lt;BR /&gt; }&lt;BR /&gt;    if(MFX_ERR_MORE_DATA_UVA==sts)&lt;BR /&gt;    {&lt;BR /&gt;     return MFX_ERR_MORE_DATA_UVA;    &lt;BR /&gt;    &lt;BR /&gt;    }&lt;BR /&gt;&lt;BR /&gt; }&lt;BR /&gt; -----------&lt;BR /&gt; }//if (bNeedDecodedFrames)&lt;BR /&gt;&lt;BR /&gt; if(OutEnable == false)&lt;BR /&gt;   
pBS-&amp;gt;Bitstream= *stOutBS;&lt;BR /&gt;----------&lt;BR /&gt;----------&lt;BR /&gt;if (m_BSPool.size() == m_AsyncDepth)&lt;BR /&gt; {&lt;BR /&gt; sts = PutBS(stOutBS);&lt;BR /&gt; MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, sts);&lt;BR /&gt;   OutEnable=true;&lt;BR /&gt;   return MFX_OUTPUT_BUFFER_ENABLE;&lt;BR /&gt; }&lt;BR /&gt;}//while (MFX_ERR_NONE == sts )&lt;BR /&gt;&lt;BR /&gt;mfxStatus CTranscodingPipeline::PutBS(mfxBitstream *stOutPutBitStream)&lt;BR /&gt;{&lt;BR /&gt;&lt;BR /&gt; mfxStatus sts = MFX_ERR_NONE;&lt;BR /&gt; ExtendedBS *pBitstreamEx = m_BSPool.front();&lt;BR /&gt; &lt;BR /&gt; sts = m_pmfxSession-&amp;gt;SyncOperation(pBitstreamEx-&amp;gt;Syncp, MSDK_WAIT_INTERVAL);&lt;BR /&gt; MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, sts);&lt;BR /&gt; &lt;BR /&gt; stOutPutBitStream-&amp;gt;DataLength = pBitstreamEx-&amp;gt;Bitstream.DataLength;&lt;BR /&gt; &lt;BR /&gt; stOutPutBitStream-&amp;gt;DataOffset = pBitstreamEx-&amp;gt;Bitstream.DataOffset;&lt;BR /&gt; stOutPutBitStream-&amp;gt;DataFlag = pBitstreamEx-&amp;gt;Bitstream.DataFlag;&lt;BR /&gt; stOutPutBitStream-&amp;gt;FrameType = pBitstreamEx-&amp;gt;Bitstream.FrameType;&lt;BR /&gt; stOutPutBitStream-&amp;gt;PicStruct = pBitstreamEx-&amp;gt;Bitstream.PicStruct;&lt;BR /&gt; stOutPutBitStream-&amp;gt;TimeStamp = pBitstreamEx-&amp;gt;Bitstream.TimeStamp;&lt;BR /&gt; pBitstreamEx-&amp;gt;Bitstream.DataLength = 0;&lt;BR /&gt; pBitstreamEx-&amp;gt;Bitstream.DataOffset = 0;&lt;BR /&gt;&lt;BR /&gt; m_BSPool.pop_front();&lt;BR /&gt; m_pBSStore-&amp;gt;Release(pBitstreamEx); &lt;BR /&gt;&lt;BR /&gt; return sts;&lt;BR /&gt;} //mfxStatus CTranscodingPipeline::PutBS()&lt;BR /&gt; &lt;BR /&gt;mfxStatus CTranscodingPipeline::DecodeOneFrame(ExtendedSurface *pExtSurface)&lt;BR /&gt;{&lt;BR /&gt;------&lt;BR /&gt;-------&lt;BR /&gt; sts = m_pmfxDEC-&amp;gt;DecodeFrameAsync(m_pmfxBS, pmfxSurface, &amp;amp;pExtSurface-&amp;gt;pSurface, &amp;amp;pExtSurface-&amp;gt;Syncp);&lt;BR /&gt;  if (MFX_ERR_MORE_DATA == sts)&lt;BR /&gt;  
{&lt;BR /&gt;   //data is not sufficient&lt;BR /&gt;   Status = sts=MFX_ERR_MORE_DATA_UVA;&lt;BR /&gt;   break;&lt;BR /&gt;  }&lt;BR /&gt;--------&lt;BR /&gt;}//mfxStatus CTranscodingPipeline::DecodeOneFrame(ExtendedSurface *pExtSurface)&lt;BR /&gt;</description>
      <pubDate>Fri, 04 Nov 2011 09:10:25 GMT</pubDate>
      <guid>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919776#M486</guid>
      <dc:creator>ujarijam</dc:creator>
      <dc:date>2011-11-04T09:10:25Z</dc:date>
    </item>
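A note on the buffer handling in the post above: moving the unconsumed tail of the mfxBitstream to the front of the buffer before appending the next demuxed packet is the standard way to keep feeding the decoder after it asks for more data. A minimal sketch follows, using a hypothetical stand-in struct rather than the real SDK type; note that memmove, not memcpy, must be used when source and destination can overlap (the posted code uses memcpy for this compaction, which is undefined behavior on overlap).

```cpp
#include <cstdint>
#include <cstring>

// Hypothetical stand-in for the Media SDK mfxBitstream fields used here.
struct Bitstream {
    uint8_t* Data;
    uint32_t DataOffset;  // first unconsumed byte
    uint32_t DataLength;  // number of unconsumed bytes
    uint32_t MaxLength;   // capacity of Data
};

// After the decoder reports it needs more data: compact the leftover
// bytes to the front of the buffer, then append the next packet.
// Returns the new DataLength, or 0 if the packet does not fit.
uint32_t AppendPacket(Bitstream& bs, const uint8_t* pkt, uint32_t pktSize) {
    if (bs.DataOffset > 0 && bs.DataLength > 0) {
        // memmove handles the overlapping source/destination safely.
        std::memmove(bs.Data, bs.Data + bs.DataOffset, bs.DataLength);
    }
    bs.DataOffset = 0;
    if (bs.DataLength + pktSize > bs.MaxLength)
        return 0;  // caller must grow the buffer first
    std::memcpy(bs.Data + bs.DataLength, pkt, pktSize);
    bs.DataLength += pktSize;
    return bs.DataLength;
}
```

The same compaction is what produces the DataOffset/DataLength transitions reported in the next post: the decoder consumes bytes by advancing DataOffset, and the application folds the remainder back to offset zero before the next read.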
    <item>
      <title>streaming with Media SDK</title>
      <link>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919777#M487</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;I also observed the following.&lt;BR /&gt;&lt;B&gt;A&lt;/B&gt;. Suppose I set libtype=MFX_IMPL_SOFTWARE.&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN style="text-decoration: underline;"&gt;&lt;I&gt;1st call to av_read_frame()&lt;/I&gt;&lt;/SPAN&gt; sets&lt;BR /&gt;m_Bitstream.DataLength= 97501 bytes &lt;BR /&gt;m_Bitstream.TimeStamp = 60508&lt;BR /&gt;m_Bitstream.DataOffset =0&lt;BR /&gt;and it is sent into the transcode function, which returns MFX_ERR_MORE_DATA_UVA,&lt;BR /&gt;and the parameters are set to&lt;BR /&gt;&lt;BR /&gt;m_Bitstream.DataLength= 3&lt;BR /&gt;
m_Bitstream.TimeStamp = 60508&lt;BR /&gt;
m_Bitstream.DataOffset =97498&lt;BR /&gt;stOutBitstream-&amp;gt;DataLength = 0 &lt;BR /&gt;stOutBitstream-&amp;gt;TimeStamp = 0 &lt;BR /&gt;&lt;BR /&gt;&lt;SPAN style="text-decoration: underline;"&gt;&lt;I&gt;2nd call to av_read_frame()&lt;/I&gt;&lt;/SPAN&gt; sets&lt;BR /&gt;m_Bitstream.DataLength= 97501 +6794 = 104295&lt;BR /&gt;
m_Bitstream.TimeStamp = 60508&lt;BR /&gt;
m_Bitstream.DataOffset =0&lt;BR /&gt;&lt;BR /&gt;and when sent to the transcoder it returns MFX_OUTPUT_BUFFER_ENABLE (meaning that transcoded output is available),&lt;BR /&gt;&lt;BR /&gt;and the parameters are set to &lt;BR /&gt;&lt;BR /&gt;m_Bitstream.DataLength= 6794&lt;BR /&gt;
m_Bitstream.TimeStamp = 60508&lt;BR /&gt;
m_Bitstream.DataOffset =97501&lt;BR /&gt;
stOutBitstream-&amp;gt;DataLength = 49610&lt;BR /&gt;
stOutBitstream-&amp;gt;TimeStamp =60508&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN style="text-decoration: underline;"&gt;&lt;I&gt;3rd call to av_read_frame()&lt;/I&gt;&lt;/SPAN&gt; sets&lt;BR /&gt;m_Bitstream.DataLength= 6794 + 7162 = 13956&lt;BR /&gt;
m_Bitstream.TimeStamp = 64809&lt;BR /&gt;
m_Bitstream.DataOffset =0&lt;BR /&gt;&lt;BR /&gt;After Transcode() is called parameters are set to&lt;BR /&gt;&lt;BR /&gt;m_Bitstream.DataLength= 7162&lt;BR /&gt;

m_Bitstream.TimeStamp = 64809&lt;BR /&gt;

m_Bitstream.DataOffset =6794&lt;BR /&gt;

stOutBitstream-&amp;gt;DataLength = 6515&lt;BR /&gt;

stOutBitstream-&amp;gt;TimeStamp =64809&lt;BR /&gt;&lt;BR /&gt;You can observe that the time stamps are coming out correctly. &lt;BR /&gt;&lt;BR /&gt;&lt;B&gt;B.&lt;/B&gt; Now, keeping the same code, I set libtype=MFX_IMPL_HARDWARE.&lt;BR /&gt;&lt;BR /&gt;The following was observed: &lt;BR /&gt;&lt;BR /&gt;&lt;SPAN style="text-decoration: underline;"&gt;&lt;I&gt;1st call to av_read_frame() &lt;/I&gt;&lt;/SPAN&gt;sets&lt;BR /&gt;m_Bitstream.DataLength= 97501 bytes &lt;BR /&gt;m_Bitstream.TimeStamp = 60508&lt;BR /&gt;m_Bitstream.DataOffset =0&lt;BR /&gt;&lt;BR /&gt;After Transcode() is called the parameters are set to&lt;BR /&gt;m_Bitstream.DataLength= 3&lt;BR /&gt;
m_Bitstream.TimeStamp = 60508&lt;BR /&gt;
m_Bitstream.DataOffset =97498&lt;BR /&gt;
stOutBitstream-&amp;gt;DataLength = 0 &lt;BR /&gt;
stOutBitstream-&amp;gt;TimeStamp = 0&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN style="text-decoration: underline;"&gt;&lt;I&gt;2nd call to av_read_frame()&lt;/I&gt;&lt;/SPAN&gt; sets&lt;BR /&gt;m_Bitstream.DataLength= 97501 +6794 = 104295&lt;BR /&gt;
m_Bitstream.TimeStamp = 60508&lt;BR /&gt;
m_Bitstream.DataOffset =0 &lt;BR /&gt;&lt;BR /&gt;After Transcode() is called&lt;BR /&gt;and returns MFX_ERR_MORE_DATA_UVA and parameters are set to &lt;BR /&gt;&lt;BR /&gt;m_Bitstream.DataLength= 3&lt;BR /&gt;

m_Bitstream.TimeStamp = 60508&lt;BR /&gt;

m_Bitstream.DataOffset =104292&lt;BR /&gt;

stOutBitstream-&amp;gt;DataLength = 0&lt;BR /&gt;

stOutBitstream-&amp;gt;TimeStamp =0&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN style="text-decoration: underline;"&gt;&lt;I&gt;3rd call to av_read_frame()&lt;/I&gt;&lt;/SPAN&gt; sets&lt;BR /&gt;
m_Bitstream.DataLength= 104295 + 7162 = 111457&lt;BR /&gt;

m_Bitstream.TimeStamp = 60508&lt;BR /&gt;

m_Bitstream.DataOffset =0&lt;BR /&gt;
&lt;BR /&gt;
After Transcode() is called&lt;BR /&gt;and returns MFX_OUTPUT_BUFFER_ENABLE, and the parameters are set to&lt;BR /&gt;
&lt;BR /&gt;
m_Bitstream.DataLength= 7162&lt;BR /&gt;


m_Bitstream.TimeStamp = 60508&lt;BR /&gt;


m_Bitstream.DataOffset =104295&lt;BR /&gt;


stOutBitstream-&amp;gt;DataLength = 16528&lt;BR /&gt;


stOutBitstream-&amp;gt;TimeStamp =60508&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN style="text-decoration: underline;"&gt;&lt;I&gt;4th call to av_read_frame()&lt;/I&gt;&lt;/SPAN&gt; sets&lt;BR /&gt;
m_Bitstream.DataLength= 7162 + 16541= 23703&lt;BR /&gt;

m_Bitstream.TimeStamp = 64809&lt;BR /&gt;

m_Bitstream.DataOffset =0&lt;BR /&gt;
&lt;BR /&gt;
After Transcode() is called&lt;BR /&gt;
and returns MFX_OUTPUT_BUFFER_ENABLE, and the parameters are set to&lt;BR /&gt;
&lt;BR /&gt;
m_Bitstream.DataLength= 16541&lt;BR /&gt;


m_Bitstream.TimeStamp = 60508&lt;BR /&gt;


m_Bitstream.DataOffset =7162&lt;BR /&gt;


stOutBitstream-&amp;gt;DataLength = 1463&lt;BR /&gt;


stOutBitstream-&amp;gt;TimeStamp =60508&lt;BR /&gt;&lt;BR /&gt;If you observe, there is a difference between the SW and HW settings. In HW, correct time stamps are also not coming out at the output. &lt;BR /&gt;&lt;BR /&gt;Please help me to fix this issue. I need to use the HW accelerator in my application. &lt;BR /&gt;</description>
      <pubDate>Fri, 04 Nov 2011 11:35:16 GMT</pubDate>
      <guid>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919777#M487</guid>
      <dc:creator>ujarijam</dc:creator>
      <dc:date>2011-11-04T11:35:16Z</dc:date>
    </item>
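The timestamp behavior described above (the HW path consuming several inputs before the first output and then reusing a stale PTS) can be worked around on the application side, as suggested later in this thread, by queueing the PTS of every frame fed to the transcoder and popping one per emitted output frame. A small sketch with a hypothetical helper class; it assumes output frames arrive in input order, which holds here because GopRefDist = 1 disables B-frames:

```cpp
#include <cstdint>
#include <deque>

// Pairs each fed-in frame's presentation timestamp with the next
// output frame, independent of how many inputs the decoder buffers.
class PtsQueue {
    std::deque<uint64_t> pending_;
public:
    // Call once per frame pushed into the transcode pipeline.
    void OnInputFrame(uint64_t pts) { pending_.push_back(pts); }

    // Call once per output bitstream produced; returns the queued PTS,
    // or `fallback` if nothing has been queued yet (decoder still buffering).
    uint64_t OnOutputFrame(uint64_t fallback) {
        if (pending_.empty()) return fallback;
        uint64_t pts = pending_.front();
        pending_.pop_front();
        return pts;
    }
};
```

This is essentially a cleaner version of the stInputData.ullTimeStamp[] shifting loop in the posted code, with the bookkeeping isolated in one place.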
    <item>
      <title>streaming with Media SDK</title>
      <link>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919778#M488</link>
      <description>&lt;DIV id="_mcePaste"&gt;Hi ujarijam,&lt;BR /&gt;&lt;BR /&gt;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;Many questions. First of all, Media SDK handles timestamps in a completely transparent way. Encoder/Decoder just uses the time stamp from theinput frame and transfers it with no modifications to the output frame.From looking at your logs I suspect the issue you are having with timestamps are related to the order (as discussed earlier in the thread)they are delivered from the decoder. Can you please verify if that is the case. If so, you can either do the reordering yourself by keeping a bufferof frames "in flight" or make changes to the code to make all encode/decode calls synchronous as discussed earlier. Considering your use case thelatter may actually be a better approach (much simpler code). The transcode sample is built with a quite different goal in mind which is high throughput not astreaming and low latency use case. Unfortunately, we do not have any samples right now that illustrates this.&lt;BR /&gt;&lt;BR /&gt;Media SDK Hardware and Software encoder has slightly different behavior. Do not expect the exact same output.You can control encoder/stream creation features via encoder parameter settings. For instance, to to disable AUDelimiteruse mfxExtCodingOption::AUDelimiter = MFX_CODINGOPTION_OFF. Also, PictureTimingSEI and BufferingPeriodSEI messages are likelyinserted in the stream by default. To disable you can set mfxExtCodingOption::PictureTimingSEI/VuiVclHrdParameters/VuiNalHrdParametersto MFX_CODINGOPTION_OFF.&lt;BR /&gt;&lt;BR /&gt;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;Some comments on the supplied code:&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;- DecodedOrder: This parameter is deprecated in Media SDK 3.0. You can leave it as 0.&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;- Encoder config: Rate control is CBR. For that case you should set TargetKbps. 
Currently this line is commented out.&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;- Since I do not have insight into the interfacing components you use, I'm not able to pinpoint any other issues.&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;&lt;BR /&gt;Your HW platform/setup looks good. I do not foresee any issues, especially since you've already verified that HW acceleration works using the unmodified samples.&lt;BR /&gt;&lt;BR /&gt;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;Regards,&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;Petter&lt;/DIV&gt;</description>
      <pubDate>Fri, 04 Nov 2011 15:58:55 GMT</pubDate>
      <guid>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919778#M488</guid>
      <dc:creator>IDZ_A_Intel</dc:creator>
      <dc:date>2011-11-04T15:58:55Z</dc:date>
    </item>
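The mfxExtCodingOption settings mentioned in the reply above are attached to the encoder through the extended-buffer mechanism of mfxVideoParam (the ExtParam array plus NumExtParam), as described in the Media SDK manual. The sketch below shows that attachment pattern using simplified stand-in types: the real definitions and the MFX_EXTBUFF_CODING_OPTION / MFX_CODINGOPTION_OFF constants live in mfxstructures.h, and the numeric values used here are illustrative only.

```cpp
#include <cstdint>
#include <cstring>

// Stand-ins mirroring the shape of the real Media SDK types.
// Values are placeholders, not the real SDK constants.
struct mfxExtBuffer { uint32_t BufferId; uint32_t BufferSz; };
const uint32_t EXTBUFF_CODING_OPTION = 1;   // stands in for MFX_EXTBUFF_CODING_OPTION
const uint16_t CODINGOPTION_OFF      = 0x20; // stands in for MFX_CODINGOPTION_OFF

struct mfxExtCodingOption {
    mfxExtBuffer Header;   // must be the first member and filled in
    uint16_t AUDelimiter;  // OFF drops access unit delimiter NAL units
    uint16_t PicTimingSEI; // OFF drops picture timing SEI messages
};

struct mfxVideoParam {
    mfxExtBuffer** ExtParam;  // array of attached extended buffers
    uint16_t NumExtParam;
};

// Pattern: zero the option buffer, fill its header, then hang it off
// mfxVideoParam before the encoder Init call (e.g. in InitEncMfxParams).
// The buffers must stay alive as long as the encoder uses the params.
void AttachCodingOption(mfxVideoParam& par, mfxExtCodingOption& co,
                        mfxExtBuffer** extArray) {
    std::memset(&co, 0, sizeof(co));
    co.Header.BufferId = EXTBUFF_CODING_OPTION;
    co.Header.BufferSz = sizeof(co);
    co.AUDelimiter  = CODINGOPTION_OFF;
    co.PicTimingSEI = CODINGOPTION_OFF;
    extArray[0]     = &co.Header;  // valid: Header is the first member
    par.ExtParam    = extArray;
    par.NumExtParam = 1;
}
```

This also answers the "how can AUDelimiter be set in InitEncMfxParams" question raised later in the thread: the option structure is not a member of mfxInfoMFX, it rides along as an extended buffer.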
    <item>
      <title>streaming with Media SDK</title>
      <link>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919779#M489</link>
      <description>Hi Petter,&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;I have a few questions.&lt;/DIV&gt;&lt;DIV&gt;1. Please look at #15 once again. I mentioned that for SW, the SDK initially takes two frames of data and gives one frame of output, whereas for HW, the SDK takes three frames and gives two frames of output. Why this difference?&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;2. You mentioned in your last answer that "...Media SDK Hardware and Software encoder has slightly different behavior..." What exactly are the differences between the software and hardware implementations?&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;3. Actually, if the SDK took one frame of data without asking for more data, there would be no problem with the time stamps. I understand that the current code has been designed for maximum throughput, but there should be some setting to tell the SDK that the operation is synchronous before the decoder accepts input data. That could make the SDK accept data frame by frame without asking for more. In my experiments I found that even after giving a full working frame the SDK would ask for more data. Please give your comments on this.&lt;/DIV&gt;&lt;DIV&gt;Anyway, I will try to implement it as per your comments.&lt;/DIV&gt;</description>
      <pubDate>Sat, 05 Nov 2011 18:39:22 GMT</pubDate>
      <guid>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919779#M489</guid>
      <dc:creator>ujarijam</dc:creator>
      <dc:date>2011-11-05T18:39:22Z</dc:date>
    </item>
    <item>
      <title>streaming with Media SDK</title>
      <link>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919780#M490</link>
      <description>Hi Petter,&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;I added SyncOperation in the mfxStatus CTranscodingPipeline::Transcode(mfxBitstream *stOutBS)&lt;/DIV&gt;&lt;DIV&gt;function to make the operation synchronous:&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;sts = DecodeOneFrame(&amp;amp;DecExtSurface);&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;DIV id="_mcePaste"&gt;sts = m_pmfxSession-&amp;gt;SyncOperation(DecExtSurface.Syncp, MSDK_WAIT_INTERVAL);&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;DecExtSurface.Syncp = 0;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;This is followed by&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;VppExtSurface.pSurface = DecExtSurface.pSurface;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;sts = EncodeOneFrame(&amp;amp;VppExtSurface,&amp;amp;m_BSPool.back()-&amp;gt;Bitstream);&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;sts = PutBS(stOutBS);&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;Subsequently PutBS() calls&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;sts = m_pmfxSession-&amp;gt;SyncOperation(pBitstreamEx-&amp;gt;Syncp, MSDK_WAIT_INTERVAL);&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;But the output and the time stamps behave the same as explained earlier in this thread.&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;My problem is not fixed yet.&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;I have a question for you: can I do multiple RTP streams with Quick Sync technology using Transcode() or not?&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;Does your sample code support it or not? Does Media SDK support multiple RTP streams using Transcode() or not?&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;Can you explain clearly how Media SDK works for HW acceleration? 
The document I have is mediasdk-man.pdf, API version 1.3.&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;Why does the Media SDK decoder not decode from one frame of input? This question was still not answered.&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;Please help me.&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;How can &lt;SPAN style="color: #515357; font-family: Arial, sans-serif; font-size: 12px; line-height: 16px;"&gt;mfxExtCodingOption::AUDelimiter = MFX_CODINGOPTION_OFF be set in the&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;&lt;SPAN style="color: #515357; font-family: Arial, sans-serif; font-size: 12px; line-height: 16px;"&gt;mfxStatus CTranscodingPipeline::InitEncMfxParams(sInputParams *pInParams)&lt;/SPAN&gt; function?&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;&lt;/DIV&gt;</description>
      <pubDate>Mon, 07 Nov 2011 18:36:18 GMT</pubDate>
      <guid>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919780#M490</guid>
      <dc:creator>ujarijam</dc:creator>
      <dc:date>2011-11-07T18:36:18Z</dc:date>
    </item>
    <item>
      <title>streaming with Media SDK</title>
      <link>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919781#M491</link>
      <description>&lt;DIV&gt;&lt;DIV id="_mcePaste"&gt;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;&lt;/DIV&gt;</description>
      <pubDate>Mon, 07 Nov 2011 18:36:22 GMT</pubDate>
      <guid>https://community.intel.com/t5/Media-Intel-Video-Processing/streaming-with-Media-SDK/m-p/919781#M491</guid>
      <dc:creator>ujarijam</dc:creator>
      <dc:date>2011-11-07T18:36:22Z</dc:date>
    </item>
  </channel>
</rss>

