Intel® Integrated Performance Primitives
Deliberate problems developing high-performance vision, signal, security, and storage applications.

SetTimePosition problem

ulisses87
Beginner

Hi,

I have noticed abnormal behavior of SetTimePosition in the MP4 splitter. I am using the SimpleSplitter sample from your webpage http://software.intel.com/en-us/articles/getting-started-with-intel-ipp-unified-media-classes-sample/.

Before line "Splitter.Run()", I added a line Splitter.SetTimePosition(62). I also used videoInfo()->framerate to get a frame rate from stream. In my case it's 24,99978, so 62 sec * 24,999978 = 1549,98 frame. Consequently .SetTimePosition(62) should go read marker to 1549,98 frame number, BUT I see that it goes to frame number 1529... What's happened?

When I change the MP4 splitter to the AVI splitter and the video decoder to the MPEG-4 video decoder (the same video, compressed with Xvid into an AVI container), everything is OK. The frame rate is exactly 25.00 fps, and calling SetTimePosition(62) moves the read marker to frame 62 * 25 = 1550.

By the way - how can I get the time of the currently processed frame? I tried calling Splitter.GetTimePosition() in the loop after GetFrame(), but it returned gibberish - seemingly random times - and I need the time in seconds (0, 1, 2, 3, ...).

Thanks for any help.

Naveen_G_Intel
Employee

Hi,

Which version of IPP are you using? In the IPP 7.0 beta we fixed an issue (DPD200150940) related to SetTimePosition: it hung during MP4 playback when the stream was fragmented. Could you check with the latest beta release?

There is an interesting discussion of issue DPD200150940 here:

http://software.intel.com/en-us/forums/showthread.php?t=71175

Thanks,

Naveen Gv

ulisses87
Beginner

I use IPP version 6.1.6.056. I read the linked thread, but it describes a problem different from mine. In my case I can extract subsequent frames, but the read marker goes earlier than expected (in the example from my post it goes to frame 1527 instead of 1550). Let me stress again that the problem occurs only with MP4 files.

Sorry, but I don't want to test the IPP 7.0 beta. I'm designing a larger project and I need a stable library version - not a beta. Besides, I'm afraid that installing IPP 7.0 would make a mess of my OS (mixed library files).

Naveen_G_Intel
Employee

Hi,

Probably a few frames are buffered in the decoder. When you call videoDecoder->GetFrame(), the decoder may return buffered frames. If you call GetFrame() with a NULL input parameter, it will return all buffered frames: videoDecoder->GetFrame(NULL, &out);

That means inserting a call to videoDecoder->GetFrame(NULL, &out) just before the call to Splitter.SetTimePosition(...).
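
A minimal sketch of that flush loop, assuming the same videoDecoder and out objects as in the SimpleSplitter sample:

[cpp]// Drain any frames still buffered in the decoder before repositioning.
// A NULL input tells the decoder there is no new data to consume.
UMC::Status st;
do {
    st = videoDecoder->GetFrame(NULL, &out);  // UMC_OK while buffered frames remain
} while (st == UMC::UMC_OK);

Splitter.SetTimePosition(62.0);               // now reposition the splitter[/cpp]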

Regards,

Naveen Gv

ulisses87
Beginner

Hi,

I tried to apply your advice, but unfortunately it doesn't work. In my opinion the cause of this problem lies elsewhere, because I call Splitter.SetTimePosition(...) just before the first Splitter.Run() command; at that point I haven't called any GetFrame(...) function yet.

Consequently, I think it's a bug in the library. I can send you a video and a code excerpt for analysis if you want.

Sergey_O_Intel1
Employee
Hi,

Are you completely sure that the frame with a PTS of 62 sec is an I frame? If it isn't, the Splitter will position you on the nearest I frame before the time you pass to SetTimePosition. The point is that the Splitter prepares data for the Decoder, and the Decoder can't start decoding from an arbitrary frame.
I also wonder what time GetTimePosition returns. Could you check it just before SetTimePosition and right after it?
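
For example, a quick check along these lines (using the Splitter object from the sample):

[cpp]Ipp64f pos;
Splitter.GetTimePosition(pos);   // position before the seek
printf("before: %f\n", pos);
Splitter.SetTimePosition(62.0);
Splitter.GetTimePosition(pos);   // where the splitter actually landed
printf("after:  %f\n", pos);[/cpp]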

-Sergey
ulisses87
Beginner

Hi,

First, thank you for your reply.

I applied your suggestions. Results below:

1.) Obviously I'm not sure whether the frame with PTS 62 sec is an I-frame. How can I check it?

2.) GetTimePosition(...) returns 0.0 sec before the SetTimePosition(...) call and 61.12 sec after it. Does that suggest the frame at PTS 62 isn't an I-frame?

Here is the test code:

[cpp]#include "ipp.h"
#include "umc_file_reader.h"
#include "umc_fio_reader.h"
#include "umc_mp4_spl.h"
#include "umc_splitter.h"
#include "umc_video_render.h"
#include "fw_video_render.h"
#include "umc_h264_dec.h"
#include "vm_time.h"

void EncodeStream(vm_char * inputfilename, vm_char * outputfilename )
{
   	Ipp32u videoTrack=0; int exit_flag =0;
	UMC::Status status;  
	UMC::MediaData in; UMC::VideoData out;	
	UMC::FIOReader reader; UMC::FileReaderParams readerParams;
	UMC::SplitterParams splitterParams; UMC::SplitterInfo * streamInfo;
	UMC::MP4Splitter Splitter;
		
	UMC::VideoStreamInfo *videoInfo=NULL;
	UMC::VideoDecoder *  videoDecoder; UMC::VideoDecoderParams videoDecParams;
	UMC::FWVideoRender fwRender; UMC::FWVideoRenderParams fwRenderParams;
	
	readerParams.m_portion_size = 0;
	vm_string_strcpy(readerParams.m_file_name, inputfilename);
	if((status = reader.Init(&readerParams))!= UMC::UMC_OK) 
       return;
	splitterParams.m_lFlags = UMC::VIDEO_SPLITTER;
	splitterParams.m_pDataReader = &reader;
    if((status = Splitter.Init(splitterParams))!= UMC::UMC_OK)
	   return;
	Splitter.GetInfo(&streamInfo);
    for (videoTrack = 0; videoTrack <  streamInfo->m_nOfTracks; videoTrack++) {
      if (streamInfo->m_ppTrackInfo[videoTrack]->m_Type == UMC::TRACK_H264)
           break;
    }
	videoInfo = (UMC::VideoStreamInfo*)(streamInfo->m_ppTrackInfo[videoTrack]->m_pStreamInfo);
	if(videoInfo->stream_type!=UMC::H264_VIDEO)
        return;
    videoDecParams.info =  (*videoInfo);
	videoDecParams.m_pData = streamInfo->m_ppTrackInfo[videoTrack]->m_pDecSpecInfo;
	videoDecParams.numThreads = 1;
    videoDecoder = (UMC::VideoDecoder*)(new UMC::H264VideoDecoder());
	if((status = videoDecoder->Init(&videoDecParams))!= UMC::UMC_OK)
		return;
	fwRenderParams.out_data_template.Init(videoInfo->clip_info.width, videoInfo->clip_info.height, videoInfo->color_format);
    fwRenderParams.pOutFile = outputfilename;
    if((status = fwRender.Init(&fwRenderParams)) != UMC::UMC_OK)
		return;
	Ipp64f pos;
	Splitter.GetTimePosition(pos);
	printf("%fn",pos);
	Splitter.SetTimePosition(62);
	Splitter.GetTimePosition(pos);
	printf("%fn",pos);
	Splitter.Run();
	do
	{   do{ 
		     if (in.GetDataSize() < 4) {
	    	     do{ 
	              status= Splitter.GetNextData(&in,videoTrack);
			       if(status==UMC::UMC_ERR_NOT_ENOUGH_DATA)
   			            vm_time_sleep(5);
			      }while(status==UMC::UMC_ERR_NOT_ENOUGH_DATA);
			      if(((status != UMC::UMC_OK) && (status != UMC::UMC_ERR_END_OF_STREAM))||
				     (status == UMC::UMC_ERR_END_OF_STREAM)&& (in.GetDataSize()<4)) {
                        exit_flag=1;
				  }
             }
			 fwRender.LockInputBuffer(&out);
		     videoDecoder->GetFrame(&in,&out);
			 status  = videoDecoder->GetFrame(NULL,&out);
			 Splitter.GetTimePosition(pos);
			 printf("%fn",pos);
    	  	 fwRender.UnLockInputBuffer(&out);
		     fwRender.RenderFrame();
	     }while(!exit_flag && (status == UMC::UMC_ERR_NOT_ENOUGH_DATA || status == UMC::UMC_ERR_SYNC));
	 }while (exit_flag!=1);

	/* do{  
		 fwRender.LockInputBuffer(&out);
	     status  = videoDecoder->GetFrame(NULL,&out);
	     fwRender.UnLockInputBuffer(&out);
         fwRender.RenderFrame();
	}while(status == UMC::UMC_OK);			*/
}

int main(int argc, vm_char* argv[])
{
   vm_char *  InputVideofileName, *OutputYUVFileName;
   InputVideofileName = VM_STRING("teststream.mp4"); // use a Unicode string if the project uses Unicode characters
   OutputYUVFileName  = VM_STRING("testoutput.yuv"); // use a Unicode string if the project uses Unicode characters

   EncodeStream(InputVideofileName,OutputYUVFileName);
}[/cpp]

My main goal is to cut TV commercials out of an MP4 file, based on times saved in an XML file; it's part of my BSc thesis project. Unfortunately I can't get GetTimePosition() to work correctly: if I call it in the loop just after GetFrame() to check whether the read marker has reached the point where a commercial begins, it doesn't return every time stamp (e.g. 62 sec and then immediately 72.80 sec), and it often returns the same time stamp several times (e.g. 95.64 sec, 95.64 sec, 95.64 sec, ...).

Could you tell me how to determine the exact time stamp of the currently processed frame?

I have tried to convert the frame number to a time stamp by dividing the current frame number by the FPS, but it isn't a precise method and it isn't consistent with SetTimePosition(...), since SetTimePosition jumps only to an approximate time stamp in the file.

Thank you for your help.

Sergey_O_Intel1
Employee
1) You need special software to check it, but you can trust me that the Splitter can jump to I frames only.
2) Quite possible. Searching for 62 sec, the Splitter moves to the frame with the nearest time stamp and then moves backward to the nearest I frame.
3) First, I don't understand why you call videoDecoder->GetFrame twice.
Taking into account that a commercial begins with an I frame, your algorithm should work OK: moving to 62 sec, you'll catch the first frame of the commercial. If I were you, I would first print all PTSes to better understand which frames should be cut off. I would also try to print the FrameType, which can carry information about I frames (see the sketch below). Or you may try the evaluation version of StreamAnalyzer from Elecard to get this info. Then it will be quite easy to track information from the Splitter. I'm only puzzled by your repeating PTSes. Every Splitter->GetNextData moves to the next frame, and no frames with the same PTS are allowed in MP4. Please double-check that the Splitter really returns the same time stamps several times. Another possibility is that you share your stream so that I can look at it myself.
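
Something like this, for instance (a rough sketch using the loop variables from your listing; I'm quoting the FrameType API from memory, so verify it against your UMC headers):

[cpp]// After each successful Splitter.GetNextData(&in, videoTrack):
UMC::FrameType ft = in.GetFrameType();   // frame type as reported by the splitter
printf("pts=%.4f type=%s\n", in.GetTime(),
       ft == UMC::I_PICTURE ? "I" : (ft == UMC::P_PICTURE ? "P" : "B/other"));[/cpp]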

-Sergey
ulisses87
Beginner

First of all, thank you for your detailed reply.

1.) I understand now, and I trust you. I have read the H.264 specification carefully.

2.) But why does it jump backward rather than, say, forward?

3a.) I call videoDecoder->GetFrame twice because in each loop iteration I need a single final frame from the video for further processing (I plan to use one of the CV methods for detection - the time-stamp method is only a supplementary one). From my observations, when I don't put status = videoDecoder->GetFrame(NULL,&out) in line 71, I don't get the final frame but, I think, an intermediate frame from the decoder (I tried saving all the frames from the stream as BMP and later JPEG files, and I base this on the experience from those "experiments").

3b.) I don't think I can assume that every TV commercial starts with an I-frame. Could you explain the PTS acronym? I don't understand how I can print it or how it can help me.

3c.) Printing information about a specific frame is a good idea, but I can't find any function in IPP for retrieving detailed frame-type information. Am I overlooking something?

3d.) Thanks for the Elecard software recommendation - great program. Unfortunately it can't read my MP4 H.264 file: when I try to open it in Stream Analyzer I only get an "unsupported file format" error. I tried to find other applications, but those I found were either not free (even for a trial period) or also couldn't open my file. So far only two programs work with my video file: MediaInfo and GSpot.

By the way, do you know of an application that can open a YUV file with the proper color scheme? I was using YUV Tools from SunRay (the trial has now expired), because only that editor could correctly display frames with a changed Y/U/V component order (from my observations, IPP saves the YUV file with the components in Y V U order, and unfortunately popular YUV players can't customize this parameter).

3e.) Yes, I'm absolutely sure that the time stamps from my file repeat several times. I even modified my code to save this information to a text file. You can see it here: http://www.mmsoft.webd.pl/private/times.txt

3f.) No problem. You can download my file (about 31 MB) here: http://www.mmsoft.webd.pl/private/teststream.rar. It isn't the original file, because that one was too large to put on the FTP (over 1 GB), so I recorded a new file for you from the same source and with the same parameters; I checked it again with the above code and it shows the same problems.

By the way: this video was recorded from a Polish TV channel with my AverTV analog card and the standard AverTV software, using the default H.264 MP4 profile settings.

Thanks again.

Sergey_O_Intel1
Employee

2.) But why does it jump backward rather than, say, forward?

Because if you want to see the frame at 62 sec and the Splitter jumped forward (to the next I frame), you'd never see your frame. So one should decode from the previous I frame but simply not render the frames before 62 sec.
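
In code the idea is roughly this (a sketch only; buffer locking and status handling are omitted for brevity):

[cpp]const Ipp64f target = 62.0;
Splitter.SetTimePosition(target);         // splitter lands on the preceding I frame

// ...inside the usual GetNextData()/GetFrame() loop:
if (videoDecoder->GetFrame(&in, &out) == UMC::UMC_OK) {
    if (out.GetTime() >= target)          // decode everything, render only from 62 sec on
        fwRender.RenderFrame();
}[/cpp]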

3a.) I call videoDecoder->GetFrame twice because in each loop iteration I need a single final frame from the video for further processing (I plan to use one of the CV methods for detection - the time-stamp method is only a supplementary one). From my observations, when I don't put status = videoDecoder->GetFrame(NULL,&out) in line 71, I don't get the final frame but, I think, an intermediate frame from the decoder (I tried saving all the frames from the stream as BMP and later JPEG files, and I base this on the experience from those "experiments").

I'm not sure decoding will work correctly this way. One should feed the Decoder frame by frame with real data, and feed it NULL only when all data are exhausted. The point is that the Decoder first fills its own delay buffer in order to perform frame reordering if needed, and you must send it frames in the order they were originally encoded.

3b.) I don't think I can assume that every TV commercial starts with an I-frame. Could you explain the PTS acronym? I don't understand how I can print it or how it can help me.


PTS stands for Presentation Time Stamp - the time when a frame must be shown. In the decoded stream (display order) all PTSes increase smoothly (by the fps period), but they are sometimes reordered in the encoded stream.

3c.) Printing information about a specific frame is a good idea, but I can't find any function in IPP for retrieving detailed frame-type information. Am I overlooking something?

As far as I remember, there's a GetFrameType method for this in the MediaData class.

3d.) Thanks for the Elecard software recommendation - great program. Unfortunately it can't read my MP4 H.264 file: when I try to open it in Stream Analyzer I only get an "unsupported file format" error. I tried to find other applications, but those I found were either not free (even for a trial period) or also couldn't open my file. So far only two programs work with my video file: MediaInfo and GSpot.

There are both a Stream Analyzer and a StreamEye tool there. Which one did you try?

By the way, do you know of an application that can open a YUV file with the proper color scheme?

I use Elecard YUV Viewer.

3e.) Yes, I'm absolutely sure that the time stamps from my file repeat several times. I even modified my code to save this information to a text file. You can see it here: http://www.mmsoft.webd.pl/private/times.txt

I don't understand how you obtained these numbers. Are they from the Splitter (encoded stream) or from the Decoder? They don't look like valid PTSes from the stream. Try to dump them from the Decoder, but call GetFrame only once per loop iteration (more precisely, until it returns an OK status).

3f.) No problem. You can download my file (about 31 MB) here: http://www.mmsoft.webd.pl/private/teststream.rar. It isn't the original file, because that one was too large to put on the FTP (over 1 GB), so I recorded a new file for you from the same source and with the same parameters; I checked it again with the above code and it shows the same problems.

It opens fine in the Elecard StreamEye tool, but I haven't noticed any commercial there except for a small picture at ~55 sec. Could you run your further experiments with this stream?

Regards
-Sergey

ulisses87
Beginner

2.) OK. I understand now.

3a.) I understand the general idea, but the problem is that I need to process my video file frame by frame, and calling GetFrame() twice is the only approach I have found that works.

3b.) So, do I understand correctly that the PTS order in the encoded file may be mixed?

Maybe it was just luck that my video file has the same PTS order in the encoded and decoded streams.

3c.) OK, got it; I'll check it soon. Thanks.

3d.) I tried Elecard StreamAnalyzer, as you had advised in the previous post. You are right - Elecard StreamEye reads this file correctly. I'll analyze it in detail. Thanks also for the Elecard YUV Viewer. At last I've found a program that can read YV12 planar YUV files.

3e.) Bingo - your solution works. Those numbers came from Splitter.GetTimePosition(...), as you can see in the code above. When I changed that call to in.GetTime(...), as shown below, I got correct PTSes, I think: http://www.mmsoft.webd.pl/private/times2.txt

[cpp]   Splitter.Run();   
    do   
    {   do{    
             if (in.GetDataSize() < 4) {   
                 do{    
                  status= Splitter.GetNextData(&in,videoTrack);   
                   if(status==UMC::UMC_ERR_NOT_ENOUGH_DATA)   
                        vm_time_sleep(5);   
                  }while(status==UMC::UMC_ERR_NOT_ENOUGH_DATA);   
                  if(((status != UMC::UMC_OK) && (status != UMC::UMC_ERR_END_OF_STREAM))||   
                     (status == UMC::UMC_ERR_END_OF_STREAM)&& (in.GetDataSize()<4)) {   
                        exit_flag=1;   
                  }   
             }   
             fwRender.LockInputBuffer(&out);   
             status=videoDecoder->GetFrame(&in,&out);   
			 if(status==UMC::UMC_OK){
				 pos=in.GetTime();
				 fprintf(f,"%fn",pos);   
			 }
             fwRender.UnLockInputBuffer(&out);   
             fwRender.RenderFrame();   
         }while(!exit_flag && (status == UMC::UMC_ERR_NOT_ENOUGH_DATA || status == UMC::UMC_ERR_SYNC));   
     }while (exit_flag!=1);   
	fclose(f);
  
    do{    
         fwRender.LockInputBuffer(&out);  
         status  = videoDecoder->GetFrame(NULL,&out);  
         fwRender.UnLockInputBuffer(&out);  
         fwRender.RenderFrame();  
    }while(status == UMC::UMC_OK);[/cpp]

Unfortunately this raises some new (and old) problems, and a few questions:

Question 1: I can't combine the double GetFrame(...) call with in.GetTime(...) [no PTSes are grabbed, because the buffer is empty on every iteration], so how can I process the video frame by frame? Could you recommend a proper method of analyzing the video stream frame by frame? It's the most important part of my application's engine.

Question 2: I'm puzzled why, even if I omit the GetFrame(...) call with a NULL source, my output video file still seems correct. I experimented with the final loop that starts at line 80 (currently commented out) and tried to save its output; I got only four final frames. Why?

Question 3: I see that when I remove the double GetFrame(...) call, the internal loop behaves differently: GetFrame(...) doesn't always have a frame ready and returns the UMC_OK status only on odd iterations. So I take it I can't map the iteration number to a frame number?

Question 4: What's the difference between GetTimePosition() from the splitter and GetTime(...) from the decoder?

3f.) That's a small misunderstanding, I think. There are no TV commercials in this video file; it's only a sample. Let me define what I mean by "TV commercial": a series of consecutive short advertisements emitted as a block of video clips between the proper TV content, often starting with a special intro animation (and sound) and ending with a special outro animation (and sound). On many TV channels around the world, the channel logo disappears from the screen corner during commercial blocks.

Question 5: What further experiments did you have in mind?

Regards

Sergey_O_Intel1
Employee
1. I still can't quite understand why you can't use the pipeline as it is described in the samples. You start receiving encoded frames from the Splitter and feeding them to the Decoder. The Decoder can't produce a YUV frame at once because of its initial delay, so you start receiving decoded data from the Decoder after, let's say, 3 or 5 frames (depending on the stream), but from then on you receive them regularly, frame by frame. Then you can decide whether to throw each one away (commercial) or to render it. So you make your decision based on the PTSes from the Decoder (see the sketch below).
2. Four final frames - it depends on the stream. But how do you know the stream is correct (i.e. that all frames are decoded)? Try printing the PTSes of frames sent to the Decoder and the PTSes of frames coming out of the Decoder (when it returns the OK status).
3. Frame gaps are possible; it depends on the GOP structure of the stream. It's difficult to explain if you are unfamiliar with coding theory. That is why return statuses are used everywhere to build the pipeline correctly. You don't need to count iterations; you need to count output frames (when the Decoder returns OK).
4. GetTimePosition from the Splitter is used just to learn where it really jumped during repositioning (usually not equal to the time you passed to SetTimePosition). The Decoder has no such method, but the MediaData object does: it returns the PTS of the data it holds.
Sorry, but it seems I've started repeating lines from the manual here :)
5. Try the approach I described earlier, or look at the SimplePlayer code (from the IPP samples) more carefully. Everything I told you is implemented there in the audio and video Process functions.
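
For example, the decision step from point 1 might look roughly like this (a sketch reusing the objects from your listing; the Interval structure and the XML loader are illustrative placeholders, not part of the samples):

[cpp]#include <vector>

// Illustrative only: commercial intervals loaded from your XML file
// by a hypothetical helper of your own.
struct Interval { Ipp64f start, end; };
std::vector<Interval> ads = LoadAdIntervalsFromXml("ads.xml");

// ...inside the decode loop:
if (videoDecoder->GetFrame(&in, &out) == UMC::UMC_OK) {
    Ipp64f pts = out.GetTime();           // PTS of the decoded frame
    bool inAd = false;
    for (size_t i = 0; i < ads.size(); ++i)
        if (pts >= ads[i].start && pts < ads[i].end) { inAd = true; break; }
    if (!inAd)
        fwRender.RenderFrame();           // render only non-commercial frames
}[/cpp]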

-Sergey
ulisses87
Beginner

1. I applied your suggestion and it works. Below I present "my" solution in pseudocode, with a runnable sketch of the same pattern after it:

[cpp]do{
    GetFrame(...);
    If the returned status is UMC_OK, send the frame for processing (e.g. save as JPEG);
}while(not end of stream and no error)

do{
    Flush the rest of the data from the buffer by passing NULL as input, and save the flushed frames;
}while(there is still data in the buffer)[/cpp]
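
In actual code the same pattern looks roughly like this (a sketch reusing the Splitter, videoDecoder, and fwRender objects from my earlier listing; the NOT_ENOUGH_DATA retries and error handling are omitted for brevity):

[cpp]// Main loop: feed the decoder one encoded frame at a time.
while (Splitter.GetNextData(&in, videoTrack) == UMC::UMC_OK) {
    fwRender.LockInputBuffer(&out);
    status = videoDecoder->GetFrame(&in, &out);
    fwRender.UnLockInputBuffer(&out);
    if (status == UMC::UMC_OK)
        fwRender.RenderFrame();           // a decoded frame is ready: process it here
}

// Drain loop: flush the decoder's delay buffer with NULL input.
do {
    fwRender.LockInputBuffer(&out);
    status = videoDecoder->GetFrame(NULL, &out);
    fwRender.UnLockInputBuffer(&out);
    if (status == UMC::UMC_OK)
        fwRender.RenderFrame();
} while (status == UMC::UMC_OK);[/cpp]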

2. You are right. I can't judge the correctness of the resulting data without detailed technical methods, so I can't say the stream is absolutely correct. My mistake.

3, 4, 5. I'm just now learning coding theory, because I need it for my project. I think I'm making progress, but it's impossible to learn everything in such a short time.

Currently I'm using SetTimePosition() from the Splitter class to reposition, and GetTime() from the MediaData class to determine the PTS of the currently processed frame. Is that the correct way?

Questions:

1.) I have one MP4 file where a single frame (no. 29) is out of display order in the decoded YUV file. It's strange, because I use the standard pipeline (from point 1 of this post) and this method works perfectly for other files...

2.) How can I properly detect detailed information about the color space used in the processed video file (YV12 etc.)? I need this information to convert the file to RGB. I generally know the conversion methods provided by IPP and can use them, but I want to make my software more universal.

For example, tell me: what is the background color in this small, simple MP4 file? http://www.mmsoft.webd.pl/private/teststream2.rar

Some players read it as (234, 234, 234) (including my program written with IPP) and others as (254, 254, 254) (e.g. StreamEye Studio). So which answer is correct: white or gray?

3.) I noticed that I have to flush the data from the buffer (in a loop) BEFORE repositioning with SetTimePosition(). I understand why I have to do it, but I don't understand why, when I don't, I receive several extra frames tagged with future and reversed PTSes.

4.) I'm confused about why I can't retrieve the first frames, with PTSes 0.0 and 0.04 (from the sample file). When the decoder returns its first UMC_OK, I get a frame with a PTS of e.g. 0.08, even though in StreamEye I can see those two missing frames. It's inconvenient, for example, when I want to cut the video from PTS 0.0 to X: then I have to reposition before the first splitter run.

Thank you for your effort and patience.

Regards,

Sergey_O_Intel1
Employee

Hi!

1. That's really strange. My question is whether the frame is really in the wrong place, or whether its PTS is just wrong. Decoders usually just pass PTSes through, so either the incoming PTS is wrong (from the Decoder's point of view) or you are feeding the Decoder frames out of encoded order. Double-check the PTSes going into the Decoder. I see that your streams don't have B frames, so no reordering is expected.
2. As far as I remember, the Decoder works with the 4:2:0 format, and you can set the output format yourself (YV12, NV12, ...). See e.g. the SimplePlayer options for this.
Concerning YUV-to-RGB conversion, you'd better ask Google about it. In two words: the luma component in YUV has a range of [16; 235], which maps to [0; 255] when converting to RGB. That range expansion would explain your two readings: (234 - 16) * 255 / 219 ≈ 254, so a player that expands the range reports 254 while one that doesn't reports 234.
3. Don't forget that both the Splitter and the Decoder have their own internal buffers, which are full of data by the time you break the pipeline, so it's like starting a brand-new stream. You should clean both components first so as not to get artifacts later. I don't remember exactly whether the Splitter calls Reset itself when SetTimePosition is called, but you should call Decoder->Reset yourself, because the Decoder knows nothing about the repositioning (see the sketch after this list).
4. If you see that the first frame from the Decoder is really the third frame in StreamEye, it means there's a bug somewhere. Again, I advise you to track the PTSes going into the Decoder. Do they match the PTSes from StreamEye? You may also try SimplePlayer as a reference: it can render your output into a YUV file, and then you just have to compare the streams in a YUV viewer. There should be no lost frames.
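
A minimal sketch of the reposition sequence from point 3 (assuming the Splitter and videoDecoder objects from your listing; whether the Splitter also needs an explicit Reset should be verified against your IPP version):

[cpp]// 1) Drain the decoder's delay buffer so no stale frames survive the seek.
UMC::Status st;
do {
    st = videoDecoder->GetFrame(NULL, &out);
} while (st == UMC::UMC_OK);

// 2) Reposition the splitter, then reset the decoder - it knows nothing
//    about the seek and would otherwise mix old and new data.
Splitter.SetTimePosition(62.0);
videoDecoder->Reset();[/cpp]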

-Sergey
