Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision-related on Intel® platforms.

Multi Camera multi person - missing camera frames?

Duggy
New Contributor I

Hi,

 

I am somewhat confused and not 100% sure how to "debug" this.

We are running multi-camera/multi-target (person) tracking and showing the video live. The camera feed (RTSP) overlays the time with seconds. What we are seeing is jumpy, jerky video: the timestamp can jump from 13:20:25 to 13:20:30 in a single frame, and the fps (or average_fps) prints out as under 1, sometimes even 0.3. Boxes are drawn around people in the view to show they have been detected. The jump doesn't happen on every frame, but a person can be in the bottom-right of the screen in one frame and in the top-left in the next, i.e. they cross the entire view in what seems like one frame.

My question is: how do I know that all the frames between 13:20:25 and 13:20:30 have been processed as well? How do I know that 5 seconds of processing (for example) have not taken place, and that once the processing (which seems to run in threads) is complete it then tries to get the next image from VideoCapture, meaning it missed or skipped 5 seconds' worth of footage and processing while it was busy doing the background work?

OR has it actually skipped the 5 seconds of video, meaning I need to look at a queue-based solution to ensure each frame is passed to the system and nothing is missed?
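For reference, the queue idea mentioned above can be sketched as follows. This is a hypothetical pattern, not the demo's actual code: a dedicated reader thread pushes every frame into a `queue.Queue` so the slower processing loop drains it in order and nothing is dropped (at the cost of the queue growing if processing cannot keep up). The `source` here is any iterable of frames; in practice it would be a loop over `cv2.VideoCapture.read()`.

```python
import queue
import threading

def read_frames(source, frame_queue):
    """Reader thread: push every frame (with its index) onto the queue."""
    for idx, frame in enumerate(source):
        frame_queue.put((idx, frame))
    frame_queue.put(None)  # sentinel marking end of stream

def process_all(source, process):
    """Drain the queue, processing every frame in order; return processed indices."""
    frame_queue = queue.Queue()
    reader = threading.Thread(target=read_frames,
                              args=(source, frame_queue), daemon=True)
    reader.start()
    processed = []
    while True:
        item = frame_queue.get()
        if item is None:
            break
        idx, frame = item
        process(frame)          # slow inference would go here
        processed.append(idx)
    reader.join()
    return processed
```

With this structure no frame is lost; the trade-off is that the displayed output falls further behind real time the slower `process` is.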

 

Much appreciated.

 

4 Replies
Hairul_Intel
Moderator

Hi Duggy,

Thank you for reaching out to us.

 

I have validated the Multi Camera Multi Target Python Demo using an IP camera. For your information, I used the IP Webcam app with the RTSP protocol when running the demo.

 

Here is the command that I used:

python multi_camera_multi_target_tracking_demo.py -i rtsp://username:password@ip_address:port/h264_ulaw.sdp -m intel\person-detection-retail-0013\FP16\person-detection-retail-0013.xml --m_reid intel\person-reidentification-retail-0277\FP16\person-reidentification-retail-0277.xml --config configs\person.py

 

From my side, I observed a different issue - the output video playback was lagging.

 

To debug this issue, I would suggest you try using the RTSP camera in VLC Media Player and observe the performance of your camera before running the demo.

 

RTSP is expected to have latency issues with OpenCV in Python. You can refer to this Stack Overflow discussion regarding RTSP latency. To minimize the latency, you can set your RTSP data rate to a higher value and reduce your IP camera resolution.
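A common workaround for RTSP latency, sketched below under the assumption that you care more about real-time display than about keeping every frame: instead of queueing frames, keep only the newest one, so the display loop always shows the most recent frame. Note this is the opposite trade-off to a queue — frames are dropped by design. The `LatestFrame` class is a hypothetical helper, not part of OpenCV or the demo.

```python
import threading

class LatestFrame:
    """Thread-safe holder that keeps only the most recent frame.

    A capture thread calls put() for every frame it reads; the display
    loop calls get() and always sees the newest frame, so latency stays
    low but intermediate frames are intentionally discarded.
    """

    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None

    def put(self, frame):
        with self._lock:
            self._frame = frame  # overwrite: older frames are dropped

    def get(self):
        with self._lock:
            return self._frame
```

Whether you want this or a lossless queue depends on the original question: for live monitoring, latest-frame wins; for processing every frame, use a queue.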

 

On another note, what is the command that you used to run the Multi Camera Multi Target Python Demo?

 

 

Regards,

Hairul


Duggy
New Contributor I

Hi,

 

The issue is less about how it looks visually when it runs in OpenCV imshow. From a visual point of view, it's fine if it shows 13:10:20 and then skips to 13:10:25. The question is more: in the background, is it processing all the frames between 13:10:20 and 13:10:25, or is it skipping those frames as well? If it is skipping frames in the processing because it cannot keep up, then that 5 seconds of data is also lost.

 

I am less concerned about not seeing the 5 seconds of visual frames in between, and more concerned about whether the system is actually processing them or skipping forward. That is what I am trying to understand.

 

I am using runtime parameters similar to yours.

 

Much appreciated.

Hairul_Intel
Moderator

Hi Duggy,

I'd recommend you save the processed results to verify whether the "skipped frames" you mention occur during processing.

 

You can use the following arguments in the command line:

  • --output_video - save the processed results to a video file.
  • --history_file - save tracking history into a JSON file.
  • --save_detections - save bounding boxes into a JSON file.

 

The JSON files will show each processed frame for the demo. From there, you can evaluate whether the demo is processing every frame or skipping forward.
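One way to make that evaluation mechanical, assuming you can extract a per-frame index from the saved JSON (the exact schema of the demo's output files may differ, so treat this as a sketch): collect the frame numbers that appear in the file and look for gaps in the sequence. Any gap larger than one frame would indicate frames that were never processed.

```python
def find_frame_gaps(frame_indices):
    """Given the frame numbers found in the saved JSON, return a list of
    (previous_frame, next_frame) pairs between which frames are missing.
    An empty result means every frame in the range was processed."""
    ordered = sorted(set(frame_indices))
    return [(a, b) for a, b in zip(ordered, ordered[1:]) if b - a > 1]
```

Run this over the indices from the detections file: if it returns an empty list, the demo processed every frame even though the display looked jumpy.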

 

Here is the command that I used:

python multi_camera_multi_target_tracking_demo.py -i rtsp://username:password@ipaddress:port/h264_ulaw.sdp -m intel\person-detection-retail-0013\FP16\person-detection-retail-0013.xml --m_reid intel\person-reidentification-retail-0277\FP16\person-reidentification-retail-0277.xml --config configs\person.py --output_video testrtsp.avi --history_file histtestrtsp.json --save_detections dettestrtsp.json

 

From my end, the saved history JSON file (histtestrtsp.json) shows each detection id and the frame of the video in which it was processed. Here, "id0" was first detected and processed on frame 337 of the video.

history.png

 

I could clearly see the frames that were processed, with their confidence scores and bounding boxes, in the saved detections JSON file (dettestrtsp.json).

detect.png

 

If no person was detected, the demo records no detection for that frame. For example, take frame 336, which was just prior to the first detection, and observe the confidence score and bounding box in the detections JSON file (dettestrtsp.json): the "scores" and "boxes" entries are empty because the demo did not detect a person on that frame.

not_detect.png

 

As I've mentioned previously, I did not encounter any skipped frames. On my side, the visualized output was lagging, but the saved output file and JSON files indicate that the demo was able to process every frame of the RTSP feed.

 

 

Regards,

Hairul

 

Hairul_Intel
Moderator

Hi Duggy,

This thread will no longer be monitored since we have provided a solution. If you need any additional information from Intel, please submit a new question.

 

 

Regards,

Hairul

