Hi,
I am using a YOLOv5m custom model trained on the CrowdHuman dataset.
I converted the model to IR format using the Ultralytics export.py script and ran it with detect.py; there I got good results.
I then followed the DL Streamer guide (Yolov5 Model Preparation Example — Intel® Deep Learning Streamer (Intel® DL Streamer) documentation) to convert the same model to IR format and used the model-proc file below. With DL Streamer I get multiple, improper bounding boxes compared to detect.py.
Please find the pipeline and model repo link below.
Using OpenVINO 2022.3.
Model repo: deepakcrk/yolov5-crowdhuman: Head and Person detection using yolov5. Detection from crowd. (github.com)
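For reference, the conversion I followed is roughly: export the PyTorch weights to ONNX with export.py, then convert the ONNX to IR. A rough sketch of the same ONNX-to-IR step with the OpenVINO Python conversion API is shown below (the ONNX file name, input shape, and output path are assumptions, not the exact files on my machine):

# Rough sketch of the ONNX -> IR conversion step using the OpenVINO Python API
# (the mo CLI from the DL Streamer guide does the same job; file names,
# input shape, and output path here are assumptions).
from openvino.tools import mo
from openvino.runtime import serialize

ov_model = mo.convert_model("crowdhuman_yolov5m.onnx", input_shape=[1, 3, 640, 640])
serialize(ov_model, "../models/yolov5_head_openvino_model/FP32/yolov5m.xml")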
MODEL_PATH=../models/yolov5_head_openvino_model/FP32/yolov5m.xml
PROC_PATH=../models/yolov5_head_openvino_model/crowdhuman_yolov5.json
cmd="filesrc location=../videos/NVR_ch18.mp4 ! decodebin \
! videoconvert ! video/x-raw, format=BGR \
! queue ! gvadetect model=$MODEL_PATH model-proc=$PROC_PATH threshold=0.7 device=CPU \
! queue ! gvatrack \
! queue ! gvawatermark ! queue ! videoconvert ! fpsdisplaysink sync=false"
gst-launch-1.0 $cmd
{
    "json_schema_version": "2.2.0",
    "input_preproc": [
        {
            "format": "image",
            "layer_name": "images",
            "params": {
                "resize": "aspect-ratio",
                "color_space": "BGR",
                "reverse_channels": true
            }
        }
    ],
    "output_postproc": [
        {
            "converter": "yolo_v5",
            "output_sigmoid_activation": true,
            "do_cls_softmax": true,
            "iou_threshold": 0.20,
            "bbox_number_on_cell": 3,
            "cells_number": 20,
            "classes": 2,
            "labels": ["person", "head"],
            "anchors": [10.0, 13.0, 16.0, 30.0, 33.0, 23.0, 30.0, 61.0, 62.0, 45.0, 59.0, 119.0, 116.0, 90.0, 156.0, 198.0, 373.0, 326.0],
            "masks": [6, 7, 8, 3, 4, 5, 0, 1, 2]
        }
    ]
}
Result using the DL Streamer pipeline: [screenshot: dlstreamer.png]
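To rule out the IR conversion itself, the converted model can also be run directly with the OpenVINO Python runtime outside DL Streamer and the raw output shapes inspected; a minimal sketch (the test image, letterbox padding value, and /255 normalization are assumptions that depend on how the model was exported):

# Sanity-check the converted IR outside DL Streamer (sketch; image path,
# padding value, and normalization are assumptions).
import cv2
import numpy as np
from openvino.runtime import Core

core = Core()
compiled = core.compile_model(core.read_model("../models/yolov5_head_openvino_model/FP32/yolov5m.xml"), "CPU")

frame = cv2.imread("bus.jpg")
h, w = frame.shape[:2]
scale = 640 / max(h, w)
resized = cv2.resize(frame, (int(round(w * scale)), int(round(h * scale))))
canvas = np.full((640, 640, 3), 114, dtype=np.uint8)   # letterbox-style padding
canvas[:resized.shape[0], :resized.shape[1]] = resized
canvas = cv2.cvtColor(canvas, cv2.COLOR_BGR2RGB)        # detect.py feeds RGB

blob = canvas.transpose(2, 0, 1)[None].astype(np.float32) / 255.0
results = compiled([blob])
for output, value in results.items():
    print(output.get_any_name(), value.shape)           # inspect raw output shapes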
Hi Shekarneo,
Thanks for reaching out to us.
Referring to the output_postproc section of your model-proc file, could you please change "labels": ["person", "head"] to "labels": ["head"] and see if that resolves the issue?
Regards,
Wan
Hi Wan,
If I change the labels to ["head"] while "classes" is still 2, I get the error below.
0:00:02.774746148 1483 0x565550db46f0 WARN kmssink gstkmssink.c:1160:gst_kms_sink_start:<fps-display-video_sink-actual-sink-kms> error: Could not open DRM module (NULL)
0:00:02.774788742 1483 0x565550db46f0 WARN kmssink gstkmssink.c:1160:gst_kms_sink_start:<fps-display-video_sink-actual-sink-kms> error: reason: No such file or directory (2)
0:00:02.774823112 1483 0x565550db46f0 WARN basesink gstbasesink.c:5881:gst_base_sink_change_state:<fps-display-video_sink-actual-sink-kms> error: Failed to start
0:00:02.822407738 1483 0x565550db46f0 WARN ximagepool ximagepool.c:500:gst_x_image_sink_check_xshm_calls: MIT-SHM extension check failed at XShmAttach. Not using shared memory.
0:00:02.858997065 1483 0x565550db46f0 WARN ximagepool ximagepool.c:500:gst_x_image_sink_check_xshm_calls: MIT-SHM extension check failed at XShmAttach. Not using shared memory.
0:00:02.875303764 1483 0x565550db46f0 WARN basesrc gstbasesrc.c:3693:gst_base_src_start_complete:<filesrc0> pad not activated yet
Pipeline is PREROLLING ...
0:00:02.898917888 1483 0x565550d3ab60 WARN qtdemux qtdemux.c:3244:qtdemux_parse_trex:<qtdemux0> failed to find fragment defaults for stream 1
Redistribute latency...
Redistribute latency...
Redistribute latency...
0:00:03.172265025 1483 0x565550d3a6a0 WARN default inference_impl.cpp:474:InferenceImpl:<gvadetect0> Loading model: device=CPU, path=../models/yolov5_head_openvino_model/FP32/yolov5m.xml
0:00:03.172340407 1483 0x565550d3a6a0 WARN default inference_impl.cpp:476:InferenceImpl:<gvadetect0> Initial settings batch_size=0, nireq=0
0:00:03.186059231 1483 0x565550d3a6a0 WARN default model_proc_parser_v2_1.h:50:parseProcessingItem: The 'layer_name' field has not been set. Its value will be defined as ANY
0:00:04.986374130 1483 0x565550d3a6a0 ERROR GVA_common post_processor_c.cpp:22:createPostProcessor: Couldn't create post-processor:
Failed to create PostProcessorImpl
Failed to create "yolo_v5" converter.
Number of classes greater then number of labels.
0:00:04.986463484 1483 0x565550d3a6a0 WARN gva_base_inference gva_base_inference.cpp:796:gva_base_inference_set_caps:<gvadetect0> error: base_inference based element initialization has been failed.
0:00:04.986509957 1483 0x565550d3a6a0 WARN gva_base_inference gva_base_inference.cpp:796:gva_base_inference_set_caps:<gvadetect0> error:
post-processing is NULL.
ERROR: from element /GstPipeline:pipeline0/GstGvaDetect:gvadetect0: base_inference based element initialization has been failed.
Additional debug info:
/home/dlstreamer/dlstreamer/src/monolithic/gst/inference_elements/base/gva_base_inference.cpp(796): gva_base_inference_set_caps (): /GstPipeline:pipeline0/GstGvaDetect:gvadetect0:
post-processing is NULL.
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
0:00:04.987362338 1483 0x565550d3a6a0 WARN basetransform gstbasetransform.c:1379:gst_base_transform_setcaps:<gvadetect0> FAILED to configure incaps video/x-raw, width=(int)1280, height=(int)720, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)25/1, format=(string)BGR and outcaps video/x-raw, width=(int)1280, height=(int)720, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)25/1, format=(string)BGR
0:00:04.989638122 1483 0x565550d3a860 WARN GST_PADS gstpad.c:4361:gst_pad_peer_query:<videoconvert1:src> could not send sticky events
0:00:05.011254426 1483 0x565550d3a6a0 WARN gva_base_inference gva_base_inference.cpp:946:gva_base_inference_transform_ip:<gvadetect0> error: base_inference failed on frame processing
0:00:05.011305041 1483 0x565550d3a6a0 WARN gva_base_inference gva_base_inference.cpp:946:gva_base_inference_transform_ip:<gvadetect0> error:
Failed to submit images to inference
Mapper is null
ERROR: from element /GstPipeline:pipeline0/GstGvaDetect:gvadetect0: base_inference failed on frame processing
Additional debug info:
/home/dlstreamer/dlstreamer/src/monolithic/gst/inference_elements/base/gva_base_inference.cpp(946): gva_base_inference_transform_ip (): /GstPipeline:pipeline0/GstGvaDetect:gvadetect0:
Failed to submit images to inference
Mapper is null
ERROR: pipeline doesn't want to preroll.
0:00:05.026946452 1483 0x565550d3ab60 WARN qtdemux qtdemux.c:6967:gst_qtdemux_loop:<qtdemux0> error: Internal data stream error.
0:00:05.026999576 1483 0x565550d3ab60 WARN qtdemux qtdemux.c:6967:gst_qtdemux_loop:<qtdemux0> error: streaming stopped, reason error (-5)
ERROR: from element /GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstQTDemux:qtdemux0: Internal data stream error.
Additional debug info:
../gst/isomp4/qtdemux.c(6967): gst_qtdemux_loop (): /GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstQTDemux:qtdemux0:
streaming stopped, reason error (-5)
ERROR: pipeline doesn't want to preroll.
Freeing pipeline ...
Hi Shekarneo,
Thanks for the information.
Could you please change the classes to 1 and see if the issue persists?
Regards,
Wan
Hi Wan,
If I change "classes" to 1, I still get an error.
Can you try to reproduce the issue on your side, if possible?
Failed to submit images to inference
Mapper is null
ERROR: from element /GstPipeline:pipeline0/GstGvaDetect:gvadetect0: base_inference failed on frame processing
Additional debug info:
/home/dlstreamer/dlstreamer/src/monolithic/gst/inference_elements/base/gva_base_inference.cpp(946): gva_base_inference_transform_ip (): /GstPipeline:pipeline0/GstGvaDetect:gvadetect0:
Failed to submit images to inference
Mapper is null
ERROR: pipeline doesn't want to preroll.
0:00:03.052007273 151 0x557de98bbb60 WARN qtdemux qtdemux.c:6967:gst_qtdemux_loop:<qtdemux0> error: Internal data stream error.
0:00:03.052070496 151 0x557de98bbb60 WARN qtdemux qtdemux.c:6967:gst_qtdemux_loop:<qtdemux0> error: streaming stopped, reason error (-5)
ERROR: from element /GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstQTDemux:qtdemux0: Internal data stream error.
Additional debug info:
../gst/isomp4/qtdemux.c(6967): gst_qtdemux_loop (): /GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstQTDemux:qtdemux0:
streaming stopped, reason error (-5)
ERROR: pipeline doesn't want to preroll.
Freeing pipeline ...
Hi Shekarneo,
Thanks for the information.
We'll further investigate the issue and we'll update you as soon as possible.
Regards,
Wan
Hi Shekarneo,
Thanks for your patience.
I've run your PyTorch model with detect.py from the GitHub repository, and the inference result was good:
https://github.com/deepakcrk/yolov5-crowdhuman/tree/master
Inference result of the PyTorch model with detect.py: [screenshot]
Before converting your PyTorch model into an Intermediate Representation, I converted it into an ONNX model. Are you able to run the ONNX model with your detect.py? I would like to check the inference result of the ONNX model before we convert it into an Intermediate Representation.
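If detect.py in that repository does not accept ONNX weights directly, a quick alternative check is to run the ONNX model with onnxruntime and inspect the raw outputs; a minimal sketch (file name, input size, and normalization are assumptions):

# Quick ONNX sanity check with onnxruntime (sketch; file name, input size,
# and normalization are assumptions).
import cv2
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("crowdhuman_yolov5m.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

img = cv2.cvtColor(cv2.resize(cv2.imread("data/images/bus.jpg"), (640, 640)), cv2.COLOR_BGR2RGB)
blob = img.transpose(2, 0, 1)[None].astype(np.float32) / 255.0

for meta, out in zip(session.get_outputs(), session.run(None, {input_name: blob})):
    print(meta.name, out.shape)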
Regards,
Wan
Hi Wan,
I am able to run detect.py using the ONNX model. Please find the screenshot below.
python3 detect.py --weights crowdhuman_yolov5m.onnx --source data/images/bus.jpg --imgsz 640
Hi Shekarneo,
Thanks for the information.
I ran the detect.py script with the command: python3.9 detect.py --weights crowdhuman_yolov5m.onnx --source data/images/bus.jpg --imgsz 640. However, I encountered the following error: detect.py: error: unrecognized arguments: --imgsz 640
Next, I ran the detect.py script without --imgsz 640 and encountered the following error: _pickle.UnpicklingError: invalid load key, '\x08'
Could you please share the ONNX model with us so we can replicate the issue on our end?
Regards,
Wan
Hi Wan,
Please find the link to the exported ONNX file below. 71 MB is the maximum attachment size, so I uploaded it to Google Drive.
Hi Shekarneo,
Thanks for the information.
Could you please run the inference with the sample video in the link below and share the result with us?
https://github.com/intel-iot-devkit/sample-videos/blob/master/people-detection.mp4
On the other hand, please share the environment details with us:
Host Operating System
Hardware specifications
Regards,
Wan
Hi Shekarneo,
Thanks for your patience.
I also encountered an accuracy issue when running the object detection pipeline using YOLOv5m with Intel® Deep Learning Streamer Pipeline Framework Release 2023.0.
1. Object Detection using PyTorch model with detect.py
python3 detect.py --weights ../crowdhuman_yolov5m.pt --source ../people-detection.mp4 --device CPU
2. Object detection using Intermediate Representation with Intel® DL Streamer
gst-launch-1.0 filesrc location=people-detection.mp4 ! decodebin force-sw-decoders=true ! queue ! gvadetect model=crowdhuman_yolov5m.xml model-proc=yolo-v5.json inference_interval=1 threshold=0.4 device=CPU ! queue ! gvawatermark ! videoconvert ! autovideosink sync=false
{
    "json_schema_version": "2.2.0",
    "input_preproc": [
        {
            "layer_name": "images",
            "format": "image",
            "params": {"resize": "aspect-ratio", "color_space": "BGR"}
        }
    ],
    "output_postproc": [
        {
            "converter": "yolo_v5",
            "output_sigmoid_activation": true,
            "do_cls_softmax": true,
            "iou_threshold": 0.4,
            "classes": 2,
            "anchors": [10.0, 13.0, 16.0, 30.0, 33.0, 23.0, 30.0, 61.0, 62.0, 45.0, 59.0, 119.0, 116.0, 90.0, 156.0, 198.0, 373.0, 326.0],
            "masks": [6, 7, 8, 3, 4, 5, 0, 1, 2]
        }
    ]
}
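To narrow down whether the accuracy drop comes from the IR conversion itself or from the Pipeline Framework pre/post-processing, the raw outputs of the ONNX model and the IR can be compared on an identical input blob; a rough sketch (file names are assumptions, and it assumes both models expose their outputs in the same order):

# Compare raw ONNX vs IR outputs on the same input blob (sketch; file names
# are assumptions, and matching output order between the two models is assumed).
import numpy as np
import onnxruntime as ort
from openvino.runtime import Core

blob = np.random.rand(1, 3, 640, 640).astype(np.float32)  # parity test only

onnx_session = ort.InferenceSession("crowdhuman_yolov5m.onnx", providers=["CPUExecutionProvider"])
onnx_out = onnx_session.run(None, {onnx_session.get_inputs()[0].name: blob})[0]

core = Core()
compiled = core.compile_model(core.read_model("crowdhuman_yolov5m.xml"), "CPU")
ir_out = list(compiled([blob]).values())[0]

# For an FP32 IR the difference should be negligible; a large gap points at
# the conversion, a small one points at the pipeline pre/post-processing.
print("max abs diff:", float(np.abs(onnx_out - ir_out).max()))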
Let me check with the relevant team and I'll update you as soon as possible.
Regards,
Wan
Hi Wan,
Thank you for the update.
I am running on macOS with an Intel chipset and using OpenVINO 2023.2. The YOLOv5m 80-class model from the Ultralytics repository works fine for me. However, when I use the 2-class model, it produces false results.
Hi Shekarneo,
Let me check with the relevant team and we'll update you at the earliest.
Regards,
Wan
Hi Shekarneo,
Thanks for your patience. We've received feedback from relevant team.
Referring to the Known Issues in Intel® Deep Learning Streamer Pipeline Framework Release 2023.0, we regret to inform you that intermittent accuracy failures with YOLOv5m and YOLOv5s are a known issue.
Our developers are working on resolving it. Please stay tuned for the next Intel® Deep Learning Streamer Pipeline Framework release. Sorry for the inconvenience and thank you for your support.
Regards,
Wan
Hi Wan,
Thanks for the update. I did not face this issue with the pretrained YOLOv5m models from the Ultralytics repo.
I am facing it only with custom-trained YOLOv5m models.
Thanks,
Shekar
Hi Shekarneo,
Thanks for your information.
We encountered the same issue as you when running the object detection pipeline using your custom YOLOv5m model with Intel® Deep Learning Streamer Pipeline Framework Release 2023.0.
We will fix the issue in the future Intel® Deep Learning Streamer Pipeline Framework Releases. We are not sure when the fix will be available, but please stay tuned for the next release. Hope it helps.
Regards,
Wan
Hi Shekarneo,
Thanks for your question.
If you need additional information from Intel, please submit a new question as this thread will no longer be monitored.
Regards,
Wan
