Hi,
How can I extract the boxes, class IDs, and scores from the inference results when using the person-vehicle-bike-detection-crossroad-yolov3-1020 Intel pre-trained model?
Consulting object_detection_demo.py, it is not clear how to do that, compared to reading the model from YOLO weights and cfg files and using the `classes, scores, boxes = model.detect(frame, CONFIDENCE_THRESHOLD, NMS_THRESHOLD)` call.
....
net = cv.dnn.readNet("person-vehicle-bike-detection-crossroad-yolov3-1020.xml", "person-vehicle-bike-detection-crossroad-yolov3-1020.bin")
net.setPreferableBackend(cv.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv.dnn.DNN_TARGET_CPU)
...
blob = cv.dnn.blobFromImage(frame, size=(416, 416))
net.setInput(blob)
out = net.forward()
for detection in out.reshape(-1, 7):
    confidence = float(detection[2])
    if confidence > 0.5:
        xmin = int(detection[3] * frame.shape[1])
        ymin = int(detection[4] * frame.shape[0])
        xmax = int(detection[5] * frame.shape[1])
        ymax = int(detection[6] * frame.shape[0])
        cv.rectangle(frame, (xmin, ymin), (xmax, ymax), color=(0, 255, 0))
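Note that seven-element parsing like the above follows the SSD-style DetectionOutput layout. Since person-vehicle-bike-detection-crossroad-yolov3-1020 is YOLOv3-based, its raw outputs may instead be YOLO region blobs whose rows are [cx, cy, w, h, objectness, per-class scores]. A minimal parsing sketch under that assumption (the actual output layout should be verified against the model's documentation):

```python
import numpy as np

def parse_yolo_outputs(outputs, frame_w, frame_h, conf_threshold=0.5):
    """Parse raw YOLO region outputs into (class_ids, scores, boxes).

    Assumes each output reshapes to rows of
    [cx, cy, w, h, objectness, class_0, class_1, ...] with coordinates
    normalized to [0, 1] -- check this against the model's actual layout.
    """
    class_ids, scores, boxes = [], [], []
    for out in outputs:
        for row in out.reshape(-1, out.shape[-1]):
            objectness = float(row[4])
            class_probs = row[5:]
            class_id = int(np.argmax(class_probs))
            score = objectness * float(class_probs[class_id])
            if score < conf_threshold:
                continue
            cx, cy, w, h = row[0:4]
            # Convert centre-size form to top-left (x, y, w, h) in pixels.
            x = int((cx - w / 2) * frame_w)
            y = int((cy - h / 2) * frame_h)
            class_ids.append(class_id)
            scores.append(score)
            boxes.append([x, y, int(w * frame_w), int(h * frame_h)])
    return class_ids, scores, boxes

# Synthetic check: one confident "class 1" detection centred in the frame.
demo = np.array([[0.5, 0.5, 0.2, 0.2, 0.9, 0.1, 0.8, 0.1]], dtype=np.float32)
ids, confs, rects = parse_yolo_outputs([demo], 416, 416, 0.5)
```

In a real pipeline you would still pass boxes and scores to cv.dnn.NMSBoxes afterwards to suppress overlapping detections.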
Hi Igor_F_Intel,
Thank you for reaching out to us.
For your information, by running the Object Detection Python* Demo with person-vehicle-bike-detection-crossroad-yolov3-1020, you can print the raw inference results by using the -r or --raw_output_message option.
You can execute the command as follows:
python object_detection_demo.py -i <path_to_input> -m <path_to_model>\person-vehicle-bike-detection-crossroad-yolov3-1020.xml -at yolo --raw_output_message
Regards,
Wan
Tks Wan for the support.
But what I need is not to use the Python demo script as you described. I would like to get the inference result and extract the bounding boxes, scores, and class IDs to use in my own application.
The demo script you pointed out depends on several other scripts, which is too verbose/complex to embed in my application.
best,
Igor
Hi Igor_F_Intel,
By executing --raw_output_message or -r with the Object Detection Python* Demo, you can extract the bounding boxes, detection scores, and class IDs from the inference results, as shown in the attached picture:
Looking into object_detection_demo.py, Line 213 to Line 224, you can find the code that extracts the bounding boxes, detection scores, and class IDs from the inference results.
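For reference, a minimal sketch of consuming the demo's detections in your own code, using a hypothetical Detection stand-in (the real demo's detection objects carry similar fields; check models.py shipped with the demo for the exact names):

```python
from dataclasses import dataclass

# Stand-in for the demo's detection objects (field names are assumed here,
# not taken from the demo's source).
@dataclass
class Detection:
    xmin: float
    ymin: float
    xmax: float
    ymax: float
    score: float
    id: int

def to_plain_lists(detections, threshold=0.5):
    """Flatten demo detections into plain (class_ids, scores, boxes) lists."""
    class_ids, scores, boxes = [], [], []
    for det in detections:
        if det.score < threshold:
            continue
        class_ids.append(det.id)
        scores.append(det.score)
        boxes.append([int(det.xmin), int(det.ymin),
                      int(det.xmax), int(det.ymax)])
    return class_ids, scores, boxes
```

Calling to_plain_lists(detections) after the demo's inference step would give the same three lists the weights/cfg workflow produces.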
Hope this helps.
Regards,
Wan
Did you try OpenCV's DNN object_detection sample? -> https://github.com/opencv/opencv/blob/master/samples/dnn/object_detection.py
It reads parameters from the `models` file -> https://github.com/opencv/opencv/blob/master/samples/dnn/models.yml
There are no Intel/OpenVINO models in this file, but I assume person-vehicle-bike-detection-crossroad-yolov3-1020 is very similar to YOLOv3.
cv.dnn.readNet
I keep getting the following error when trying to read an OpenVINO IR model:
import cv2 as cv
net = cv.dnn.readNet(XML, BIN)
cv2.error: OpenCV(4.5.2-openvino) /home/pi/opencv/modules/dnn/src/dnn.cpp:3901: error: (-2:Unspecified error) Build OpenCV with Inference Engine to enable loading models from Model Optimizer. in function 'readFromModelOptimizer'
I did install OpenVINO and ran the initialization script on my Raspberry Pi 4B (with MYRIAD) as per the instructions at:
https://www.intel.com/content/www/us/en/support/articles/000057005/boards-and-kits.html
The initialization command is:
source /home/pi/openvino_dist/bin/setupvars.sh
How can I use cv.dnn.readNet without getting this error?
