Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Unable to get detection box data from a model converted from Darknet YOLOv3

HirokiTKN
Beginner

Hi all,

I'm trying to run my own model on an NCS2 with a Raspberry Pi 4 and OpenVINO, but it's not working.

I trained my own Darknet YOLOv3 model and converted it to .xml and .bin. When I run inference with "out = exec_net.infer(inputs={'inputs': img})", "out" contains the tensors "detector/yolo-v3/Conv_14/BiasAdd/YoloRegion", "detector/yolo-v3/Conv_22/BiasAdd/YoloRegion", and "detector/yolo-v3/Conv_6/BiasAdd/YoloRegion", with shapes (1, 27, 26, 26), (1, 27, 52, 52), and (1, 27, 13, 13) respectively.
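As far as I understand, the 27 channels correspond to 3 anchors × (4 box coordinates + 1 objectness score + 4 class scores) for my 4-class model. A minimal sketch of how one of these blobs splits up (the variable names here are just illustrative):

import numpy as np

# One YoloRegion output at the 13x13 scale, e.g. shape (1, 27, 13, 13)
blob = np.zeros((1, 27, 13, 13), dtype=np.float32)  # placeholder for the real output

num_anchors = 3
num_classes = 4
bbox_size = 4 + 1 + num_classes  # x, y, w, h, objectness + class scores = 9

# (1, 27, 13, 13) -> (3, 9, 13, 13): one 9-vector per anchor per grid cell
per_anchor = blob.reshape(num_anchors, bbox_size, 13, 13)
xywh = per_anchor[:, 0:4]         # raw box coordinates
objectness = per_anchor[:, 4]     # box confidence
class_scores = per_anchor[:, 5:]  # per-class scores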

From this page (https://docs.openvinotoolkit.org/latest/omz_models_public_yolo_v3_tf_yolo_v3_tf.html), I can see that this "out" is as expected, but I would like to get the detection box data from it.

How can I get data about detection boxes?

 

My settings--------------------------------

NCS2 + Raspberry Pi 4

OpenVINO 2020.3

Python 3.7.3

---------------------------------------------

 

Please let me know if there is any other information you need.

I'm not very good at English, but I'm looking forward to working with you.

 

Regards

RandallMan_B_Intel

Hi HirokiTKN,


Thanks for reaching out. Please share more information about your Darknet YOLOv3 model, the Model Optimizer (MO) command used to convert it to Intermediate Representation (IR), and the command you use to run it. If possible, please share the files so we can reproduce your issue on our end.


Regards,

Randall


HirokiTKN
Beginner

@RandallMan_B_Intel 

Hi Randall,

 

Thank you for your reply. 

My Darknet YOLOv3 model detects four classes, with an input shape of [416, 416].

I converted my Darknet YOLOv3 model to a frozen TensorFlow model (.pb) with the following command, as described on the page below.

python3 convert_weights_pb.py --class_names obj.names --data_format NHWC --weights_file yolov3_custom_last.weights

Page: https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_YOLO_From_Tensorflow.html

 

To convert it to an IR model, I ran the following command, using yolo_v3.json with "classes" changed from 80 to 4.

python3 C:\IntelSWTools\openvino\deployment_tools\model_optimizer\mo_tf.py ^
--input_model C:\home\tensorflow-yolo-v3\frozen_darknet_yolov3_model.pb ^
--transformations_config C:\IntelSWTools\openvino\deployment_tools\model_optimizer\extensions\front\tf\yolo_v3.json ^
--input_shape [1,416,416,3] ^
--data_type=FP16 ^
--model_name LED_test ^
--output_dir C:\home\tensorflow-yolo-v3\FP16
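
For reference, after the edit the yolo_v3.json should look roughly like this (a sketch based on the stock file that ships with Model Optimizer, with only "classes" changed from 80 to 4; the anchors and entry points are unchanged from that file):

[
  {
    "id": "TFYOLOV3",
    "match_kind": "general",
    "custom_attributes": {
      "classes": 4,
      "anchors": [10, 13, 16, 30, 33, 23, 30, 61, 62, 45, 59, 119, 116, 90, 156, 198, 373, 326],
      "coords": 4,
      "num": 9,
      "masks": [[6, 7, 8], [3, 4, 5], [0, 1, 2]],
      "entry_points": ["detector/yolo-v3/Reshape", "detector/yolo-v3/Reshape_4", "detector/yolo-v3/Reshape_8"]
    }
  }
]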

 

When I ran the following Python program with the converted IR model, I was unable to obtain any bounding box information.

import cv2
import numpy as np

# Load modules
import sys  # import sys to extend the module search path
sys.path.append('/opt/intel/openvino/python/python3.5/armv7l')
from openvino.inference_engine import IENetwork, IEPlugin

# Specify the target device
plugin = IEPlugin(device="MYRIAD")

# Load the model
net = IENetwork(model='/home/pi/workspace/FP16/LED_test.xml', weights='/home/pi/workspace/FP16/LED_test.bin')
exec_net = plugin.load(network=net)

# Prepare the camera
cap = cv2.VideoCapture(0)

# Main loop
while True:
    ret, frame = cap.read()

    # Retry on error
    if ret == False:
        continue

    # Convert to the network's input format
    img = cv2.resize(frame, (416, 416))   # resize
    img = img.transpose((2, 0, 1))        # HWC -> CHW
    img = np.expand_dims(img, axis=0)     # add batch dimension

    # Run inference
    out = exec_net.infer(inputs={'inputs': img})

    # Extract only the required data from the output
    print(out)
    out1 = out['detector/yolo-v3/Conv_14/BiasAdd/YoloRegion']
    print(out1.shape)
    out2 = out['detector/yolo-v3/Conv_22/BiasAdd/YoloRegion']
    print(out2.shape)
    out3 = out['detector/yolo-v3/Conv_6/BiasAdd/YoloRegion']
    print(out3.shape)
    out = np.squeeze(out)  # remove all size-1 dimensions

    cv2.imshow('frame', frame)
    key = cv2.waitKey(1)
    if key != -1:
        break

    # Process each detected region one by one
    for detection in out:
        # Get the confidence value
        confidence = float(detection[2])

        # Scale the bounding box coordinates to the input image
        xmin = int(detection[3] * frame.shape[1])
        ymin = int(detection[4] * frame.shape[0])
        xmax = int(detection[5] * frame.shape[1])
        ymax = int(detection[6] * frame.shape[0])

        # Draw the bounding box only when confidence is above 0.5
        if confidence > 0.5:
            cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), color=(240, 180, 0), thickness=3)

    # Show the image
    cv2.imshow('frame', frame)

    # Exit when any key is pressed
    key = cv2.waitKey(1)
    if key != -1:
        break

# Cleanup
cap.release()
cv2.destroyAllWindows()

I'm going to extract the bounding boxes and do some image processing on them, so I need the coordinates and confidence values of the boxes. Could you tell me how to get the bounding box information?

The files are stored in this google drive. (https://drive.google.com/drive/folders/1rk_fm8JmNgkwXwUFSUkzlYrxoDKU9yIm?usp=sharing)

 

If you need more information, please let me know.

 

Best regards,

HirokiTKN

Sahira_Intel
Moderator

Hi Hiroki,

If you need the coordinates and confidence values of the bounding boxes, the code snippet you attached above extracts those values in this for loop:

for detection in out:
    # Get the confidence value
    confidence = float(detection[2])
    xmin = int(detection[3] * frame.shape[1])
    ymin = int(detection[4] * frame.shape[0])
    xmax = int(detection[5] * frame.shape[1])
    ymax = int(detection[6] * frame.shape[0])
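
Note, however, that those detection[2..6] indices correspond to models that end in an SSD-style DetectionOutput blob. YoloRegion outputs are raw grid predictions, so each of the three blobs must be decoded per grid cell and per anchor first. Below is a minimal sketch of that decoding, modeled on the Open Model Zoo YOLOv3 demo (object_detection_demo_yolov3_async.py). The anchors in the example call are the stock YOLOv3 values for the 13x13 scale, and the class count of 4 is assumed from your model; please check the RegionYolo parameters in your .xml file:

import numpy as np

def parse_yolo_region(blob, anchors, num_classes=4, threshold=0.5, input_size=416):
    # Decode one YoloRegion blob of shape (1, 27, side, side) into boxes.
    # Returns (xmin, ymin, xmax, ymax, confidence, class_id) tuples, with
    # coordinates normalized to [0, 1] relative to the network input.
    boxes = []
    bbox_size = 4 + 1 + num_classes              # x, y, w, h, objectness + classes
    side = blob.shape[2]                         # 13, 26 or 52
    predictions = blob.reshape(len(anchors) // 2, bbox_size, side, side)

    for n in range(len(anchors) // 2):           # anchor index
        for row in range(side):
            for col in range(side):
                bbox = predictions[n, :, row, col]
                x, y, w, h, objectness = bbox[:5]
                if objectness < threshold:
                    continue
                # x, y and objectness already have sigmoid applied by the
                # RegionYolo layer; w and h still need exp() and anchor scaling
                x = (col + x) / side
                y = (row + y) / side
                w = np.exp(w) * anchors[2 * n] / input_size
                h = np.exp(h) * anchors[2 * n + 1] / input_size
                class_id = int(np.argmax(bbox[5:]))
                confidence = float(objectness * bbox[5 + class_id])
                boxes.append((x - w / 2, y - h / 2,
                              x + w / 2, y + h / 2, confidence, class_id))
    return boxes

# Example: decode the 13x13 blob with its anchor pairs (stock YOLOv3 values)
boxes = parse_yolo_region(out3, anchors=[116, 90, 156, 198, 373, 326])

The boxes from all three scales are then concatenated, filtered with non-maximum suppression, and scaled to the frame size. The complete, tested parsing code is in the demo mentioned above, so adapting it is likely easier than rewriting it from scratch.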

 

If I have misunderstood your question, please let me know.

Best Regards,

Sahira 

Sahira_Intel
Moderator

Hi Hiroki,

Since we have not heard back, this thread will no longer be monitored. If you need any additional information from Intel, please submit a new question.

Thank you,

Sahira 

 

RandallMan_B_Intel

Hi HirokiTKN,


Thanks for your reply. We will look into your issue and get back to you as soon as possible.


Regards,

Randall.

