Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision-related on Intel® platforms.

Convert YoloV3-tiny model with 1-channel input into OpenVINO

Waragai__Katsunori

I am using a customized YoloV3-tiny with 1-channel (grayscale) input images and a different number of classes.

It works in the darknet environment.

I noticed that yolo_v3.py and yolo_v3_tiny.py assume 3-channel image input, so they might not work with models that expect grayscale input.

Am I right?

 

Katsunori
5 Replies
Shubha_R_Intel
Employee

Dear Waragai, Katsunori

yolo_v3.py and yolo_v3_tiny.py are not part of the OpenVINO installation; they are, however, part of the darknet repo. I read through the code as well as the header comments in yolo_v3_tiny.py, and I do not see why it should not handle grayscale: C in this case would simply be 1 instead of 3. A quick Google search does not reveal any recent grayscale issues either. The relevant part of the yolo_v3_tiny.py docstring reads:

    :param inputs: a 4-D tensor of size [batch_size, height, width, channels].
        Dimension batch_size may be undefined. The channel order is RGB.
    :param num_classes: number of predicted classes.
    :param is_training: whether is training or not.
    :param data_format: data format NCHW or NHWC.
    :param reuse: whether or not the network and its variables should be reused.
    :return:
    """

Hope it helps,

Thanks,

Shubha

Waragai__Katsunori

Thank you!

I succeeded in converting the customized YoloV3-tiny with 1-channel (grayscale) input images and a different number of classes.

https://software.intel.com/en-us/forums/computer-vision/topic/821024#comment-1945820

 

Waragai__Katsunori

I have now added a --gray option to the inference script.

Katsunori

----

$ diff --cont object_detection_demo_yolov3_async_org.py object_detection_demo_yolov3_async.py
*** object_detection_demo_yolov3_async_org.py    2019-09-27 20:00:40.000000000 +0900
--- object_detection_demo_yolov3_async.py    2019-09-27 20:04:43.000000000 +0900
***************
*** 55,60 ****
--- 55,62 ----
                        action="store_true")
      args.add_argument("-r", "--raw_output_message", help="Optional. Output inference results raw values showing",
                        default=False, action="store_true")
+     args.add_argument("-g", "--gray", help="Optional. Use grayscaled model",
+                       default=False, action="store_true")
      return parser
  
  
***************
*** 160,165 ****
--- 162,173 ----
          return 0
      return area_of_overlap / area_of_union
  
+ def one_channel_frame(frame):
+     if len(frame.shape) == 3:
+         frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
+     h, w = frame.shape[:2]
+     frame = frame.reshape((h, w, 1))
+     return frame
  
  def main():
      args = build_argparser().parse_args()
***************
*** 218,223 ****
--- 226,233 ----
      # Number of frames in picture is 1 and this will be read in cycle. Sync mode is default value for this case
      if number_input_frames != 1:
          ret, frame = cap.read()
+         if args.gray:
+             frame = one_channel_frame(frame)
      else:
          is_async_mode = False
          wait_key_code = 0
***************
*** 240,247 ****
--- 250,261 ----
          # in the regular mode, we capture frame to the CURRENT infer request
          if is_async_mode:
              ret, next_frame = cap.read()
+             if args.gray:
+                 next_frame = one_channel_frame(next_frame)
          else:
              ret, frame = cap.read()
+             if args.gray:
+                 frame = one_channel_frame(frame)
  
          if not ret:
              break
***************
*** 254,259 ****
--- 268,274 ----
              in_frame = cv2.resize(frame, (w, h))
  
          # resize input_frame to network size
+         in_frame = in_frame.reshape((h, w, c)) 
          in_frame = in_frame.transpose((2, 0, 1))  # Change data layout from HWC to CHW
          in_frame = in_frame.reshape((n, c, h, w))
  
***************
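
For reference, the gist of the patch can be condensed into a single preprocessing helper. The extra reshape before the HWC-to-CHW transpose is needed because cv2.cvtColor(..., cv2.COLOR_BGR2GRAY) and cv2.resize on single-channel data return a 2-D array. This is a sketch only; the function name and default shapes are placeholders.

    # Sketch of the grayscale preprocessing path added by the patch above.
    import cv2

    def preprocess_gray(frame, n=1, c=1, h=416, w=416):
        # Convert a BGR frame to the n x c x h x w blob expected by a 1-channel IR.
        if len(frame.shape) == 3:
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # BGR -> single channel (2-D)
        in_frame = cv2.resize(frame, (w, h))                 # still 2-D after resize
        in_frame = in_frame.reshape((h, w, c))               # restore the channel axis (HWC)
        in_frame = in_frame.transpose((2, 0, 1))             # HWC -> CHW
        return in_frame.reshape((n, c, h, w))                # add the batch dimension

    # Hypothetical usage:
    # blob = preprocess_gray(frame)  # frame read with cap.read() as in the demo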
 

Shubha_R_Intel
Employee

Dear Waragai, Katsunori,

Thank you for sharing with the OpenVINO community. Is your issue fixed now?

Thanks,

Shubha

 

Waragai__Katsunori

Yes, it is fixed now.

 
