Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Convert YOLOv3 Model to IR

verma__Ashish
Beginner
6,925 Views

Hi,

I have followed this link to train YOLOv3 using Pascal VOC data

https://github.com/AlexeyAB/darknet#how-to-train-to-detect-your-custom-objects

fine-tuning from the available darknet53.conv.74 weights.

After training I got yolov3.weights. I am trying to convert those weights to TensorFlow using this link

https://github.com/mystic123/tensorflow-yolo-v3

and this command

python3 convert_weights_pb.py --class_names coco.names --data_format NHWC --weights_file yolov3.weights

But I am getting this error

Traceback (most recent call last):
  File "convert_weights_pb.py", line 53, in <module>
    tf.app.run()
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "convert_weights_pb.py", line 43, in main
    load_ops = load_weights(tf.global_variables(scope='detector'), FLAGS.weights_file)
  File "/home/sr/yolo/tensorflow-yolo-v3/utils.py", line 114, in load_weights
    (shape[3], shape[2], shape[0], shape[1]))
ValueError: cannot reshape array of size 14583 into shape (78,256,1,1)

Do I have to specify the yolo cfg file somewhere in the flags, or am I missing something else?
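For reference, this reshape failure is usually a mismatch between the graph the converter builds and the weights being loaded, rather than a missing cfg flag: convert_weights_pb.py builds the stock YOLOv3 architecture for however many classes are listed in --class_names and then streams the darknet weights into it, so it breaks if the class list does not match the one used for training, or if the cfg architecture was modified (there is no cfg flag; the stock layout is assumed). A hedged sketch of the command, assuming a hypothetical voc.names file that lists exactly the classes the model was trained on:

python3 convert_weights_pb.py --class_names voc.names --data_format NHWC --weights_file yolov3.weights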

Any help will be appreciated

Regards

Ashish

 

 

54 Replies
verma__Ashish
Beginner
3,131 Views

A little update -

When I try to convert the same yolo model to TensorFlow using this link

https://github.com/jinyu121/DW2TF

I am able to convert it successfully, and I get three files:

 yolov3-voc.ckpt.index
 yolov3-voc.ckpt.meta
 yolov3-voc.pb

Now, when I try to convert this to IR using this command

 python3 mo_tf.py --input_model yolov3-voc.pb --tensorflow_use_custom_operations_config yolo_v3.json --output_dir /home/ --input_shape=[1,416,416,3] --disable_fusing --disable_gfusing

I am getting following error

 [ ERROR ]  Exception occurred during running replacer "TFYOLOV3" (<class 'extensions.front.tf.YOLO.YoloV3RegionAddon'>): TensorFlow YOLO V3 conversion mechanism was enabled. Entry points "detector/yolo-v3/Reshape, detector/yolo-v3/Reshape_4, detector/yolo-v3/Reshape_8" were provided in the configuration file. Entry points are nodes that feed YOLO Region layers. Node with name detector/yolo-v3/Reshape doesn't exist in the graph. Refer to documentation about converting YOLO models for more information.
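A side note on those entry points: the names detector/yolo-v3/Reshape* come from the mystic123 converter (which builds its graph under the detector scope), while DW2TF writes its nodes under whatever --prefix was given at conversion time, so yolo_v3.json has to be edited to point at nodes that actually exist in this graph (in a DW2TF graph the detection heads typically end in .../BiasAdd rather than Reshape). A rough way to list candidate node names, a sketch assuming TensorFlow 1.x and the yolov3-voc.pb produced above:

python3 -c "import tensorflow as tf; gd = tf.GraphDef(); gd.ParseFromString(open('yolov3-voc.pb', 'rb').read()); [print(n.name) for n in gd.node if 'Reshape' in n.name or 'BiasAdd' in n.name]"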

Any help will be appreciated

Thanks

Ashish

 

 

Hyodo__Katsuya
Innovator
3,131 Views
I will check the operation starting tomorrow. The example below is for tiny-YoloV3, but I succeeded with it. I was inspired by https://github.com/jinyu121/DW2TF.git.

https://github.com/PINTO0309/OpenVINO-YoloV3/wiki/Reference-repository#conversion-success-2
verma__Ashish
Beginner
3,131 Views

Hi,

I just changed my approach a little bit. I am training darknet on a smaller number of classes, without fine-tuning, since I have reduced the number of filters. I have the trained model and I am able to convert it to TensorFlow successfully using

https://github.com/jinyu121/DW2TF

I also found out that the entry-point layers have changed in this model, so I have modified yolo_v3.json based on my entry points and ran this command

python3 mo_tf.py --input_model yolov3-voc.pb --tensorflow_use_custom_operations_config yolo_v3.json --output_dir /home/ --input_shape=[1,416,416,3]

But I am getting this error -

[ ERROR ]  Cannot infer shapes or values for node "yolov3/convolutional12/BatchNorm/gamma".
[ ERROR ]  Attempting to use uninitialized value yolov3/convolutional12/BatchNorm/gamma
     [[Node: _retval_yolov3/convolutional12/BatchNorm/gamma_0_0 = _Retval[T=DT_FLOAT, index=0, _device="/job:localhost/replica:0/task:0/device:CPU:0"](yolov3/convolutional12/BatchNorm/gamma)]]
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <function tf_native_tf_node_infer at 0x7efc0e201d08>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Stopped shape/value propagation at "yolov3/convolutional12/BatchNorm/gamma" node.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.
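As a guess at what the message means: "Attempting to use uninitialized value .../BatchNorm/gamma" suggests the .pb written by DW2TF is a plain GraphDef that still references tf.Variable nodes whose values live only in the checkpoint files, so the Model Optimizer has nothing to evaluate for that node. The graph would need to be frozen first, folding the checkpoint values into constants. A rough sketch of that step, assuming TensorFlow's stock freeze_graph.py and the DW2TF outputs above; the output node names are placeholders to be replaced by the model's final BiasAdd nodes:

python3 freeze_graph.py \
--input_graph=yolov3-voc.pb \
--input_checkpoint=yolov3-voc.ckpt \
--output_graph=frozen_yolov3-voc.pb \
--output_node_names=<final BiasAdd node of each of the three YOLO heads> \
--input_binary=True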

Note: I am able to run the darknet model separately.

Any help will be appreciated

Regards

Ashish

verma__Ashish
Beginner
3,131 Views

Hi,

I am not finding any file named freeze_graph.py at the link you have sent

https://github.com/PINTO0309/OpenVINO-YoloV3/wiki/Reference-repository#c.

 

Regards

Ashish

 

HemanthKum_G_Intel
3,131 Views

freeze_graph.py is provided by TensorFlow. Search in your TF installation folder, or pull it from the following link - https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py
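If it is not obvious where that copy lives, a small sketch for printing the path of the freeze_graph.py bundled with the installed TensorFlow package:

python3 -c "import tensorflow as tf, os; print(os.path.join(os.path.dirname(tf.__file__), 'python', 'tools', 'freeze_graph.py'))"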

 

verma__Ashish
Beginner
3,131 Views

Hi,

I found it, but while following this link

https://github.com/PINTO0309/OpenVINO-YoloV3/wiki/Reference-repository#conversion-success-2

I am getting this error -

   self._build(self._filename, build_save=True, build_restore=True)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 1321, in _build
    raise ValueError("No variables to save")
ValueError: No variables to save

 

Regards

Ashish

Hyodo__Katsuya
Innovator
3,131 Views
Does the following work?

$ git clone https://github.com/jinyu121/DW2TF.git
$ cd DW2TF
$ wget https://pjreddie.com/media/files/yolov3.weights
$ wget https://github.com/tensorflow/tensorflow/raw/master/tensorflow/python/tools/freeze_graph.py
$ cp models/yolov3.cfg .
$ python3 main.py \
--cfg 'yolov3.cfg' \
--weights 'yolov3.weights' \
--output 'data/' \
--prefix 'yolov3/'
$ python3 freeze_graph.py \
--input_graph=data/yolov3.pb \
--input_checkpoint=data/yolov3.ckpt \
--output_graph=data/frozen_yolov3.pb \
--output_node_names=yolov3/convolutional59/BiasAdd,yolov3/convolutional67/BiasAdd,yolov3/convolutional75/BiasAdd \
--input_binary=True
$ cd ~/DW2TF
$ sudo python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py \
--input_model data/frozen_yolov3.pb \
--output_dir . \
--data_type FP16 \
--batch 1 \
--input yolov3/net1 \
--output yolov3/convolutional59/BiasAdd,yolov3/convolutional67/BiasAdd,yolov3/convolutional75/BiasAdd

or

$ cd ~/DW2TF
$ sudo python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py \
--input_model data/frozen_yolov3.pb \
--output_dir . \
--data_type FP32 \
--batch 1 \
--input yolov3/net1 \
--output yolov3/convolutional59/BiasAdd,yolov3/convolutional67/BiasAdd,yolov3/convolutional75/BiasAdd
verma__Ashish
Beginner
3,131 Views

Yeah, it works. Is there some issue with the custom model?

Hyodo__Katsuya
Innovator
3,131 Views
I do not know the structure of your model, so I cannot give accurate advice. But I think you only need to change "yolov3/net1" and "yolov3/convolutional59/BiasAdd,yolov3/convolutional67/BiasAdd,yolov3/convolutional75/BiasAdd" according to your model.
verma__Ashish
Beginner
3,131 Views

No, it's the same. I am also using "yolov3/net1" and "yolov3/convolutional59/BiasAdd, yolov3/convolutional67/BiasAdd" in my json file.

So, while converting it to TensorFlow using

python3 main.py \
--cfg 'yolov3.cfg' \
--weights 'yolov3.weights' \
--output 'data/' \
--prefix 'yolov3/'

I am getting the following output files -

  Feb 27 15:40 yolov3-voc.ckpt.data-00000-of-00001
 Feb 27 15:40 yolov3-voc.ckpt.index
 Feb 27 15:40 yolov3-voc.ckpt.meta
 Feb 27 15:40 yolov3-voc.pb

and while running this command -

python3 freeze_graph.py --input_graph=data/yolov3-voc-reduced-filters.pb --input_checkpoint=data/yolov3-voc.ckpt.index --output_graph=data/frozen-voc-yolov3.pb --output_node_names=yolov3/convolutional59/BiasAdd,yolov3/convolutional67/BiasAdd,yolov3/convolutional75/BiasAdd --input_binary=True

This is the output error message

2019-02-27 15:41:56.706511: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/tools/freeze_graph.py", line 382, in <module>
    run_main()
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/tools/freeze_graph.py", line 379, in run_main
    app.run(main=my_main, argv=[sys.argv[0]] + unparsed)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/tools/freeze_graph.py", line 378, in <lambda>
    my_main = lambda unused_args: main(unused_args, flags)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/tools/freeze_graph.py", line 272, in main
    flags.saved_model_tags, checkpoint_version)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/tools/freeze_graph.py", line 254, in freeze_graph
    checkpoint_version=checkpoint_version)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/tools/freeze_graph.py", line 128, in freeze_graph_with_def_protos
    var_list=var_list, write_version=checkpoint_version)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 1284, in __init__
    self.build()
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 1296, in build
    self._build(self._filename, build_save=True, build_restore=True)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 1321, in _build
    raise ValueError("No variables to save")
ValueError: No variables to save

Any clues? Is there any issue with this file, yolov3-voc.ckpt.index?

Regards

Ashish

Hyodo__Katsuya
Innovator
3,131 Views
NG: --input_checkpoint=data/yolov3-voc.ckpt.index
OK: --input_checkpoint=data/yolov3-voc.ckpt
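In other words, freeze_graph addresses the checkpoint by its prefix and finds the .index and .data-* files from it. A sketch of the corrected call, assuming the same file names and output nodes as in the post above:

python3 freeze_graph.py \
--input_graph=data/yolov3-voc-reduced-filters.pb \
--input_checkpoint=data/yolov3-voc.ckpt \
--output_graph=data/frozen-voc-yolov3.pb \
--output_node_names=yolov3/convolutional59/BiasAdd,yolov3/convolutional67/BiasAdd,yolov3/convolutional75/BiasAdd \
--input_binary=True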
verma__Ashish
Beginner
3,131 Views

Thanks. By using --input_checkpoint=data/yolov3-voc.ckpt

I am able to get the xml and bin files, but when I run them with the Inference Engine object_detection_demo_yolov3_async sample using this command

./object_detection_demo_yolov3_async -i cam -m frozen-yolov3.xml

It's stopping at

    API version ............ 1.4
    Build .................. 19154
[ INFO ] Parsing input parameters
[ INFO ] Reading input
[ INFO ] Loading plugin

    API version ............ 1.5
    Build .................. lnx_20181004
    Description ....... MKLDNNPlugin
[ INFO ] Loading network files
[ INFO ] Batch size is forced to  1.
[ INFO ] Checking that the inputs are as the demo expects
[ INFO ] Checking that the outputs are as the demo expects
[ INFO ] Loading model to the plugin
[ INFO ] Start inference

It's detecting thousands of objects in a single frame. Any clues?

When I run the darknet model using darknet.exe, I get the correct output and bounding boxes.

Regards

Ashish

 

Hyodo__Katsuya
Innovator
3,131 Views
The threshold value in the sample program is too small. Adjust with the "-t" option. For example:

./object_detection_demo_yolov3_async -i cam -m frozen-yolov3.xml -t 0.95 -d CPU
./object_detection_demo_yolov3_async -i cam -m frozen-yolov3.xml -t 0.90 -d CPU
./object_detection_demo_yolov3_async -i cam -m frozen-yolov3.xml -t 0.85 -d CPU
./object_detection_demo_yolov3_async -i cam -m frozen-yolov3.xml -t 0.80 -d CPU
./object_detection_demo_yolov3_async -i cam -m frozen-yolov3.xml -t 0.75 -d CPU
./object_detection_demo_yolov3_async -i cam -m frozen-yolov3.xml -t 0.70 -d CPU
./object_detection_demo_yolov3_async -i cam -m frozen-yolov3.xml -t 0.65 -d CPU
./object_detection_demo_yolov3_async -i cam -m frozen-yolov3.xml -t 0.60 -d CPU
verma__Ashish
Beginner
3,131 Views

I have tried adjusting that as well, but I am still getting the same output.

I ran this command -

./object_detection_demo_yolov3_async -i cam -m frozen-yolov3.xml -t 0.95 -d CPU

I got this output -


InferenceEngine:
    API version ............ 1.4
    Build .................. 19154
[ INFO ] Parsing input parameters
[ INFO ] Reading input
[ INFO ] Loading plugin

    API version ............ 1.5
    Build .................. lnx_20181004
    Description ....... MKLDNNPlugin
[ INFO ] Loading network files
[ INFO ] Batch size is forced to  1.
[ INFO ] Checking that the inputs are as the demo expects
[ INFO ] Checking that the outputs are as the demo expects
[ INFO ] Loading model to the plugin
[ INFO ] Start inference

and then it just waits here. When I debugged, I found it's detecting around 53,000 objects in a single frame; that's why it is taking so long to display results.

 

Hyodo__Katsuya
Innovator
3,131 Views
It may be better to customize the Darknet sample program. I have been working on a Python conversion of the Darknet program since yesterday. I am trying to isolate whether it is a problem with the converted model or a compatibility problem with the sample program. Since I am working on multiple projects at the same time by myself, it will probably take a few days.

I am concerned about the following warning displayed during model conversion:

/opt/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/mo/middle/passes/fusing/decomposition.py:65: RuntimeWarning: invalid value encountered in sqrt
  scale = 1. / np.sqrt(variance.value + eps)
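For what it is worth, that warning means variance.value + eps was negative for at least one BatchNorm node, which should not happen with properly trained weights; it usually points to the weights having been mapped onto the wrong layer shapes (for example a cfg/weights mismatch after editing the cfg). A rough way to check the DW2TF checkpoint for negative variances directly, a sketch assuming TensorFlow 1.x and the checkpoint path used earlier:

python3 -c "import tensorflow as tf, numpy as np; r = tf.train.NewCheckpointReader('data/yolov3-voc.ckpt'); neg = [k for k in r.get_variable_to_shape_map() if 'variance' in k and np.any(r.get_tensor(k) < 0)]; print(neg if neg else 'no negative variance values found')"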
verma__Ashish
Beginner
3,131 Views

Case 1 -> If I directly use yolov3.cfg, train on the custom dataset, and convert to OpenVINO, there is no problem; it detects the correct output.

Case 2 -> When I tweak yolov3.cfg by reducing layers or filters and then follow the same path, I get this issue of too many objects being detected in a frame while running inference.

Can OpenVINO only be used for the standard yolov3 model, or am I missing something else?

Thanks

Hyodo__Katsuya
Innovator
3,131 Views
Case 2 -> When I tweak yolov3.cfg by reducing layers or filters and then follow the same path, I get this issue of too many objects being detected in a frame while running inference.

It is a very interesting result. Is there a change in the result even if "-iou_t 0.2" is specified?

./object_detection_demo_yolov3_async -i cam -m frozen-yolov3.xml -t 0.95 -d CPU -iou_t 0.2
verma__Ashish
Beginner
3,131 Views

Yeah, I am getting this warning while converting to xml and bin:

/opt/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/mo/middle/passes/fusing/decomposition.py:65: RuntimeWarning: invalid value encountered in sqrt
scale = 1. / np.sqrt(variance.value + eps)

No, there is no change even if I specify "-iou_t 0.2".

Regards

Ashish

Hyodo__Katsuya
Innovator
3,131 Views
It may be that the operation overflows while converting the model. If the following command is executed, will an overflow warning be displayed?

$ sudo python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py \
--input_model data/frozen_yolov3.pb \
--output_dir . \
--data_type FP32 \
--batch 1 \
--input yolov3/net1 \
--output yolov3/convolutional59/BiasAdd,yolov3/convolutional67/BiasAdd,yolov3/convolutional75/BiasAdd \
--log_level WARNING

or

$ sudo python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py \
--input_model data/frozen_yolov3.pb \
--output_dir . \
--data_type FP32 \
--batch 1 \
--input yolov3/net1 \
--output yolov3/convolutional59/BiasAdd,yolov3/convolutional67/BiasAdd,yolov3/convolutional75/BiasAdd \
--log_level DEBUG
verma__Ashish
Beginner
2,498 Views

I ran this -

sudo python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py \
--input_model data/frozen_yolov3.pb \
--output_dir . \
--data_type FP32 \
--batch 1 \
--input yolov3/net1 \
--output yolov3/convolutional59/BiasAdd,yolov3/convolutional67/BiasAdd,yolov3/convolutional75/BiasAdd \
--log_level WARNING

Yes, I am getting the overflow warning:

/opt/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/mo/middle/passes/fusing/decomposition.py:65: RuntimeWarning: invalid value encountered in sqrt
  scale = 1. / np.sqrt(variance.value + eps)

[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: /home/sr/yolo/DW2TF/data/./frozen-voc-reduced-graph-yolov3.xml
[ SUCCESS ] BIN file: /home/sr/yolo/DW2TF/data/./frozen-voc-reduced-graph-yolov3.bin
[ SUCCESS ] Total execution time: 12.76 seconds.
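Given that warning, the generated IR most likely contains NaN scale values from the BatchNorm decomposition, which would also explain the flood of boxes at inference time. A crude sanity check on the FP32 IR produced above, a sketch assuming the .bin file is simply the raw float32 weight blob:

python3 -c "import numpy as np; w = np.fromfile('/home/sr/yolo/DW2TF/data/frozen-voc-reduced-graph-yolov3.bin', dtype=np.float32); print('NaN weights:', int(np.isnan(w).sum()), 'of', w.size)"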
