Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer-vision-related on Intel® platforms.

Convert YOLOv3 Model to IR

verma__Ashish
Beginner

Hi,

I have followed this link to train YOLOv3 on Pascal VOC data:

https://github.com/AlexeyAB/darknet#how-to-train-to-detect-your-custom-objects

fine-tuning from the available darknet53.conv.74 weights.

After training I got yolov3.weights. I am trying to convert those weights to TensorFlow using this link:

https://github.com/mystic123/tensorflow-yolo-v3

and this command:

python3 convert_weights_pb.py --class_names coco.names --data_format NHWC --weights_file yolov3.weights

But I am getting this error:

Traceback (most recent call last):
  File "convert_weights_pb.py", line 53, in <module>
    tf.app.run()
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "convert_weights_pb.py", line 43, in main
    load_ops = load_weights(tf.global_variables(scope='detector'), FLAGS.weights_file)
  File "/home/sr/yolo/tensorflow-yolo-v3/utils.py", line 114, in load_weights
    (shape[3], shape[2], shape[0], shape[1]))
ValueError: cannot reshape array of size 14583 into shape (78,256,1,1)
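For reference, a reshape failure like this is usually a class-count mismatch: the convert script builds the graph for the number of classes in --class_names (80 for coco.names), while the weights file was trained for a different number. In the standard YOLOv3 head, each detection conv has (num_classes + 5) * 3 filters, so a quick sanity check looks like this (an editor's sketch, not part of the converter):

```python
def yolo_head_filters(num_classes, anchors_per_scale=3):
    """Filters in each YOLOv3 detection conv:
    (x, y, w, h, objectness + num_classes) per anchor, 3 anchors per scale."""
    return (num_classes + 5) * anchors_per_scale

# COCO's 80 classes give the familiar 255-filter detection layers.
print(yolo_head_filters(80))   # 255
# A single-class custom model uses 18 filters instead.
print(yolo_head_filters(1))    # 18
```

If the custom model was trained on fewer classes than the names file passed to the converter, the weight tensors will not match the graph shapes and the reshape fails.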

Do I have to specify the YOLO cfg file somewhere in the flags, or am I missing something else?

Any help would be appreciated.

Regards

Ashish


54 Replies
Korada__Madhu
Beginner

While generating the IR file I am getting the following error:

Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      D:\working\DW2TF\yolov3-tiny/frozen_yolov3-tiny.pb
        - Path for generated IR:        D:\working\DW2TF\irmodels/YoloV3-tiny/FP16
        - IR output name:       frozen_yolov3-tiny
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         yolov3-tiny/net1
        - Output layers:        yolov3-tiny/convolutional10/BiasAdd,yolov3-tiny/convolutional13/BiasAdd
        - Input shapes:         [1,416,416,3]
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP16
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       False
TensorFlow specific parameters:
        - Input model in text protobuf format:  False
        - Offload unsupported operations:       False
        - Path to model dump for TensorBoard:   None
        - List of shared libraries with TensorFlow custom layers implementation:        None
        - Update the configuration file with input/output node names:   None
        - Use configuration file used to generate the model with Object Detection API:  None
        - Operations to offload:        None
        - Patterns to offload:  None
        - Use the config file:  None
Model Optimizer version:        1.5.12.49d067a0
[ ERROR ]  List of operations that cannot be converted to IE IR:
[ ERROR ]      LeakyRelu (11)
[ ERROR ]          yolov3-tiny/convolutional1/Activation
[ ERROR ]          yolov3-tiny/convolutional2/Activation
[ ERROR ]          yolov3-tiny/convolutional3/Activation
[ ERROR ]          yolov3-tiny/convolutional4/Activation
[ ERROR ]          yolov3-tiny/convolutional5/Activation
[ ERROR ]          yolov3-tiny/convolutional6/Activation
[ ERROR ]          yolov3-tiny/convolutional7/Activation
[ ERROR ]          yolov3-tiny/convolutional8/Activation
[ ERROR ]          yolov3-tiny/convolutional9/Activation
[ ERROR ]          yolov3-tiny/convolutional11/Activation
[ ERROR ]          yolov3-tiny/convolutional12/Activation
[ ERROR ]  Part of the nodes was not translated to IE. Stopped.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #24.

It seems Model Optimizer is not able to convert the LeakyRelu layers in my case, but how were you able to convert the model without any issues?

Did anyone encounter the same problem?
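For context, LeakyRelu is mathematically equivalent to an elementwise maximum, which is one way graph rewrites express it when a dedicated op is unsupported (a general note about the activation, not a confirmed fix for this Model Optimizer version):

```python
import numpy as np

def leaky_relu(x, alpha=0.1):
    """Reference LeakyReLU: pass positives through, scale negatives by alpha."""
    return np.where(x > 0, x, alpha * x)

def leaky_relu_as_max(x, alpha=0.1):
    """Equivalent form using only an elementwise maximum (valid for 0 <= alpha <= 1)."""
    return np.maximum(x, alpha * x)

x = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
assert np.allclose(leaky_relu(x), leaky_relu_as_max(x))
```

Newer OpenVINO releases convert LeakyRelu from TensorFlow graphs directly, so checking the Model Optimizer version is worthwhile before rewriting the graph.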

Ravat__Bhargav
Beginner

Hyodo, Katsuya wrote:

NG: --input_checkpoint=data/yolov3-voc.ckpt.index

OK: --input_checkpoint=data/yolov3-voc.ckpt

I have followed the earlier steps; as output I am getting the following 4 files:

1. checkpoint

2. yolov3_edited.ckpt.data-00000-of-00001

3. yolov3_edited.ckpt.index

4. yolov3_edited.ckpt.meta 

However, I do not see a yolov3_edited.ckpt file. Can you let me know how I can get the .ckpt file?
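As the quoted reply above suggests, there is no single yolov3_edited.ckpt file on disk: TensorFlow saves a checkpoint as a common prefix plus the .data/.index/.meta files, and the restore API takes the prefix string. A small sketch of that mapping, using the filenames listed above:

```python
import re

def ckpt_prefix(filename):
    """Strip TensorFlow checkpoint suffixes to recover the prefix passed to restore()."""
    for suffix in (".meta", ".index"):
        if filename.endswith(suffix):
            return filename[: -len(suffix)]
    return re.sub(r"\.data-\d{5}-of-\d{5}$", "", filename)

files = [
    "yolov3_edited.ckpt.data-00000-of-00001",
    "yolov3_edited.ckpt.index",
    "yolov3_edited.ckpt.meta",
]
# All three files share the prefix "yolov3_edited.ckpt" -- that is the string
# to pass as --input_checkpoint, even though no file has that exact name.
print({ckpt_prefix(f) for f in files})
```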

Ravat__Bhargav
Beginner

@Verma Ashish , @Hyodo, Katsuya, 

Hi,

Now, when I run the darknet detector demo it works fine and I obtain the expected output. However, after converting the weights into IR using Intel OpenVINO, the results are extremely weird. I guess I made some mistake in the conversion.
Step 1: weights to .pb conversion
(using https://github.com/mystic123/tensorflow-yolo-v3 )

python3 /home/paperspace/Desktop/tensorflow-yolo-v3/convert_weights_pb.py --class_names /home/paperspace/Downloads/Dataset/metadata/person.names --data_format NHWC --weights_file /home/paperspace/Downloads/Dataset/metadata/yolo_backup/yolov3_edited_14700.weights --output_graph /home/paperspace/Desktop/yolo_edited_test_14700.pb --size 416

And now .pb to XML:

python3 /opt/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/mo_tf.py --input_model /home/paperspace/Desktop/yolo_edited_test_14700.pb --tensorflow_object_detection_api_pipeline_config /home/paperspace/Downloads/Dataset/metadata/yolov3_edited.cfg --tensorflow_use_custom_operations_config /opt/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/extensions/front/tf/yolo_v3_edited.json --reverse_input_channels --input_shape [1,416,416,3] --data_type FP16 --output_dir /home/paperspace/Desktop/14700/
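One thing worth double-checking in a command like the one above: the JSON passed to --tensorflow_use_custom_operations_config describes the RegionYolo parameters, and its classes attribute must match the custom model (person.names suggests a single class, while the stock file assumes COCO's 80). A minimal sketch of patching a copy, assuming the file layout of that OpenVINO release (check your local yolo_v3.json for the exact fields):

```python
import json

# Minimal stand-in for the stock yolo_v3.json shipped with Model Optimizer
# (an assumption about its layout in this release; verify against your copy).
config = [{
    "id": "TFYOLOV3",
    "match_kind": "general",
    "custom_attributes": {"classes": 80, "coords": 4, "num": 9},
}]

# For a one-class model (e.g. person.names) the classes attribute must match.
config[0]["custom_attributes"]["classes"] = 1

with open("yolo_v3_edited.json", "w") as f:
    json.dump(config, f, indent=2)
```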

Can you suggest a way forward? Here is the output.

verma__Ashish
Beginner

@Ravat Bhargav,

Hi,

Have you changed the darknet model file, or are you using the same number of layers?

Regards

Ashish

Hyodo__Katsuya
Innovator

Please use Ubuntu.

And TensorFlow v1.12.0.

https://github.com/PINTO0309/OpenVINO-YoloV3/issues/15

verma__Ashish
Beginner

@Hyodo, Katsuya

Hi,

Have you tried running YOLO with the NCS2? I am able to run the OpenVINO versions of full YOLO and tiny YOLO on CPU and GPU, but when I try to run on the NCS (MYRIAD), I get an error.

Any help would be appreciated.

Regards

Ashish

Hyodo__Katsuya
Innovator

@verma, Ashish

Please at least describe the error message.

I have no idea what kind of problem you are encountering.

verma__Ashish
Beginner

@Hyodo, Katsuya

I have tried running the OpenVINO object_detection_demo_yolov3_async inference sample using the model given at this link:

https://github.com/PINTO0309/OpenVINO-YoloV3 and I am able to run it on the NCS2 (MYRIAD).

But when I run my custom-trained YOLOv3 model with the same Inference Engine sample, I get:

ERROR: segment exceed given buffer limits. Please validate weights file.

I am running it with this command:

./object_detection_demo_yolov3_async -i cam -m /path/to/model/ -d MYRIAD

Regards

Ashish

Hyodo__Katsuya
Innovator

@verma, Ashish

Please tell me the file sizes of the model you generated (the .bin and .xml). The NCS's internal buffer is only 500 MB, and intermediate data generated internally during inference can be larger than the file size of the .bin.
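A quick way to report those sizes from Python (the filenames here are hypothetical; substitute your own IR paths):

```python
import os

def file_size_mb(path):
    """Return a file's size in megabytes (MiB)."""
    return os.path.getsize(path) / (1024 * 1024)

# Hypothetical IR file locations -- replace with your own .xml/.bin pair.
for f in ("frozen_yolov3-tiny.xml", "frozen_yolov3-tiny.bin"):
    if os.path.exists(f):
        print(f, round(file_size_mb(f), 1), "MB")
```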

verma__Ashish
Beginner

@Hyodo, Katsuya

Sorry for the delay; I got busy with some work.

The .xml file is 33 kB.

The .bin file is 17.4 MB.

Regards

Ashish

Lu_C_Intel
Employee

@Hyodo, Katsuya

I can convert to the IR format successfully by following the instructions listed on the last page step by step.

However, when I run the sample object_detection_demo_yolov3_async, it reports an error like below:

$/opt/intel/openvino/inference_engine/samples/build/intel64/Release$ ./object_detection_demo_yolov3_async -i cam -m /sata/data/DW2TF/FP32/frozen_yolo3.xml
InferenceEngine:
        API version ............ 1.6
        Build .................. custom_releases/2019/R1.1_28dfbfdd28954c4dfd2f94403dd8dfc1f411038b
[ INFO ] Parsing input parameters
[ INFO ] Reading input
[ INFO ] Loading plugin

        API version ............ 1.6
        Build .................. 23780
        Description ....... MKLDNNPlugin
[ INFO ] Loading network files
[ INFO ] Batch size is forced to  1.
[ INFO ] Checking that the inputs are as the demo expects
[ INFO ] Checking that the outputs are as the demo expects
[ INFO ] Loading model to the plugin
[ INFO ] Start inference
To close the application, press 'CTRL+C' or any key with focus on the output window
[ ERROR ] No such parameter name 'num' for layer yolov3/convolutional59/Conv2D

It seems the model does not align with the sample. I can run the demo from https://github.com/PINTO0309/OpenVINO-YoloV3 with the model downloaded from that repo, but the resulting classes are not correct.

Any suggestions for my problem? Thanks very much.

Best Regards,

Lu 

huang__bingcheng
Beginner

I used keras-yolov3 to train on our dataset, converted the .h5 model file to .pb and then to IR, and used the same parameters as OpenVINO-YoloV3. But when I run python3 openvino_yolov3_test.py, my results are very bad. Why?

blob [[[[ 3.02317500e-01 -2.25801952e-02  3.30833942e-02 ...  1.28009886e-01
     1.44932061e-01 -2.51883507e-01]
   [ 4.43242282e-01 -3.33306521e-01 -3.08181614e-01 ...  2.00642720e-01
     1.67894781e-01 -4.04987097e-01]
   [ 2.61393487e-01 -8.63624692e-01 -4.92631435e-01 ...  2.54397631e-01
     3.66868585e-01 -8.74971598e-02]
   ...
   [ 1.50360096e+00 -6.33053923e+00 -6.85847342e-01 ... -1.48463875e-01
    -1.39868855e-01 -2.97371238e-01]
   [ 2.94847536e+00 -7.72444344e+00 -6.55049682e-01 ... -1.51675895e-01
    -4.54718113e-01 -5.37983358e-01]
   [ 2.30355883e+00 -3.71498585e+00 -3.67018521e-01 ... -4.95947674e-02
    -2.39513770e-01 -3.31587523e-01]]

  [[ 5.57280123e-01  6.22004807e-01  6.63405180e-01 ...  1.15434803e-01
     3.88104826e-01  4.57279772e-01]
   [ 5.52652836e-01 -4.07576114e-02 -4.12248224e-01 ... -8.32907557e-01
    -5.95257401e-01 -2.01923922e-01]
   [ 3.68219793e-01 -3.46882105e-01 -6.72265410e-01 ... -8.79552186e-01
    -7.66544998e-01 -2.90897548e-01]
   ...
   [ 7.32840776e-01 -4.02417660e+00 -5.52696848e+00 ...  3.46186221e-01
     4.76295292e-01  1.44927442e-01]
   [-1.44332170e+00 -1.77398658e+00 -3.29320097e+00 ...  2.83997953e-01
     2.39876449e-01  1.20636009e-01]
   [-8.35060787e+00 -1.02383976e+01 -4.66845179e+00 ... -6.34153962e-01
    -8.22160363e-01 -4.03590918e-01]]

  [[-1.79282993e-01  4.06725854e-01  5.77209532e-01 ...  6.45074725e-01
     4.91491795e-01 -6.32852316e-05]
   [-6.86667800e-01  5.29617071e-04  2.05647886e-01 ...  2.73459852e-01
     1.14413090e-01 -4.35639828e-01]
   [-9.26618218e-01 -2.74961531e-01 -1.14081562e-01 ... -2.43551843e-02
    -1.49180934e-01 -6.78820908e-01]
   ...
   [-4.86591399e-01 -1.88698471e-02  1.40680420e+00 ...  1.86087221e-01
     9.24996659e-02 -4.84933913e-01]
   [-1.12545943e+00 -1.21379304e+00 -1.24170363e-01 ...  4.11653399e-01
     2.05088660e-01 -3.69241208e-01]
   [-1.05271912e+00 -6.46932185e-01  4.29875046e-01 ...  5.47537923e-01
     3.17502588e-01 -1.50794417e-01]]

  ...

  [[-1.60614147e+01 -1.67510414e+01 -1.68019581e+01 ... -1.59574451e+01
    -1.62251701e+01 -1.62220707e+01]
   [-1.56929417e+01 -1.51135378e+01 -1.47859612e+01 ... -1.43818264e+01
    -1.47014790e+01 -1.52930326e+01]
   [-1.59485970e+01 -1.41741276e+01 -1.35027084e+01 ... -1.45224056e+01
    -1.52010841e+01 -1.58225384e+01]
   ...
   [-4.08533936e+01 -5.03273964e+01 -3.96339798e+01 ... -1.49356136e+01
    -1.56351147e+01 -1.64504337e+01]
   [-5.18898392e+01 -5.57715149e+01 -3.61811485e+01 ... -1.64879456e+01
    -1.68405285e+01 -1.68291378e+01]
   [-4.95469208e+01 -3.87742233e+01 -2.14143620e+01 ... -1.78146400e+01
    -1.74390373e+01 -1.68003178e+01]]

  [[ 5.93221307e-01  3.56530130e-01 -1.14560358e-01 ... -5.02764821e-01
    -7.26292670e-01 -5.25126696e-01]
   [ 4.64518726e-01  3.17291766e-02 -7.48738527e-01 ... -8.15853238e-01
    -1.22229266e+00 -8.56097043e-01]
   [ 9.93648767e-01  7.56937146e-01 -2.86679696e-02 ...  4.65208650e-01
    -1.82974428e-01 -1.39138237e-01]
   ...
   [ 1.29833202e+01  1.17195053e+01  7.48569012e-01 ... -1.82213783e+00
    -2.02665114e+00 -1.09218109e+00]
   [ 4.65657330e+00  2.60413074e+00 -1.10176754e+00 ... -1.54984689e+00
    -1.82909358e+00 -1.09776747e+00]
   [ 8.63333941e-01 -2.23850155e+00 -2.51050687e+00 ... -7.47357488e-01
    -1.00603712e+00 -5.44654727e-01]]

  [[-6.15586102e-01 -3.92042398e-01  8.61226022e-02 ...  4.68087137e-01
     7.04768300e-01  4.77207989e-01]
   [-4.88374949e-01 -9.10570472e-02  7.03229785e-01 ...  7.52987862e-01
     1.16498268e+00  7.64343798e-01]
   [-9.89402175e-01 -7.71424770e-01  2.16384549e-02 ... -5.27767837e-01
     1.35010183e-01  6.95758760e-02]
   ...
   [-1.25256634e+01 -1.15335865e+01 -7.20562875e-01 ...  1.77409554e+00
     1.97947931e+00  1.02823961e+00]
   [-3.89892626e+00 -2.07335377e+00  1.18565965e+00 ...  1.48811507e+00
     1.77801490e+00  1.02785110e+00]
   [-2.49264002e-01  3.04722738e+00  2.77845216e+00 ...  7.02163815e-01
     9.69598174e-01  4.78352308e-01]]]]

objects
 []
blob [[[[ 3.2482412e-01  2.3373149e-01 -2.3526889e-01 ... -2.0183572e-01
    -2.7742699e-01 -7.0295680e-01]
   [ 5.6607127e-01  3.3292863e-01 -4.1708097e-01 ... -1.7908981e-01
    -3.2350200e-01 -8.9885241e-01]
   [ 3.5849631e-01  5.1075976e-02 -6.5125096e-01 ... -1.1890617e-01
    -2.7575117e-01 -7.3964387e-01]
   ...
   [-1.9423971e+00 -6.0473549e-01  1.1913171e+00 ...  2.9311210e-01
    -1.5897185e-01 -6.2162393e-01]
   [-1.8131250e+00 -5.6214345e-01 -2.2793698e+00 ...  6.6525199e-02
    -3.4338003e-01 -5.7381094e-01]
   [-1.3022811e+00 -2.1834922e+00 -1.4269416e+00 ... -4.3240841e-02
    -2.0526198e-01 -2.8856516e-01]]

  [[ 4.2970824e-01  1.0572000e+00  1.2770895e+00 ...  1.2062249e+00
     9.3962967e-01  4.7305626e-01]
   [ 1.7350902e-01  4.9856827e-01  5.1283163e-01 ...  1.5044251e-01
     1.3996156e-01 -7.4604467e-02]
   [ 6.9700778e-02  3.1952716e-02  4.5912415e-02 ... -2.5338918e-01
    -1.7510137e-01 -2.1491735e-01]
   ...
   [ 3.7505594e-01 -2.9474115e-01 -2.1082096e+00 ... -4.4242561e-01
    -5.1927811e-01 -5.9794837e-01]
   [ 2.2424271e+00  9.3831134e-01 -3.3865857e+00 ... -8.0387658e-01
    -7.1148592e-01 -6.0621250e-01]
   [ 1.2482803e+00 -1.9313731e+00 -4.9653712e-01 ... -1.1798427e+00
    -1.0720817e+00 -8.9570212e-01]]

  [[ 1.5524569e-01  3.3928287e-01  4.1149086e-01 ...  4.3834254e-01
     3.5333878e-01  1.6994642e-01]
   [ 7.9329580e-02  1.4556906e-01  1.5709358e-01 ...  2.3304534e-01
     1.4486325e-01  2.5117721e-02]
   [ 1.6837474e-03  2.4264891e-02  6.3309968e-03 ...  1.2564175e-02
    -6.0821258e-02 -1.1654762e-01]
   ...
   [-3.9361101e-01 -4.5368963e-01 -5.5360746e-01 ... -7.8950703e-02
    -1.8219501e-01 -3.2796162e-01]
   [ 3.2027557e-02  2.7315855e-02 -2.1380696e+00 ... -3.2294914e-02
    -1.5034856e-01 -2.7706450e-01]
   [ 7.5535053e-01  3.9620045e-01 -9.9118137e-01 ...  4.1525364e-02
    -4.6283174e-02 -1.6802971e-01]]

  ...

  [[-1.7873985e+01 -1.8474991e+01 -1.9481554e+01 ... -1.8868465e+01
    -1.7905094e+01 -1.7323845e+01]
   [-1.6735104e+01 -1.8047115e+01 -2.0070080e+01 ... -1.9011175e+01
    -1.6773438e+01 -1.6684298e+01]
   [-1.7453674e+01 -1.8668812e+01 -2.0987602e+01 ... -1.8899429e+01
    -1.6682434e+01 -1.7406630e+01]
   ...
   [-1.9996929e+01 -1.7823313e+01 -1.8161367e+01 ... -1.8418585e+01
    -1.7909933e+01 -1.8248184e+01]
   [-2.1336893e+01 -2.4060944e+01 -2.1337830e+01 ... -1.7861912e+01
    -1.7061895e+01 -1.7229200e+01]
   [-1.9836140e+01 -2.0394693e+01 -1.8417387e+01 ... -1.8927322e+01
    -1.7947588e+01 -1.7351641e+01]]

  [[-4.8663449e-01 -1.4798261e+00 -2.0304441e+00 ... -3.4728539e-01
    -2.6837713e-01  7.2373360e-02]
   [-1.0635546e+00 -2.3269398e+00 -2.9704545e+00 ... -6.5737367e-01
    -6.9362503e-01 -1.9731241e-01]
   [-1.5496202e+00 -3.0324531e+00 -3.7177904e+00 ... -1.1190648e+00
    -1.1807379e+00 -5.6697261e-01]
   ...
   [-4.0522298e-01  1.5526697e-01  1.9732105e+00 ... -2.2136062e-02
    -6.5619844e-01 -5.5834782e-01]
   [-1.1733514e-01  1.6553690e+00 -2.2639129e-01 ... -3.1733364e-01
    -7.1294093e-01 -5.4556715e-01]
   [-2.0026772e+00  2.0416997e-01 -6.8040103e-01 ...  5.1314369e-02
    -6.1034523e-02  8.7745264e-02]]

  [[ 5.3198552e-01  1.5294709e+00  2.0842934e+00 ...  3.9590195e-01
     3.0943632e-01 -4.4679470e-02]
   [ 1.1258658e+00  2.4052107e+00  3.0718639e+00 ...  7.3347294e-01
     7.4535024e-01  2.3304376e-01]
   [ 1.6212786e+00  3.1221616e+00  3.8354783e+00 ...  1.1821100e+00
     1.2283750e+00  6.0335404e-01]
   ...
   [ 2.3815376e-01 -1.8132132e-01 -1.9252982e+00 ...  3.6010019e-02
     6.7450291e-01  5.6172818e-01]
   [-2.8026924e-01 -1.5972958e+00  2.1925490e-01 ...  3.4674919e-01
     7.3900676e-01  5.5151016e-01]
   [ 2.2354407e+00 -3.0814502e-01  4.2977777e-01 ... -4.3068796e-02
     6.7282677e-02 -8.4575377e-02]]]]

objects
 [{'ymax': 671, 'confidence': 18.755718, 'class_id': 1, 'xmin': 485, 'ymin': 183, 'xmax': 830}]

These are my blob values and the confidence of my detected object. A confidence of 18.755 is obviously wrong.
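For what it's worth, raw YOLOv3 output maps encode objectness and class scores as logits; a "confidence" of 18.76 looks like a logit that was never passed through the sigmoid the decoder is supposed to apply (an observation about standard YOLOv3 decoding, not a confirmed diagnosis of this script):

```python
import math

def sigmoid(x):
    """Squash a raw logit into the (0, 1) confidence range YOLOv3 decoding expects."""
    return 1.0 / (1.0 + math.exp(-x))

raw_confidence = 18.755718  # the out-of-range value from the detection above
print(sigmoid(raw_confidence))  # a valid probability, very close to 1
```

If the decoder skips this step (or the IR was exported with a different output layout than the test script expects), confidences and boxes will both come out nonsensical.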
