I tried to convert the YOLOv3 model with OpenVINO R4, just following the official instructions. It did not work; it shows this error:
List of operations that cannot be converted to IE IR
ERROR: Exp(3)
ERROR: detector/yolo-v3/Exp
ERROR: detector/yolo-v3/Exp_1
ERROR: detector/yolo-v3/Exp_2
part of the nodes was not translated to IE
Did anyone convert YOLOv3 successfully?
Hi Fucheng,
Can you please provide the Model Optimizer command you used to get the results?
By the way, there is in-package documentation in the R4 release on how to convert YOLO V3: file:///opt/intel/computer_vision_sdk_2018.4.420/deployment_tools/documentation/docs/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_YOLO_From_Tensorflow.html (on Windows, change "opt/intel" to "c:/intel").
Kind Regards,
Monique Jones
Thanks for your reply. I first converted the YOLOv3 model downloaded from the official website to a TensorFlow model using the code from "https://github.com/mystic123/tensorflow-yolo-v3.git". The converted TensorFlow model was OK. Then I modified yolo_v3.json and ran the command
python3 mo_tf.py --input_model /mypath/to/yolo_v3.pb --tensorflow_use_custom_operations_config $MO_ROOT/extensions/front/tf/yolo_v3.json
That is where the error occurred.
My OS is Ubuntu 16.04 and the computer is a NUC5i5RYH.
Oh, by the way: I also specified input_shape=[1,416,416,3] when I converted it to TensorFlow. I found it would fail if the input shape was [None,416,416,3], as in the original code from "https://github.com/mystic123/tensorflow-yolo-v3.git".
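For illustration, the difference boils down to how the input placeholder's batch dimension is defined when the graph is frozen. A minimal TensorFlow 1.x sketch of the idea (not the actual code from tensorflow-yolo-v3; the placeholder names and shapes here are just assumptions):

import tensorflow as tf  # TF 1.x style API, as used by the conversion script

size = 416

# Batch dimension left undefined (None): the frozen graph then carries an
# undefined batch, and Model Optimizer needs --input_shape [1,416,416,3]
# to resolve it.
inputs_dynamic = tf.placeholder(tf.float32, [None, size, size, 3], name='inputs_dynamic')

# Batch dimension pinned to 1: the frozen graph has a fully defined
# input shape from the start. (In a real conversion only one input
# placeholder would exist; both are shown here only for comparison.)
inputs_fixed = tf.placeholder(tf.float32, [1, size, size, 3], name='inputs_fixed')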
Could it be a problem with my hardware? But I tried a Caffe model and it worked fine.
I don't think there is a problem with your hardware. Have you managed to run the samples successfully? If not, there may be an OpenVINO installation issue; you may want to try an SDK re-install.
Just to clarify, do you get the error when you run the Model Optimizer command (python3 mo_tf.py) or during inference (when you run object_detection_demo_yolov3_async)?
Could you copy the complete command and output that shows the error and attach it here?
I get the error when I run the Model Optimizer command (python3 mo_tf.py), not during inference. The installation is OK because I have tried the samples as well as my own Caffe model.
Thanks, I will try it later and post any new information.
Hi Jones and Nikos, I have tried again, both on the NUC5i5RYH (5th-gen i5) and on another laptop (8th-gen i5). The problem is the same. The command and the error are as follows:
python3 mo_tf.py --input_model ~/tensorflow-yolo-v3/yolo_v3.pb --tensorflow_use_custom_operations_config ./extensions/front/tf/yolov3.json --input_shape [1,416,416,3]
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /home/extremevision/tensorflow-yolo-v3/yolo_v3.pb
- Path for generated IR: /opt/intel/computer_vision_sdk_2018.4.420/deployment_tools/model_optimizer/.
- IR output name: yolo_v3
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: [1,416,416,3]
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Offload unsupported operations: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Operations to offload: None
- Patterns to offload: None
- Use the config file: /opt/intel/computer_vision_sdk_2018.4.420/deployment_tools/model_optimizer/./extensions/front/tf/yolov3.json
Model Optimizer version: 1.4.292.6ef7232d
[ ERROR ] List of operations that cannot be converted to IE IR:
[ ERROR ] Exp (3)
[ ERROR ] detector/yolo-v3/Exp
[ ERROR ] detector/yolo-v3/Exp_1
[ ERROR ] detector/yolo-v3/Exp_2
[ ERROR ] Part of the nodes was not translated to IE. Stopped.
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #24.
I followed the instructions exactly. My Ubuntu is 16.04 and I did not run install_4_14_kernel.sh. The only thing I'm not sure about is the yolov3.json; I tried:
[
  {
    "id": "TFYOLOV3",
    "match_kind": "general",
    "custom_attributes": {
      "classes": 80,
      "coords": 4,
      "num": 9,
      "mask": [0, 1, 2],
      "entry_points": ["detector/yolo-v3/detect_1", "detector/yolo-v3/detect_2", "detector/yolo-v3/detect_3"]
    }
  }
]
and also:
[
  {
    "id": "TFYOLOV3",
    "match_kind": "general",
    "custom_attributes": {
      "classes": 80,
      "coords": 4,
      "num": 9,
      "mask": [0, 1, 2],
      "entry_points": ["detector/yolo-v3/detect_1", "detector/yolo-v3/detect_2", "detector/yolo-v3/detect_3"]
    }
  },
  {
    "id": "TFYOLOV3",
    "match_kind": "general",
    "custom_attributes": {
      "classes": 80,
      "coords": 4,
      "num": 9,
      "mask": [3, 4, 5],
      "entry_points": ["detector/yolo-v3/detect_1", "detector/yolo-v3/detect_2", "detector/yolo-v3/detect_3"]
    }
  },
  {
    "id": "TFYOLOV3",
    "match_kind": "general",
    "custom_attributes": {
      "classes": 80,
      "coords": 4,
      "num": 9,
      "mask": [6, 7, 8],
      "entry_points": ["detector/yolo-v3/detect_1", "detector/yolo-v3/detect_2", "detector/yolo-v3/detect_3"]
    }
  }
]
The errors were the same.
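One way to see where those Exp nodes come from is to scan the frozen graph for them. This is only a sketch, assuming TensorFlow 1.x and a placeholder path to the .pb file:

import tensorflow as tf  # TF 1.x API

GRAPH_PB = 'yolo_v3.pb'  # placeholder path to the frozen graph

graph_def = tf.GraphDef()
with tf.gfile.GFile(GRAPH_PB, 'rb') as f:
    graph_def.ParseFromString(f.read())

# Print every Exp node and its inputs. If they all sit under the
# detector/yolo-v3 scope, they are part of the region decode that the
# TFYOLOV3 replacement in the json is supposed to absorb.
for node in graph_def.node:
    if node.op == 'Exp':
        print(node.name, '<-', list(node.input))

If the replacement never matches the entry_points in the json, these Exp nodes stay in the graph and Model Optimizer reports them as unsupported, which would explain the error above.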
Nikos wrote: I don't think there is a problem with your hardware. Have you managed to run the samples successfully? If not, there may be an OpenVINO installation issue; you may want to try an SDK re-install.
Just to clarify, do you get the error when you run the Model Optimizer command (python3 mo_tf.py) or during inference (when you run object_detection_demo_yolov3_async)?
Could you copy the complete command and output that shows the error and attach it here?
Hi, Nikos
I just copied the command and output above. The outputs were the same.
Cheers
Fucheng
Hi Deng,
Sorry for the delay. Are you still having this issue?
I can try to repro if you provide the command that you used to freeze YOLOv3 (yolo_v3.pb).
Also, did you try with your own YOLOv3 or a pretrained model? If so, which one?
Thanks,
Nikos
Nikos wrote: Hi Deng,
Sorry for the delay. Are you still having this issue?
I can try to repro if you provide the command that you used to freeze YOLOv3 (yolo_v3.pb).
Also, did you try with your own YOLOv3 or a pretrained model? If so, which one?
Thanks,
Nikos
Hi, Nikos,
I exactly followed the official guide "https://software.intel.com/en-us/articles/OpenVINO-Using-TensorFlow" (the part about converting YOLOv3 to a TensorFlow model). I used the pretrained model downloaded from the DarkNet website "https://pjreddie.com/darknet/yolo/".
Thanks,
Fucheng
Hello Fucheng,
Please note
> To solve the problems explained in the YOLO V3 architecture overview section, use the yolo_v3.json configuration file with custom operations located in the <OPENVINO_INSTALL_DIR>/deployment_tools/model_optimizer/extensions/front/tf repository.
What happens if you try the recommended intel/computer_vision_sdk/deployment_tools/model_optimizer/extensions/front/tf/yolo_v3.json?
[ { "id": "TFYOLOV3", "match_kind": "general", "custom_attributes": { "classes": 80, "coords": 4, "num": 9, "mask": [0, 1, 2], "entry_points": ["detector/yolo-v3/Reshape", "detector/yolo-v3/Reshape_4", "detector/yolo-v3/Reshape_8"] } } ]
In your json above I am not seeing "detector/yolo-v3/Reshape".
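If it helps, a quick way to double-check which entry-point names actually exist in a given frozen graph is to list its nodes. A minimal sketch, assuming TensorFlow 1.x and a placeholder path:

import tensorflow as tf  # TF 1.x API

GRAPH_PB = 'frozen_darknet_yolov3_model.pb'  # placeholder path

graph_def = tf.GraphDef()
with tf.gfile.GFile(GRAPH_PB, 'rb') as f:
    graph_def.ParseFromString(f.read())

# List nodes whose names look like YOLOv3 entry points; the stock
# yolo_v3.json expects detector/yolo-v3/Reshape, Reshape_4 and Reshape_8.
for node in graph_def.node:
    if 'Reshape' in node.name or 'detect' in node.name:
        print(node.op, node.name)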
Just for the record, I can also repro your issue if I use your json:
Model Optimizer version: 1.4.292.6ef7232d
[ ERROR ] List of operations that cannot be converted to IE IR:
[ ERROR ] Exp (3)
[ ERROR ] detector/yolo-v3/Exp_2
[ ERROR ] detector/yolo-v3/Exp
[ ERROR ] detector/yolo-v3/Exp_1
[ ERROR ] Part of the nodes was not translated to IE. Stopped.
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #24.
Cheers,
Nikos
Nikos wrote: Hello Fucheng,
Please note
> To solve the problems explained in the YOLO V3 architecture overview section, use the yolo_v3.json configuration file with custom operations located in the <OPENVINO_INSTALL_DIR>/deployment_tools/model_optimizer/extensions/front/tf repository.
What happens if you try the recommended intel/computer_vision_sdk/deployment_tools/model_optimizer/extensions/front/tf/yolo_v3.json?
[ { "id": "TFYOLOV3", "match_kind": "general", "custom_attributes": { "classes": 80, "coords": 4, "num": 9, "mask": [0, 1, 2], "entry_points": ["detector/yolo-v3/Reshape", "detector/yolo-v3/Reshape_4", "detector/yolo-v3/Reshape_8"] } } ]
In your json above I am not seeing "detector/yolo-v3/Reshape".
Cheers,
Nikos
Hi, Nikos,
I am actually confused by the json file. First, YOLOv3 has three yolo detection layers, so how should I write the json file? Should I just copy the entry three times and change "mask" to [3,4,5] and [6,7,8]?
{ "id": "TFYOLOV3", "match_kind": "general", "custom_attributes": { "classes": 80, "coords": 4, "num": 9, "mask": [0, 1, 2], "entry_points": ["detector/yolo-v3/Reshape", "detector/yolo-v3/Reshape_4", "detector/yolo-v3/Reshape_8"] } }
Second, for the entry_points, OpenVINO will not find "detector/yolo-v3/Reshape" if I leave them unchanged. So I changed them according to the names used in the converted TensorFlow model (the names of the yolo detection layers are defined in the conversion Python code).
Thanks,
Fucheng
Hi Fucheng,
> for the entry_points, OpenVINO will not find "detector/yolo-v3/Reshape" if I leave them unchanged.
That's weird. It works fine here. Please try to follow the steps once again. Did you git checkout fb9f543 from tensorflow-yolo-v3?
> First, YOLOv3 has three yolo detection layers, so how should I write the json file? Should I just copy the entry three times and change "mask" to [3,4,5] and [6,7,8]?
That's a good question, but it may be better to start a new thread so that we can focus on your issue above first. I can only get 416x416 to work, and different mask values do not seem to make any difference.
Nikos
Hi Fucheng,
Just verified that size 416 also works with the latest master of tensorflow-yolo-v3.git.
Maybe try this to freeze and let us know if you still have issues:
git clone https://github.com/mystic123/tensorflow-yolo-v3.git
cd tensorflow-yolo-v3
python3 convert_weights_pb.py --weights_file yolov3.weights --class_names coco.names.txt --size 416 --data_format NHWC
# model optimizer
python3 mo_tf.py --input_model ~/artifacts/yolo3/frozen_darknet_yolov3_model.pb --tensorflow_use_custom_operations_config ~/intel/computer_vision_sdk/deployment_tools/model_optimizer/extensions/front/tf/yolo_v3.json --input_shape=[1,416,416,3] --data_type=FP32
# run
./object_detection_demo_yolov3_async -i ~/Videos/test.mp4 -m ./fp32/frozen_darknet_yolov3_model.xml -d CPU -t 0.8
This works fine for me here.
Cheers,
Nikos
JFTR, the other issue with YOLOv3 608x608 is fixed if we add additional scale values in the sample, as shown below:
switch (side) {
    //case yolo_scale_13:
    case yolo_scale_19:
        anchor_offset = 2 * 6;
        break;
    //case yolo_scale_26:
    case yolo_scale_38:
        anchor_offset = 2 * 3;
        break;
    //case yolo_scale_52:
    case yolo_scale_76:
        anchor_offset = 2 * 0;
        break;
    default:
        throw std::runtime_error("Invalid output size");
}
Nikos wrote: JFTR, the other issue with YOLOv3 608x608 is fixed if we add additional scale values in the sample, as shown below:
switch (side) {
    //case yolo_scale_13:
    case yolo_scale_19:
        anchor_offset = 2 * 6;
        break;
    //case yolo_scale_26:
    case yolo_scale_38:
        anchor_offset = 2 * 3;
        break;
    //case yolo_scale_52:
    case yolo_scale_76:
        anchor_offset = 2 * 0;
        break;
    default:
        throw std::runtime_error("Invalid output size");
}
Hi, Nikos,
I switched to a completely new computer with an i5-7300U and tried again. Yes, it works fine now. Thank you very much!
Fucheng
Using the latest convert_weights_pb.py and OpenVINO R4.420, I got the following shape error:
tensorflow-yolo-v3-master$ python3 convert_weights_pb.py --weights_file yolov3.weights --class_names coco.names
Traceback (most recent call last):
File "convert_weights_pb.py", line 52, in <module>
tf.app.run()
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "convert_weights_pb.py", line 42, in main
load_ops = load_weights(tf.global_variables(scope='detector'), FLAGS.weights_file)
File "/home/ubuntu/Downloads/tensorflow-yolo-v3-master/utils.py", line 115, in load_weights
(shape[3], shape[2], shape[0], shape[1]))
ValueError: cannot reshape array of size 338452 into shape (512,256,3,3)
Could this be an issue with the weights file, tensorflow-yolo-v3, or the Intel TensorFlow build?
No repro after a git pull of the latest tensorflow-yolo-v3; the following still works well:
python3 convert_weights_pb.py --weights_file yolov3.weights --class_names coco.names.txt --size 416 --data_format NHWC
Maybe check the size of the weights file and download it again in case of corruption.
nikos
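Along the same lines, a small sanity-check sketch (the file names are placeholders and the size check is only a rough guide):

import os

WEIGHTS = 'yolov3.weights'  # placeholder file names
NAMES = 'coco.names'

# The full yolov3.weights download is a couple of hundred megabytes;
# a much smaller file usually means a truncated download.
print('weights size: %.1f MB' % (os.path.getsize(WEIGHTS) / 1e6))

# coco.names should be plain text with one class name per line. If the
# download saved an HTML page instead, the first line typically starts
# with '<'.
with open(NAMES) as f:
    first_line = f.readline().strip()
print('first line of coco.names:', first_line)
if first_line.startswith('<'):
    print('coco.names looks like HTML -- re-download the raw file')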
nikos wrote: No repro after a git pull of the latest tensorflow-yolo-v3; the following still works well:
python3 convert_weights_pb.py --weights_file yolov3.weights --class_names coco.names.txt --size 416 --data_format NHWC
Maybe check the size of the weights file and download it again in case of corruption.
nikos
Yes, it works with the latest git pull. (I made a mistake by downloading coco.names as an HTML page; downloading the raw coco.names again fixed the error.)
Thanks Nikos!
