Hi, I have two problems:
1. When I try to convert the YOLOv3 weights to IR, I need to specify the input shape as [1,416,416,3], but the original YOLOv3 uses a 608x608 input (with better results than 416x416). Can I change it?
2. When I run the command:
./object_detection_demo -i ~/splited.mp4 -m ~/intel/openvino_2019.1.144/deployment_tools/model_optimizer/frozen_darknet_yolov3_model.xml
I get the following error message:
[ ERROR ] Error reading network: in Layer detector/darknet-53/Conv_1/Conv2D: trying to connect an edge to non existing output port: 2.1
I looked in the frozen_darknet_yolov3_model.xml file and I think it is corrupted, because at the end of the file there is a list of connections between the layers, and it contains this line:
<edge from-layer="2" from-port="1" to-layer="3" to-port="0"/>
but there is no layer named "2"!
Also:
<edge from-layer="4" from-port="1" to-layer="5" to-port="0"/>
but there is no layer named "4"!
And so on.
Can someone help me, please?
Thanks,
Shalom
The XML file is attached
Dear Dimant, Shalom,
Indeed, I confirmed your observations about missing layers when I looked at the contents of your zip file. May I know exactly which command you used to build the YOLO V3 IR? I can tell you that I followed Converting YOLO* Models in 2019 R1 and I definitely didn't see the problems you are seeing.
OpenVINO R1.1 was just released. Can you kindly try it? Please post your results here.
Thanks,
Shubha
Hi Shubha, thanks for your response.
I already use OpenVINO R1.1.
I followed the instructions in Converting YOLO* Models.
Just in case, I did it again.
When I run the following command:
python3 /home/ws/intel/openvino_2019.1.144/deployment_tools/model_optimizer/mo_tf.py --input_model frozen_darknet_yolov3_model.pb --tensorflow_use_custom_operations_config ~/intel/openvino_2019.1.144/deployment_tools/model_optimizer/extensions/front/tf/yolo_v3.json
I get the first error I mentioned:
Model Optimizer version: 2019.1.1-83-g28dfbfd
[ ERROR ] Shape [ -1 416 416 3] is not fully defined for output 0 of "inputs". Use --input_shape with positive integers to override model input shapes.
[ ERROR ] Cannot infer shapes or values for node "inputs".
[ ERROR ] Not all output shapes were inferred or fully defined for node "inputs".
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #40.
[ ERROR ]
[ ERROR ] It can happen due to bug in custom shape infer function <function tf_placeholder_ext.<locals>.<lambda> at 0x7f11ffba8620>.
[ ERROR ] Or because the node inputs have incorrect values/shapes.
[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ] Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "inputs" node.
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.
So I changed the command to:
python3 mo_tf.py --input_shape [1,416,416,3] --input_model ~/convertYoloV3ToIR/tensorflow-yolo-v3/frozen_darknet_yolov3_model.pb --tensorflow_use_custom_operations_config "/home/ws/intel/openvino_2019.1.144/deployment_tools/model_optimizer/extensions/front/tf/yolo_v3.json"
and then I ran:
./object_detection_demo -i ~/splited.mp4 -m ~/intel/openvino_2019.1.144/deployment_tools/model_optimizer/frozen_darknet_yolov3_model.xml
and got the following message:
[ INFO ] InferenceEngine:
API version ............ 1.6
Build .................. custom_releases/2019/R1.1_28dfbfdd28954c4dfd2f94403dd8dfc1f411038b
Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ] /home/ws/splited.mp4
[ INFO ] Loading plugin
API version ............ 1.6
Build .................. 23780
Description ....... MKLDNNPlugin
[ INFO ] Loading network files:
/home/ws/intel/openvino_2019.1.144/deployment_tools/model_optimizer/frozen_darknet_yolov3_model.xml
/home/ws/intel/openvino_2019.1.144/deployment_tools/model_optimizer/frozen_darknet_yolov3_model.bin
[ ERROR ] Error reading network: in Layer detector/darknet-53/Conv_1/Conv2D: trying to connect an edge to non existing output port: 2.1
The JSON file:
[
    {
        "id": "TFYOLOV3",
        "match_kind": "general",
        "custom_attributes": {
            "classes": 80,
            "coords": 4,
            "num": 9,
            "mask": [0, 1, 2],
            "entry_points": ["detector/yolo-v3/Reshape", "detector/yolo-v3/Reshape_4", "detector/yolo-v3/Reshape_8"]
        }
    }
]
Thanks!
Dear Dimant, Shalom,
It looks like you did everything correctly.
A GitHub forum poster recently discovered GitHub issue 151 during inference on Tiny YOLO v3, and I reproduced it on 2019 R1.1. I regret to say that there may be a YOLO v3 bug as well. Your error looks distinctly different, but who knows - maybe even the non-tiny YOLO v3 is broken. I shall reproduce it and report back here. I'm really sorry about the inconvenience.
Thanks,
Shubha
Dear Dimant, Shalom,
YOLO V3 works fine on 2019 R1.1. Tiny YOLO is broken, however, per that GitHub issue I posted earlier. You should not be running object_detection_demo for YOLO; instead, try the Python YOLOv3 demo:
C:\Users\sdramani\Downloads\tensorflow-yolo-v3>python "c:\Program Files (x86)\IntelSWTools\openvino_2019.1.148\deployment_tools\inference_engine\samples\python_samples\object_detection_demo_yolov3_async\object_detection_demo_yolov3_async.py" -i c:\users\sdramani\Downloads\sample-videos\person-bicycle-car-detection.mp4 -m frozen_darknet_yolov3_model.xml -l c:\users\sdramani\Documents\Intel\OpenVINO\inference_engine_samples_build\intel64\Release\cpu_extension.dll
It works!
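On a Linux install like yours, the equivalent call would look roughly like the following - a sketch only, where the install path and the input video are assumptions taken from your earlier commands, and the CPU extension library name depends on how your samples were built:
python3 ~/intel/openvino_2019.1.144/deployment_tools/inference_engine/samples/python_samples/object_detection_demo_yolov3_async/object_detection_demo_yolov3_async.py -i ~/splited.mp4 -m frozen_darknet_yolov3_model.xml -l <path to your built libcpu_extension.so>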
Hope it helps and sorry that it took so long,
Shubha
Hi Shubha, thanks for your answer.
Unfortunately this answer didn't solve the problem :-(
When I execute:
python3 object_detection_demo_yolov3_async.py -i ~/20190522_110655_ra71.mp4 -m ~/convertYoloV3ToIR/tensorflow-yolo-v3/frozen_darknet_yolov3_model.xml
the output is:
[ INFO ] Loading network files:
/home/ws/convertYoloV3ToIR/tensorflow-yolo-v3/frozen_darknet_yolov3_model.xml
/home/ws/convertYoloV3ToIR/tensorflow-yolo-v3/frozen_darknet_yolov3_model.bin
Traceback (most recent call last):
File "object_detection_demo_yolov3_async.py", line 349, in <module>
sys.exit(main() or 0)
File "object_detection_demo_yolov3_async.py", line 175, in main
net = IENetwork(model=model_xml, weights=model_bin)
File "ie_api.pyx", line 271, in openvino.inference_engine.ie_api.IENetwork.__cinit__
RuntimeError: Error reading network: in Layer detector/darknet-53/Conv_1/Conv2D: trying to connect an edge to non existing output port: 2.1
Note that the XML file is corrupted, so I don't understand why it would work anyway.
Can you please send me your XML file?
Dear Dimant, Shalom,
Make sure you are not using TensorFlow 1.13. Model Optimizer only supports up to 1.12, and in fact YOLOv3 requires a fairly recent version too - 1.12 should work fine. But 1.13 will break Model Optimizer.
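If you are not sure which version is active, a quick check (and a downgrade to 1.12, assuming a pip-managed TensorFlow) looks like this:
python3 -c "import tensorflow as tf; print(tf.__version__)"
pip3 install tensorflow==1.12.0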
Attached is a *.zip file containing the generated IR XML.
Hope it helps,
Thanks,
Shubha
Dear Dimant Shalom,
I would like to answer your initial questions:
1. Yes, you can indeed use the YOLO v3 model with a bigger input shape (the recommended size is 608x608). To do so, please add the --size key to the convert_weights_pb.py command like so (a matching mo_tf.py call is sketched after this list):
python3 convert_weights_pb.py --class_names coco.names --data_format NHWC --weights_file yolov3_608.weights --size 608
2. Edges are connected according to layer ids in the .xml, not by layer names.
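After regenerating the .pb with --size 608, the Model Optimizer input shape should be changed to match. A sketch, reusing the paths from your own earlier command (adjust them to your setup):
python3 mo_tf.py --input_shape [1,608,608,3] --input_model ~/convertYoloV3ToIR/tensorflow-yolo-v3/frozen_darknet_yolov3_model.pb --tensorflow_use_custom_operations_config "/home/ws/intel/openvino_2019.1.144/deployment_tools/model_optimizer/extensions/front/tf/yolo_v3.json"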
I didn't manage to reproduce the issue you reported, but I would like to try harder.
For further investigation of your issue, please provide as much of the following as you can:
- IR (.xml and/or .bin)
- TensorFlow .pb file
- DarkNet files used to generate .pb (.cfg and .weights)
- TensorFlow version you used for the DarkNet -> TensorFlow and TensorFlow -> IR conversions.
I'm really sorry for the inconvenience.
Thanks,
Evgenya
Excuse me, does OpenVINO 2019.1.148 now support TensorFlow 1.14.0?
I successfully converted the YOLOv3 model to TensorFlow (a .pb model) and then to OpenVINO (an IR model) with TensorFlow 1.14.0 on Windows 10, but it failed with other TensorFlow versions.
