I tried to convert the YOLOv3 model in OpenVINO R4, following the official instructions. It did not work; it shows this error:
List of operations that cannot be converted to IE IR
ERROR: Exp(3)
ERROR: detector/yolo-v3/Exp
ERROR: detector/yolo-v3/Exp_1
ERROR: detector/yolo-v3/Exp_2
part of the nodes was not translated to IE
Did anyone convert YOLOv3 successfully?
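For context, the unsupported Exp ops are typically part of the YOLO region sub-graph that the yolo_v3.json replacement config is meant to fuse into a RegionYolo layer. The flow that ends up working later in this thread is roughly the following (a sketch; paths and the install directory are assumptions, adjust to your setup):
# Freeze the Darknet weights to a TensorFlow .pb
git clone https://github.com/mystic123/tensorflow-yolo-v3.git
cd tensorflow-yolo-v3
python3 convert_weights_pb.py --weights_file yolov3.weights --class_names coco.names --size 416 --data_format NHWC
# Convert to IR, passing the YOLOv3 sub-graph replacement config shipped with the Model Optimizer
python3 mo_tf.py --input_model frozen_darknet_yolov3_model.pb --tensorflow_use_custom_operations_config <INSTALL_DIR>/deployment_tools/model_optimizer/extensions/front/tf/yolo_v3.json --input_shape=[1,416,416,3] --data_type=FP32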
Kim Chuan L. wrote: Using the latest convert_weights_pb.py and OpenVINO R4.420, I got the following shape error:
tensorflow-yolo-v3-master$ python3 convert_weights_pb.py --weights_file yolov3.weights --class_names coco.names
Traceback (most recent call last):
File "convert_weights_pb.py", line 52, in <module>
tf.app.run()
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "convert_weights_pb.py", line 42, in main
load_ops = load_weights(tf.global_variables(scope='detector'), FLAGS.weights_file)
File "/home/ubuntu/Downloads/tensorflow-yolo-v3-master/utils.py", line 115, in load_weights
(shape[3], shape[2], shape[0], shape[1]))
ValueError: cannot reshape array of size 338452 into shape (512,256,3,3)
Could this be an issue with the weights, with tensorflow-yolo-v3, or with Intel TensorFlow?
Hi Kim Chuan,
Recently I also found that other models trained with Darknet on the YOLOv3 architecture could not be converted successfully using tensorflow-yolo-v3. Maybe there is a bug in the weight loading in convert_weights_pb.py. If you manage to convert a model other than the original YOLOv3 model, please let me know. Thanks.
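As a side note (an assumption, not something confirmed in this thread): the "cannot reshape array" error is often caused by a class-count mismatch, i.e. passing a --class_names file whose number of classes differs from what the weights were trained with, which changes the expected shapes of the detection layers. A minimal sketch with hypothetical filenames:
# my_yolov3.weights and my.names are placeholders for a custom-trained model;
# the names file must list exactly the classes the model was trained with, one per line.
python3 convert_weights_pb.py --weights_file my_yolov3.weights --class_names my.names --size 416 --data_format NHWC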
Hi @nikos,
I'm running it just as instructed here:
git clone https://github.com/mystic123/tensorflow-yolo-v3.git
cd tensorflow-yolo-v3
python3 convert_weights_pb.py --weights_file yolov3.weights --class_names coco.names.txt --size 416 --data_format NHWC
# model optimizer
python3 mo_tf.py --input_model ~/artifacts/yolo3/frozen_darknet_yolov3_model.pb --tensorflow_use_custom_operations_config ~/intel/computer_vision_sdk/deployment_tools/model_optimizer/extensions/front/tf/yolo_v3.json --input_shape=[1,416,416,3] --data_type=FP32
and I'm getting errors; help will be greatly appreciated. This is what I run and get:
mellerdaniel@mellerdaniel-GS43VR-7RE:~/Documents/newYolo/tensorflow-yolo-v3$ python3 ~/intel/computer_vision_sdk_2018.4.420/deployment_tools/model_optimizer/mo_tf.py --input_model frozen_darknet_yolov3_model.pb --tensorflow_use_custom_operations_config ~/intel/computer_vision_sdk_2018.4.420/deployment_tools/model_optimizer/extensions/front/tf/yolo_v3.json --input_shape=[1,416,416,3] --data_type=FP32
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /home/mellerdaniel/Documents/newYolo/tensorflow-yolo-v3/frozen_darknet_yolov3_model.pb
- Path for generated IR: /home/mellerdaniel/Documents/newYolo/tensorflow-yolo-v3/.
- IR output name: frozen_darknet_yolov3_model
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: [1,416,416,3]
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Offload unsupported operations: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Operations to offload: None
- Patterns to offload: None
- Use the config file: /home/mellerdaniel/intel/computer_vision_sdk_2018.4.420/deployment_tools/model_optimizer/extensions/front/tf/yolo_v3.json
Model Optimizer version: 1.4.292.6ef7232d
RuntimeError: module compiled against API version 0xc but this version of numpy is 0xa
RuntimeError: module compiled against API version 0xc but this version of numpy is 0xa
[ WARNING ]
Detected not satisfied dependencies:
numpy: installed: 1.11.0, required: 1.12.0
Please install required versions of components or use install_prerequisites script
/home/mellerdaniel/intel/computer_vision_sdk_2018.4.420/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites_tf.sh
Note that install_prerequisites scripts may install additional components.
[ ERROR ] Cannot infer shapes or values for node "detector/yolo-v3/Conv_14/BiasAdd/YoloRegion".
[ ERROR ] 'coords'
[ ERROR ]
[ ERROR ] It can happen due to bug in custom shape infer function <function RegionYoloOp.regionyolo_infer at 0x7f290119b488>.
[ ERROR ] Or because the node inputs have incorrect values/shapes.
[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ] Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ] Stopped shape/value propagation at "detector/yolo-v3/Conv_14/BiasAdd/YoloRegion" node.
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.
Please refer to https://software.intel.com/en-us/forums/computer-vision/topic/800813
Potentially a numpy issue - it needs an upgrade.
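For reference, a sketch of that fix: upgrade numpy (or run the install_prerequisites script the warning in the log points to) and then re-run mo_tf.py. The version requirement and script path are taken from the log above; adjust them to your install.
# Upgrade numpy to satisfy the Model Optimizer requirement reported in the warning (>= 1.12.0)
sudo pip3 install --upgrade numpy
# Or run the prerequisites script shipped with OpenVINO
~/intel/computer_vision_sdk_2018.4.420/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites_tf.sh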
Cool, it worked (pretty slowly, I have to say) - thanks for the help!
Is it supposed to support tiny YOLO as well? And does the NCS2 support YOLOv3?
nikos wrote: Please refer to https://software.intel.com/en-us/forums/computer-vision/topic/800813
Potentially a numpy issue - it needs an upgrade.
Good to hear it worked!
Yes, it is rather slow. Try the other sizes too (320 must be a bit faster).
It runs on both CPU and GPU, but not on NCS2, I believe.
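For reference, a minimal sketch of converting at the smaller 320 input size, reusing the flags from the commands earlier in this thread (paths assumed):
# Re-freeze the graph at 320x320
python3 convert_weights_pb.py --weights_file yolov3.weights --class_names coco.names --size 320 --data_format NHWC
# Re-run the Model Optimizer with a matching input shape
python3 mo_tf.py --input_model frozen_darknet_yolov3_model.pb --tensorflow_use_custom_operations_config <INSTALL_DIR>/deployment_tools/model_optimizer/extensions/front/tf/yolo_v3.json --input_shape=[1,320,320,3] --data_type=FP32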
Good point about tiny. I tried it a few weeks ago, but I had an issue and then switched to a different task.
Anyone who has been able to run tiny v3 successfully, please let us know.
Cheers,
nikos
Hello Daniel,
> Is it supposed to support tiny YOLO as well?
Yes, tiny YOLO v3 works fine too.
It just needs a few minor modifications to yolo_v3.json and a few modifications to the sample application.
It also runs fast on GPU FP16: about 100 fps on my slow GT2, but it will be much faster on GT3 or GT4.
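For reference, a sketch of an FP16 conversion and a GPU run, assuming the same paths and demo binary used elsewhere in this thread:
# Generate an FP16 IR (same command as before, but with --data_type FP16)
python3 mo_tf.py --input_model frozen_darknet_yolov3_model.pb --tensorflow_use_custom_operations_config <INSTALL_DIR>/deployment_tools/model_optimizer/extensions/front/tf/yolo_v3.json --input_shape=[1,416,416,3] --data_type=FP16
# Run the async YOLOv3 demo on the GPU
./object_detection_demo_yolov3_async -i ~/Videos/test.mp4 -m frozen_darknet_yolov3_model.xml -d GPU -t 0.8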
Cheers,
Nikos
Hi Nikos,
I've been following this thread to try YOLOv3 with the OpenVINO Model Optimizer. I cannot get the model optimization to work for the tiny version.
My yolo_v3.json file looks like:
[
  {
    "id": "TFYOLOV3",
    "match_kind": "general",
    "custom_attributes": {
      "classes": 80,
      "coords": 4,
      "num": 6,
      "mask": [0, 1, 2],
      "entry_points": ["detector/yolo-v3-tiny/Reshape", "detector/yolo-v3-tiny/Reshape_4", "detector/yolo-v3-tiny/Reshape_8"]
    }
  }
]
I'm also running the python3 mo_tf.py script with --input_shape=[1,416,416,3].
I'm getting the following error:
[ ERROR ] Cannot infer shapes or values for node "detector/yolo-v3-tiny/Tile/YoloRegion".
[ ERROR ] index 2 is out of bounds for axis 0 with size 2
[ ERROR ]
[ ERROR ] It can happen due to bug in custom shape infer function <function RegionYoloOp.regionyolo_infer at 0x7f9a3fe05ea0>.
[ ERROR ] Or because the node inputs have incorrect values/shapes.
[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ] Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ] Stopped shape/value propagation at "detector/yolo-v3-tiny/Tile/YoloRegion" node.
Could you tell me which modifications you made to the yolo_v3.json file?
Thanks in advance,
Mauro.
Hi Mauro,
Please remove "detector/yolo-v3-tiny/Reshape_8", so the entry points become:
"entry_points": ["detector/yolo-v3-tiny/Reshape", "detector/yolo-v3-tiny/Reshape_4"]
You could also refer to https://software.intel.com/en-us/forums/computer-vision/topic/801626, which has more instructions for tiny YOLO, and to https://github.com/PINTO0309/OpenVINO-YoloV3.git by Katsuya-san.
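For reference, a sketch of what the complete tiny config could look like after that change, written here via a shell heredoc. The classes/coords/num/mask values are the ones posted above and may need adjusting for a custom model:
# Write a tiny-YOLOv3 sub-graph replacement config with only the two Reshape entry points
cat > yolo_v3_tiny.json << 'EOF'
[
  {
    "id": "TFYOLOV3",
    "match_kind": "general",
    "custom_attributes": {
      "classes": 80,
      "coords": 4,
      "num": 6,
      "mask": [0, 1, 2],
      "entry_points": ["detector/yolo-v3-tiny/Reshape", "detector/yolo-v3-tiny/Reshape_4"]
    }
  }
]
EOF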
Cheers,
Nikos
nikos wrote: Hi Mauro,
Please remove "detector/yolo-v3-tiny/Reshape_8":
"entry_points": ["detector/yolo-v3-tiny/Reshape","detector/yolo-v3-tiny/Reshape_4"]
You could also refer to https://software.intel.com/en-us/forums/computer-vision/topic/801626, there are more instructions for tiny yolo, and also https://github.com/PINTO0309/OpenVINO-YoloV3.git by Katsuya-san.
Cheers,
Nikos
Thanks Nikos! That was it!
Where can I set my labels?
nikos wrote: Hi Fuchengh,
Just verified that size 416 will also work from the latest master of tensorflow-yolo-v3.git.
Maybe try this to freeze and let us know if you still have issues:
git clone https://github.com/mystic123/tensorflow-yolo-v3.git
cd tensorflow-yolo-v3
python3 convert_weights_pb.py --weights_file yolov3.weights --class_names coco.names.txt --size 416 --data_format NHWC
# model optimizer
python3 mo_tf.py --input_model ~/artifacts/yolo3/frozen_darknet_yolov3_model.pb --tensorflow_use_custom_operations_config ~/intel/computer_vision_sdk/deployment_tools/model_optimizer/extensions/front/tf/yolo_v3.json --input_shape=[1,416,416,3] --data_type=FP32
# run
./object_detection_demo_yolov3_async -i ~/Videos/test.mp4 -m ./fp32/frozen_darknet_yolov3_model.xml -d CPU -t 0.8
This works fine for me here.
Cheers,
Nikos
Hello,
I have a question about the YOLOv3 IR model's inference result format. I opened a thread for it; I would appreciate it if anyone can help with this:
https://software.intel.com/en-us/forums/computer-vision/topic/804890
Hi Ross,
The label file should be in the same folder as the .xml, with the same filename but the extension .labels (for example, for frozen_darknet_yolov3_model.xml it is frozen_darknet_yolov3_model.labels). One label per line, no spaces.
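For reference, a minimal sketch of preparing such a file from the COCO class names, with filenames as used in this thread:
# The demo looks for a .labels file next to the IR .xml with the same base name (one label per line)
cp coco.names frozen_darknet_yolov3_model.labels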
Tsin, Ross wrote: Where can I set my labels?
nikos wrote: Hi Fuchengh,
Just verified that size 416 will also work from the latest master of tensorflow-yolo-v3.git.
Maybe try this to freeze and let us know if you still have issues:
git clone https://github.com/mystic123/tensorflow-yolo-v3.git
cd tensorflow-yolo-v3
python3 convert_weights_pb.py --weights_file yolov3.weights --class_names coco.names.txt --size 416 --data_format NHWC
# model optimizer
python3 mo_tf.py --input_model ~/artifacts/yolo3/frozen_darknet_yolov3_model.pb --tensorflow_use_custom_operations_config ~/intel/computer_vision_sdk/deployment_tools/model_optimizer/extensions/front/tf/yolo_v3.json --input_shape=[1,416,416,3] --data_type=FP32
# run
./object_detection_demo_yolov3_async -i ~/Videos/test.mp4 -m ./fp32/frozen_darknet_yolov3_model.xml -d CPU -t 0.8
This works fine for me here.
Cheers,
Nikos
Hi nikos,
I tried your first command and successfully got a frozen_darknet_yolov3_model.pb file.
But errors like this occur with the second command:
Common parameters:
- Path to the Input Model: C:\Intel\computer_vision_sdk_2018.5.456\deployment_tools\model_optimizer\frozen_darknet_yolov3_model.pb
- Path for generated IR: C:\Intel\computer_vision_sdk_2018.5.456\deployment_tools\model_optimizer\.
- IR output name: frozen_darknet_yolov3_model
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: [1,416,416,3]
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP16
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Offload unsupported operations: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Operations to offload: None
- Patterns to offload: None
- Use the config file: C:\Intel\computer_vision_sdk_2018.5.456\deployment_tools\model_optimizer\extensions\front\tf\yolo_v3.json
Model Optimizer version: 1.5.12.49d067a0
[ ERROR ] List of operations that cannot be converted to IE IR:
[ ERROR ] LeakyRelu (72)
[ ERROR ] detector/darknet-53/Conv/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_1/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_2/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_3/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_4/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_5/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_6/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_7/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_8/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_9/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_10/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_11/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_12/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_13/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_14/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_15/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_16/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_17/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_18/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_19/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_20/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_21/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_22/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_23/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_24/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_25/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_26/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_27/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_28/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_29/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_30/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_31/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_32/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_33/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_34/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_35/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_36/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_37/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_38/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_39/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_40/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_41/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_42/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_43/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_44/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_45/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_46/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_47/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_48/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_49/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_50/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_51/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_1/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_2/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_3/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_4/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_7/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_8/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_9/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_10/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_11/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_12/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_13/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_15/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_16/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_17/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_18/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_19/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_20/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_21/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_5/LeakyRelu
[ ERROR ] Part of the nodes was not translated to IE. Stopped.
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #24.
nikos wrote: Hi Fuchengh,
Just verified that size 416 will also work from the latest master of tensorflow-yolo-v3.git.
Maybe try this to freeze and let us know if you still have issues:
git clone https://github.com/mystic123/tensorflow-yolo-v3.git
cd tensorflow-yolo-v3
python3 convert_weights_pb.py --weights_file yolov3.weights --class_names coco.names.txt --size 416 --data_format NHWC
# model optimizer
python3 mo_tf.py --input_model ~/artifacts/yolo3/frozen_darknet_yolov3_model.pb --tensorflow_use_custom_operations_config ~/intel/computer_vision_sdk/deployment_tools/model_optimizer/extensions/front/tf/yolo_v3.json --input_shape=[1,416,416,3] --data_type=FP32
# run
./object_detection_demo_yolov3_async -i ~/Videos/test.mp4 -m ./fp32/frozen_darknet_yolov3_model.xml -d CPU -t 0.8
This works fine for me here.
Cheers,
Nikos
Hi nikos,
I used your first command in the Win10 cmd and successfully generated a frozen_darknet_yolov3_model.pb file.
But errors like this occur when I run your second command:
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: C:\Intel\computer_vision_sdk_2018.5.456\deployment_tools\model_optimizer\frozen_darknet_yolov3_model.pb
- Path for generated IR: C:\Intel\computer_vision_sdk_2018.5.456\deployment_tools\model_optimizer\.
- IR output name: frozen_darknet_yolov3_model
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: Not specified, inherited from the model
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP16
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Offload unsupported operations: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Operations to offload: None
- Patterns to offload: None
- Use the config file: C:\Intel\computer_vision_sdk_2018.5.456\deployment_tools\model_optimizer\extensions\front\tf\yolo_v3.json
Model Optimizer version: 1.5.12.49d067a0
[ ERROR ] Shape [ -1 416 416 3] is not fully defined for output 0 of "inputs". Use --input_shape with positive integers to override model input shapes.
[ ERROR ] Cannot infer shapes or values for node "inputs".
[ ERROR ] Not all output shapes were inferred or fully defined for node "inputs".
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #40.
[ ERROR ]
[ ERROR ] It can happen due to bug in custom shape infer function <function tf_placeholder_ext.<locals>.<lambda> at 0x00000240086C7840>.
[ ERROR ] Or because the node inputs have incorrect values/shapes.
[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ] Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ] Stopped shape/value propagation at "inputs" node.
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.
C:\Intel\computer_vision_sdk_2018.5.456\deployment_tools\model_optimizer>python mo_tf.py --input_model frozen_darknet_yolov3_model.pb --tensorflow_use_custom_operations_config C:\Intel\computer_vision_sdk_2018.5.456\deployment_tools\model_optimizer\extensions\front\tf\yolo_v3.json --input_shape=[1,416,416,3] --data_type=FP16
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: C:\Intel\computer_vision_sdk_2018.5.456\deployment_tools\model_optimizer\frozen_darknet_yolov3_model.pb
- Path for generated IR: C:\Intel\computer_vision_sdk_2018.5.456\deployment_tools\model_optimizer\.
- IR output name: frozen_darknet_yolov3_model
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: [1,416,416,3]
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP16
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Offload unsupported operations: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Operations to offload: None
- Patterns to offload: None
- Use the config file: C:\Intel\computer_vision_sdk_2018.5.456\deployment_tools\model_optimizer\extensions\front\tf\yolo_v3.json
Model Optimizer version: 1.5.12.49d067a0
[ ERROR ] List of operations that cannot be converted to IE IR:
[ ERROR ] LeakyRelu (72)
[ ERROR ] detector/darknet-53/Conv/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_1/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_2/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_3/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_4/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_5/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_6/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_7/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_8/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_9/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_10/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_11/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_12/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_13/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_14/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_15/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_16/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_17/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_18/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_19/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_20/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_21/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_22/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_23/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_24/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_25/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_26/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_27/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_28/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_29/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_30/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_31/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_32/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_33/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_34/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_35/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_36/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_37/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_38/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_39/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_40/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_41/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_42/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_43/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_44/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_45/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_46/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_47/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_48/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_49/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_50/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_51/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_1/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_2/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_3/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_4/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_7/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_8/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_9/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_10/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_11/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_12/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_13/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_15/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_16/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_17/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_18/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_19/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_20/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_21/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_5/LeakyRelu
[ ERROR ] Part of the nodes was not translated to IE. Stopped.
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #24.
I am eagerly looking forward to your help.
Hi dongxu, I've got the very same problem as you. Have you solved it? I'm also looking for any solution. Help please, thanks.
Hello,
I've been trying to convert YOLOv3 and Tiny-YOLOv3, and I get the same error as above:
python3 /opt/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/mo_tf.py --input_model frozen_darknet_yolov3_model.pb --tensorflow_use_custom_operations_config /opt/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/extensions/front/tf/yolo_v3_tiny.json --batch 1
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /media/unknown/DATA/Code_Testing/tensorflow-yolo-v3/frozen_darknet_yolov3_model.pb
- Path for generated IR: /media/unknown/DATA/Code_Testing/tensorflow-yolo-v3/.
- IR output name: frozen_darknet_yolov3_model
- Log level: ERROR
- Batch: 1
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: Not specified, inherited from the model
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Offload unsupported operations: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Operations to offload: None
- Patterns to offload: None
- Use the config file: /opt/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/extensions/front/tf/yolo_v3_tiny.json
Model Optimizer version: 1.5.12.49d067a0
[ ERROR ] List of operations that cannot be converted to IE IR:
[ ERROR ] LeakyRelu (11)
[ ERROR ] detector/yolo-v3-tiny/Conv/LeakyRelu
[ ERROR ] detector/yolo-v3-tiny/Conv_1/LeakyRelu
[ ERROR ] detector/yolo-v3-tiny/Conv_2/LeakyRelu
[ ERROR ] detector/yolo-v3-tiny/Conv_3/LeakyRelu
[ ERROR ] detector/yolo-v3-tiny/Conv_4/LeakyRelu
[ ERROR ] detector/yolo-v3-tiny/Conv_5/LeakyRelu
[ ERROR ] detector/yolo-v3-tiny/Conv_6/LeakyRelu
[ ERROR ] detector/yolo-v3-tiny/Conv_7/LeakyRelu
[ ERROR ] detector/yolo-v3-tiny/Conv_10/LeakyRelu
[ ERROR ] detector/yolo-v3-tiny/Conv_11/LeakyRelu
[ ERROR ] detector/yolo-v3-tiny/Conv_8/LeakyRelu
[ ERROR ] Part of the nodes was not translated to IE. Stopped.
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #24.
I tried everything; nothing seems to work.
If someone finds a solution to this, please let us know. It would be very helpful :)
I managed to find the cause of and the solution for the LeakyRelu problem!
Just install TensorFlow 1.11.0 and everything will work perfectly. It's strange that running the OpenVINO dependencies script automatically installs the latest TensorFlow version, which can cause this problem. Here are the commands for installing the correct version of TensorFlow:
sudo pip3 install tensorflow==1.11.0 #linux
pip install tensorflow==1.11.0 #Windows
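A quick way to double-check which TensorFlow version the Model Optimizer will pick up before re-running the conversion (a small, generic sketch):
# Print the TensorFlow version visible to python3
python3 -c "import tensorflow as tf; print(tf.__version__)"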
Hope that helps!
--George
Hello everyone.
It looks like the solution was found in my repository:
- TensorFlow version issue
https://github.com/PINTO0309/OpenVINO-YoloV3/issues/19
I am still using v1.12.0 and it converts successfully:
sudo -H pip3 install tensorflow==1.12.0 --upgrade
Hi All,
I've been trying to convert Tiny-YOLOv3, and I get both the same error as above and a different error.
The error from above:
[ ERROR ] List of operations that cannot be converted to IE IR:
[ ERROR ] LeakyRelu (11)
[ ERROR ] detector/yolo-v3-tiny/Conv/LeakyRelu
[ ERROR ] detector/yolo-v3-tiny/Conv_1/LeakyRelu
[ ERROR ] detector/yolo-v3-tiny/Conv_2/LeakyRelu
[ ERROR ] detector/yolo-v3-tiny/Conv_3/LeakyRelu
[ ERROR ] detector/yolo-v3-tiny/Conv_4/LeakyRelu
[ ERROR ] detector/yolo-v3-tiny/Conv_5/LeakyRelu
[ ERROR ] detector/yolo-v3-tiny/Conv_6/LeakyRelu
[ ERROR ] detector/yolo-v3-tiny/Conv_7/LeakyRelu
[ ERROR ] detector/yolo-v3-tiny/Conv_10/LeakyRelu
[ ERROR ] detector/yolo-v3-tiny/Conv_11/LeakyRelu
[ ERROR ] detector/yolo-v3-tiny/Conv_8/LeakyRelu
[ ERROR ] Part of the nodes was not translated to IE. Stopped.
can be solved by reinstalling TensorFlow:
sudo -H pip3 install tensorflow==1.12.0 --upgrade
But after that, I encountered another error:
intel@intel-tank1:/opt/intel/computer_vision_sdk/deployment_tools/model_optimizer$ sudo python3 mo_tf.py --input_model /home/intel/Downloads/tensorflow-yolo-v3/frozen_darknet_yolov3_model.pb --tensorflow_use_custom_operations_config /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/extensions/front/tf/yolo_v3_tiny.json --input_shape=[1,416,416,3] --output_dir /home/intel/Downloads/tensorflow-yolo-v3/output/yolov3tiny
[sudo] password for intel:
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /home/intel/Downloads/tensorflow-yolo-v3/frozen_darknet_yolov3_model.pb
- Path for generated IR: /home/intel/Downloads/tensorflow-yolo-v3/output/yolov3tiny
- IR output name: frozen_darknet_yolov3_model
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: [1,416,416,3]
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Offload unsupported operations: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Operations to offload: None
- Patterns to offload: None
- Use the config file: /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/extensions/front/tf/yolo_v3_tiny.json
Model Optimizer version: 1.5.12.49d067a0
[ ERROR ] Cannot infer shapes or values for node "detector/yolo-v3-tiny/Conv/LeakyRelu".
[ ERROR ] Op type not registered 'LeakyRelu' in binary running on intel-tank1. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
[ ERROR ]
[ ERROR ] It can happen due to bug in custom shape infer function <function tf_native_tf_node_infer at 0x7f4d2a1fc378>.
[ ERROR ] Or because the node inputs have incorrect values/shapes.
[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ] Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ] Stopped shape/value propagation at "detector/yolo-v3-tiny/Conv/LeakyRelu" node.
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.
If someone finds a solution to this, please let us know. I would really appreciate it. Thanks!
Dear Rachel,
Please review my answer to this post. Also please upgrade to 2019 R1 if you haven't already.
https://software.intel.com/en-us/forums/computer-vision/topic/807541
Thanks for using OpenVINO!
Shubha
Hello, everyone.
Can yolov3-spp.weights be converted to IR models?
I tried to modify ./extensions/front/tf/yolov3-spp.json, and it said 'node with name detector/yolo-v3-spp/Reshape doesn't exist'.
Is there any advice?
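One thing that may help diagnose this (a sketch; the frozen-graph filename is an assumption): list the Reshape node names actually present in the frozen .pb and point the JSON's entry_points at those exact names.
# List Reshape node names in the frozen graph (TF 1.x API, as used elsewhere in this thread)
python3 -c "
import tensorflow as tf
gd = tf.GraphDef()
with open('frozen_darknet_yolov3_model.pb', 'rb') as f:
    gd.ParseFromString(f.read())
print([n.name for n in gd.node if 'Reshape' in n.name])
"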