Hi,
I am trying to convert a TensorFlow model to IR.
During the conversion process I get this error, as shown in the attachment:
[ ERROR ] Shape [ -1 224 224 3] is not fully defined for output 0 of "input". Use --input_shape with positive integers to override model input shapes.
[ ERROR ] Cannot infer shapes or values for node "input".
[ ERROR ] Not all output shapes were inferred or fully defined for node "input".
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #40.
[ ERROR ]
[ ERROR ] It can happen due to bug in custom shape infer function <function tf_placeholder_ext.<locals>.<lambda> at 0x000001920A07F9D8>.
[ ERROR ] Or because the node inputs have incorrect values/shapes.
[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ] Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ] Stopped shape/value propagation at "input" node.
I am trying to convert the mobilenet_v1_1.0_224 model; it is a standard TensorFlow example model.
Please help me understand this issue. What other parameters are required in the command when converting to IR?
For example...
$ sudo python3 mo_tf.py \
    --input_model xxx.pb \
    --output_dir path/to/your/output \
    --input input \
    --output zzz \
    --data_type FP32 \
    --batch 1
Hi Hyodo,
I did this on a Windows machine (here python is Python 3.6.5):
C:\Users\Ignitarium\Documents\tensorflow-yolo-v3>python C:\Intel\computer_vision_sdk_2018.4.420\deployment_tools\model_optimizer\mo_tf.py --input_model yolo_v3.pb --tensorflow_use_custom_operations_config yolo_v3_changed.json
I am getting the errors indicated above.
I have also attached the Python and .pb files.
I am not really clear on what you are trying to say with the example?
C:\Users\Ignitarium\Documents\tensorflow-yolo-v3>python C:\Intel\computer_vision_sdk_2018.4.420\deployment_tools\model_optimizer\mo_tf.py --input_model yolo_v3.pb --tensorflow_use_custom_operations_config yolo_v3_changed.json
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: C:\Users\Ignitarium\Documents\tensorflow-yolo-v3\yolo_v3.pb
- Path for generated IR: C:\Users\Ignitarium\Documents\tensorflow-yolo-v3\.
- IR output name: yolo_v3
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: Not specified, inherited from the model
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Offload unsupported operations: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Operations to offload: None
- Patterns to offload: None
- Use the config file: C:\Users\Ignitarium\Documents\tensorflow-yolo-v3\yolo_v3_changed.json
Model Optimizer version: 1.4.292.6ef7232d
[ ERROR ] Shape [ -1 416 416 3] is not fully defined for output 0 of "Placeholder". Use --input_shape with positive integers to override model input shapes.
[ ERROR ] Cannot infer shapes or values for node "Placeholder".
[ ERROR ] Not all output shapes were inferred or fully defined for node "Placeholder".
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #40.
[ ERROR ]
[ ERROR ] It can happen due to bug in custom shape infer function <function tf_placeholder_ext.<locals>.<lambda> at 0x0000027A804BEAE8>.
[ ERROR ] Or because the node inputs have incorrect values/shapes.
[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ] Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ] Stopped shape/value propagation at "Placeholder" node.
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.
python C:\Intel\computer_vision_sdk_2018.4.420\deployment_tools\model_optimizer\mo_tf.py --input_model yolo_v3.pb --tensorflow_use_custom_operations_config yolo_v3_changed.json --batch 1
or
python C:\Intel\computer_vision_sdk_2018.4.420\deployment_tools\model_optimizer\mo_tf.py --input_model yolo_v3.pb --tensorflow_use_custom_operations_config yolo_v3_changed.json --input_shape [1,416,416,3]
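Both variants fix the same problem: the model's placeholder has an undefined batch dimension (-1), and Model Optimizer needs every dimension to be a positive integer before shape inference can proceed. A rough sketch of the substitution the two flags perform (illustrative only, not Model Optimizer's actual code; `override_input_shape` is a hypothetical helper):

```python
def override_input_shape(model_shape, batch=None, input_shape=None):
    """Resolve an undefined (-1) batch dimension the way the two flags do:
    --input_shape replaces the whole shape, --batch only the first dim."""
    if input_shape is not None:
        if any(d <= 0 for d in input_shape):
            raise ValueError("--input_shape needs positive integers")
        return list(input_shape)
    resolved = list(model_shape)
    if resolved[0] == -1:
        if batch is None:
            raise ValueError(
                "Shape %s is not fully defined; use --batch or --input_shape"
                % resolved)
        resolved[0] = batch
    return resolved

# The YOLOv3 placeholder from the log above:
print(override_input_shape([-1, 416, 416, 3], batch=1))            # [1, 416, 416, 3]
print(override_input_shape([-1, 416, 416, 3],
                           input_shape=[1, 416, 416, 3]))          # [1, 416, 416, 3]
```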
Thanks Hyodo, this helped.
Thanks Hyodo, I need more help on this.
First I ran this command, as described in the Intel OpenVINO documentation:
python mo.py --input_model C:\Users\raryax\Desktop\mobilenet_v1_1.0_224\mobilenet_v1_1.0_224\mobilenet_v1_1.0_224_frozen.pb
I used the mobilenet_v1_1.0_224 model; it is a standard TensorFlow model.
After seeing the error (the complete error log is below), it looks like there is some problem with the shape parameter.
So what value has to be passed to it?
I ran the command below, passing --input_shape [1,416,416,3], and got this error:
python mo.py --input_model C:\Users\raryax\Desktop\mobilenet_v1_1.0_224\mobilenet_v1_1.0_224\mobilenet_v1_1.0_224_frozen.pb --input_shape [1,416,416,3]
mo.py: error: unrecognized arguments: --input_shape[1,416,416,3]
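That "unrecognized arguments" line shows the flag and its value reaching the parser as one glued token, `--input_shape[1,416,416,3]`, with no space in between; argparse-style parsers then treat the whole token as an unknown option. A minimal sketch of that behavior (Model Optimizer's real CLI is more elaborate; this is just an argparse toy):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--input_model')
parser.add_argument('--input_shape')

# With a space, the value is consumed normally.
ok, _ = parser.parse_known_args(
    ['--input_model', 'model.pb', '--input_shape', '[1,224,224,3]'])
print(ok.input_shape)   # [1,224,224,3]

# Glued together, the whole token is reported as unrecognized
# and the option is left unset.
bad, unknown = parser.parse_known_args(
    ['--input_model', 'model.pb', '--input_shape[1,224,224,3]'])
print(unknown)          # ['--input_shape[1,224,224,3]']
```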
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: C:\Users\raryax\Desktop\mobilenet_v1_1.0_224\mobilenet_v1_1.0_224\mobilenet_v1_1.0_224_frozen.pb
- Path for generated IR: C:\Intel\computer_vision_sdk_2018.4.420\deployment_tools\model_optimizer\.
- IR output name: mobilenet_v1_1.0_224_frozen
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: Not specified, inherited from the model
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Offload unsupported operations: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Operations to offload: None
- Patterns to offload: None
- Use the config file: None
Model Optimizer version: 1.4.292.6ef7232d
[ ERROR ] Shape [ -1 224 224 3] is not fully defined for output 0 of "input". Use --input_shape with positive integers to override model input shapes.
[ ERROR ] Cannot infer shapes or values for node "input".
[ ERROR ] Not all output shapes were inferred or fully defined for node "input".
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #40.
[ ERROR ]
[ ERROR ] It can happen due to bug in custom shape infer function <function tf_placeholder_ext.<locals>.<lambda> at 0x000002B903FAF9D8>.
[ ERROR ] Or because the node inputs have incorrect values/shapes.
[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ] Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ] Stopped shape/value propagation at "input" node.
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.
Can you please explain in detail what value has to be passed for the shape, or should I pass more parameters?
@yr, rakesh arya
>[ ERROR ] Shape [ -1 224 224 3] is not fully defined for output 0 of "input". Use --input_shape with positive integers to override model input shapes.
Please try the command below.
python mo.py --input_model C:\Users\raryax\Desktop\mobilenet_v1_1.0_224\mobilenet_v1_1.0_224\mobilenet_v1_1.0_224_frozen.pb --input_shape [1,224,224,3]
Dear yr, rakesh arya,
Try this command to convert the ssd-mobilenet model:
python3 <mo_tf.py> --input_model <path_to .pb> --tensorflow_use_custom_operations_config </extension/front/tf/ssd_support.json> --input_shape [1,224,224,3]
@Hyodo Thanks for the help, it is working.
Hello, Hyodo:
I've settled it.
Thanks.
I have used Ubuntu 16.04 LTS.
In my case:
sudo python3 mo_tf.py \
    --input_model <.pb file directory> \
    --input_shape "[1, 64, 128, 3]" \
    --input "input" \
    --data_type FP32
It works for me.
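Quoting the shape, as done here, matters on Linux shells: square brackets are glob characters, and the spaces inside an unquoted `[1, 64, 128, 3]` would split it into several separate arguments. A quick illustration using Python's `shlex`, which follows POSIX shell word-splitting:

```python
import shlex

# Quoted: the whole shape survives as a single argument.
quoted = shlex.split('mo_tf.py --input_shape "[1, 64, 128, 3]"')
print(quoted)     # ['mo_tf.py', '--input_shape', '[1, 64, 128, 3]']

# Unquoted: whitespace splits the shape into four tokens.
unquoted = shlex.split('mo_tf.py --input_shape [1, 64, 128, 3]')
print(unquoted)   # ['mo_tf.py', '--input_shape', '[1,', '64,', '128,', '3]']
```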
Hi!
I had the same issue. I used this:
python3 mo_tf.py --input_model yolo_v3.pb --tensorflow_use_custom_operations_config yolo_v3.json --data_type FP16 --input_shape [1,416,416,3]
Then I got another error:
E0701 13:40:49.641717 140493334865728 main.py:323] -------------------------------------------------
E0701 13:40:49.641915 140493334865728 main.py:324] ----------------- INTERNAL ERROR ----------------
E0701 13:40:49.641994 140493334865728 main.py:325] Unexpected exception happened.
E0701 13:40:49.642042 140493334865728 main.py:326] Please contact Model Optimizer developers and forward the following information:
E0701 13:40:49.642104 140493334865728 main.py:327]
E0701 13:40:49.642600 140493334865728 main.py:328] Traceback (most recent call last):
File "/opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/mo/main.py", line 312, in main
return driver(argv)
File "/opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/mo/main.py", line 263, in driver
is_binary=not argv.input_model_is_text)
File "/opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/mo/pipeline/tf.py", line 141, in tf2nx
graph_clean_up_tf(graph)
File "/opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/mo/middle/passes/eliminate.py", line 186, in graph_clean_up_tf
graph_clean_up(graph, ['TFCustomSubgraphCall', 'Shape'])
File "/opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/mo/middle/passes/eliminate.py", line 181, in graph_clean_up
add_constant_operations(graph)
File "/opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/mo/middle/passes/eliminate.py", line 145, in add_constant_operations
Const(graph, dict(value=node.value, shape=np.array(node.value.shape))).create_node_with_data(data_nodes=node)
File "/opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/mo/ops/op.py", line 207, in create_node_with_data
[np.array_equal(old_data_value[id], data_node.value) for id, data_node in enumerate(data_nodes)])
AssertionError
E0701 13:40:49.642694 140493334865728 main.py:329] ---------------- END OF BUG REPORT --------------
E0701 13:40:49.642745 140493334865728 main.py:330] -------------------------------------------------
Hyodo, Katsuya, can you give some solution?
Thanks, szkudo
I am trying to convert a faster_rcnn_resnet50 frozen_inference_graph.pb to IR but am unable to do it.
python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model /home/iquantela/Desktop/Video_Analytics/video-analytics-master/road_cleaning/tensorflow-1/models/research/object_detection/inference_graph/frozen_inference_graph.pb --output detection_boxes,detection_scores,num_detections --tensorflow_use_custom_operations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support_api_v1.10.json --input_shape [1,1024,600,3]
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /home/iquantela/Desktop/Video_Analytics/video-analytics-master/road_cleaning/tensorflow-1/models/research/object_detection/inference_graph/frozen_inference_graph.pb
- Path for generated IR: /home/iquantela/.
- IR output name: frozen_inference_graph
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: detection_boxes,detection_scores,num_detections
- Input shapes: 1
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Operations to offload: None
- Patterns to offload: None
- Use the config file: /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support_api_v1.10.json
Model Optimizer version: 2019.1.1-83-g28dfbfd
[ ERROR ] Input shape "1" cannot be parsed.
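Note the summary line "Input shapes: 1" above: for some reason (often shell word-splitting of the unquoted value), only the token 1 reached Model Optimizer, so its shape parser rejects it; quoting the value, e.g. --input_shape "[1,1024,600,3]", usually keeps the whole shape together. A toy parser reproducing the rejection (illustrative; `parse_input_shape` is a hypothetical stand-in, not Model Optimizer's code):

```python
import re

def parse_input_shape(value):
    """Accept '[d1,d2,...]' with positive integers; reject anything
    else, mirroring the error message in the log above."""
    match = re.fullmatch(r'\[(\d+(?:,\s*\d+)*)\]', value)
    if match is None:
        raise ValueError('Input shape "%s" cannot be parsed.' % value)
    return [int(d) for d in match.group(1).split(',')]

print(parse_input_shape('[1,1024,600,3]'))   # [1, 1024, 600, 3]
try:
    parse_input_shape('1')                   # only '1' reached the parser
except ValueError as err:
    print(err)                               # Input shape "1" cannot be parsed.
```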