Hi All,
I am trying to port the Keras model shared in the link to OpenVINO.
I first converted the Keras model to a TensorFlow model, and now I am trying to convert the TensorFlow model to OpenVINO using mo_tf.py. I used the command:
python3 "/opt/intel/openvino_2019.3.376/deployment_tools/model_optimizer/mo_tf.py" --input_model xyz.pb --input_shape "[1,32,32,3]" --data_type FP32
But the conversion fails with the error shown below. Could you please help me get this model converted to OpenVINO?
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: [1,32,32,3]
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Operations to offload: None
- Patterns to offload: None
- Use the config file: None
Model Optimizer version: 2019.3.0-408-gac8584cb7
[ ERROR ] -------------------------------------------------
[ ERROR ] ----------------- INTERNAL ERROR ----------------
[ ERROR ] Unexpected exception happened.
[ ERROR ] Please contact Model Optimizer developers and forward the following information:
[ ERROR ] shapes (64,2) and (0,) not aligned: 2 (dim 1) != 0 (dim 0)
[ ERROR ] Traceback (most recent call last):
  File "/opt/intel/openvino_2019.3.376/deployment_tools/model_optimizer/mo/main.py", line 298, in main
    return driver(argv)
  File "/opt/intel/openvino_2019.3.376/deployment_tools/model_optimizer/mo/main.py", line 247, in driver
    is_binary=not argv.input_model_is_text)
  File "/opt/intel/openvino_2019.3.376/deployment_tools/model_optimizer/mo/pipeline/tf.py", line 163, in tf2nx
    for_graph_and_each_sub_graph_recursively(graph, fuse_linear_ops)
  File "/opt/intel/openvino_2019.3.376/deployment_tools/model_optimizer/mo/middle/pattern_match.py", line 58, in for_graph_and_each_sub_graph_recursively
    func(graph)
  File "/opt/intel/openvino_2019.3.376/deployment_tools/model_optimizer/mo/middle/passes/fusing/fuse_linear_ops.py", line 267, in fuse_linear_ops
    is_fused = _fuse_add(graph, node, fuse_nodes, False)
  File "/opt/intel/openvino_2019.3.376/deployment_tools/model_optimizer/mo/middle/passes/fusing/fuse_linear_ops.py", line 206, in _fuse_add
    fuse_node.in_port(2).data.set_value(bias_value + np.dot(fuse_node.in_port(1).data.get_value(), value))
ValueError: shapes (64,2) and (0,) not aligned: 2 (dim 1) != 0 (dim 0)
[ ERROR ] ---------------- END OF BUG REPORT --------------
[ ERROR ] -------------------------------------------------
Thanks,
Abhishek
Hi Abhishek,
As we discussed in an earlier thread, you confirmed that you had converted the model to IR. Kindly convert the model to a frozen TensorFlow .pb first, and then run the Model Optimizer as you did previously.
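For reference, a minimal freezing sketch along these lines could look like the snippet below (this assumes a TF 1.x-compatible environment and a Keras model file named liveness.model, both of which are assumptions rather than details from this thread):

# Sketch only: freeze a Keras model into a TensorFlow .pb using the TF 1.x graph API.
# The file name "liveness.model" is an assumption; substitute your own Keras model file.
import tensorflow as tf
from tensorflow.keras.models import load_model

tf.compat.v1.disable_eager_execution()            # build the model in graph mode

model = load_model("liveness.model")              # Keras .h5/.model checkpoint
sess = tf.compat.v1.keras.backend.get_session()

# Replace variables with constants so the graph is self-contained ("frozen").
output_names = [out.op.name for out in model.outputs]
frozen_graph_def = tf.compat.v1.graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), output_names)

# Write the frozen graph that mo_tf.py expects as --input_model.
tf.io.write_graph(frozen_graph_def, ".", "liveness.model.pb", as_text=False)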
Best Regards,
Surya
--Reserved
Thanks,
Abhishek
Hi Abhishek,
I generated the TensorFlow model from Keras using the script attached to this thread, and then I was able to generate the IR using the Model Optimizer. Please find the command as well as the logs below.
C:\Program Files (x86)\IntelSWTools\openvino_2020.2.117\deployment_tools\model_optimizer>python mo_tf.py --input_model C:\Users\suryap1x\Desktop\liveness.model.pb --input_shape [1,32,32,3]
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: C:\Users\suryap1x\Desktop\liveness.model.pb
- Path for generated IR: C:\Program Files (x86)\IntelSWTools\openvino_2020.2.117\deployment_tools\model_optimizer\.
- IR output name: liveness.model
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: [1,32,32,3]
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Use the config file: None
Model Optimizer version: 2020.2.0-60-g0bc66e26ff
[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: C:\Program Files (x86)\IntelSWTools\openvino_2020.2.117\deployment_tools\model_optimizer\.\liveness.model.xml
[ SUCCESS ] BIN file: C:\Program Files (x86)\IntelSWTools\openvino_2020.2.117\deployment_tools\model_optimizer\.\liveness.model.bin
[ SUCCESS ] Total execution time: 15.26 seconds.
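As a quick sanity check (not part of the log above; the file names and the CPU device are assumptions), the generated IR can be loaded and run with the Inference Engine Python API roughly as follows:

# Sketch: load the generated IR and run one inference on dummy data (CPU assumed).
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="liveness.model.xml", weights="liveness.model.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

input_blob = next(iter(net.input_info))                     # single input assumed
n, c, h, w = net.input_info[input_blob].input_data.shape    # IR inputs are NCHW

dummy = np.random.rand(n, c, h, w).astype(np.float32)
result = exec_net.infer(inputs={input_blob: dummy})
print({name: out.shape for name, out in result.items()})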
Best Regards,
Surya
I am not sure why it crashed when I was trying to convert the model using 2019 R1. Anyway, I will update the OpenVINO toolkit and try again.
Thanks,
Abhishek
