Hi,
I'm trying to convert TensorFlow models to OpenVINO IR files. There are two kinds of models.
The first model is from tensorflow/models. I followed the OpenVINO steps and converted it with the command line below:
python /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py \
> --input_model frozen_inference_graph.pb \
> --input_shape [1,513,513,3]
And I got the following error message:
Model Optimizer arguments:
Common parameters:
  - Path to the Input Model: /home/Ryan/realtime_segmenation/models/deeplabv3_mnv2_pascal_train_aug/frozen_inference_graph.pb
  - Path for generated IR: /home/Ryan/realtime_segmenation/models/deeplabv3_mnv2_pascal_train_aug/.
  - IR output name: frozen_inference_graph
  - Log level: ERROR
  - Batch: Not specified, inherited from the model
  - Input layers: Not specified, inherited from the model
  - Output layers: Not specified, inherited from the model
  - Input shapes: [1,513,513,3]
  - Mean values: Not specified
  - Scale values: Not specified
  - Scale factor: Not specified
  - Precision of IR: FP32
  - Enable fusing: True
  - Enable grouped convolutions fusing: True
  - Move mean values to preprocess section: False
  - Reverse input channels: False
TensorFlow specific parameters:
  - Input model in text protobuf format: False
  - Offload unsupported operations: False
  - Path to model dump for TensorBoard: None
  - Update the configuration file with input/output node names: None
  - Use configuration file used to generate the model with Object Detection API: None
  - Operations to offload: None
  - Patterns to offload: None
  - Use the config file: None
Model Optimizer version: 1.2.185.5335e231
/opt/intel/computer_vision_sdk_fpga_2018.3.343/deployment_tools/model_optimizer/mo/front/common/partial_infer/slice.py:90: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
  value = value[slice_idx]
[ ERROR ]  Cannot infer shapes or values for node "GreaterEqual".
[ ERROR ]  Input 0 of node GreaterEqual was passed int64 from add_1_port_0_ie_placeholder:0 incompatible with expected int32.
[ ERROR ]
[ ERROR ]  It can happen due to bug in custom shape infer function <function tf_native_tf_node_infer at 0x7f415f23c620>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Stopped shape/value propagation at "GreaterEqual" node.
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.
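To figure out which node names I could pass to MO via --input/--output, I inspect the frozen graph's nodes with a small helper. This is a rough sketch assuming TensorFlow 1.x; the Placeholder and unconsumed-node heuristics are just my assumptions, not anything from OpenVINO:

```python
def load_graph_nodes(pb_path):
    """Read a frozen GraphDef and return (name, op, inputs) triples."""
    import tensorflow as tf  # TF 1.x; imported lazily so the helpers below work standalone
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(pb_path, "rb") as f:
        graph_def.ParseFromString(f.read())
    return [(n.name, n.op, list(n.input)) for n in graph_def.node]

def guess_inputs(nodes):
    """Heuristic: Placeholder nodes are usually the graph inputs."""
    return [name for name, op, _ in nodes if op == "Placeholder"]

def guess_outputs(nodes):
    """Heuristic: non-Const nodes that no other node consumes are likely outputs."""
    consumed = {inp.split(":")[0].lstrip("^")
                for _, _, inputs in nodes for inp in inputs}
    return [name for name, op, _ in nodes
            if name not in consumed and op != "Const"]
```

Then `guess_inputs(load_graph_nodes("frozen_inference_graph.pb"))` gives candidate names for --input, and `guess_outputs` gives candidates for --output.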
------------------
The second model is a style-transfer model whose graph is built in Python code. You can download the pretrained model from the link provided on GitHub.
The pretrained model is a metagraph file, and I got an error when converting the metagraph to IR:
python3.6 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py \
> --input_meta_graph udnie.ckpt.meta
Model Optimizer arguments:
Common parameters:
  - Path to the Input Model: None
  - Path for generated IR: /home/Ryan/realtime_segmenation/computex/style_model/.
  - IR output name: udnie.ckpt
  - Log level: ERROR
  - Batch: Not specified, inherited from the model
  - Input layers: Not specified, inherited from the model
  - Output layers: Not specified, inherited from the model
  - Input shapes: Not specified, inherited from the model
  - Mean values: Not specified
  - Scale values: Not specified
  - Scale factor: Not specified
  - Precision of IR: FP32
  - Enable fusing: True
  - Enable grouped convolutions fusing: True
  - Move mean values to preprocess section: False
  - Reverse input channels: False
TensorFlow specific parameters:
  - Input model in text protobuf format: False
  - Offload unsupported operations: False
  - Path to model dump for TensorBoard: None
  - Update the configuration file with input/output node names: None
  - Use configuration file used to generate the model with Object Detection API: None
  - Operations to offload: None
  - Patterns to offload: None
  - Use the config file: None
Model Optimizer version: 1.2.185.5335e231
[ ERROR ]  Cannot load input model: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for /home/Ryan/realtime_segmenation/computex/style_model/udnie.ckp
  [[Node: save/RestoreV2_81 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2_81/tensor_names, save/RestoreV2_81/shape_and_slices)]]
Caused by op 'save/RestoreV2_81', defined at:
  File "/opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py", line 31, in <module>
    sys.exit(main(get_tf_cli_parser(), 'tf'))
  File "/opt/intel/computer_vision_sdk_fpga_2018.3.343/deployment_tools/model_optimizer/mo/main.py", line 321, in main
    return driver(argv)
  File "/opt/intel/computer_vision_sdk_fpga_2018.3.343/deployment_tools/model_optimizer/mo/main.py", line 263, in driver
    mean_scale_values=mean_scale)
  File "/opt/intel/computer_vision_sdk_fpga_2018.3.343/deployment_tools/model_optimizer/mo/pipeline/tf.py", line 80, in tf2nx
    saved_model_tags=argv.saved_model_tags)
  File "/opt/intel/computer_vision_sdk_fpga_2018.3.343/deployment_tools/model_optimizer/mo/front/tf/loader.py", line 140, in load_tf_graph_def
    restorer = tf.train.import_meta_graph(input_meta_graph_def)
  File "/usr/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1960, in import_meta_graph
    **kwargs)
  File "/usr/lib/python3.6/site-packages/tensorflow/python/framework/meta_graph.py", line 744, in import_scoped_meta_graph
    producer_op_list=producer_op_list)
  File "/usr/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 432, in new_func
    return func(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 442, in import_graph_def
    _ProcessNewOps(graph)
  File "/usr/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 234, in _ProcessNewOps
    for new_op in graph._add_new_tf_operations(compute_devices=False):  # pylint: disable=protected-access
  File "/usr/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3563, in _add_new_tf_operations
    for c_op in c_api_util.new_tf_operations(self)
  File "/usr/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3563, in <listcomp>
    for c_op in c_api_util.new_tf_operations(self)
  File "/usr/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3450, in _create_op_from_tf_operation
    ret = Operation(c_op, self)
  File "/usr/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1740, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access
NotFoundError (see above for traceback): Unsuccessful TensorSliceReader constructor: Failed to find any matching files for /home/Ryan/realtime_segmenation/computex/style_model/udnie.ckp
  [[Node: save/RestoreV2_81 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2_81/tensor_names, save/RestoreV2_81/shape_and_slices)]]
I tried to freeze this model in Python code after sess.run:
from tensorflow.python.framework import graph_io
frozen = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ["clip_by_value"])
graph_io.write_graph(frozen, './style_model/IR/', 'inference_graph.pb', as_text=False)
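For completeness, the whole freeze step I use looks roughly like this. It's a sketch assuming TF 1.x; the output-node name and paths come from my model, so treat them as illustrative:

```python
def checkpoint_prefix(meta_path):
    """tf.train.Saver restores from the checkpoint *prefix*, i.e. the
    .meta path without its suffix (udnie.ckpt.meta -> udnie.ckpt)."""
    suffix = ".meta"
    if not meta_path.endswith(suffix):
        raise ValueError("expected a .meta file, got %s" % meta_path)
    return meta_path[:-len(suffix)]

def freeze_metagraph(meta_path, output_nodes, out_dir, out_name):
    """Restore variables from the checkpoint and write a frozen .pb."""
    import tensorflow as tf  # TF 1.x; imported lazily so checkpoint_prefix works standalone
    from tensorflow.python.framework import graph_io
    with tf.Session() as sess:
        saver = tf.train.import_meta_graph(meta_path)
        # Requires the checkpoint data files to sit next to the .meta file
        # under the same prefix; the NotFoundError above suggests TF could
        # not find them.
        saver.restore(sess, checkpoint_prefix(meta_path))
        frozen = tf.graph_util.convert_variables_to_constants(
            sess, sess.graph_def, output_nodes)
        graph_io.write_graph(frozen, out_dir, out_name, as_text=False)
```

Called as `freeze_metagraph('udnie.ckpt.meta', ['clip_by_value'], './style_model/IR/', 'inference_graph.pb')`.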
Then I converted this .pb file with MO, but still got an error:
Model Optimizer arguments:
Common parameters:
  - Path to the Input Model: /home/Ryan/realtime_segmenation/computex/style_model/IR/inference_graph.pb
  - Path for generated IR: /home/Ryan/realtime_segmenation/computex/style_model/IR/.
  - IR output name: inference_graph
  - Log level: ERROR
  - Batch: Not specified, inherited from the model
  - Input layers: Not specified, inherited from the model
  - Output layers: Not specified, inherited from the model
  - Input shapes: Not specified, inherited from the model
  - Mean values: Not specified
  - Scale values: Not specified
  - Scale factor: Not specified
  - Precision of IR: FP32
  - Enable fusing: True
  - Enable grouped convolutions fusing: True
  - Move mean values to preprocess section: False
  - Reverse input channels: False
TensorFlow specific parameters:
  - Input model in text protobuf format: False
  - Offload unsupported operations: False
  - Path to model dump for TensorBoard: None
  - Update the configuration file with input/output node names: None
  - Use configuration file used to generate the model with Object Detection API: None
  - Operations to offload: None
  - Patterns to offload: None
  - Use the config file: None
Model Optimizer version: 1.2.185.5335e231
[ ERROR ]  Shape is not defined for output 0 of "Slice".
[ ERROR ]  Cannot infer shapes or values for node "Slice".
[ ERROR ]  Not all output shapes were inferred or fully defined for node "Slice". For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #40.
[ ERROR ]
[ ERROR ]  It can happen due to bug in custom shape infer function <function tf_slice_infer at 0x7f028badb730>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Stopped shape/value propagation at "Slice" node.
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.
My questions are:
- I tried to re-freeze the first model with export_model.py from the GitHub project, but got the same error. How can I fix it?
- The style-transfer model needs to build its graph and restore its variables from the metagraph, so I tried saving the graph from Python code after sess.run, but that produced the error about the Slice layer. How can I fix that?
Thanks in advance.
Best regards,
Ryan.
Ryan, for the TensorFlow DeepLab model, please try the following command (it works for me):
python mo.py --scale 1 --model_name test_deeplabv3_mobilenet_v2_argmax_tfFP32CPU1Truesync --input_shape "(1,513,513,3)" --input 1:mul_1 --input_model "c:\Users\sdramani\Downloads\deeplabv3_mnv2_pascal_train_aug\frozen_inference_graph.pb" --framework tf --output_dir c:\Users\sdramani\Downloads\out_dir --data_type FP32 --output ArgMax
In the output_dir you should see the following 3 files created:
test_deeplabv3_mobilenet_v2_argmax_tfFP32CPU1Truesync.xml
test_deeplabv3_mobilenet_v2_argmax_tfFP32CPU1Truesync.mapping
test_deeplabv3_mobilenet_v2_argmax_tfFP32CPU1Truesync.bin
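To sanity-check the generated IR, you can load the .xml/.bin pair with the Inference Engine Python API. This is only a rough sketch against the 2018-era API (IENetwork/IEPlugin; newer releases changed these names), and the paths and device are illustrative:

```python
import os

def ir_paths(model_name, out_dir="."):
    """MO writes the topology to <name>.xml and the weights to <name>.bin."""
    return (os.path.join(out_dir, model_name + ".xml"),
            os.path.join(out_dir, model_name + ".bin"))

def load_ir(xml_path, bin_path, device="CPU"):
    """Load an IR pair into the Inference Engine (2018-era Python API)."""
    from openvino.inference_engine import IENetwork, IEPlugin
    net = IENetwork(model=xml_path, weights=bin_path)
    plugin = IEPlugin(device=device)
    return plugin.load(network=net)
```

If `load_ir(*ir_paths("test_deeplabv3_mobilenet_v2_argmax_tfFP32CPU1Truesync", "out_dir"))` succeeds, the IR itself is well-formed.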
As for Fast Style Transfer we are still investigating this. Stay tuned.
Thanks for using OpenVINO!
Shubha
Shubha R. (Intel) wrote:
Ryan for the TensorFlow DeepLab model please try the following command (it works for me):
python mo.py --scale 1 --model_name test_deeplabv3_mobilenet_v2_argmax_tfFP32CPU1Truesync --input_shape "(1,513,513,3)" --input 1:mul_1 --input_model "c:\Users\sdramani\Downloads\deeplabv3_mnv2_pascal_train_aug\frozen_inference_graph.pb" --framework tf --output_dir c:\Users\sdramani\Downloads\out_dir --data_type FP32 --output ArgMax
In the output_dir you should see the following 3 files created:
test_deeplabv3_mobilenet_v2_argmax_tfFP32CPU1Truesync.xml
test_deeplabv3_mobilenet_v2_argmax_tfFP32CPU1Truesync.mapping
test_deeplabv3_mobilenet_v2_argmax_tfFP32CPU1Truesync.bin
As for Fast Style Transfer we are still investigating this. Stay tuned.
Thanks for using OpenVino !
Shubha
Thanks Shubha,
These IR files work fine in my project.
So when MO reports an error like this, do I need to explicitly define the model's input and output layers?