Hi,
I fine-tuned the TensorFlow-Slim Inception V3 image classification model by running the training script.
Then, I tried to freeze the graph by running
python3 /tensorflow/tensorflow/python/tools/freeze_graph.py --input_graph=/tmp/flowers-models/inception_v3/graph.pbtxt --input_checkpoint=/tmp/flowers-models/inception_v3/model.ckpt-1000 --input_binary=false --output_graph=/tmp/frozen_inception_v3.pb --output_node_names=InceptionV3/Predictions/Reshape_1
and got the file
/tmp/frozen_inception_v3.pb
Then I tried to convert the model by running
python3 mo_tf.py --input_model /tmp/frozen_inception_v3.pb -b 1 --mean_value [127.5,127.5,127.5] --scale 127.5
and got the error:
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /data/train/frozen_inception_v3.pb
- Path for generated IR: /opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer/.
- IR output name: frozen_inception_v3
- Log level: ERROR
- Batch: 1
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: Not specified, inherited from the model
- Mean values: [127.5,127.5,127.5]
- Scale values: Not specified
- Scale factor: 127.5
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Operations to offload: None
- Patterns to offload: None
- Use the config file: None
Model Optimizer version: 2019.1.0-341-gc9b66a2
WARNING: Logging before flag parsing goes to stderr.
E0414 10:12:34.278545 140149041551104 main.py:317] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.front.input_cut.InputCut'>): Graph contains 0 node after executing <class 'extensions.front.input_cut.InputCut'>. It considered as error because resulting IR will be empty which is not usual
Any suggestions?
Hi cfu,
Since your model is a TensorFlow*-Slim image classification model, please follow the steps at Converting TensorFlow*-Slim Image Classification Model.
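The documentation referenced above boils down to two commands; here is a minimal sketch of that flow (the paths to the Slim repository, the checkpoint, and the Model Optimizer are assumptions, adjust them to your setup):

```shell
# Sketch of the documented TensorFlow-Slim conversion flow.
# Paths and file names below are assumptions, not confirmed in this thread.

# 1. Export the inference graph for the chosen Slim model
python3 tf_models/research/slim/export_inference_graph.py \
  --model_name inception_v3 \
  --output_file inception_v3_inference_graph.pb

# 2. Pass the inference graph plus the matching checkpoint to the Model Optimizer
python3 mo_tf.py \
  --input_model ./inception_v3_inference_graph.pb \
  --input_checkpoint ./inception_v3.ckpt \
  -b 1 \
  --mean_value [127.5,127.5,127.5] \
  --scale 127.5
```

The key point of this flow is that the Model Optimizer consumes the exported inference graph together with the checkpoint, rather than a graph frozen from training artifacts.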
Best Regards,
Surya
Hi Chauhan,
I could convert the pre-trained slim image classification model without any issue.
However, after fine-tuning and freezing the model, I am not able to convert it to IR.
Hi cfu,
Did you generate the inference graph for Inception V3 using the following command (shown for Inception V1) from the documentation?
python3 tf_models/research/slim/export_inference_graph.py \
  --model_name inception_v1 \
  --output_file inception_v1_inference_graph.pb
Did you try passing the ckpt file to the model optimizer as given in the documentation?
<MODEL_OPTIMIZER_INSTALL_DIR>/mo_tf.py --input_model ./inception_v1_inference_graph.pb --input_checkpoint ./inception_v1.ckpt -b 1 --mean_value [127.5,127.5,127.5] --scale 127.5
Also, please share the output you get when passing the model to summarize_graph.py.
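For reference, the Model Optimizer ships a summarize_graph.py utility that prints the inputs and outputs it detects in a frozen graph; a hedged sketch of invoking it (the install path and model path are assumptions):

```shell
# Assumed OpenVINO install layout; adjust both paths to your setup.
python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo/utils/summarize_graph.py \
  --input_model /tmp/frozen_inception_v3.pb
```

Its output is a quick way to confirm whether the frozen graph actually contains a Placeholder input and the expected output node before attempting conversion.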
Best Regards,
Surya
Hi Chauhan,
Thanks for the reply. Instead of fine-tuning Inception V3, I tried starting with Inception V1 first.
I followed the script https://github.com/tensorflow/models/blob/master/research/slim/scripts/finetune_inception_v1_on_flowers.sh
to fine-tune a pre-trained model. You can find the files under the fine-tune folder at this link:
https://drive.google.com/drive/folders/1zFJwYAZSj-aII4wbDqpoQq8_L0ynV6m0?usp=sharing
Then, I froze the model by running
python3 tensorflow/python/tools/freeze_graph.py --input_graph graph.pbtxt --input_checkpoint model.ckpt-3000 --output_graph /tmp/inception_v1_freeze.pb --output_node_names InceptionV1/Logits/Predictions/Softmax
Then, I tried to convert the frozen model to IR by running
python3 mo_tf.py --input_model /tmp/inception_v1_freeze.pb -b 1 --mean_value [127.5,127.5,127.5] --scale 127.5
but got the following error (the TensorFlow version is 1.15.2):
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /tmp/inception_v1_freeze.pb
- Path for generated IR: /opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/.
- IR output name: inception_v1_freeze
- Log level: ERROR
- Batch: 1
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: Not specified, inherited from the model
- Mean values: [127.5,127.5,127.5]
- Scale values: Not specified
- Scale factor: 127.5
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Use the config file: None
Model Optimizer version: 2020.2.0-60-g0bc66e26ff
[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.front.output_cut.OutputCut'>): Graph contains 0 node after executing <class 'extensions.front.output_cut.OutputCut'>. It considered as error because resulting IR will be empty which is not usual
Hi cfu,
Could you please attach the ckpt file to the above-mentioned drive link as well, so that we can replicate the issue at our end?
Best Regards,
Surya
Hi Chauhan,
I have attached the ckpt file inception_v1.ckpt.
Hi cfu,
Please use the following command to generate IR.
sudo python3 mo.py --input_model /home/surya/Downloads/inception_v1_inference_graph.pb --input_checkpoint /home/surya/Downloads/inception_v1.ckpt -b 1 --mean_value [127.5,127.5,127.5] --scale 127.5

Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /home/surya/Downloads/inception_v1_inference_graph.pb
- Path for generated IR: /opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/.
- IR output name: inception_v1_inference_graph
- Log level: ERROR
- Batch: 1
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: Not specified, inherited from the model
- Mean values: [127.5,127.5,127.5]
- Scale values: Not specified
- Scale factor: 127.5
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Use the config file: None
Model Optimizer version: 2020.2.0-60-g0bc66e26ff
[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: /opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/./inception_v1_inference_graph.xml
[ SUCCESS ] BIN file: /opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/./inception_v1_inference_graph.bin
[ SUCCESS ] Total execution time: 13.22 seconds.
[ SUCCESS ] Memory consumed: 491 MB.
Hi Chauhan,
I can also produce the IR model with inception_v1.ckpt. However, I am still not able to convert the fine-tuned model to IR.
Hi cfu,
Can you please confirm you are generating the inference graph for inception v3 using:
python3 tf_models/research/slim/export_inference_graph.py \
  --model_name inception_v3 \
  --output_file inception_v3_inference_graph.pb
Please share the output you get when passing the inference graph to summarize_graph.py and to the Model Optimizer.
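Putting the pieces of this thread together, one plausible flow for the fine-tuned case is sketched below. This is an editor's sketch, not a confirmed resolution: the assumption is that graph.pbtxt written during training describes the training graph (with input queues and no Placeholder), so an inference graph is exported separately and then frozen against the fine-tuned checkpoint. The use of --dataset_name to set the class count, and the checkpoint path taken from the first post, are also assumptions.

```shell
# 1. Export an inference graph whose class count matches the fine-tuned
#    flowers dataset (--dataset_name selects num_classes; an assumption here).
python3 tf_models/research/slim/export_inference_graph.py \
  --model_name inception_v3 \
  --dataset_name flowers \
  --output_file /tmp/inception_v3_inf_graph.pb

# 2. Freeze the exported inference graph with the FINE-TUNED checkpoint
#    (not the training graph.pbtxt, and not the pre-trained checkpoint).
python3 tensorflow/python/tools/freeze_graph.py \
  --input_graph /tmp/inception_v3_inf_graph.pb \
  --input_binary true \
  --input_checkpoint /tmp/flowers-models/inception_v3/model.ckpt-1000 \
  --output_graph /tmp/frozen_inception_v3.pb \
  --output_node_names InceptionV3/Predictions/Reshape_1

# 3. Convert the frozen graph to IR
python3 mo_tf.py --input_model /tmp/frozen_inception_v3.pb \
  -b 1 --mean_value [127.5,127.5,127.5] --scale 127.5
```

If the Model Optimizer still reports an empty graph after this, the summarize_graph.py output requested above should show whether the frozen file contains the expected Placeholder input and Softmax output nodes.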
Best regards,
Surya
