Intel® Optimized AI Frameworks
Receive community support for questions related to PyTorch* and TensorFlow* frameworks.

Unable to convert retrained TensorFlow ssd_mobilenet_v2_coco using Model Optimizer

nikogamulin
Beginner

Hi,

 

I downloaded ssd_mobilenet_v2_coco from the listed URL and trained it on a custom dataset.

 

To run training in Docker, I used the following command:

/tensorflow/models/research# python object_detection/model_main.py \
    --pipeline_config_path=learn_vehicle/ckpt/pipeline.config \
    --model_dir=learn_vehicle/train \
    --num_train_steps=500 \
    --num_eval_steps=100

After training, I tried to generate the IR files on the virtual machine where OpenVINO is installed, by copying the non-frozen MetaGraph files (from the learn_vehicle/train folder) to the VM and running the following command:

python3 mo_tf.py --input_meta_graph ~/vehicle_model/model.ckpt-500.meta --tensorflow_use_custom_operations_config extensions/front/tf/ssd_v2_support.json --data_type FP16

(~/vehicle_model is the folder on the VM)

 

After running the command above, I got the following error:

Model Optimizer arguments:
Common parameters:
    - Path to the Input Model: None
    - Path for generated IR: /home/niko/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/.
    - IR output name: model.ckpt-500
    - Log level: ERROR
    - Batch: Not specified, inherited from the model
    - Input layers: Not specified, inherited from the model
    - Output layers: Not specified, inherited from the model
    - Input shapes: Not specified, inherited from the model
    - Mean values: Not specified
    - Scale values: Not specified
    - Scale factor: Not specified
    - Precision of IR: FP16
    - Enable fusing: True
    - Enable grouped convolutions fusing: True
    - Move mean values to preprocess section: False
    - Reverse input channels: False
TensorFlow specific parameters:
    - Input model in text protobuf format: False
    - Offload unsupported operations: False
    - Path to model dump for TensorBoard: None
    - List of shared libraries with TensorFlow custom layers implementation: None
    - Update the configuration file with input/output node names: None
    - Use configuration file used to generate the model with Object Detection API: None
    - Operations to offload: None
    - Patterns to offload: None
    - Use the config file: /home/niko/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json
Model Optimizer version: 1.5.12.49d067a0
[ ERROR ] Graph contains 0 node after executing add_output_ops and add_input_ops. It may happen due to absence of 'Placeholder' layer in the model. It considered as error because resulting IR will be empty which is not usual
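For completeness, the files in learn_vehicle/train are the raw training checkpoints; the Object Detection API also ships an export script that freezes the latest checkpoint into a single frozen_inference_graph.pb, which is the form the OpenVINO documentation describes for SSD models. A sketch of that export step, assuming the standard export_inference_graph.py from the tensorflow/models repo and the paths from my setup above:

```shell
# From the TF models research directory (the same one used for training).
# export_inference_graph.py is part of the Object Detection API; it freezes
# the given checkpoint into frozen_inference_graph.pb plus a pipeline.config.
python object_detection/export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path learn_vehicle/ckpt/pipeline.config \
    --trained_checkpoint_prefix learn_vehicle/train/model.ckpt-500 \
    --output_directory learn_vehicle/export

# Then convert the frozen graph instead of the raw MetaGraph:
python3 mo_tf.py \
    --input_model learn_vehicle/export/frozen_inference_graph.pb \
    --tensorflow_use_custom_operations_config extensions/front/tf/ssd_v2_support.json \
    --tensorflow_object_detection_api_pipeline_config learn_vehicle/export/pipeline.config \
    --data_type FP16
```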

I would appreciate it if anyone could help solve this issue.

 

Thank you,

 

Niko

6 Replies
Dona_G_Intel
Employee

Thank you for reaching out to us!

 

We expect that you have the pipeline.config file for the trained model.

Hence, please try the command below:

python3 mo_tf.py \
    --input_meta_graph ~/vehicle_model/model.ckpt-500.meta \
    --output_dir /home/uXXXX/ \
    --tensorflow_use_custom_operations_config extensions/front/tf/ssd_v2_support.json \
    --tensorflow_object_detection_api_pipeline_config ~/<path_to>/pipeline.config

 

If that does not work, try passing the input shape as an argument (--input_shape [1,X,X,3], where X is the input dimension and 3 is the number of channels).
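Concretely, the command with the shape argument would look like this (300x300 is only an example, taken from the usual SSD MobileNet v2 input size; the placeholder paths are as above and should be adjusted to your setup):

```shell
# Same conversion as above, with the input shape given explicitly.
# [1,300,300,3] = batch 1, 300x300 image, 3 channels; match your pipeline.config.
python3 mo_tf.py \
    --input_meta_graph ~/vehicle_model/model.ckpt-500.meta \
    --output_dir /home/uXXXX/ \
    --tensorflow_use_custom_operations_config extensions/front/tf/ssd_v2_support.json \
    --tensorflow_object_detection_api_pipeline_config ~/<path_to>/pipeline.config \
    --input_shape [1,300,300,3]
```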

Kindly find the attached screenshot (converting the meta file to IR), which worked fine for us.

 

Please let me know if you still face this issue. Kindly share the trained model and configuration file for further troubleshooting.

 

nikogamulin
Beginner

Thank you for the quick response! I modified the command as follows, but unfortunately got an error:

python3 mo_tf.py \
    --input_meta_graph ~/vehicle_model/checkpoint/model.ckpt-500.meta \
    --output_dir ~/vehicle_model/checkpoint/output \
    --tensorflow_use_custom_operations_config extensions/front/tf/ssd_v2_support.json \
    --tensorflow_object_detection_api_pipeline_config ~/vehicle_model/checkpoint/pipeline.config \
    --data_type FP16

Model Optimizer arguments:
Common parameters:
    - Path to the Input Model: None
    - Path for generated IR: /home/niko/vehicle_model/checkpoint/output
    - IR output name: model.ckpt-500
    - Log level: ERROR
    - Batch: Not specified, inherited from the model
    - Input layers: Not specified, inherited from the model
    - Output layers: Not specified, inherited from the model
    - Input shapes: Not specified, inherited from the model
    - Mean values: Not specified
    - Scale values: Not specified
    - Scale factor: Not specified
    - Precision of IR: FP16
    - Enable fusing: True
    - Enable grouped convolutions fusing: True
    - Move mean values to preprocess section: False
    - Reverse input channels: False
TensorFlow specific parameters:
    - Input model in text protobuf format: False
    - Offload unsupported operations: False
    - Path to model dump for TensorBoard: None
    - List of shared libraries with TensorFlow custom layers implementation: None
    - Update the configuration file with input/output node names: None
    - Use configuration file used to generate the model with Object Detection API: /home/niko/vehicle_model/checkpoint/pipeline.config
    - Operations to offload: None
    - Patterns to offload: None
    - Use the config file: /home/niko/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json
Model Optimizer version: 1.5.12.49d067a0
[ ERROR ] Graph contains 0 node after executing add_output_ops and add_input_ops. It may happen due to absence of 'Placeholder' layer in the model. It considered as error because resulting IR will be empty which is not usual

Including the parameter --input_shape [1,300,300,3] didn't work either.

 

Enclosed please find the checkpoint files that the training procedure generated. I would really appreciate your help with further troubleshooting.

 

Dona_G_Intel
Employee
We were able to recreate the error. We are working on it and will keep you posted.
Dona_G_Intel
Employee
We tried training with other TFRecords to determine whether the issue is related to the generated checkpoints, but we are getting the same error. We are still working on it and will let you know in a day or two.
Dona_G_Intel
Employee
We have tried the following methods as well, but could not resolve the issue:

1) Tried to find the details of the graph using summarize_graph, but could not find any placeholders
2) Trained with a different set of TFRecords and tried the same conversion
3) Tried to convert the .pbtxt to .pb format and then convert that to an IR model
4) Tried passing the --output argument as "detection_boxes,detection_scores,num_detections", and also --output "detection"

Hence, we have escalated the issue to the Subject Matter Expert on OpenVINO, who suggested posting a query on the OpenVINO forum for better guidance on the issue: https://software.intel.com/en-us/forums/computer-vision

Please post your query on this link.
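For anyone following along, the summarize_graph step mentioned in 1) refers to TensorFlow's graph inspection tool, which is built from the tensorflow source tree. A typical invocation against a graph file looks roughly like this (the path to the graph is illustrative; the tool prints the detected inputs, outputs, and op counts, which is how one would normally confirm whether a Placeholder input exists):

```shell
# Build TensorFlow's summarize_graph tool (run from a tensorflow source checkout).
bazel build tensorflow/tools/graph_transforms:summarize_graph

# Inspect a frozen graph: prints possible inputs/outputs and op statistics.
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph \
    --in_graph=/path/to/frozen_inference_graph.pb
```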
Dona_G_Intel
Employee
As suggested, please post a query regarding this on the OpenVINO forum for better guidance. We are closing this thread from our end. After the case closure, you will receive a survey email; we would appreciate it if you could complete the survey regarding the support you received.