<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Unable to convert retrained TensorFlow ssd_mobilenet_v2_coco using Model Optimizer in Intel® Optimized AI Frameworks</title>
    <link>https://community.intel.com/t5/Intel-Optimized-AI-Frameworks/Unable-to-convert-retrained-TensorFlow-ssd-mobilenet-v2-coco/m-p/662678#M4</link>
    <description>We were able to recreate the error. We are working on it. Will keep you posted.</description>
    <pubDate>Tue, 19 Mar 2019 14:51:03 GMT</pubDate>
    <dc:creator>Dona_G_Intel</dc:creator>
    <dc:date>2019-03-19T14:51:03Z</dc:date>
    <item>
      <title>Unable to convert retrained TensorFlow ssd_mobilenet_v2_coco using Model Optimizer</title>
      <link>https://community.intel.com/t5/Intel-Optimized-AI-Frameworks/Unable-to-convert-retrained-TensorFlow-ssd-mobilenet-v2-coco/m-p/662675#M1</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have downloaded ssd_mobilenet_v2_coco &lt;A href="https://software.intel.com/en-us/articles/OpenVINO-Using-TensorFlow#inpage-nav-2-1-2" target="_self" alt="https://software.intel.com/en-us/articles/OpenVINO-Using-TensorFlow#inpage-nav-2-1-2"&gt;from the listed URL&lt;/A&gt; and trained it on a custom dataset.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;To run training in Docker, I used the following command:&lt;/P&gt;&lt;CODE&gt;/tensorflow/models/research# python object_detection/model_main.py \
--pipeline_config_path=learn_vehicle/ckpt/pipeline.config \
--model_dir=learn_vehicle/train \
--num_train_steps=500 \
--num_eval_steps=100&lt;/CODE&gt;&lt;P&gt;After training, I tried to generate IR files on the virtual machine where OpenVINO is installed, by copying the non-frozen MetaGraph files (from the learn_vehicle/train folder) to the VM and running the following command:&lt;/P&gt;&lt;CODE&gt;python3 mo_tf.py --input_meta_graph ~/vehicle_model/model.ckpt-500.meta --tensorflow_use_custom_operations_config extensions/front/tf/ssd_v2_support.json --data_type FP16&lt;/CODE&gt;&lt;P&gt;(~/vehicle_model is the folder on the VM)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;After running the command above, I got the following error:&lt;/P&gt;&lt;CODE&gt;Model Optimizer arguments:
Common parameters:
    - Path to the Input Model:  None
    - Path for generated IR:    /home/niko/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/.
    - IR output name:   model.ckpt-500
    - Log level:    ERROR
    - Batch:    Not specified, inherited from the model
    - Input layers:     Not specified, inherited from the model
    - Output layers:    Not specified, inherited from the model
    - Input shapes:     Not specified, inherited from the model
    - Mean values:  Not specified
    - Scale values:     Not specified
    - Scale factor:     Not specified
    - Precision of IR:  FP16
    - Enable fusing:    True
    - Enable grouped convolutions fusing:   True
    - Move mean values to preprocess section:   False
    - Reverse input channels:   False
TensorFlow specific parameters:
    - Input model in text protobuf format:  False
    - Offload unsupported operations:   False
    - Path to model dump for TensorBoard:   None
    - List of shared libraries with TensorFlow custom layers implementation:    None
    - Update the configuration file with input/output node names:   None
    - Use configuration file used to generate the model with Object Detection API:  None
    - Operations to offload:    None
    - Patterns to offload:  None
    - Use the config file:  /home/niko/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json
Model Optimizer version:    1.5.12.49d067a0
[ ERROR ]  Graph contains 0 node after executing add_output_ops and add_input_ops. It may happen due to absence of 'Placeholder' layer in the model. It considered as error because resulting IR will be empty which is not usual&lt;/CODE&gt;&lt;P&gt;I would appreciate it if anyone could help solve this issue.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thank you,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Niko&lt;/P&gt;</description>
      <pubDate>Mon, 18 Mar 2019 18:31:22 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Optimized-AI-Frameworks/Unable-to-convert-retrained-TensorFlow-ssd-mobilenet-v2-coco/m-p/662675#M1</guid>
      <dc:creator>nikogamulin</dc:creator>
      <dc:date>2019-03-18T18:31:22Z</dc:date>
    </item>
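The conversion attempts in this thread all start from the non-frozen .meta checkpoint, which is where Model Optimizer reports an empty graph. For context, the OpenVINO documentation of that era converts TensorFlow Object Detection API models from a frozen inference graph rather than from raw training checkpoints. A hedged sketch of that workflow follows; the pipeline config and checkpoint paths are taken from the thread, while the export directory name is illustrative:

```shell
# Sketch only: freeze the trained checkpoint with the TF Object Detection API
# exporter, then point Model Optimizer at the frozen graph instead of the .meta.
# Assumes the command is run from /tensorflow/models/research, as in the
# poster's Docker setup, and that learn_vehicle/export is a new output folder.
python object_detection/export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path learn_vehicle/ckpt/pipeline.config \
    --trained_checkpoint_prefix learn_vehicle/train/model.ckpt-500 \
    --output_directory learn_vehicle/export

# Model Optimizer then reads the frozen graph (frozen_inference_graph.pb)
# together with the pipeline config and the SSD v2 support transform.
python3 mo_tf.py \
    --input_model learn_vehicle/export/frozen_inference_graph.pb \
    --tensorflow_object_detection_api_pipeline_config learn_vehicle/export/pipeline.config \
    --tensorflow_use_custom_operations_config extensions/front/tf/ssd_v2_support.json \
    --data_type FP16
```

This is a command recipe under stated assumptions, not a confirmed resolution of the thread; the exporter writes the frozen graph and a copy of pipeline.config into the output directory.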
    <item>
      <title>Re: Unable to convert retrained TensorFlow ssd_mobilenet_v2_coco using Model Optimizer</title>
      <link>https://community.intel.com/t5/Intel-Optimized-AI-Frameworks/Unable-to-convert-retrained-TensorFlow-ssd-mobilenet-v2-coco/m-p/662676#M2</link>
      <description>&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper" image-alt="IR_ss.png"&gt;&lt;img src="https://community.intel.com/t5/image/serverpage/image-id/3574i222A00DED907EEF3/image-size/large?v=v2&amp;amp;px=999&amp;amp;whitelist-exif-data=Orientation%2CResolution%2COriginalDefaultFinalSize%2CCopyright" role="button" title="IR_ss.png" alt="IR_ss.png" /&gt;&lt;/span&gt;Thank you for reaching out to us!&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We assume you have the pipeline.config file for the trained model.&lt;/P&gt;&lt;P&gt;Please try the following command:&lt;/P&gt;&lt;CODE&gt;python3 mo_tf.py \
--input_meta_graph ~/vehicle_model/model.ckpt-500.meta \
--output_dir /home/uXXXX/ \
--tensorflow_use_custom_operations_config extensions/front/tf/ssd_v2_support.json \
--tensorflow_object_detection_api_pipeline_config ~/&amp;lt;path_to&amp;gt;/pipeline.config&lt;/CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If that does not work, try passing the input shape as an argument (--input_shape [1,X,X,3], where X is the spatial input size and 3 is the number of channels).&lt;/P&gt;&lt;P&gt;Please see the attached screenshot (converting the meta file to IR), which worked for us.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Please let us know if you still face this issue, and kindly share the trained model and configuration file for further troubleshooting.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 18 Mar 2019 22:03:46 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Optimized-AI-Frameworks/Unable-to-convert-retrained-TensorFlow-ssd-mobilenet-v2-coco/m-p/662676#M2</guid>
      <dc:creator>Dona_G_Intel</dc:creator>
      <dc:date>2019-03-18T22:03:46Z</dc:date>
    </item>
    <item>
      <title>Re: Unable to convert retrained TensorFlow ssd_mobilenet_v2_coco using Model Optimizer</title>
      <link>https://community.intel.com/t5/Intel-Optimized-AI-Frameworks/Unable-to-convert-retrained-TensorFlow-ssd-mobilenet-v2-coco/m-p/662677#M3</link>
      <description>&lt;P&gt;Thank you for the quick response! I modified the command as follows, but unfortunately got an error:&lt;/P&gt;&lt;CODE&gt;python3 mo_tf.py \
--input_meta_graph ~/vehicle_model/checkpoint/model.ckpt-500.meta \
--output_dir ~/vehicle_model/checkpoint/output \
--tensorflow_use_custom_operations_config extensions/front/tf/ssd_v2_support.json \
--tensorflow_object_detection_api_pipeline_config ~/vehicle_model/checkpoint/pipeline.config \
--data_type FP16
Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	None
	- Path for generated IR: 	/home/niko/vehicle_model/checkpoint/output
	- IR output name: 	model.ckpt-500
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	Not specified, inherited from the model
	- Output layers: 	Not specified, inherited from the model
	- Input shapes: 	Not specified, inherited from the model
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP16
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	False
	- Reverse input channels: 	False
TensorFlow specific parameters:
	- Input model in text protobuf format: 	False
	- Offload unsupported operations: 	False
	- Path to model dump for TensorBoard: 	None
	- List of shared libraries with TensorFlow custom layers implementation: 	None
	- Update the configuration file with input/output node names: 	None
	- Use configuration file used to generate the model with Object Detection API: 	/home/niko/vehicle_model/checkpoint/pipeline.config
	- Operations to offload: 	None
	- Patterns to offload: 	None
	- Use the config file: 	/home/niko/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json
Model Optimizer version: 	1.5.12.49d067a0
[ ERROR ]  Graph contains 0 node after executing add_output_ops and add_input_ops. It may happen due to absence of 'Placeholder' layer in the model. It considered as error because resulting IR will be empty which is not usual&lt;/CODE&gt;&lt;P&gt;Including the parameter --input_shape [1,300,300,3] didn't work either.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Please find enclosed the checkpoint files generated by the training procedure. I would really appreciate it if you could help with further troubleshooting.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 19 Mar 2019 00:04:56 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Optimized-AI-Frameworks/Unable-to-convert-retrained-TensorFlow-ssd-mobilenet-v2-coco/m-p/662677#M3</guid>
      <dc:creator>nikogamulin</dc:creator>
      <dc:date>2019-03-19T00:04:56Z</dc:date>
    </item>
    <item>
      <title>Re: Unable to convert retrained TensorFlow ssd_mobilenet_v2_coco using Model Optimizer</title>
      <link>https://community.intel.com/t5/Intel-Optimized-AI-Frameworks/Unable-to-convert-retrained-TensorFlow-ssd-mobilenet-v2-coco/m-p/662678#M4</link>
      <description>We were able to recreate the error. We are working on it. Will keep you posted.</description>
      <pubDate>Tue, 19 Mar 2019 14:51:03 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Optimized-AI-Frameworks/Unable-to-convert-retrained-TensorFlow-ssd-mobilenet-v2-coco/m-p/662678#M4</guid>
      <dc:creator>Dona_G_Intel</dc:creator>
      <dc:date>2019-03-19T14:51:03Z</dc:date>
    </item>
    <item>
      <title>Re: Unable to convert retrained TensorFlow ssd_mobilenet_v2_coco using Model Optimizer</title>
      <link>https://community.intel.com/t5/Intel-Optimized-AI-Frameworks/Unable-to-convert-retrained-TensorFlow-ssd-mobilenet-v2-coco/m-p/662679#M5</link>
      <description>For troubleshooting, we tried training with other TFRecords to determine whether the issue is related to the generated checkpoints, but we got the same error.
We are still working on it. We will let you know in a day or two.</description>
      <pubDate>Wed, 20 Mar 2019 21:56:42 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Optimized-AI-Frameworks/Unable-to-convert-retrained-TensorFlow-ssd-mobilenet-v2-coco/m-p/662679#M5</guid>
      <dc:creator>Dona_G_Intel</dc:creator>
      <dc:date>2019-03-20T21:56:42Z</dc:date>
    </item>
    <item>
      <title>Re: Unable to convert retrained TensorFlow ssd_mobilenet_v2_coco using Model Optimizer</title>
      <link>https://community.intel.com/t5/Intel-Optimized-AI-Frameworks/Unable-to-convert-retrained-TensorFlow-ssd-mobilenet-v2-coco/m-p/662680#M6</link>
      <description>We have also tried the following approaches, but could not resolve the issue:
1)	Inspected the graph details using summarize_graph and could not find any placeholders
2)	Trained with a different set of TFRecords and tried the same conversion
3)	Converted the .pbtxt to .pb format and then tried converting to an IR model
4)	Passed the --output argument as "detection_boxes,detection_scores,num_detections", and also tried --output "detection"

We have therefore escalated the issue to the Subject Matter Experts on OpenVINO, and they suggested posting a query on the OpenVINO forum for better guidance on the issue.

The link for OpenVINO forum:
&lt;A href="https://software.intel.com/en-us/forums/computer-vision"&gt;https://software.intel.com/en-us/forums/computer-vision&lt;/A&gt; 

Please post your query on this link.</description>
      <pubDate>Thu, 21 Mar 2019 20:45:17 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Optimized-AI-Frameworks/Unable-to-convert-retrained-TensorFlow-ssd-mobilenet-v2-coco/m-p/662680#M6</guid>
      <dc:creator>Dona_G_Intel</dc:creator>
      <dc:date>2019-03-21T20:45:17Z</dc:date>
    </item>
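Item 1 in the post above refers to TensorFlow's summarize_graph tool, which lists a GraphDef's inputs (placeholders), outputs, and op counts. For readers following along, a sketch of how such a check is typically run; the graph path is illustrative, and the tool is built from the TensorFlow source tree with Bazel:

```shell
# Sketch only: build summarize_graph from a TensorFlow source checkout, then
# run it against a serialized GraphDef to list placeholders and outputs.
# /path/to/graph.pb is a placeholder for the actual frozen or dumped graph.
bazel build tensorflow/tools/graph_transforms:summarize_graph
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph \
    --in_graph=/path/to/graph.pb
```

If this reports no placeholder inputs, the Model Optimizer error in this thread ("absence of 'Placeholder' layer") is consistent with the inspected graph.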
    <item>
      <title>Re: Unable to convert retrained TensorFlow ssd_mobilenet_v2_coco using Model Optimizer</title>
      <link>https://community.intel.com/t5/Intel-Optimized-AI-Frameworks/Unable-to-convert-retrained-TensorFlow-ssd-mobilenet-v2-coco/m-p/662681#M7</link>
      <description>As suggested, please post a query regarding this on the OpenVINO forum for better guidance. We are closing this thread from our end. After the case is closed, you will receive a survey email; we would appreciate it if you could complete the survey regarding the support you received.</description>
      <pubDate>Fri, 22 Mar 2019 16:51:04 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Optimized-AI-Frameworks/Unable-to-convert-retrained-TensorFlow-ssd-mobilenet-v2-coco/m-p/662681#M7</guid>
      <dc:creator>Dona_G_Intel</dc:creator>
      <dc:date>2019-03-22T16:51:04Z</dc:date>
    </item>
  </channel>
</rss>

