Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

[IR failed-tensorflow]: Inception_V3

kao__mars
Beginner
463 Views

Hi~ 

I tried to convert Inception_V3 to IR, but got the error log shown below; it says it can't find the output node name. Please help me check this. I also tested Inception_V3 with the Movidius NCCompile command, and it converts successfully. Can you explain how the IR workflow handles the TensorFlow stage? Are all TensorFlow operations supported? Thanks.

[command list] 

sudo python3 ../model_optimizer/mo.py --framework tf --input_model "/opt/intel/computer_vision_sdk_2018.0.234/deployment_tools/demo/Inception_V3/inception_v3.ckpt" --input=input --output=InceptionV3/Predictions/Reshape_1 --output_dir ir_inceptionV3 --data_type FP32 --model_name InceptionV3

[model link] https://software.intel.com/en-us/inference-engine-devguide-introduction

 

0 Kudos
4 Replies
Seunghyuk_P_Intel

Hi Kao,

I think your issue is the same as another case I answered. :)

About the issue:

Please check this link, which explains how to convert TensorFlow models.

https://software.intel.com/en-us/articles/CVSDK-Using-TensorFlow

It looks like you are trying to convert an "unfrozen" TensorFlow model.

MO supports only "frozen" TensorFlow models.

The file extension of a frozen model is ".pb".

Here are the steps, in short, for freezing an "unfrozen" model for MO input.

Please check the link I pasted for detailed information.

  • 1. Download the repository, including the models.
  • 2. Export the inference graph for the model.
    • the file extension will be ".pb"
  • 3. Download the archive with the checkpoint file - this is what you already downloaded.
  • 4. To find the model output node name, use the "summarize_graph" utility.
  • 5. To freeze the graph, use the script "freeze_graph.py".
    • you will use the exported graph file (".pb") and the checkpoint file (".ckpt") to freeze the graph
    • the output file extension will be ".pb" as well
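As a sketch, the steps above for Inception V3 might look like the following commands (paths, the repository layout, and the build of summarize_graph are assumptions based on the TensorFlow models repository; the output node name is taken from the mo.py command earlier in this thread - please check the linked article for the exact procedure):

```shell
# 1-2. Get the TF-Slim models repository and export the inference graph
#      (this .pb holds the graph structure only, no trained weights).
git clone https://github.com/tensorflow/models
cd models/research/slim
python3 export_inference_graph.py \
    --model_name inception_v3 \
    --output_file /tmp/inception_v3_inf_graph.pb

# 4. Find the model output node name with the summarize_graph utility
#    (built from the TensorFlow sources).
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph \
    --in_graph=/tmp/inception_v3_inf_graph.pb

# 5. Merge the exported graph with the checkpoint weights into a single
#    frozen .pb that MO can consume.
python3 freeze_graph.py \
    --input_graph /tmp/inception_v3_inf_graph.pb \
    --input_checkpoint inception_v3.ckpt \
    --output_node_names InceptionV3/Predictions/Reshape_1 \
    --output_graph /tmp/inception_v3_frozen.pb
```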

Now you will be able to convert the frozen TensorFlow model with MO.

ex) python3 mo_tf.py --input_model <INPUT_MODEL>.pb

 

Regards,

Peter.

kao__mars
Beginner

Hi Peter,

thanks for your help :)

I have another question: when I downloaded mobilenet_v1_1.0_224 from the official TensorFlow release, I couldn't convert the prebuilt .pb (mobilenet_v1_official_frozen.pb) to IR format with the Model Optimizer tool. But if I follow the instructions you provided, it works fine (it converts to IR format). This means I need to use the .ckpt to generate a "new" frozen .pb file (mobile_v1_new_frozen.pb). What's the difference between these two .pb files?

more information:

[mobilenet_v1_official_frozen.pb] was downloaded from the following link: http://download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_1.0_224.tgz

Thanks.

VSh00
Beginner

Hi

I tried to convert the Inception model (TensorFlow) into IR representation with the Model Optimizer. I have installed the prerequisites but got the following error. Can anyone suggest a way to solve this?

The error message is as follows:

sudo python3 mo_tf.py --input_model inception_v1_inference_graph.pb --input_checkpoint inception_v1.ckpt 
Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/inception_v1_inference_graph.pb
	- Path for generated IR: 	/opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/.
	- IR output name: 	inception_v1_inference_graph
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	Not specified, inherited from the model
	- Output layers: 	Not specified, inherited from the model
	- Input shapes: 	Not specified, inherited from the model
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP32
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	False
	- Reverse input channels: 	False
TensorFlow specific parameters:
	- Input model in text protobuf format: 	False
	- Path to model dump for TensorBoard: 	None
	- List of shared libraries with TensorFlow custom layers implementation: 	None
	- Update the configuration file with input/output node names: 	None
	- Use configuration file used to generate the model with Object Detection API: 	None
	- Operations to offload: 	None
	- Patterns to offload: 	None
	- Use the config file: 	None
Model Optimizer version: 	2019.1.1-83-g28dfbfd
Converted 230 variables to const ops.
[ ERROR ]  Shape [ -1 224 224   3] is not fully defined for output 0 of "input". Use --input_shape with positive integers to override model input shapes.
[ ERROR ]  Cannot infer shapes or values for node "input".
[ ERROR ]  Not all output shapes were inferred or fully defined for node "input". 
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #40. 
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <function tf_placeholder_ext.<locals>.<lambda> at 0x7f8d563dd048>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "input" node. 
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38. 

 

Shubha_R_Intel
Employee

Dear G, Shanmuga vadivelu,

You are getting this error because of the negative number here: Shape [ -1 224 224 3]

Instead of -1, please use a positive number for the batch size, which occupies the first position. You can do this in one of two ways: --batch 1 or --input_shape [1,224,224,3]

You don't have to use a batch size of 1; I'm just giving it as an example.
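Applied to the command from the error report above, the rerun might look like this (a sketch: the input shape values are taken from the log, and you should verify they match your model's expected input):

```shell
# Option 1: keep the shape embedded in the model, but override the
# undefined batch dimension (-1) with a positive value.
sudo python3 mo_tf.py \
    --input_model inception_v1_inference_graph.pb \
    --input_checkpoint inception_v1.ckpt \
    --batch 1

# Option 2: specify the full input shape explicitly; the first value
# replaces the -1 reported in "Shape [ -1 224 224 3]".
sudo python3 mo_tf.py \
    --input_model inception_v1_inference_graph.pb \
    --input_checkpoint inception_v1.ckpt \
    --input_shape [1,224,224,3]
```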

Thanks !

Shubha

 
