Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all computer vision-related topics on Intel® platforms.

Model Optimizer reports an error while converting a FaceNet model using --input_shape

chen__ives
Beginner

 

Hi,

I'm using R3, but when I try to use --input_shape for the FaceNet conversion, like this:

python mo_tf.py --input_model 20180408-102900.pb --input_shape "[1,96,96,3]" --output_dir . --freeze_placeholder_with_value "phase_train->False"

 

Model Optimizer reports an error:

[ ERROR ]  Shape [   1   -1   -1 1792] is not fully defined for output 0 of "InceptionResnetV1/Logits/AvgPool_1a_8x8/AvgPool". Use --input_shape with positive integers to override model input shapes.
[ ERROR ]  Cannot infer shapes or values for node "InceptionResnetV1/Logits/AvgPool_1a_8x8/AvgPool".
[ ERROR ]  Not all output shapes were inferred or fully defined for node "InceptionResnetV1/Logits/AvgPool_1a_8x8/AvgPool".
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #40.
[ ERROR ]
[ ERROR ]  It can happen due to bug in custom shape infer function <function tf_pool_ext.<locals>.<lambda> at 0x00000295EFE1A730>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Stopped shape/value propagation at "InceptionResnetV1/Logits/AvgPool_1a_8x8/AvgPool" node.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.

 

Do you have any suggestions?

Thank you very much!

Mark_L_Intel1
Moderator

Hi Ives,

I don't know where you got that input shape from; I used the TensorFlow "summarize_graph.py" tool, but it reported an unknown size.
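For reference, here is a small sketch of how the shape declared in the frozen graph can be checked directly (this assumes TensorFlow 1.x is installed; the file name is the one from your command):

import tensorflow as tf

# Read the frozen FaceNet graph and look at its Placeholder nodes
graph_def = tf.GraphDef()
with tf.gfile.GFile("20180408-102900.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    if node.op == "Placeholder":
        dims = [d.size for d in node.attr["shape"].shape.dim]
        # -1 means the dimension is not fixed in the graph (e.g. the batch dimension)
        print(node.name, dims)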

When I remove it, mo seems to work:

$ python3 ~/deployment_tools/model_optimizer/mo_tf.py --input_model 20180408-102900.pb --output_dir . --freeze_placeholder_with_value "phase_train->False"
Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	~/Downloads/tmp/20180408-102900/20180408-102900.pb
	- Path for generated IR: 	~/Downloads/tmp/20180408-102900/.
	- IR output name: 	20180408-102900
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	Not specified, inherited from the model
	- Output layers: 	Not specified, inherited from the model
	- Input shapes: 	Not specified, inherited from the model
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP32
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	False
	- Reverse input channels: 	False
TensorFlow specific parameters:
	- Input model in text protobuf format: 	False
	- Offload unsupported operations: 	False
	- Path to model dump for TensorBoard: 	None
	- Update the configuration file with input/output node names: 	None
	- Use configuration file used to generate the model with Object Detection API: 	None
	- Operations to offload: 	None
	- Patterns to offload: 	None
	- Use the config file: 	None
Model Optimizer version: 	1.2.185.5335e231
~/deployment_tools/model_optimizer/mo/front/common/partial_infer/slice.py:90: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
  value = value[slice_idx]

[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: ~/Downloads/tmp/20180408-102900/./20180408-102900.xml
[ SUCCESS ] BIN file: ~/Downloads/tmp/20180408-102900/./20180408-102900.bin
[ SUCCESS ] Total execution time: 17.95 seconds.

Let me know if this solves your problem.

Mark

chen__ives
Beginner

Hi Mark,

First of all, thanks for your suggestion.

I know that the FaceNet conversion works fine if I don't use --input_shape, and the resulting input shape is 1x3x160x160.

But because the document ($(INTEL_CVSDK_DIR)/deployment_tools/documentation/docs/FaceNetTF.html) says that

--input_shape is applicable with or without --input

I wanted to try using --input_shape to change the input shape of the converted model.
Wu__David
Beginner

Hi Sir,

I also get an error if I add the input shape to the command.

FaceNet's input shape is [1,160,160,3], but after conversion the input shape is [1,3,160,160] and the converted model generates different results.

I used the following command and got errors:

im@aim-GeminiLake:~/intel/computer_vision_sdk/deployment_tools/model_optimizer$ python3 ./mo_tf.py --input_model '/home/aim/facenet/camera_detect/facenet/20170512-110547/20170512-110547.pb' --freeze_placeholder_with_value "phase_train->False" --input_shape "[1,160,160,3]"
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /home/aim/facenet/camera_detect/facenet/20170512-110547/20170512-110547.pb
- Path for generated IR: /home/aim/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/.
- IR output name: 20170512-110547
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: [1,160,160,3]
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Offload unsupported operations: False
- Path to model dump for TensorBoard: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Operations to offload: None
- Patterns to offload: None
- Use the config file: None
Model Optimizer version: 1.2.185.5335e231
[ WARNING ]  
Detected not satisfied dependencies:
tensorflow: installed: 1.5.0, required: 1.5
 
Please install required versions of components or use install_prerequisites script
/home/aim/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites_tf.sh
Note that install_prerequisites scripts may install additional components.
[ ERROR ]  -------------------------------------------------
[ ERROR ]  ----------------- INTERNAL ERROR ----------------
[ ERROR ]  Unexpected exception happened.
[ ERROR ]  Please contact Model Optimizer developers and forward the following information:
[ ERROR ]  'batch_size'
[ ERROR ]  Traceback (most recent call last):
  File "/home/aim/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/mo/main.py", line 321, in main
    return driver(argv)
  File "/home/aim/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/mo/main.py", line 263, in driver
    mean_scale_values=mean_scale)
  File "/home/aim/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/mo/pipeline/tf.py", line 188, in tf2nx
    graph, input_op_nodes = add_input_ops(graph, user_shapes, False)
  File "/home/aim/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/mo/front/extractor.py", line 799, in add_input_ops
    n_inputs = len(smart_node.in_nodes())
  File "/home/aim/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/mo/graph/graph.py", line 232, in in_nodes
    assert self.has('kind')
  File "/home/aim/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/mo/graph/graph.py", line 211, in has
    return k in self.graph.node[self.node]
  File "/usr/local/lib/python3.5/dist-packages/networkx/classes/reportviews.py", line 178, in __getitem__
    return self._nodes
KeyError: 'batch_size'
 
[ ERROR ]  ---------------- END OF BUG REPORT --------------
[ ERROR ]  -------------------------------------------------

 


Please help check this.

And another question: is it possible to generate different results with converted models that use different input dimensions?

Thanks,

David
chen__ives
Beginner

Hi David,

About:

is it possible to generate different results with converted models that use different input dimensions?

You can use --model_name to assign a different name to each converted model that has a different input dimension.
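For example (the file name, shapes, and model names below are only illustrative, and this of course still needs --input_shape to work):

python3 mo_tf.py --input_model 20170512-110547.pb --input_shape "[1,160,160,3]" --freeze_placeholder_with_value "phase_train->False" --model_name facenet_160 --output_dir .
python3 mo_tf.py --input_model 20170512-110547.pb --input_shape "[1,139,139,3]" --freeze_placeholder_with_value "phase_train->False" --model_name facenet_139 --output_dir .

This way each converted IR keeps its own .xml/.bin name instead of overwriting the previous one.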

Mark_L_Intel1
Moderator

Hi Guys,

I have reported this issue and will update you if we find any solution.

Mark

Mark_L_Intel1
Moderator

Hi Ives and David,

You are both right; there are actually two problems here:

  1. The input size cannot be much smaller than the original input size; in my tests, the minimum size is (1,139,139,3).
  2. A bug documented in the release notes as known issue #37; this is fixed in the next release.

To work around it, you can do the following:

  1. Copy lines 829-830 from <INSTALL_DIR>/deployment_tools/model_optimizer/mo/front/extractor.py and insert them before line 796, so it becomes:
    ......
                for port_and_shape_info in user_defined_inputs[node_id]:
                    if 'added' in port_and_shape_info and port_and_shape_info['added']:
                        continue
                    shape = port_and_shape_info['shape'] if 'shape' in port_and_shape_info else None
    ......

     

  2. Run the following command:
    python3 ~/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/mo.py --input_model 20180408-102900.pb --output_dir . --input_shape "(1,139,139,3)" --freeze_placeholder_with_value "phase_train->False"

 

Let me know if this solves the problem.

Mark

Wu__David
Beginner

Hi Mark,

I modified extractor.py and converted the model again, adding --input_shape or --reverse_input_channels, and the model can now be generated.

But I still have some questions.

1. The model input order is still [1,3,160,160] after using --input_shape "[1,160,160,3]". Does the order not change even when using the --input_shape parameter?

2. I converted two model versions (20180402-114759, 20170512-110547) to test, but the output matrices for the same image do not match the ones from the original facenet model.

The following is my sample code in Python:

import numpy as np  # needed for the distance computation below

......
# Run the converted 2017 FaceNet IR on the preprocessed frame 'reshaped'
facenet_res_2017 = facenet_2017_exec_net.infer({'batch_join/fifo_queue': reshaped})

# Take the embedding from the 'normalize' output
embedding_2017 = facenet_res_2017['normalize']

# original_2017 is the pre-saved embedding from the original facenet model
# Euclidean distance between the converted model's embedding and the original one
dist_2017 = np.sqrt(np.sum(np.square(np.subtract(embedding_2017, original_2017))))
print(dist_2017)

The dist_2017 is larger than 1 for the same image when I compare against the original facenet model.

I expected the output matrix to be the same (or the distance to be small) even when using the converted model.

And almost all of my test face images (5-6 people) are recognized as the same face (dist < 0.7).

Do you have any ideas about this?

 

Thanks.

David

Mark_L_Intel1
Moderator

Hi David,

Sorry for the late response,

For your first question, do you mean the input order in the generated XML file? I think the order doesn't have to be exactly the same as what we specified on the command line; the question is, does this order cause the problem?
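For what it is worth, the IR that Model Optimizer generates from a TensorFlow model uses the NCHW layout, so seeing [1,3,160,160] in the xml is expected; the frame just has to be transposed to that layout before it is passed to infer(). A minimal sketch of that step (the file name and sizes are only illustrative, and this assumes the BGR-to-RGB handling is done in code rather than with --reverse_input_channels):

import cv2
import numpy as np

frame = cv2.imread("face.jpg")                      # illustrative image; OpenCV loads it as BGR, HxWxC
img = cv2.resize(frame, (160, 160))                 # match the 160x160 spatial size of the IR input
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)          # FaceNet was trained on RGB images
img = img.astype(np.float32)
reshaped = img.transpose(2, 0, 1)[np.newaxis, ...]  # HWC -> CHW, add batch dim -> [1, 3, 160, 160]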

For your second question, I think this is a different topic; could you open a separate post? Let me know the post number so I can follow up.

Also, please add the reproduction steps and the data you used for this question so I can reproduce it.

Mark

Wu__David
Beginner

Hi Mark,

1. I am not sure whether or not the order causes this problem (the second question).

2. I will create another post.

Thanks for your response.
