Hi,
I'm using R3. When I try to use --input_shape in the FaceNet conversion, like this:
python mo_tf.py --input_model 20180408-102900.pb --input_shape "[1,96,96,3]" --output_dir . --freeze_placeholder_with_value "phase_train->False"
Model Optimizer reports error:
[ ERROR ] Shape [ 1 -1 -1 1792] is not fully defined for output 0 of "InceptionResnetV1/Logits/AvgPool_1a_8x8/AvgPool". Use --input_shape with positive integers to override model input shapes.
[ ERROR ] Cannot infer shapes or values for node "InceptionResnetV1/Logits/AvgPool_1a_8x8/AvgPool".
[ ERROR ] Not all output shapes were inferred or fully defined for node "InceptionResnetV1/Logits/AvgPool_1a_8x8/AvgPool". For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #40.
[ ERROR ] It can happen due to bug in custom shape infer function <function tf_pool_ext.<locals>.<lambda> at 0x00000295EFE1A730>.
[ ERROR ] Or because the node inputs have incorrect values/shapes.
[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ] Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ] Stopped shape/value propagation at "InceptionResnetV1/Logits/AvgPool_1a_8x8/AvgPool" node. For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.
Do you have any suggestions?
Thank you very much!
Hi Ives,
I don't know where you got that input shape; I ran TensorFlow's "summarize_graph.py" on the model, but it reported an unknown size.
If I remove --input_shape, Model Optimizer seems to work:
$ python3 ~/deployment_tools/model_optimizer/mo_tf.py --input_model 20180408-102900.pb --output_dir . --freeze_placeholder_with_value "phase_train->False"
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: ~/Downloads/tmp/20180408-102900/20180408-102900.pb
- Path for generated IR: ~/Downloads/tmp/20180408-102900/.
- IR output name: 20180408-102900
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: Not specified, inherited from the model
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Offload unsupported operations: False
- Path to model dump for TensorBoard: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Operations to offload: None
- Patterns to offload: None
- Use the config file: None
Model Optimizer version: 1.2.185.5335e231
~/deployment_tools/model_optimizer/mo/front/common/partial_infer/slice.py:90: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
value = value[slice_idx]
[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: ~/Downloads/tmp/20180408-102900/./20180408-102900.xml
[ SUCCESS ] BIN file: ~/Downloads/tmp/20180408-102900/./20180408-102900.bin
[ SUCCESS ] Total execution time: 17.95 seconds.
Let me know if this solves your problem
Mark
Hi Mark,
Thanks for your suggestion.
I know the FaceNet conversion works fine if I don't use --input_shape, and the resulting input shape is 1x3x160x160.
But because the document ($(INTEL_CVSDK_DIR)/deployment_tools/documentation/docs/FaceNetTF.html) says that --input_shape is applicable with or without --input, I wanted to try using --input_shape to change the input shape of the converted model.
Hi Sir,
I also get an error if I add --input_shape to the command.
FaceNet's input shape is [1,160,160,3], but after conversion the input shape is [1,3,160,160], which generates a different result.
I used the following command and got errors:
------------------------------------------------
Please help to check this.
And another question: is it possible to generate different results with converted models that use different input dimensions?
Thanks
David
Hi David,
About your question on generating different results with converted models that use different input dimensions:
You may use --model_name to assign a different name to each converted model that has a different input dimension.
Hi Guys,
I have reported this issue and will update you if we find any solution.
Mark
Hi Ives and David,
You are both right; there are actually two problems here:
- The input size cannot be much smaller than the original input size; in my tests, the minimum size is (1,139,139,3)
- A bug documented in the release notes as known issue #37; this issue is fixed in the next release
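The 139 minimum appears consistent with the network geometry: I believe the frozen graph's final average pool ends up with a fixed 3x3 'VALID' kernel (the node keeps the historical name AvgPool_1a_8x8), and tracing the 'VALID' stride layers suggests 139 is the smallest input that still leaves a 3x3 feature map for it. A rough sketch (the layer list is reconstructed from the public Inception-ResNet-v1 source, so treat it as illustrative, not ground truth):

```python
# 'VALID'-padded layers in the Inception-ResNet-v1 stem and reduction
# blocks, as (kernel, stride) pairs. Reconstructed from the public
# facenet source, so treat this list as an assumption.
VALID_LAYERS = [
    (3, 2),  # Conv2d_1a_3x3
    (3, 1),  # Conv2d_2a_3x3
    (3, 2),  # MaxPool_3a_3x3
    (3, 1),  # Conv2d_4a_3x3
    (3, 2),  # Conv2d_4b_3x3
    (3, 2),  # Reduction-A max-pool branch
    (3, 2),  # Reduction-B max-pool branch
]

def final_map_size(size):
    """Spatial size left for the final average pool ('VALID' arithmetic)."""
    for kernel, stride in VALID_LAYERS:
        size = (size - kernel) // stride + 1
    return size

for inp in (96, 138, 139, 160):
    print(inp, '->', final_map_size(inp))
```

With this layer list, 96 leaves a 1x1 map and 138 a 2x2 map, both too small for a 3x3 pool, while 139 and 160 both leave 3x3, which would explain both the error Ives saw and the 139 minimum.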
To work around it, you can do the following:
- Copy lines 829-830 from <INSTALL_DIR>/deployment_tools/model_optimizer/mo/front/extractor.py and insert them before line 796, so the code becomes:
    ......
    for port_and_shape_info in user_defined_inputs[node_id]:
        if 'added' in port_and_shape_info and port_and_shape_info['added']:
            continue
        shape = port_and_shape_info['shape'] if 'shape' in port_and_shape_info else None
    ......
- Run the following command:
python3 ~/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/mo.py --input_model 20180408-102900.pb --output_dir . --input_shape "(1,139,139,3)" --freeze_placeholder_with_value "phase_train->False"
Let me know if this solves the problem.
Mark
Hi Mark,
I modified extractor.py and converted the model again, adding --input_shape or --reverse_input_channels, and the model was generated.
But I still have some questions.
1. The model input order is still [1,3,160,160] after using --input_shape "[1,160,160,3]". Does the order not change even when using the --input_shape parameter?
2. I converted two model versions (20180402-114759, 20170512-110547) to test, but they produce different output matrices for the same image.
The following is my sample code in Python:
    ......
    facenet_res_2017 = facenet_2017_exec_net.infer({'batch_join/fifo_queue': reshaped})
    embedding_2017 = facenet_res_2017['normalize']
    # original_2017 is the pre-saved embedding from the original facenet model
    dist_2017 = np.sqrt(np.sum(np.square(np.subtract(embedding_2017, original_2017))))
    print(dist_2017)
dist_2017 is larger than 1 for the same image when I compare against the embedding from the original FaceNet model.
I expected the output matrix to be the same (or the distance to be small) even with the converted model.
Also, almost all of my test face images (5-6 people) are recognized as the same face (dist < 0.7).
Do you have any ideas about this?
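For completeness, my preprocessing before infer() looks roughly like this (a sketch, not my exact code; the prewhiten step follows the per-image normalization in the original FaceNet code, the transpose assumes the converted IR expects NCHW, and the helper names are my own):

```python
import numpy as np

def prewhiten(img):
    # Per-image normalization from the original FaceNet code:
    # subtract the mean, divide by the (clamped) standard deviation.
    mean = img.mean()
    std = max(img.std(), 1.0 / np.sqrt(img.size))
    return (img - mean) / std

def to_nchw(img_hwc):
    # The converted IR reports an input of [1,3,160,160] (NCHW), so move
    # channels first and add a batch dimension before calling infer().
    return np.transpose(img_hwc, (2, 0, 1))[np.newaxis, ...]

face = np.random.rand(160, 160, 3).astype(np.float32)  # stand-in for a face crop
reshaped = to_nchw(prewhiten(face))
print(reshaped.shape)  # (1, 3, 160, 160)
```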
Thanks.
David
Hi David,
Sorry for the late response,
For your first question, do you mean the input order in the generated XML file? I think the order doesn't have to be exactly the same as what we specify on the command line; the question is, does this order cause the problem?
For your second question, I think this is a different topic; could you open a separate post? Let me know the post number so I can follow up.
Please also include the steps and data you used so I can reproduce the issue.
Mark
Hi Mark,
1. I am not sure whether the order causes the problem in my second question.
2. I will create another post.
Thanks for your response.