Intel® Distribution of OpenVINO™ Toolkit

Empty Inference Model after converting from TensorFlow model (OpenVINO)

Kaiser__Timo
Beginner

Hi all,

I'm trying to deploy my custom net on an Intel SoC (the UP Squared board). I got the net from https://github.com/fizyr/keras-retinanet. The backend is TensorFlow, so I'm using the TensorFlow workflow.
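For reference, the freezing step looked roughly like this (a minimal sketch, assuming TF 1.x and Keras 2.x; the .h5 path and the backbone name are placeholders for my setup):

import tensorflow as tf
from keras import backend as K
from keras_retinanet.models import load_model

# build the graph in inference mode before loading weights
K.set_learning_phase(0)
# the keras-retinanet loader takes care of the custom layers;
# the path and backbone name are placeholders
model = load_model('/path/to/retinanet.h5', backbone_name='resnet50')

sess = K.get_session()
output_names = [out.op.name for out in model.outputs]

# bake the variables into constants so the graph fits into a single .pb
frozen = tf.graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), output_names)
tf.train.write_graph(frozen, '/home/up-board/Desktop',
                     'inference_graph_ret.pb', as_text=False)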

With OpenVINO, I first tried this command:

python3 mo_tf.py --input_model /home/up-board/Desktop/inference_graph_ret.pb --output_dir /home/up-board/Desktop/retina-net-inference-machine 

and the output was this:

Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/home/up-board/Desktop/inference_graph_ret.pb
	- Path for generated IR: 	/home/up-board/Desktop/retina-net-inference-machine
	- IR output name: 	inference_graph_ret
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	Not specified, inherited from the model
	- Output layers: 	Not specified, inherited from the model
	- Input shapes: 	Not specified, inherited from the model
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP32
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	False
	- Reverse input channels: 	False
TensorFlow specific parameters:
	- Input model in text protobuf format: 	False
	- Offload unsupported operations: 	False
	- Path to model dump for TensorBoard: 	None
	- Update the configuration file with input/output node names: 	None
	- Use configuration file used to generate the model with Object Detection API: 	None
	- Operations to offload: 	None
	- Patterns to offload: 	None
	- Use the config file: 	None
Model Optimizer version: 	1.2.185.5335e231
[ ERROR ]  Graph contains a cycle. Can not proceed. 
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #97. 


In the second try, I used one of the predefined operations configs:

python3 mo_tf.py --input_model /home/up-board/Desktop/inference_graph_ret.pb --tensorflow_use_custom_operations_config /home/up-board/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support_api_v1.7.json --output_dir /home/up-board/Desktop/retina-net-inference-machine 

and the output was:

Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/home/up-board/Desktop/inference_graph_ret.pb
	- Path for generated IR: 	/home/up-board/Desktop/retina-net-inference-machine
	- IR output name: 	inference_graph_ret
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	Not specified, inherited from the model
	- Output layers: 	Not specified, inherited from the model
	- Input shapes: 	Not specified, inherited from the model
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP32
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	False
	- Reverse input channels: 	False
TensorFlow specific parameters:
	- Input model in text protobuf format: 	False
	- Offload unsupported operations: 	False
	- Path to model dump for TensorBoard: 	None
	- Update the configuration file with input/output node names: 	None
	- Use configuration file used to generate the model with Object Detection API: 	None
	- Operations to offload: 	None
	- Patterns to offload: 	None
	- Use the config file: 	/home/up-board/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support_api_v1.7.json
Model Optimizer version: 	1.2.185.5335e231

[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: /home/up-board/Desktop/retina-net-inference-machine/inference_graph_ret.xml
[ SUCCESS ] BIN file: /home/up-board/Desktop/retina-net-inference-machine/inference_graph_ret.bin
[ SUCCESS ] Total execution time: 30.58 seconds. 

The problem is that the resulting .xml and .bin files are empty... So I set the --input_shape argument to [1,100,100,3], but then the following error occurred (a sketch for inspecting the graph's node names follows the log):

python3 mo_tf.py --input_model /home/up-board/Desktop/inference_graph_ret.pb --tensorflow_use_custom_operations_config /home/up-board/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support_api_v1.7.json --output_dir /home/up-board/Desktop/retina-net-inference-machine --input_shape=[1,100,100,3]
Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/home/up-board/Desktop/inference_graph_ret.pb
	- Path for generated IR: 	/home/up-board/Desktop/retina-net-inference-machine
	- IR output name: 	inference_graph_ret
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	Not specified, inherited from the model
	- Output layers: 	Not specified, inherited from the model
	- Input shapes: 	[1,100,100,3]
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP32
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	False
	- Reverse input channels: 	False
TensorFlow specific parameters:
	- Input model in text protobuf format: 	False
	- Offload unsupported operations: 	False
	- Path to model dump for TensorBoard: 	None
	- Update the configuration file with input/output node names: 	None
	- Use configuration file used to generate the model with Object Detection API: 	None
	- Operations to offload: 	None
	- Patterns to offload: 	None
	- Use the config file: 	/home/up-board/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support_api_v1.7.json
Model Optimizer version: 	1.2.185.5335e231
[ ERROR ]  -------------------------------------------------
[ ERROR ]  ----------------- INTERNAL ERROR ----------------
[ ERROR ]  Unexpected exception happened.
[ ERROR ]  Please contact Model Optimizer developers and forward the following information:
[ ERROR ]  'input_1'
[ ERROR ]  Traceback (most recent call last):
  File "/home/up-board/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/mo/main.py", line 321, in main
    return driver(argv)
  File "/home/up-board/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/mo/main.py", line 263, in driver
    mean_scale_values=mean_scale)
  File "/home/up-board/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/mo/pipeline/tf.py", line 188, in tf2nx
    graph, input_op_nodes = add_input_ops(graph, user_shapes, False)
  File "/home/up-board/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/mo/front/extractor.py", line 799, in add_input_ops
    n_inputs = len(smart_node.in_nodes())
  File "/home/up-board/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/mo/graph/graph.py", line 232, in in_nodes
    assert self.has('kind')
  File "/home/up-board/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/mo/graph/graph.py", line 211, in has
    return k in self.graph.node[self.node]
  File "/usr/local/lib/python3.5/dist-packages/networkx/classes/reportviews.py", line 178, in __getitem__
    return self._nodes
KeyError: 'input_1'

[ ERROR ]  ---------------- END OF BUG REPORT --------------
[ ERROR ]  -------------------------------------------------
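For what it's worth, this is how the node names in the frozen graph can be listed, e.g. to check what the input node is actually called before passing --input/--input_shape to mo_tf.py (a minimal sketch, assuming TF 1.x):

import tensorflow as tf

# parse the frozen GraphDef from disk
graph_def = tf.GraphDef()
with open('/home/up-board/Desktop/inference_graph_ret.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# the inputs of a frozen graph show up as Placeholder ops
for node in graph_def.node:
    marker = '  <-- input' if node.op == 'Placeholder' else ''
    print(node.name, node.op, marker)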


Could someone tell me how to identify errors in the net?

Greetings, Timo

Cary_P_Intel1
Employee

Hi, Timo,

Would it be possible for you to provide your frozen trained model for me to test?

Kaiser__Timo
Beginner

Hi Cary,

Of course! It's a self-trained model based on the original RetinaNet framework (https://github.com/fizyr/keras-retinanet). Here are links to a Google Drive location where you can download the original net (.h5) and the frozen net (.pb).

Original: https://drive.google.com/file/d/1AnfINhyy9nJp5gN9raemCFvv6sKgnBiX/view?usp=sharing 

Frozen model: https://drive.google.com/open?id=1uVUsoZsDOa4iaXLFgsoeUf8LIRMlbc7l

Thanks for your help in advance!

Greetings, Timo

Cary_P_Intel1
Employee

Hi, Timo,

The problem seems to be that you used faster_rcnn_support_api_v1.7.json as the pattern-match config to replace nodes in the original network. That config targets Faster R-CNN, but your network is RetinaNet, and the mismatch is what produces the empty XML. The conversion should yield an error message instead of empty files; this issue should be fixed in a later OpenVINO release.

Your case is converting a network with custom layers, and such a conversion is not easy. You can refer to the article below to write your own JSON file for the pattern-match replacement; a rough sketch of the config format follows the link.

https://software.intel.com/en-us/articles/OpenVINO-ModelOptimizer#sub-graph-replacement-in-MO
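As a rough sketch only (not a working config for your network), such a file is a JSON list of replacement descriptions. The snippet below just writes one out; the id and the start/end node names are hypothetical placeholders that must be replaced with the actual sub-graph boundaries of your net:

import json

# Sketch of a sub-graph replacement config for mo_tf.py.
# "id" and the node names are hypothetical placeholders; the real
# boundary nodes must be read from your own frozen graph.
config = [
    {
        "id": "RetinaNetPostprocessorReplacement",
        "match_kind": "points",
        "instances": {
            "start_points": ["some/start/node"],
            "end_points": ["some/end/node"]
        },
        "custom_attributes": {}
    }
]

with open("retinanet_support.json", "w") as f:
    json.dump(config, f, indent=4)

The resulting file is then passed to mo_tf.py through --tensorflow_use_custom_operations_config, just like the shipped faster_rcnn_support_api_v1.7.json.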

Kaiser__Timo
Beginner

Thanks for your fast reply!

I will work on your hint. If I manage to convert the net, I will report back! If you get any interesting tips or news from new OpenVINO releases, let me know!
