GAnthony_R_Intel (Employee)

Bug in R5 Model Optimizer with a frozen TensorFlow model (text .pb)

I'm trying to use the R5 Model Optimizer to convert a frozen TensorFlow model (text .pb). I've attached the graph as a zip file.
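For context, the graph was frozen with the usual TF 1.12 flow, roughly like the sketch below (the checkpoint paths and the output node name here are placeholders, not the actual ones from my script):

import tensorflow as tf
from tensorflow.python.framework import graph_util, graph_io

with tf.Session() as sess:
    # Restore the trained U-Net (checkpoint paths are illustrative)
    saver = tf.train.import_meta_graph("unet_model.meta")
    saver.restore(sess, "unet_model.ckpt")
    # Fold variables into constants; the output node name is a guess
    frozen = graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ["output/Sigmoid"])

# as_text=True writes a text protobuf, matching --input_model_is_text
graph_io.write_graph(frozen, ".", "unet_model_for_inference_dice08771.pb", as_text=True)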

The following bug report was generated:

(tf112_mkl_p36) [bduser@merlin-param01 frozen_tensorflow_model]$ python /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py --input_model unet_model_for_inference_dice08771.pb --input_model_is_text
Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/home/bduser/tony/unet/single-node/frozen_tensorflow_model/unet_model_for_inference_dice08771.pb
	- Path for generated IR: 	/home/bduser/tony/unet/single-node/frozen_tensorflow_model/.
	- IR output name: 	unet_model_for_inference_dice08771
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	Not specified, inherited from the model
	- Output layers: 	Not specified, inherited from the model
	- Input shapes: 	Not specified, inherited from the model
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP32
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	False
	- Reverse input channels: 	False
TensorFlow specific parameters:
	- Input model in text protobuf format: 	True
	- Offload unsupported operations: 	False
	- Path to model dump for TensorBoard: 	None
	- List of shared libraries with TensorFlow custom layers implementation: 	None
	- Update the configuration file with input/output node names: 	None
	- Use configuration file used to generate the model with Object Detection API: 	None
	- Operations to offload: 	None
	- Patterns to offload: 	None
	- Use the config file: 	None
Model Optimizer version: 	1.5.12.49d067a0
[ ERROR ]  -------------------------------------------------
[ ERROR ]  ----------------- INTERNAL ERROR ----------------
[ ERROR ]  Unexpected exception happened.
[ ERROR ]  Please contact Model Optimizer developers and forward the following information:
[ ERROR ]
[ ERROR ]  Traceback (most recent call last):
  File "/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/main.py", line 325, in main
    return driver(argv)
  File "/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/main.py", line 267, in driver
    mean_scale_values=mean_scale)
  File "/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/pipeline/tf.py", line 256, in tf2nx
    partial_infer(graph)
  File "/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/middle/passes/infer.py", line 218, in partial_infer
    control_flow_infer(graph, n)
  File "/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/middle/passes/infer.py", line 74, in control_flow_infer
    node.cf_infer(node, is_executable, mark_executability)
  File "/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/extensions/ops/switch.py", line 67, in control_flow_infer
    assert 1 <= len(switch_data_0_port_node_id) + len(switch_data_1_port_node_id) <= 2
AssertionError

[ ERROR ]  ---------------- END OF BUG REPORT --------------
[ ERROR ]  -------------------------------------------------
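The assertion fires in the Switch op handling inside control_flow_infer, so I suspect the frozen graph still contains control-flow (tf.cond / training-phase) nodes. A quick way to check is to list them from the text protobuf (a sketch, assuming the same TF 1.12 environment):

import tensorflow as tf
from google.protobuf import text_format

# Parse the text-format GraphDef (the .pb here is text protobuf,
# hence --input_model_is_text above)
graph_def = tf.GraphDef()
with open("unet_model_for_inference_dice08771.pb") as f:
    text_format.Merge(f.read(), graph_def)

# List any control-flow nodes that survived freezing
for node in graph_def.node:
    if node.op in ("Switch", "RefSwitch", "Merge"):
        print(node.op, node.name)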

 

I'm not sure what to do next. Could someone help?
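One workaround I may try in the meantime is cutting the graph explicitly at the inference input and output so the converter skips any training branches; the node names and input shape below are placeholders, since I haven't confirmed them for this model:

python /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py \
    --input_model unet_model_for_inference_dice08771.pb --input_model_is_text \
    --input input_1 --output output/Sigmoid --input_shape [1,144,144,4]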

Thanks so much.

-Tony

 
