nnain1
New Contributor I

'update_custom_layer_attributes' must be implemented in the sub-class

My model is FRCNN MobileNetV1. I used a pretrained model from here. The network topology is supported by OpenVINO.

When I convert it to IR, I get errors.

My command is:

python3 mo_tf.py --input_model ${PROJECT_PATH}/frozen_inference_graph.pb --tensorflow_custom_operations_config_update extensions/front/tf/faster_rcnn_support_api_v1.10.json --tensorflow_use_custom_operations_config ${PROJECT_PATH}/faster_rcnn_mobilenet_v1_coco.config --reverse_input_channels --model_name openvino_frcnn_mobilenetv1 --output_dir ${PROJECT_PATH}/OpenvinoModel/fp16 --input_shape [1,300,900,3] --data_type FP16

I have TensorFlow 1.12.2, and my OpenVINO version is 2019.1.094.

The errors are:

Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/home/upsquared/NumberPlate/recognition/frcnn_mobilenetv1_1/frozen_inference_graph.pb
	- Path for generated IR: 	/home/upsquared/NumberPlate/recognition/frcnn_mobilenetv1_1/OpenvinoModel/fp16
	- IR output name: 	openvino_frcnn_mobilenetv1
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	Not specified, inherited from the model
	- Output layers: 	Not specified, inherited from the model
	- Input shapes: 	[1,300,900,3]
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP16
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	False
	- Reverse input channels: 	True
TensorFlow specific parameters:
	- Input model in text protobuf format: 	False
	- Path to model dump for TensorBoard: 	None
	- List of shared libraries with TensorFlow custom layers implementation: 	None
	- Update the configuration file with input/output node names: 	/opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support_api_v1.10.json
	- Use configuration file used to generate the model with Object Detection API: 	None
	- Operations to offload: 	None
	- Patterns to offload: 	None
	- Use the config file: 	/home/upsquared/NumberPlate/recognition/frcnn_mobilenetv1_1/faster_rcnn_mobilenet_v1_coco.config
Model Optimizer version: 	2019.1.0-341-gc9b66a2
[ ERROR ]  -------------------------------------------------
[ ERROR ]  ----------------- INTERNAL ERROR ----------------
[ ERROR ]  Unexpected exception happened.
[ ERROR ]  Please contact Model Optimizer developers and forward the following information:
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID (<class 'extensions.front.tf.tensorflow_custom_operations_config_update.TensorflowCustomOperationsConfigUpdate'>)": The function 'update_custom_layer_attributes' must be implemented in the sub-class.
[ ERROR ]  Traceback (most recent call last):
  File "/opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 167, in apply_replacements
    replacer.find_and_replace_pattern(graph)
  File "/opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer/extensions/front/tf/tensorflow_custom_operations_config_update.py", line 59, in find_and_replace_pattern
    replacement_desc.update_custom_replacement_attributes(graph)
  File "/opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer/mo/utils/custom_replacement_config.py", line 136, in update_custom_replacement_attributes
    raise Exception("The function 'update_custom_layer_attributes' must be implemented in the sub-class.")
Exception: The function 'update_custom_layer_attributes' must be implemented in the sub-class.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer/mo/main.py", line 312, in main
    return driver(argv)
  File "/opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer/mo/main.py", line 263, in driver
    is_binary=not argv.input_model_is_text)
  File "/opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer/mo/pipeline/tf.py", line 127, in tf2nx
    class_registration.apply_replacements(graph, class_registration.ClassType.FRONT_REPLACER)
  File "/opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 190, in apply_replacements
    )) from err
Exception: Exception occurred during running replacer "REPLACEMENT_ID (<class 'extensions.front.tf.tensorflow_custom_operations_config_update.TensorflowCustomOperationsConfigUpdate'>)": The function 'update_custom_layer_attributes' must be implemented in the sub-class.

[ ERROR ]  ---------------- END OF BUG REPORT --------------
[ ERROR ]  -------------------------------------------------

 

3 Replies
nnain1
New Contributor I

I converted a similarly trained model to IR before, and that was OK.

The difference between the current TF model and the previous TF model is that the feature extraction layers were frozen during training.

Does that matter?

 

nnain1
New Contributor I

I tried another command, without using the frozen file:

python3 mo_tf.py --input_meta_graph ${PROJECT_PATH}/model.ckpt-50000.meta --output_dir ${PROJECT_PATH}/OpenvinoModel/fp32 --input_shape [1,300,900,3] --data_type FP32

The error is similar:

[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.front.user_data_repack.UserDataRepack'>): No or multiple placeholders in the model, but only one shape is provided, cannot set it.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #32.
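That error means the single `--input_shape` value cannot be matched to a placeholder automatically. A minimal sketch of naming the input explicitly follows; note that `image_tensor` is only the conventional Object Detection API placeholder name, an assumption here, so verify the actual node name in your own graph first:

```shell
# Sketch, not a verified fix: bind the shape to one named placeholder.
# "image_tensor" is the usual Object Detection API input name — an assumption;
# confirm the real placeholder name in your graph before running this.
python3 mo_tf.py \
    --input_meta_graph ${PROJECT_PATH}/model.ckpt-50000.meta \
    --input image_tensor \
    --input_shape [1,300,900,3] \
    --output_dir ${PROJECT_PATH}/OpenvinoModel/fp32 \
    --data_type FP32
```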

Shubha_R_Intel
Employee

Dearest naing, nyan,

Hmmm... this should work. Can you try again on the latest OpenVINO release, 2019 R1.1? It was just released this week. Let me know if you still have issues.

Also keep in mind that the *.json file you use must match the TensorFlow Object Detection API version. I usually use the very unscientific method of trying all the *.json files and seeing which one works with the --tensorflow_use_custom_operations_config switch.

I noticed that you are using --tensorflow_custom_operations_config_update instead of --tensorflow_use_custom_operations_config. Any reason why? The online doc below tells you to use --tensorflow_use_custom_operations_config:

http://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_O...
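For reference, a hedged rewrite of the original command with that flag pairing: the replacement *.json goes to --tensorflow_use_custom_operations_config, and the training pipeline .config goes to --tensorflow_object_detection_api_pipeline_config. Flag names can differ between OpenVINO releases, so treat this as a sketch for 2019 R1.x, not a guaranteed fix:

```shell
# Sketch assuming OpenVINO 2019 R1.x Model Optimizer flags:
# *.json            -> --tensorflow_use_custom_operations_config
# pipeline *.config -> --tensorflow_object_detection_api_pipeline_config
python3 mo_tf.py \
    --input_model ${PROJECT_PATH}/frozen_inference_graph.pb \
    --tensorflow_use_custom_operations_config extensions/front/tf/faster_rcnn_support_api_v1.10.json \
    --tensorflow_object_detection_api_pipeline_config ${PROJECT_PATH}/faster_rcnn_mobilenet_v1_coco.config \
    --reverse_input_channels \
    --model_name openvino_frcnn_mobilenetv1 \
    --output_dir ${PROJECT_PATH}/OpenvinoModel/fp16 \
    --input_shape [1,300,900,3] \
    --data_type FP16
```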

Thanks,

Shubha
