Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Converting model from TF to IR with CTCBeamSearchDecoder openvino-2020.2

Sutaria__Payal
Beginner

Hi, 

I would like help converting my TF model into IR format.

Now, if I use the mo_tf.py script with the output node parameter specified, I get the following error:

 

    <meta_data>
        <MO_version value="2020.2.0-60-g0bc66e26ff"/>
        <cli_parameters>
            <blobs_as_inputs value="True"/>
            <data_type value="float"/>
            <disable_nhwc_to_nchw value="False"/>
            <disable_resnet_optimization value="False"/>
            <disable_weights_compression value="False"/>
            <enable_concat_optimization value="False"/>
            <extensions value="DIR"/>
            <framework value="tf"/>
            <freeze_placeholder_with_value value="{}"/>
            <generate_deprecated_IR_V2 value="False"/>
            <generate_deprecated_IR_V7 value="False"/>
            <generate_experimental_IR_V10 value="True"/>
            <input value="input_placeholder"/>
            <input_model value="DIR/LPRecog_model_updated_mxpl_mul_64_hvshear_channels_last.pb"/>
            <input_model_is_text value="False"/>
            <input_shape value="[1,128,256,3]"/>
            <keep_quantize_ops_in_IR value="True"/>
            <keep_shape_ops value="False"/>
            <log_level value="DEBUG"/>
            <mean_scale_values value="{}"/>
            <mean_values value="()"/>
            <model_name value="LPRecog_model_updated_mxpl_mul_64_hvshear_channels_last"/>
            <move_to_preprocess value="False"/>
            <output value="['output_nodes/CTCBeamSearchDecoder']"/>
            <output_dir value="DIR"/>
            <placeholder_data_types value="{}"/>
            <placeholder_shapes value="{'input_placeholder': array([  1, 128, 256,   3])}"/>
            <progress value="False"/>
            <reverse_input_channels value="False"/>
            <scale_values value="()"/>
            <silent value="False"/>
            <stream_output value="False"/>
            <unset unset_cli_parameters="batch, disable_fusing, disable_gfusing, finegrain_fusing, input_checkpoint, input_meta_graph, saved_model_dir, saved_model_tags, scale, tensorboard_logdir, tensorflow_custom_layer_libraries, tensorflow_custom_operations_config_update, tensorflow_object_detection_api_pipeline_config, tensorflow_use_custom_operations_config, transformations_config"/>
        </cli_parameters>
    </meta_data>
</net>

[ ERROR ]  List of operations that cannot be converted to Inference Engine IR:
[ ERROR ]      CTCBeamSearchDecoder (1)
[ ERROR ]          output_nodes/CTCBeamSearchDecoder
[ ERROR ]  Part of the nodes was not converted to IR. Stopped. 
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #24. 
[ 2020-05-08 18:20:31,762 ] [ DEBUG ] [ main:317 ]  Traceback (most recent call last):
  File "/opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/mo/main.py", line 307, in main
    return driver(argv)
  File "/opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/mo/main.py", line 272, in driver
    ret_res = emit_ir(prepare_ir(argv), argv)
  File "/opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/mo/main.py", line 256, in emit_ir
    meta_info=get_meta_info(argv))
  File "/opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/mo/pipeline/common.py", line 252, in prepare_emit_ir
    meta_info=meta_info)
  File "/opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/mo/back/ie_ir_ver_2/emitter.py", line 433, in generate_ie_ir
    refer_to_faq_msg(24))
mo.utils.error.Error: Part of the nodes was not converted to IR. Stopped. 
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #24. 
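For context, the decoding I ultimately need on top of the network logits is standard CTC decoding, so one fallback I am considering (my own idea, not something verified against OpenVINO) is to convert the graph only up to the logits and decode in post-processing. A minimal greedy-decoding sketch in NumPy, with hypothetical shapes and a blank index I chose for illustration:

```python
import numpy as np

# Sketch (assumption): greedy CTC decoding as a post-processing fallback,
# applied to logits of shape [time, batch, classes] with a chosen blank class.
def ctc_greedy_decode(logits, blank):
    """Take the argmax per step, collapse repeats, then drop blanks."""
    best = logits.argmax(axis=-1)           # [time, batch]
    decoded = []
    for b in range(best.shape[1]):
        seq, prev = [], None
        for t in best[:, b]:
            if t != prev and t != blank:
                seq.append(int(t))
            prev = t
        decoded.append(seq)
    return decoded

# Tiny example: class 1 repeated, then blank (class 2), then class 0
logits = np.zeros((4, 1, 3))
logits[0, 0, 1] = 1
logits[1, 0, 1] = 1   # repeat of class 1 -> collapsed to one symbol
logits[2, 0, 2] = 1   # blank, dropped
logits[3, 0, 0] = 1
print(ctc_greedy_decode(logits, blank=2))  # [[1, 0]]
```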

Now if I use the mo_tf.py script without specifying the output node parameter to convert my model, I get the following error: 

[ ERROR ]  Cannot infer shapes or values for node "output_nodes/CTCGreedyDecoder".
[ ERROR ]  index 1 is out of bounds for axis 0 with size 1
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <function CTCGreedyDecoderOp.ctc_greedy_decoder_infer at 0x7fe54b776b70>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ 2020-05-08 18:22:38,139 ] [ DEBUG ] [ infer:196 ]  Node "output_nodes/CTCGreedyDecoder" attributes: {'is_partial_inferred': False, 'kind': 'op', 'op': 'CTCGreedyDecoder', 'pb': name: "output_nodes/CTCGreedyDecoder"
op: "CTCGreedyDecoder"
input: "LPRecognition_model/Logit_tail/transpose"
input: "LPRecognition_model/Logit_tail/Fill"
attr {
  key: "merge_repeated"
  value {
    b: true
  }
}
, 'name': 'output_nodes/CTCGreedyDecoder', 'ctc_merge_repeated': 1, 'infer': <function CTCGreedyDecoderOp.ctc_greedy_decoder_infer at 0x7fe54b776b70>, 'in_ports_count': 2, 'is_undead': False, 'out_ports_count': 1, 'is_const_producer': False, 'is_output_reachable': True, 'IE': [('layer', [('id', <function Op.substitute_ie_attrs.<locals>.<lambda> at 0x7fe54338d9d8>), 'name', 'type', 'version'], [('data', ['ctc_merge_repeated'], []), '@ports', '@consts'])], '_in_ports': {0: {'control_flow': False}, 1: {'control_flow': False}}, 'type': 'CTCGreedyDecoder', 'dim_attrs': ['spatial_dims', 'batch_dims', 'channel_dims', 'axis'], '_out_ports': {0: {'control_flow': False}, 1: {'control_flow': False}, 2: {'control_flow': False}}, 'shape_attrs': ['stride', 'output_shape', 'window', 'shape', 'pad']}
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "output_nodes/CTCGreedyDecoder" node. 
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38. 
[ 2020-05-08 18:22:38,140 ] [ DEBUG ] [ main:317 ]  Traceback (most recent call last):
  File "/opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/mo/middle/passes/infer.py", line 134, in partial_infer
    node.infer(node)
  File "/opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/extensions/ops/ctc_greedy_decoder.py", line 47, in ctc_greedy_decoder_infer
    assert inn.shape[1] == inn2.shape[1], 'Batch for CTCGreedyDecoder should be the same in both inputs'
IndexError: index 1 is out of bounds for axis 0 with size 1
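The IndexError itself can be reproduced in isolation: the infer function compares shape[1] of both inputs, and a rank-1 shape array for the sequence-length input has no index 1 along axis 0. A minimal repro with hypothetical shapes (not taken from the actual model):

```python
import numpy as np

# Hypothetical shapes illustrating the failure mode: the infer function
# indexes shape[1] on both input shape arrays, but the second is rank 1.
logits_shape = np.array([88, 1, 37])  # [time, batch, classes] (assumed)
seq_len_shape = np.array([1])         # single element: no axis-0 index 1

try:
    assert logits_shape[1] == seq_len_shape[1]
except IndexError as err:
    print(err)  # index 1 is out of bounds for axis 0 with size 1
```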

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 288, in apply_transform
    for_graph_and_each_sub_graph_recursively(graph, replacer.find_and_replace_pattern)
  File "/opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/mo/middle/pattern_match.py", line 58, in for_graph_and_each_sub_graph_recursively
    func(graph)
  File "/opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/extensions/middle/PartialInfer.py", line 32, in find_and_replace_pattern
    partial_infer(graph)
  File "/opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/mo/middle/passes/infer.py", line 198, in partial_infer
    refer_to_faq_msg(38)) from err
mo.utils.error.Error: Stopped shape/value propagation at "output_nodes/CTCGreedyDecoder" node. 
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38. 

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/mo/main.py", line 307, in main
    return driver(argv)
  File "/opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/mo/main.py", line 272, in driver
    ret_res = emit_ir(prepare_ir(argv), argv)
  File "/opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/mo/main.py", line 237, in prepare_ir
    graph = unified_pipeline(argv)
  File "/opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/mo/pipeline/unified.py", line 29, in unified_pipeline
    class_registration.ClassType.BACK_REPLACER
  File "/opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 334, in apply_replacements
    apply_replacements_list(graph, replacers_order)
  File "/opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 324, in apply_replacements_list
    num_transforms=len(replacers_order))
  File "/opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/mo/utils/logger.py", line 124, in wrapper
    function(*args, **kwargs)
  File "/opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 304, in apply_transform
    )) from err
mo.utils.error.Error: Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "output_nodes/CTCGreedyDecoder" node. 
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38. 

 

Can anyone help me with the conversion of my model to IR format? 

Thank you,

Payal

Luis_at_Intel
Moderator

Hello,

Thanks for reaching out. It's hard to say what the problem could be without more details about the model and the commands used to convert it. Could you please share more information about your model: is it an object detection or classification model; if it is a custom model, what types of layers does it use; and what command was given to the Model Optimizer to convert it? If possible, please share the model files so we can replicate the issue and find the possible mistake (files can be shared via Private Message).

Also, any details about your environment are welcome (for example, OS and versions of Python, TF, CMake, etc.).
 

Regards,

Luis

Sutaria__Payal
Beginner

Hello,

This is a text recognition model.

The command given to the Model Optimizer to convert the model:

python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py --input_model frozen_model/LPRecog_model_updated_mxpl_mul_64_hvshear_channels_last.pb --input_shape [1,128,256,3] --input=input_placeholder --output=output_nodes/CTCBeamSearchDecoder --log_level=DEBUG

Environment Details are as below:

OS: Ubuntu 16.04 LTS

Version of python: Python 3.5.2

TF Version: 1.14.0

cmake version 3.16.2

Luis_at_Intel
Moderator

Hi,

I have sent you a private message containing new information. 

 

Best Regards,

Luis

DJLee_GE
Beginner

Hi,

I have the same problem converting a model from TF to IR with CTCBeamSearchDecoder on openvino-2021.2.

Could you help me with this problem?

[ ERROR ]  Cannot infer shapes or values for node "CTCGreedyDecoder".
[ ERROR ]  Incorrect rank of sequence length tensor for CTCGreedyDecoder node
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <function CTCGreedyDecoderOp.infer at 0x7f993aa49488>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "CTCGreedyDecoder" node. 
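For what it's worth, the "incorrect rank" complaint may come from a shape mismatch between the frameworks: TF's CTC decoders take a 1-D length vector per batch item, while OpenVINO's CTCGreedyDecoder layer expects a 2-D sequence mask of shape [time, batch]. A sketch of the conversion I am experimenting with (my own assumption, not a confirmed fix, and the shapes below are made up):

```python
import numpy as np

# Sketch (assumption): turn a TF-style [batch] length vector into the
# [time, batch] 0/1 sequence mask that OpenVINO's CTCGreedyDecoder expects.
def lengths_to_mask(seq_len, time_steps):
    """Mask is 1.0 for steps before each sequence's length, else 0.0."""
    t = np.arange(time_steps)[:, None]              # [time, 1]
    return (t < seq_len[None, :]).astype(np.float32)

mask = lengths_to_mask(np.array([88, 60]), 88)      # hypothetical lengths
print(mask.shape)  # (88, 2)
```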

 

Sutaria__Payal
Beginner
Hello Luis, I received it. Thanks a lot for resolving this.