Model Optimize keras.GRU layer

I would like to convert a TensorFlow model to OpenVINO using the Model Optimizer, but the mo script fails with an error on the GRU layer.

I created the following minimal model to reproduce the error:


from tensorflow import keras

model = keras.Sequential()
model.add(keras.layers.GRU(2, input_shape=(64, 192)))  # layer arguments inferred from the summary below


Model: "sequential_19"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
gru_14 (GRU)                 (None, 2)                 1176      
=================================================================
Total params: 1,176
Trainable params: 1,176
Non-trainable params: 0
_________________________________________________________________
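For context, the freezing step is not shown in the post; in TF2 it is typically done with convert_variables_to_constants_v2. The sketch below is illustrative only (the input shape (64, 192) is an assumption that matches the 1,176 parameters in the summary, not something stated in the post):

```python
# Illustrative sketch (not the original poster's code): freezing a Keras GRU
# model to frozen.pb in TF2 via convert_variables_to_constants_v2.
import tensorflow as tf
from tensorflow import keras
from tensorflow.python.framework.convert_to_constants import (
    convert_variables_to_constants_v2,
)

# Input shape (64, 192) is an assumption; it matches the 1,176 parameters
# reported in the summary: 3 * 2 * (192 + 2 + 2) = 1176.
model = keras.Sequential([keras.layers.GRU(2, input_shape=(64, 192))])

# Wrap the model in a concrete function, then fold variables into constants.
concrete = tf.function(lambda x: model(x)).get_concrete_function(
    tf.TensorSpec([1, 64, 192], tf.float32)
)
frozen_func = convert_variables_to_constants_v2(concrete)

# Serialize the frozen GraphDef to frozen.pb.
tf.io.write_graph(frozen_func.graph.as_graph_def(), ".", "frozen.pb",
                  as_text=False)
```

As the rest of the thread shows, this frozen-graph route is exactly what trips up the Model Optimizer on GRU control flow.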


I freeze this model, then run the following command:

python mo.py --input_shape [1,64,192,3] --input "input_1" --input_model "frozen.pb"

which produces the following output:

Common parameters:
        - Path to the Input Model:      /frozen.pb
        - Path for generated IR:        /.
        - IR output name:       frozen
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         input_1
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         [1,64,192,3]
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       None
        - Reverse input channels:       False
TensorFlow specific parameters:
        - Input model in text protobuf format:  False
        - Path to model dump for TensorBoard:   None
        - List of shared libraries with TensorFlow custom layers implementation:        None
        - Update the configuration file with input/output node names:   None
        - Use configuration file used to generate the model with Object Detection API:  None
        - Use the config file:  None
fatal: not a git repository (or any of the parent directories): .git
        - Inference Engine found in:    /usr/local/lib/python3.6/dist-packages/openvino
Inference Engine version:       2.1.2021.3.0-2774-d6ebaa2cd8e-refs/pull/4731/head
Model Optimizer version:            unknown version
[ ERROR ]  Cannot infer shapes or values for node "sequential_19/gru_14/PartitionedCall/TensorArrayV2_1".
[ ERROR ]  Tensorflow type 21 not convertible to numpy dtype.
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <function tf_native_tf_node_infer at 0x7f18e529b378>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ANALYSIS INFO ]  It looks like there are input nodes of boolean type:
If this input node is as switch between the training and an inference mode, then you need to freeze this input with value True or False.
In order to do this run the Model Optimizer with the command line parameter:
        --input "unused_control_flow_input_2->False" or --input "unused_control_flow_input_2->True"
        --input "unused_control_flow_input_5->False" or --input "unused_control_flow_input_5->True"
to switch graph to inference mode.
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "sequential_19/gru_14/PartitionedCall/TensorArrayV2_1" node. 
 For more information please refer to Model Optimizer FAQ, question #38. (


How can I optimize such a model?


And a bonus question:

What do these "unused_control_flow_input" nodes do, and how should I handle them?

Thanks in advance


Hello Adam Nemes,

Thank you for reaching out.

Based on the errors you are getting, it seems there is an incorrect input shape or value. Please share both of your models with us (the Keras model and the frozen model), along with any information needed to reproduce the issue.




Dear Zulkifli,

Thank you, but I managed to solve the issue.

It turned out that the problem was rooted in how the model was frozen. Following an article I found, converting from the saved_model format instead solved the issue.
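For readers hitting the same error, the SavedModel route looks roughly like this (a sketch; the directory name and input shape are illustrative, not taken from the original post — the Model Optimizer accepts a SavedModel directory via --saved_model_dir):

```shell
# Export the Keras model in TF SavedModel format instead of freezing it:
#   model.save("saved_model_dir")      # in Python
# Then point the Model Optimizer at the directory:
python mo.py --saved_model_dir saved_model_dir --input_shape "[1,64,192]"
```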

Thank you anyway!


Hello Adam Nemes,

I am glad to hear that your issue has been resolved. If you need any additional information from Intel, please submit a new question, as this thread is no longer being monitored.