Novice

Input 1 of node RetinaFaceModel/cond/StatefulPartitionedCall_1 was passed float from RetinaFaceModel


Hi,

I was using https://github.com/peteryuX/retinaface-tf2 for RetinaFace face detection. The model was saved in the SavedModel format. Then I converted it to a frozen graph using the code below:

%tensorflow_version 1.x
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import backend as K

model = keras.models.load_model('/content/drive/My Drive/OPENVINO/openvino_fmd_tf/')

def freeze_session(session, keep_var_names=None, output_names=None, clear_devices=True):
    """
    Freezes the state of a session into a pruned computation graph.

    Creates a new computation graph where variable nodes are replaced by
    constants taking their current value in the session. The new graph will be
    pruned so subgraphs that are not necessary to compute the requested
    outputs are removed.
    @param session The TensorFlow session to be frozen.
    @param keep_var_names A list of variable names that should not be frozen,
                          or None to freeze all the variables in the graph.
    @param output_names Names of the relevant graph outputs.
    @param clear_devices Remove the device directives from the graph for better portability.
    @return The frozen graph definition.
    """
    from tensorflow.python.framework.graph_util import convert_variables_to_constants
    graph = session.graph
    with graph.as_default():
        freeze_var_names = list(set(v.op.name for v in tf.global_variables()).difference(keep_var_names or []))
        output_names = output_names or []
        output_names += [v.op.name for v in tf.global_variables()]
        # Graph -> GraphDef ProtoBuf
        input_graph_def = graph.as_graph_def()
        if clear_devices:
            for node in input_graph_def.node:
                node.device = ""
        frozen_graph = convert_variables_to_constants(session, input_graph_def,
                                                      output_names, freeze_var_names)
        return frozen_graph
#tf.keras.backend.set_learning_phase(0)

frozen_graph = freeze_session(K.get_session(),
                              output_names=[out.op.name for out in model.outputs])
tf.train.write_graph(frozen_graph, "model", "new_tf_model.pb", as_text=False)

As OpenVINO doesn't support TF 2.x, I used TF 1.x for freezing.

Then I tried converting the frozen .pb file to the IR format using the command below:

python3 mo_tf.py --input_model /home/g2-test/Opt_AI/ASHNA/FMD-tf/frozen_tf_model.pb --output_dir /home/g2-test/Opt_AI/ASHNA/FMD-tf/output/  

When I ran this, I got the following error:

[ ERROR ]  Shape [ -1 640 640   3] is not fully defined for output 0 of "input_1". Use --input_shape with positive integers to override model input shapes.
[ ERROR ]  Cannot infer shapes or values for node "input_1".
[ ERROR ]  Not all output shapes were inferred or fully defined for node "input_1". 
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #40. 
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <function Parameter.infer at 0x7f3578c5ea70>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "input_1" node. 

Then I tried giving the input shape explicitly; the command used was:

python3 mo_tf.py --input_model /home/g2-test/Opt_AI/ASHNA/FMD-tf/frozen_tf_model.pb --output_dir /home/g2-test/Opt_AI/ASHNA/FMD-tf/output/  --input_shape=[1,640,640,3]

Now, I'm getting the following error:

[ ERROR ]  Cannot infer shapes or values for node "RetinaFaceModel/cond/StatefulPartitionedCall_1".
[ ERROR ]  Input 1 of node RetinaFaceModel/cond/StatefulPartitionedCall_1 was passed float from RetinaFaceModel/cond/StatefulPartitionedCall_1/Switch_1_port_0_ie_placeholder:0 incompatible with expected resource.
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <function tf_native_tf_node_infer at 0x7f2f3e9405f0>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "RetinaFaceModel/cond/StatefulPartitionedCall_1" node. 

The input shape is 640x640 in the model summary. I'm attaching the model summary to this post.

Can someone help me solve this issue?
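Before rerunning the Model Optimizer, it can help to confirm which placeholder dimension is dynamic by scanning the frozen GraphDef. The sketch below uses a stand-in placeholder instead of the actual frozen model; with the real file, parse frozen_tf_model.pb into the GraphDef instead:

```python
import tensorflow as tf

# Stand-in graph with the same dynamic batch dimension the Model Optimizer
# rejected; with the real model, read frozen_tf_model.pb and call
# graph_def.ParseFromString(...) on its bytes instead.
g = tf.Graph()
with g.as_default():
    tf.compat.v1.placeholder(tf.float32, [None, 640, 640, 3], name="input_1")
graph_def = g.as_graph_def()

# Any placeholder reporting a -1 dimension needs --input_shape (or --batch)
# on the mo_tf.py command line.
for node in graph_def.node:
    if node.op == "Placeholder":
        dims = [d.size for d in node.attr["shape"].shape.dim]
        print(node.name, dims)  # prints: input_1 [-1, 640, 640, 3]
```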

4 Replies
Community Manager

Hi ashna_12,


Could you try running the Model Optimizer with the --log_level=DEBUG flag and provide the output? Also, try running it with --batch 1 instead of --input_shape=[1,640,640,3].


Regards,

Jesus


Novice

Hi,

I tried replacing the input shape with --batch 1. The command I used was:

python3 mo_tf.py --input_model /home/ashna/Ashna/tf-retinaface/frozen_tf_model.pb --output_dir /home/ashna/Ashna/tf-retinaface/output/ --batch 1 --log_level=DEBUG 

The log file is attached to this post; kindly go through it.

Hope to hear from you soon.

Moderator (Accepted Solution)

Hi ashna_12,

 

Could you please try the latest OpenVINO Release (2020.4)? This release enables initial support for TensorFlow 2.2.0 for computer vision use cases. If you see the same issue, please provide the frozen model and log file from the latest release.

 

Regards,

Jesus
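For reference, with TF 2.x support available, the SavedModel can also be frozen natively in TF 2.x via convert_variables_to_constants_v2, which inlines variables and function-call nodes. This is a minimal sketch under the assumption of TF 2.2+; the Sequential model below is a hypothetical stand-in for the RetinaFace SavedModel, and the input shape is taken from the thread:

```python
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

# Hypothetical stand-in for the RetinaFace model; with the real checkpoint,
# load it via tf.keras.models.load_model(...) instead.
model = tf.keras.Sequential(
    [tf.keras.layers.Conv2D(4, 3, input_shape=(640, 640, 3))])

# Pin the input signature to the static shape passed to the Model Optimizer.
concrete = tf.function(lambda x: model(x)).get_concrete_function(
    tf.TensorSpec([1, 640, 640, 3], tf.float32))

# Inline variables as constants; unlike a TF 1.x session freeze, this also
# resolves the resource-typed inputs of StatefulPartitionedCall nodes.
frozen = convert_variables_to_constants_v2(concrete)
tf.io.write_graph(frozen.graph.as_graph_def(), "model",
                  "frozen_tf2_model.pb", as_text=False)
```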

 


Moderator

Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question.


0 Kudos