Intel® Distribution of OpenVINO™ Toolkit
Community support and discussions about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision-related on Intel® platforms.

How to convert a custom TF model to IR models?

New Contributor I

This question is not the usual one; please read it entirely.

I have a tensorflow model in saved_model format i.e.,

saved_model.pb  assets/  variables/

Now, this model is not a regular TensorFlow model. It is custom-built and runs a few TensorFlow operations. It does not contain any layers as such; I am only generating this model to run a few TF-specific operations.

For Example,

import tensorflow as tf
class Sample(tf.Module):
    def __init__(self):
        super().__init__()
    def DeNorm(self,Tensor):
        x_min = Tensor[0]
        x_max = Tensor[1]
        y_min = Tensor[2]
        y_max = Tensor[3]
        Tensor = tf.cast(tf.math.multiply([x_min,x_max,y_min,y_max],[416,416,416,416]),dtype=tf.float32)
        return Tensor

    def __call__(self,detections):
        detections = tf.convert_to_tensor(detections,dtype=tf.float32)
        size_0 = tf.cast(tf.math.divide(detections.shape[0], 4),dtype=tf.int32)
        detections = tf.reshape(detections,shape=[size_0,4])
        Tensor_output = tf.TensorArray(dtype=tf.float32,size=size_0)
        for i in tf.range(size_0):
            deNorm_tensor = self.DeNorm(detections[i])
            Tensor_output = Tensor_output.write(i,deNorm_tensor)
        return Tensor_output.stack()

model = Sample()

rects = [0.52,0.2,0.6,0.6,0.84,0.56,0.15,0.99]
model(rects)


<tf.Tensor: shape=(2, 4), dtype=float32, numpy=
array([[216.31999 ,  83.200005, 249.6     , 249.6     ],
       [349.44    , 232.96    ,  62.4     , 411.84    ]], dtype=float32)>


I have saved this model in SavedModel format and have the .pb file. My task now is to compile this model into a blob file, for which I first have to optimize it using the Model Optimizer.
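For context, the way such a module is exported matters to the Model Optimizer: pinning `__call__` to a concrete input signature embeds static dtypes and shapes in the SavedModel. Below is a minimal sketch of that kind of export, assuming a simplified `Sample`-like module and a temporary export directory (both illustrative, not the exact files used above):

```python
import tempfile
import tensorflow as tf

class Sample(tf.Module):
    # Pinning dtype and shape here embeds them in the SavedModel,
    # so downstream tools do not have to guess them.
    @tf.function(input_signature=[tf.TensorSpec(shape=[8], dtype=tf.float32)])
    def __call__(self, detections):
        # Reshape the flat [x_min, x_max, y_min, y_max, ...] list into rows of 4
        detections = tf.reshape(detections, shape=[-1, 4])
        # De-normalize every coordinate against a 416x416 input size
        return detections * 416.0

model = Sample()
export_dir = tempfile.mkdtemp()
tf.saved_model.save(model, export_dir,
                    signatures={"serving_default": model.__call__})
```

Passing the `tf.function` explicitly under `signatures` guarantees the SavedModel has a `serving_default` entry point, rather than relying on automatic discovery.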


So, I tried to convert it using the following command:

python3 mo.py --saved_model_dir /home/dranzer/Desktop/test_model/ --output_dir /home/dranzer/Desktop/test_model/


But I am now facing this error,

[ ERROR ]  Cannot infer shapes or values for node "StatefulPartitionedCall".
[ ERROR ]  Expected DataType for argument 'dtype' not <class 'str'>.
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <function tf_native_tf_node_infer at 0x7f4afc2cf280>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "StatefulPartitionedCall" node. 
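One construct that frequently stops shape/value propagation in the Model Optimizer is a `tf.range` loop writing into a `tf.TensorArray`, which produces a dynamic `StatefulPartitionedCall` subgraph. The same de-normalization can be expressed as a single vectorized op with fully inferable shapes; the sketch below shows that rewrite under that assumption (it is not a confirmed fix for this exact model):

```python
import tensorflow as tf

@tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.float32)])
def denorm(detections):
    # Reshape the flat list into rows of [x_min, x_max, y_min, y_max]
    boxes = tf.reshape(detections, shape=[-1, 4])
    # Broadcasting replaces the per-row TensorArray loop entirely
    return boxes * 416.0
```

With a static `[-1, 4]` reshape and broadcasting, every intermediate shape in the graph can be inferred without executing a loop body.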


I have also included the log file (--log_level=DEBUG) as an attachment. Please help me solve this.

Thank you.

5 Replies

dilip96, Thank you for posting in the Intel® Communities Support.

In order for us to provide the most accurate assistance on this matter, we just wanted to confirm a few details about your system:

What is the model of the Intel® product that you are using?

Are you a developer?

Are you working on a project?

Are you building, designing, or modifying hardware/software?

Are you working with a specific Intel® hardware/software platform?

Is this a new computer?

Was it working fine before?

Did you make any recent hardware/software changes?

When did the issue start?

Does the problem happen at home or in the work environment?

Depending on the Operating System that you are using, please attach the SSU report so we can verify further details about the components in your platform. Check all the options in the report, including the one that says "3rd party software logs":



Any questions, please let me know.


Albert R.

Intel Customer Support Technician

New Contributor I

Hi Albert,

Thank you for replying. Sure, I will provide the details you requested.

  • What is the model of the Intel® product that you are using?

OpenVINO toolkit 2021.4.689 and Intel NCS 2 (hardware)

  • Are you a developer?


  • Are you working on a project?


  • Are you building, designing, or modifying hardware/software?


  • Are you working with a specific Intel® hardware/software platform?

Yes, I am using the OAK boards from Luxonis, which run on the Myriad X VPU. I am trying to run my custom model on these boards, for which I need to optimize the model file using the Model Optimizer and compile it into a blob file.

  • Was it working fine before?

This is the first time I am trying to run this. I have compiled models trained using TensorFlow Object Detection API previously.


Please do provide me with any kind of help possible. Thank you.




Hi dilip96, you are very welcome. Thank you very much for providing that information.

Since you are working with the Intel® OpenVINO toolkit, just to let you know, we actually have a specific department that will provide additional support on this topic. I just moved your thread to them so they can further assist you with this matter as soon as possible.


Albert R.

Intel Customer Support Technician



It seems that your custom model does not meet the requirements for conversion to IR; that is what the error indicates.

Maybe this would help:

You'll also need to have the correct values and shapes for the model to be successfully converted.

You could refer to this model to see what is required:
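One way to verify that the exported model actually carries correct values and shapes is to reload the SavedModel and inspect its serving signature before running the Model Optimizer; a missing signature or unknown shapes usually predicts exactly this kind of shape-inference failure. A self-contained sketch (the module and temp directory are illustrative placeholders, not the asker's files):

```python
import tempfile
import tensorflow as tf

class Sample(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec(shape=[8], dtype=tf.float32)])
    def __call__(self, detections):
        return tf.reshape(detections, shape=[-1, 4]) * 416.0

model = Sample()
export_dir = tempfile.mkdtemp()
tf.saved_model.save(model, export_dir,
                    signatures={"serving_default": model.__call__})

# Reload and inspect what a converter will actually see
loaded = tf.saved_model.load(export_dir)
sig = loaded.signatures["serving_default"]
print(sig.structured_input_signature)  # dtypes and shapes of the inputs
print(sig.structured_outputs)          # dtypes and shapes of the outputs
```

If `loaded.signatures` is empty, or the printed specs contain unknown dtypes or shapes, the conversion will likely fail at the corresponding node.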





Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question.