StatefulPartitionedCall issues in converting tensorflow model

I am currently converting a custom TensorFlow model with OpenVINO 2020.4, using TensorFlow 2.2.0.

I am running this command (I know my input shape is correct): 

"sudo python3.6 --saved_model_dir ~/Downloads/saved_model --output_dir ~/Downloads/it_worked --input_shape [1,120,20]"

I'm running into issues with one of the operations: "StatefulPartitionedCall". I read that this particular node can have issues in OpenVINO here: 

It says that "TensorFlow 2.x SavedModel format has a specific graph due to eager execution. In case of pruning, find custom input nodes in the StatefulPartitionedCall/* subgraph of TensorFlow 2.x SavedModel format."

Could I please get more detail on how exactly I should be 'pruning' these nodes' inputs?





Oh, here is the error (and I know that my input shape is correct):

Model Optimizer version:  
Progress: [.......             ]  35.71% done
[ ERROR ]  Cannot infer shapes or values for node "StatefulPartitionedCall/sequential/lstm/StatefulPartitionedCall/TensorArrayUnstack/TensorListFromTensor".
[ ERROR ]  Tensorflow type 21 not convertible to numpy dtype.
[ ERROR ] 
[ ERROR ]  It can happen due to bug in custom shape infer function <function tf_native_tf_node_infer at 0x7f39ccd08400>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "StatefulPartitionedCall/sequential/lstm/StatefulPartitionedCall/TensorArrayUnstack/TensorListFromTensor" node.
 For more information please refer to Model Optimizer FAQ, question #38.


Here is my model.summary(), for reference:

Model: "sequential"
Layer (type)                 Output Shape              Param #  
lstm (LSTM)                  (None, 120, 20)           3280     
batch_normalization (BatchNo (None, 120, 20)           80       
dropout (Dropout)            (None, 120, 20)           0        
lstm_1 (LSTM)                (None, 120, 64)           21760    
batch_normalization_1 (Batch (None, 120, 64)           256      
dropout_1 (Dropout)          (None, 120, 64)           0        
lstm_2 (LSTM)                (None, 64)                33024    
batch_normalization_2 (Batch (None, 64)                256      
dropout_2 (Dropout)          (None, 64)                0        
dense (Dense)                (None, 32)                2080     
dropout_3 (Dropout)          (None, 32)                0        
dense_1 (Dense)              (None, 1)                 33       
Total params: 60,769
Trainable params: 60,473
Non-trainable params: 296



First and foremost, please check and ensure that the custom model you are using is supported by OpenVINO. You can refer here:

As a reminder, your model needs to be frozen; if it is not, you can refer to the Freezing Custom Models section at the same link as above.


There are a few ways to convert a custom TF model:

  1. Checkpoint: you need an inference graph file, and you use the Model Optimizer scripts in OpenVINO's model_optimizer folder. Run the script with the path to the checkpoint file to convert the model.
  2. MetaGraph: in this case, a model consists of three or four files stored in the same directory: model_name.meta, model_name.index, model_name.data-00000-of-00001 (the digit part may vary), and checkpoint (optional). Run the script with the path to the MetaGraph .meta file to convert the model.
  3. SavedModel format of TensorFlow 1.x and 2.x versions: the same concept applies; point the script at the correct directory.

TensorFlow 2.x SavedModel format strictly requires the 2.x version of TensorFlow.
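As a rough sketch, those three cases map to Model Optimizer invocations like the following (the install path reflects a default 2020.4 layout, and the model file names are placeholders, not your actual paths):

```shell
# Hypothetical paths -- adjust to your own install and model locations.
MO=/opt/intel/openvino_2020.4.287/deployment_tools/model_optimizer/mo_tf.py

# 1. Checkpoint: inference graph plus checkpoint file
python3 "$MO" --input_model inference_graph.pb --input_checkpoint model.ckpt

# 2. MetaGraph: point at the .meta file
python3 "$MO" --input_meta_graph model_name.meta

# 3. SavedModel directory (TF 1.x or 2.x)
python3 "$MO" --saved_model_dir ~/Downloads/saved_model --input_shape [1,120,20]
```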

Regarding your Error:

  1. Tensorflow type 21 not convertible to numpy dtype -- this indicates that one of the node's tensors has a TensorFlow dtype with no NumPy equivalent. In TensorFlow's DataType enum, type 21 is DT_VARIANT, the opaque handle type produced by TensorList operations such as the TensorListFromTensor node in your error.
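As an illustrative sketch (a hand-rolled partial mapping based on TensorFlow's types.proto enum, not Model Optimizer's actual lookup code), the failure mode looks like this:

```python
import numpy as np

# Partial copy of TensorFlow's DataType enum values
# (tensorflow/core/framework/types.proto), for illustration only.
TF_TO_NP = {
    1: np.float32,   # DT_FLOAT
    3: np.int32,     # DT_INT32
    9: np.int64,     # DT_INT64
    10: np.bool_,    # DT_BOOL
    # 20 (DT_RESOURCE) and 21 (DT_VARIANT) are opaque handle types with
    # no NumPy counterpart; TensorList ops emit DT_VARIANT tensors.
}

def tf_type_to_numpy(tf_type: int):
    """Map a TensorFlow DataType enum value to a NumPy dtype, or fail."""
    try:
        return TF_TO_NP[tf_type]
    except KeyError:
        raise TypeError(
            f"Tensorflow type {tf_type} not convertible to numpy dtype."
        ) from None
```

Calling tf_type_to_numpy(21) reproduces the message in the log above, which is why pruning the graph below the TensorList ops (or avoiding them) sidesteps the error.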

There are several possible reasons for this to happen, and you can see them in your error log.

I'm not sure which custom TensorFlow topology you are using, but I'm assuming it's incompatible. It would be good to cross-check your model against OpenVINO's supported topologies:

This is our official tutorial's video which might help you:




Hello, thanks this cleared up a bunch of my issues.

It turns out the LSTM layer in Keras wasn't compatible for some reason, so for now I've changed to the Keras TCN layer, which I know is compatible since it is listed as an accepted network topology. Once I changed the model, it fully converted, but now I'm having issues actually using it.

When I try to import the model in Python, I am getting:

Traceback (most recent call last):
  File "", line 9, in <module>
    exec_net = ie.load_network(network=net, device_name="MYRIAD", num_requests=2)
  File "ie_api.pyx", line 178, in openvino.inference_engine.ie_api.IECore.load_network
  File "ie_api.pyx", line 187, in openvino.inference_engine.ie_api.IECore.load_network
RuntimeError: Failed to compile layer "StatefulPartitionedCall/sequential/tcn/residual_block_0/conv1D_0/Pad": AssertionFailed: layer->pads_begin.size() == 4

For reference this is the only code leading up to it:

from openvino.inference_engine import IENetwork, IEPlugin, IECore  

ie = IECore()
net = ie.read_network(model="Model1/saved_model.xml", weights="Model1/saved_model.bin")
exec_net = ie.load_network(network=net, device_name="MYRIAD", num_requests=2) 
The issue could be that I'm trying to run this model on a Raspberry Pi with OpenVINO 2020.3, because that's the latest version I can download for the Pi. However, when I build the model on my computer, the latest version is 2020.4, which has the TensorFlow 2 support.
Does it look like the difference between building the model with 2020.4 and importing it with 2020.3 is the problem, or is there another issue with the model?
Thanks again, 

Building with 2020.4 and importing into 2020.3 should pose no problems if they are on the same OS platform and using the same version of TF.

If you are building with 2020.3 (on Raspbian) and importing into 2020.4 on, let's say, Windows, I believe this would cause a conflict, since they are on different platforms with different toolkit packages.

In addition, if you are going to use TF2, you need to ensure the imported saved model was also trained using TF2.




I'm building in 2020.4 Linux Openvino, and converting in the same environment. I know that I'm using TF2 on this machine for both the training and converting of the model.

The Pi is what has OpenVINO 2020.3. Since both are technically Linux builds of OpenVINO, I didn't think this would be a problem?

Also, I'm including the TensorBoard logs for the model before I converted it, since I'm also confused why there is even a StatefulPartitionedCall operation in the final model.



Although Raspbian and Ubuntu are both Linux distributions, you need to keep in mind that they target different purposes and architectures.

For instance, consider their bootloader processes. In Debian (and other Linux OSes such as Ubuntu), you use GRUB to configure booting. On Raspbian, the configuration is entirely different, with many parameters set in /boot/config.txt.

The RPi uses a different architecture than Intel-based PCs. This means that .deb (installable binary) packages must be built specifically for the RPi's ARM architecture. If a package is available in the Raspbian repository, it should (usually) install just like on any other Debian-based system. If not, you may have to build it yourself from source, which can be a challenge.

If you look, OpenVINO has different toolkit packages for Raspbian and for Linux. This indicates there are definite differences in the toolkit architecture. Hence, sharing a model between these two platforms has a high chance of provoking conflicts.
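A quick way to confirm the architecture difference is to compare the machine type reported on each device; expect something like x86_64 on the desktop and armv7l or aarch64 on the Pi:

```shell
# Print the CPU architecture; run this on both the desktop and the Pi.
uname -m

# On Debian-based systems, also show the package architecture dpkg expects
# (skipped silently where dpkg is unavailable).
dpkg --print-architecture 2>/dev/null || true
```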

Linux OS:

Raspbian OS:




Okay, I realized that the architecture difference could be the issue and started looking into this issue.

I found this forum post, which talks about a known issue with the Pi on this version:

"There is an incompatibility issue between the OpenVINO™ Toolkit 2020.3 version (for RaspbianOS) and IR version 10 files, so you should add the flag --generate_deprecated_IR_V7 when converting the model to IR format."
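With that flag added, my conversion command would look roughly like this (a sketch; the script path is the default 2020.4 install location and the model directory is a placeholder):

```shell
# Hypothetical paths -- adjust to your own install and model locations.
MO=/opt/intel/openvino_2020.4.287/deployment_tools/model_optimizer/mo_tf.py

python3 "$MO" \
    --saved_model_dir ~/Downloads/saved_model \
    --input_shape [1,120,20] \
    --generate_deprecated_IR_V7
```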

So, on the 2020.4 OpenVINO Linux distribution, I added this flag to convert the model to a hopefully compatible format, and I ran into the same sort of error I got previously when trying to run the already-converted model on the Pi:

[ WARNING ]  Use of deprecated cli option --generate_deprecated_IR_V7 detected. Option use in the following releases will be fatal.
Progress: [...............     ]  76.00% done
[ ERROR ]  -------------------------------------------------
[ ERROR ]  ----------------- INTERNAL ERROR ----------------
[ ERROR ]  Unexpected exception happened.
[ ERROR ]  Please contact Model Optimizer developers and forward the following information:
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID (<class 'extensions.back.PadToV7.PadToV7'>)": Fill value is not constants for node "StatefulPartitionedCall/sequential/tcn/residual_block_0/conv1D_0/Pad"
[ ERROR ]  Traceback (most recent call last):
  File "/opt/intel/openvino_2020.4.287/deployment_tools/model_optimizer/mo/utils/", line 288, in apply_transform
    for_graph_and_each_sub_graph_recursively(graph, replacer.find_and_replace_pattern)
  File "/opt/intel/openvino_2020.4.287/deployment_tools/model_optimizer/mo/middle/", line 58, in for_graph_and_each_sub_graph_recursively
  File "/opt/intel/openvino_2020.4.287/deployment_tools/model_optimizer/extensions/back/", line 43, in find_and_replace_pattern
    assert fill_value is not None, 'Fill value is not constants for node "{}"'.format(pad_name)
AssertionError: Fill value is not constants for node "StatefulPartitionedCall/sequential/tcn/residual_block_0/conv1D_0/Pad"

This at least tells me there is some connection between the Pi's version and having the StatefulPartitionedCall layer in my model. Should I be trying a different workaround, or is there a larger issue with my current model or architecture?



Hi Port,

Just to be clear: are you converting and running your model on the Pi? Or have you converted your model on your Linux system and are then trying to deploy the model on your Pi?

Either way, can you please send me the Model Optimizer command you used? 

If you are able to attach your model, please do so. I can also send you a PM and you can send it there if you prefer to not share it publicly.

Best Regards,
