R5 Model Optimizer reports multiple inputs, but there is only one input in the model

I've got a Keras/TF model for a 2D U-Net, saved as a TensorFlow Serving model protobuf. When I try to run the Model Optimizer (MO), I get an error that there are multiple inputs and not enough shapes specified. However, there is only one input to the model, so I'm not sure what is wrong. I have attached the model as a zip file.


(tf112_mkl_p36) [bduser@merlin-param01 FP32]$ python /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/ --saved_model_dir ../../saved_2dunet_model_protobuf/ --input_shape=[1,144,144,4]
Model Optimizer arguments:
Common parameters:
    - Path to the Input Model:     None
    - Path for generated IR:     /home/bduser/tony/unet/single-node/openvino_saved_model/FP32/.
    - IR output name:     saved_model
    - Log level:     ERROR
    - Batch:     Not specified, inherited from the model
    - Input layers:     Not specified, inherited from the model
    - Output layers:     Not specified, inherited from the model
    - Input shapes:     [1,144,144,4]
    - Mean values:     Not specified
    - Scale values:     Not specified
    - Scale factor:     Not specified
    - Precision of IR:     FP32
    - Enable fusing:     True
    - Enable grouped convolutions fusing:     True
    - Move mean values to preprocess section:     False
    - Reverse input channels:     False
TensorFlow specific parameters:
    - Input model in text protobuf format:     False
    - Offload unsupported operations:     False
    - Path to model dump for TensorBoard:     None
    - List of shared libraries with TensorFlow custom layers implementation:     None
    - Update the configuration file with input/output node names:     None
    - Use configuration file used to generate the model with Object Detection API:     None
    - Operations to offload:     None
    - Patterns to offload:     None
    - Use the config file:     None
Model Optimizer version:
[ ERROR ]  No or multiple placeholders in the model, but only one shape is provided, cannot set it.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #32.
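For anyone hitting the same message: the check MO is performing here can be pictured with a small pure-Python sketch (this is not OpenVINO source code, and the name `MRImages` is made up). A single `--input_shape` can only be bound when exactly one Placeholder is found in the graph; a Keras graph often carries an extra placeholder such as the learning-phase flag, which can trigger this error even though the model has one logical input.

```python
# Toy illustration (not OpenVINO code) of why "--input_shape [1,144,144,4]"
# alone can fail: the shape can only be bound when the graph contains
# exactly one Placeholder op.
def bind_input_shape(ops, shape):
    """ops: list of (name, op_type) tuples; shape: list of ints."""
    placeholders = [name for name, op_type in ops if op_type == "Placeholder"]
    if len(placeholders) != 1:
        raise ValueError(
            "No or multiple placeholders in the model, "
            "but only one shape is provided, cannot set it."
        )
    return {placeholders[0]: shape}

# One placeholder: the shape binds cleanly.
graph = [("MRImages", "Placeholder"), ("conv1", "Conv2D")]
print(bind_input_shape(graph, [1, 144, 144, 4]))
# {'MRImages': [1, 144, 144, 4]}
```

If the real graph does carry a second placeholder, naming the intended input explicitly (MO's `--input` flag) is the usual way to disambiguate.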


6 Replies

@G Anthony R. (Intel) I tried to examine the structure of your network. However, your ".pb" file could not be processed successfully with "summarize_graph":

[libprotobuf ERROR external/protobuf_archive/src/google/protobuf/] Error parsing text-format tensorflow.GraphDef: 1:1: Invalid control characters encountered in text.
[libprotobuf ERROR external/protobuf_archive/src/google/protobuf/] Error parsing text-format tensorflow.GraphDef: 1:4: Interpreting non ascii codepoint 199.
[libprotobuf ERROR external/protobuf_archive/src/google/protobuf/] Error parsing text-format tensorflow.GraphDef: 1:4: Expected identifier, got:
2019-01-01 09:06:26.048435: E tensorflow/tools/graph_transforms/] Loading graph '/home/b920405/Downloads/saved_unet_model/saved_model.pb' failed with Can't parse /home/b920405/Downloads/saved_unet_model/saved_model.pb as binary proto

Conversion to ".pbtxt" also results in an error:

Traceback (most recent call last):
  File "", line 22, in
    graphdef_to_pbtxt('saved_model.pb')  # here you can write the name of the file to be converted
  File "", line 16, in graphdef_to_pbtxt
    graph_def.ParseFromString(
google.protobuf.message.DecodeError: Error parsing message

1. Please tell us the version of TensorFlow and Keras you used.
2. Please tell us the URL if you have a GitHub repository you refer to.
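As background on those messages: "Invalid control characters" and "non ascii codepoint" from a text-format parser typically mean a binary protobuf was handed to it. A rough heuristic (hypothetical, not how protobuf itself decides) is that text-format `.pbtxt` files are printable ASCII, while a binary `saved_model.pb` contains wire-format tag bytes almost immediately:

```python
def looks_like_text_proto(data: bytes) -> bool:
    # Text-format protos are printable ASCII (plus tab/newline/CR);
    # binary protos start with field-tag bytes and control characters.
    sample = data[:64]
    return all(32 <= b < 127 or b in (9, 10, 13) for b in sample)

# A text-format GraphDef fragment passes the check.
print(looks_like_text_proto(b'node {\n  name: "input"\n}'))   # True
# Binary wire bytes (note 0xC7 = codepoint 199, as in the log) fail it.
print(looks_like_text_proto(b"\x08\xc7\x02\x1a"))             # False
```

So the errors above are consistent with `summarize_graph` treating the binary SavedModel as text, not with the file being corrupt.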

I tried too and hit the same issues Katsuya-san found. How did you freeze the model? Perhaps it was not frozen; if not, could you try again after freezing?


Thanks.  I managed to get the model to convert to IR by running it through  The GitHub directory for the project is:

I'm now having trouble with the inference script. It tells me that upsampling with nearest neighbors isn't supported, but I believe both nearest-neighbor and bilinear interpolation are supported by OpenVINO. Can you confirm?


(tf112_mkl_p36) [bduser@merlin-param01 openvino_saved_model]$ source /opt/intel/computer_vision_sdk/bin/
[] OpenVINO environment initialized
(tf112_mkl_p36) [bduser@merlin-param01 openvino_saved_model]$ python
[ INFO ] Loading U-Net model to the plugin
[ INFO ] Loading network files:
[ ERROR ] Following layers are not supported by the plugin  for specified device CPU:
 up6/ResizeBilinear, up7/ResizeBilinear, up8/ResizeBilinear, up9/ResizeBilinear
[ ERROR ] Please try to specify cpu extensions library path in sample's command line parameters using -l or --cpu_extension command line argument


This is with TensorFlow 1.12 and Keras 2.2.4.
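Loosely speaking, what the plugin is doing in that error is diffing the network's layer types against the set the device implements natively; whatever is left over must be supplied by a cpu_extension library. A toy version of that check (not the actual Inference Engine API; layer-type names here are illustrative):

```python
def unsupported_layers(network_layers, device_supported):
    """network_layers: {layer_name: layer_type}.
    Returns layer names whose type the device does not implement."""
    return sorted(name for name, ltype in network_layers.items()
                  if ltype not in device_supported)

# Hypothetical network and CPU-supported type set.
net = {"up6/ResizeBilinear": "Interp", "conv1": "Convolution"}
cpu = {"Convolution", "ReLU", "Pooling"}
print(unsupported_layers(net, cpu))  # ['up6/ResizeBilinear']
```

Layers reported this way are exactly the ones the `-l` / `--cpu_extension` argument is meant to cover.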






@G Anthony R. (Intel) Please try the code below. Please change the path to "" yourself.

plugin = IEPlugin(device="CPU")
plugin.add_cpu_extension("")
net = IENetwork(model="xxx.xml", weights="xxx.bin")

Thanks. Do I need to put the absolute path in for the shared library?


All I can find is ./deployment_tools/inference_engine/lib/centos_7.4/intel64/

Is that correct?  It is giving me the error that the resource isn't available.

Also, this is the code I used initially to freeze the model. I'm not sure why this script didn't work; it's the same one Dmitry was using, and it worked in the past. Can you suggest an update to it so that I don't have to have the extra step of running through the script?






Hi Tony,

> All I can find is ./deployment_tools/inference_engine/lib/centos_7.4/intel64/

You may also try to rebuild from source; the libcpu_extension makefile is part of the samples project.

source  ~/intel/computer_vision_sdk/bin/
cd  ~/intel/computer_vision_sdk/deployment_tools/inference_engine/samples
mkdir build 
cd build
cmake ..
make -j13

. . .

[ 32%] Linking CXX executable ../intel64/Release/hello_autoresize_classification
[ 32%] Built target end2end_video_analytics_opencv
[ 33%] Building CXX object speech_sample/CMakeFiles/speech_sample.dir/main.cpp.o
[ 34%] Building CXX object human_pose_estimation_demo/CMakeFiles/human_pose_estimation_demo.dir/src/render_human_pose.cpp.o
[ 34%] Built target hello_autoresize_classification
[ 35%] Building CXX object human_pose_estimation_demo/CMakeFiles/human_pose_estimation_demo.dir/main.cpp.o
[ 36%] Linking CXX shared library ../intel64/Release/lib/
[ 36%] Built target ie_cpu_extension
. . . 



> Do I need to put the absolute path in for the shared library?

It needs to be on LD_LIBRARY_PATH; it is probably safer to use the absolute path for first tests.
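The reason the absolute path is safer: when given a bare library name, the dynamic loader walks the LD_LIBRARY_PATH directories left to right and takes the first hit, so a missing or mis-ordered entry silently yields "resource not available". A pure-Python caricature of that search (not actual ld.so behavior; paths are made up):

```python
def find_library(name, ld_library_path, dir_contents):
    """ld_library_path: colon-separated directory string.
    dir_contents: {directory: set of file names} standing in for the
    filesystem. Returns the first matching full path, else None."""
    for d in ld_library_path.split(":"):
        if name in dir_contents.get(d, set()):
            return f"{d}/{name}"
    return None

dirs = {"/opt/lib": {"libcpu_extension.so"}}
print(find_library("libcpu_extension.so", "/usr/lib:/opt/lib", dirs))
# /opt/lib/libcpu_extension.so
print(find_library("libcpu_extension.so", "/usr/lib", dirs))
# None
```

Passing the absolute path to add_cpu_extension sidesteps this search entirely, which is why it is the better first test.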