Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision-related on Intel® platforms.

R5 Model Optimizer reports multiple inputs, but there is only one input in the model

GAnthony_R_Intel
Employee

I've got a Keras/TF model for a 2D U-Net. It is saved as a TensorFlow Serving model protobuf. When I try to use the MO I get an error that there are multiple inputs and not enough shapes specified. However, there is only one input to the model.  I'm not sure what is wrong. I have attached the model as a zip file.

 

(tf112_mkl_p36) [bduser@merlin-param01 FP32]$ python /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py --saved_model_dir ../../saved_2dunet_model_protobuf/ --input_shape=[1,144,144,4]
Model Optimizer arguments:
Common parameters:
    - Path to the Input Model:     None
    - Path for generated IR:     /home/bduser/tony/unet/single-node/openvino_saved_model/FP32/.
    - IR output name:     saved_model
    - Log level:     ERROR
    - Batch:     Not specified, inherited from the model
    - Input layers:     Not specified, inherited from the model
    - Output layers:     Not specified, inherited from the model
    - Input shapes:     [1,144,144,4]
    - Mean values:     Not specified
    - Scale values:     Not specified
    - Scale factor:     Not specified
    - Precision of IR:     FP32
    - Enable fusing:     True
    - Enable grouped convolutions fusing:     True
    - Move mean values to preprocess section:     False
    - Reverse input channels:     False
TensorFlow specific parameters:
    - Input model in text protobuf format:     False
    - Offload unsupported operations:     False
    - Path to model dump for TensorBoard:     None
    - List of shared libraries with TensorFlow custom layers implementation:     None
    - Update the configuration file with input/output node names:     None
    - Use configuration file used to generate the model with Object Detection API:     None
    - Operations to offload:     None
    - Patterns to offload:     None
    - Use the config file:     None
Model Optimizer version:     1.5.12.49d067a0
[ ERROR ]  No or multiple placeholders in the model, but only one shape is provided, cannot set it.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #32.
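For reference, the placeholders the Model Optimizer is counting can be listed straight from the SavedModel. This is a minimal sketch against the TF 1.12 Python API, using the export path from the command above:

import tensorflow as tf

# Load the SavedModel and print every Placeholder op; a Keras export often
# carries an extra learning-phase placeholder besides the actual image input.
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING],
                               "../../saved_2dunet_model_protobuf/")
    for op in sess.graph.get_operations():
        if op.type == "Placeholder":
            print(op.name, op.outputs[0].get_shape())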

 

Hyodo__Katsuya
Innovator
@G Anthony R. (Intel) I tried to examine the structure of your network. However, your ".pb" file could not be processed successfully with "summarize_graph":

[libprotobuf ERROR external/protobuf_archive/src/google/protobuf/text_format.cc:307] Error parsing text-format tensorflow.GraphDef: 1:1: Invalid control characters encountered in text.
[libprotobuf ERROR external/protobuf_archive/src/google/protobuf/text_format.cc:307] Error parsing text-format tensorflow.GraphDef: 1:4: Interpreting non ascii codepoint 199.
[libprotobuf ERROR external/protobuf_archive/src/google/protobuf/text_format.cc:307] Error parsing text-format tensorflow.GraphDef: 1:4: Expected identifier, got:
2019-01-01 09:06:26.048435: E tensorflow/tools/graph_transforms/summarize_graph_main.cc:320] Loading graph '/home/b920405/Downloads/saved_unet_model/saved_model.pb' failed with Can't parse /home/b920405/Downloads/saved_unet_model/saved_model.pb as binary proto

Conversion to ".pbtxt" also results in an error:

Traceback (most recent call last):
  File "tfconverter.py", line 22, in <module>
    graphdef_to_pbtxt('saved_model.pb')  # here you can write the name of the file to be converted
  File "tfconverter.py", line 16, in graphdef_to_pbtxt
    graph_def.ParseFromString(f.read())
google.protobuf.message.DecodeError: Error parsing message

1. Please tell us the version of TensorFlow and Keras you used.
2. Please tell us the URL if you have a GitHub repository you refer to.
nikos1
Valued Contributor I

I tried too and hit the same issues Katsuya-san found. How did you freeze the model? Or maybe it was not frozen; if not, could you try again after freezing?

GAnthony_R_Intel
Employee

Thanks.  I managed to get the model to convert to IR by running it through freeze_graph.py.  The GitHub directory for the project is:  https://github.com/IntelAI/unet/tree/master/single-node/openvino_saved_model
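Roughly the following, where the output node name and the frozen file name are placeholders for what I used (freeze_graph.py ships with TensorFlow under tensorflow/python/tools):

python -m tensorflow.python.tools.freeze_graph \
    --input_saved_model_dir ../../saved_2dunet_model_protobuf/ \
    --output_node_names <output_node_name> \
    --output_graph frozen_2dunet.pb
python /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py \
    --input_model frozen_2dunet.pb --input_shape [1,144,144,4]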

I'm now having trouble with the inference script. It tells me that upsampling with nearest neighbors isn't supported, but I believe both nearest-neighbor and bilinear interpolation are supported by OpenVINO. Can you confirm?

 

(tf112_mkl_p36) [bduser@merlin-param01 openvino_saved_model]$ source /opt/intel/computer_vision_sdk/bin/setupvars.sh
[setupvars.sh] OpenVINO environment initialized
(tf112_mkl_p36) [bduser@merlin-param01 openvino_saved_model]$ python inference_openvino.py
[ INFO ] Loading U-Net model to the plugin
[ INFO ] Loading network files:
    ./FP32/saved_model.xml
    ./FP32/saved_model.bin
[ ERROR ] Following layers are not supported by the plugin  for specified device CPU:
 up6/ResizeBilinear, up7/ResizeBilinear, up8/ResizeBilinear, up9/ResizeBilinear
[ ERROR ] Please try to specify cpu extensions library path in sample's command line parameters using -l or --cpu_extension command line argument

 

This is with TensorFlow 1.12 and Keras 2.2.4.

 

Best,

-Tony

 

Hyodo__Katsuya
Innovator
@G Anthony R. (Intel) Please try the following. Change the path to "libcpu_extension.so" yourself.

plugin = IEPlugin(device="CPU")
plugin.add_cpu_extension("libcpu_extension.so")
net = IENetwork(model="xxx.xml", weights="xxx.bin")
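A slightly fuller sketch, based on the R5-era Python samples (the model/weights paths are taken from your log above; the extension path is a placeholder):

from openvino.inference_engine import IENetwork, IEPlugin

plugin = IEPlugin(device="CPU")
plugin.add_cpu_extension("libcpu_extension.so")  # an absolute path is safest
net = IENetwork(model="./FP32/saved_model.xml", weights="./FP32/saved_model.bin")

# The samples of that era check for unsupported layers before loading:
supported_layers = plugin.get_supported_layers(net)
not_supported = [l for l in net.layers.keys() if l not in supported_layers]
if not_supported:
    print("Layers still unsupported on CPU:", not_supported)
exec_net = plugin.load(network=net)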
GAnthony_R_Intel
Employee

Thanks. Do I need to put the absolute path in for the shared library?

 

All I can find is ./deployment_tools/inference_engine/lib/centos_7.4/intel64/libcpu_extension_avx2.so

Is that correct?  It is giving me the error that the resource isn't available.

Also, this is the code I used initially to freeze the model (https://github.com/IntelAI/unet/blob/8752f15ab247aad0ad6caa0d9b460780f00c7ead/single-node/helper_scripts/convert_keras_to_frozen_tf_model.py). I'm not sure why this script didn't work; it's the same one Dmitry was using, and it worked in the past. Can you suggest an update to it so that I don't need the extra step of running through the freeze_graph.py script?
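For context, the core of that script is the standard Keras-to-frozen-graph conversion, roughly like this (a sketch against the TF 1.12 / Keras 2.2.4 APIs; the model file name is a placeholder, and custom_objects may be needed for custom losses or metrics):

import tensorflow as tf
from tensorflow.python.framework import graph_util
from keras import backend as K
from keras.models import load_model

K.set_learning_phase(0)  # inference mode; avoids an extra learning-phase placeholder
model = load_model("2dunet_model.h5")  # placeholder file name
sess = K.get_session()
frozen = graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), [out.op.name for out in model.outputs])
tf.train.write_graph(frozen, ".", "frozen_model.pb", as_text=False)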

 

Thanks.

-Tony

 

nikos1
Valued Contributor I

Hi Tony,

> All I can find is ./deployment_tools/inference_engine/lib/centos_7.4/intel64/libcpu_extension_avx2.so

You may also try to rebuild from source; the libcpu_extension makefile is part of the samples project:

source  ~/intel/computer_vision_sdk/bin/setupvars.sh
cd  ~/intel/computer_vision_sdk/deployment_tools/inference_engine/samples
mkdir build 
cd build
cmake ..
make -j13

. . .

[ 32%] Linking CXX executable ../intel64/Release/hello_autoresize_classification
[ 32%] Built target end2end_video_analytics_opencv
[ 33%] Building CXX object speech_sample/CMakeFiles/speech_sample.dir/main.cpp.o
[ 34%] Building CXX object human_pose_estimation_demo/CMakeFiles/human_pose_estimation_demo.dir/src/render_human_pose.cpp.o
[ 34%] Built target hello_autoresize_classification
[ 35%] Building CXX object human_pose_estimation_demo/CMakeFiles/human_pose_estimation_demo.dir/main.cpp.o
[ 36%] Linking CXX shared library ../intel64/Release/lib/libcpu_extension.so
[ 36%] Built target ie_cpu_extension
. . . 

 

 

> Do I need to put the absolute path in for the shared library?

It needs to be in your LD_LIBRARY_PATH; it's probably safer to use the absolute path for first tests.
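For example (a sketch; adjust the path to wherever you built the samples):

export LD_LIBRARY_PATH=$HOME/intel/computer_vision_sdk/deployment_tools/inference_engine/samples/build/intel64/Release/lib:$LD_LIBRARY_PATH
python inference_openvino.py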

nikos
