I've got a Keras/TF model for a 2D U-Net, saved as a TensorFlow Serving SavedModel protobuf. When I try to run it through the Model Optimizer (MO), I get an error saying the model has no or multiple placeholders while only one input shape was specified. However, the model has only one input, so I'm not sure what is wrong. I have attached the model as a zip file.
(tf112_mkl_p36) [bduser@merlin-param01 FP32]$ python /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py --saved_model_dir ../../saved_2dunet_model_protobuf/ --input_shape=[1,144,144,4]
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: None
- Path for generated IR: /home/bduser/tony/unet/single-node/openvino_saved_model/FP32/.
- IR output name: saved_model
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: [1,144,144,4]
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Offload unsupported operations: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Operations to offload: None
- Patterns to offload: None
- Use the config file: None
Model Optimizer version: 1.5.12.49d067a0
[ ERROR ] No or multiple placeholders in the model, but only one shape is provided, cannot set it.
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #32.
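For reference, the placeholders MO is counting can be listed with a short snippet like this (a rough sketch, assuming TF 1.12 and that the SavedModel was exported with the default "serve" tag):

import tensorflow as tf

# Print every Placeholder op in the SavedModel graph; any placeholder beyond
# the image input (e.g. a Keras learning-phase flag) is what MO is counting.
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, ["serve"], "../../saved_2dunet_model_protobuf/")
    for op in sess.graph.get_operations():
        if op.type == "Placeholder":
            print(op.name, op.outputs[0].shape)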
I tried it too and hit the same issue Katsuya-san found. How did you freeze the model? Or perhaps it was not frozen; if not, could you try again after freezing it?
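For reference, freezing the exported SavedModel could look roughly like the sketch below (a minimal sketch, assuming TF 1.12, the default "serve" tag, and an illustrative output node name; substitute the real output op of the U-Net):

import tensorflow as tf
from tensorflow.python.framework import graph_util, graph_io

with tf.Session(graph=tf.Graph()) as sess:
    # Load the TF Serving SavedModel
    tf.saved_model.loader.load(sess, ["serve"], "saved_2dunet_model_protobuf")
    # Fold variables into constants so MO gets a single frozen GraphDef;
    # "PredictionMask/Sigmoid" is only a stand-in for the real output op name
    frozen = graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), ["PredictionMask/Sigmoid"])
    graph_io.write_graph(frozen, ".", "frozen_model.pb", as_text=False)

mo_tf.py can then be pointed at frozen_model.pb with --input_model instead of --saved_model_dir.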
Thanks. I managed to get the model to convert to IR by running it through freeze_graph.py. The GitHub directory for the project is: https://github.com/IntelAI/unet/tree/master/single-node/openvino_saved_model
I'm now having trouble with the inference script. It is telling me that it doesn't support the upsampling layers, but I believe both nearest-neighbor and bilinear interpolation are supported by OpenVINO. Can you confirm?
(tf112_mkl_p36) [bduser@merlin-param01 openvino_saved_model]$ source /opt/intel/computer_vision_sdk/bin/setupvars.sh
[setupvars.sh] OpenVINO environment initialized
(tf112_mkl_p36) [bduser@merlin-param01 openvino_saved_model]$ python inference_openvino.py
[ INFO ] Loading U-Net model to the plugin
[ INFO ] Loading network files:
./FP32/saved_model.xml
./FP32/saved_model.bin
[ ERROR ] Following layers are not supported by the plugin for specified device CPU:
up6/ResizeBilinear, up7/ResizeBilinear, up8/ResizeBilinear, up9/ResizeBilinear
[ ERROR ] Please try to specify cpu extensions library path in sample's command line parameters using -l or --cpu_extension command line argument
This is with TensorFlow 1.12 and Keras 2.2.4.
Best,
-Tony
Thanks. Do I need to put the absolute path in for the shared library?
All I can find is ./deployment_tools/inference_engine/lib/centos_7.4/intel64/libcpu_extension_avx2.so
Is that correct? It is giving me an error that the resource isn't available.
Also, this is the code I used initially to freeze the model (https://github.com/IntelAI/unet/blob/8752f15ab247aad0ad6caa0d9b460780f00c7ead/single-node/helper_scripts/convert_keras_to_frozen_tf_model.py). I'm not sure why this script didn't work. It's the same one Dmitry was using, and it worked in the past. Can you suggest an update to it so that I don't need the extra step of running through the freeze_graph.py script?
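Ideally the conversion would be a single step, roughly along these lines (a rough, untested sketch with TF 1.12 / Keras 2.2.4; the file name is just illustrative):

import keras
from keras import backend as K
from tensorflow.python.framework import graph_util, graph_io

# Force inference mode before loading, so no learning-phase placeholder
# ends up in the graph
K.set_learning_phase(0)
# compile=False skips registering custom losses/metrics, which aren't needed
# for freezing
model = keras.models.load_model("saved_2dunet_model.h5", compile=False)

sess = K.get_session()
frozen = graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), [out.op.name for out in model.outputs])
graph_io.write_graph(frozen, ".", "frozen_model.pb", as_text=False)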
Thanks.
-Tony
Hi Tony,
> All I can find is ./deployment_tools/inference_engine/lib/centos_7.4/intel64/libcpu_extension_avx2.so
You may also try to rebuild it from source; the libcpu_extension makefile is part of the samples project:
source ~/intel/computer_vision_sdk/bin/setupvars.sh
cd ~/intel/computer_vision_sdk/deployment_tools/inference_engine/samples
mkdir build
cd build
cmake ..
make -j13
. . .
[ 32%] Linking CXX executable ../intel64/Release/hello_autoresize_classification
[ 32%] Built target end2end_video_analytics_opencv
[ 33%] Building CXX object speech_sample/CMakeFiles/speech_sample.dir/main.cpp.o
[ 34%] Building CXX object human_pose_estimation_demo/CMakeFiles/human_pose_estimation_demo.dir/src/render_human_pose.cpp.o
[ 34%] Built target hello_autoresize_classification
[ 35%] Building CXX object human_pose_estimation_demo/CMakeFiles/human_pose_estimation_demo.dir/main.cpp.o
[ 36%] Linking CXX shared library ../intel64/Release/lib/libcpu_extension.so
[ 36%] Built target ie_cpu_extension
. . .
> Do I need to put the absolute path in for the shared library?
It needs to be on the LD_LIBRARY_PATH; it is probably safer to use the absolute path for the first tests.
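Once you have the library, the extension can also be loaded explicitly in the Python script, roughly like this (a minimal sketch against the 2018 R5 Python API; the extension path below is just the prebuilt one you found, adjust to wherever your .so lives):

from openvino.inference_engine import IENetwork, IEPlugin

plugin = IEPlugin(device="CPU")
# Absolute path to the CPU extension (prebuilt AVX2 one here, or the
# libcpu_extension.so built from the samples project)
plugin.add_cpu_extension(
    "/opt/intel/computer_vision_sdk/deployment_tools/inference_engine/"
    "lib/centos_7.4/intel64/libcpu_extension_avx2.so")

net = IENetwork(model="./FP32/saved_model.xml", weights="./FP32/saved_model.bin")
exec_net = plugin.load(network=net)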
nikos