
TF model (MO) optimization problem

Savsunenko__Alex
Beginner

Hello,

I am trying to test the Model Optimizer (MO) on my DeepLab/MobileNetV2 model. I am aware of this thread, but the solution there doesn't help me. Intel, please assist ;-)

The model is untrained and was exported with the export script from the official DeepLab repository; the input node is input:0 and the output is segmap:0. Link

python mo.py --input_model /data/1.pb --input_shape "(1,513,513,3)" --log_level=DEBUG --data_type FP32 --output segmap  --input input --scale 1 --model_name test --framework tf --output_dir ./

And error:

[ ERROR ]  Stopped shape/value propagation at "GreaterEqual" node.

tensorflow.python.framework.errors_impl.InvalidArgumentError: Input 0 of node GreaterEqual was passed int64 from add_1_port_0_ie_placeholder:0 incompatible with expected int32.
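In case it helps, this is how the failing node can be inspected in the frozen graph (a minimal sketch, assuming TensorFlow 1.x):

import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile('/data/1.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# Find every GreaterEqual node and list the ops feeding it; the int64/int32
# mismatch should show up on one of these inputs.
nodes_by_name = {n.name: n for n in graph_def.node}
for node in graph_def.node:
    if node.op == 'GreaterEqual':
        print(node.name)
        for inp in node.input:
            src = nodes_by_name.get(inp.split(':')[0].lstrip('^'))
            if src is not None:
                print('  input:', inp, 'op:', src.op)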


Thanks a lot! 

Alex

Zhen_Z_Intel
Employee

Hello Alex,


Please use the command below to generate the IR file:

python mo_tf.py --input_model 1.pb --input 0:MobilenetV2/Conv/Conv2D --output ArgMax --input_shape [1,513,513,3]


By the way, the IR file only contains the main workload of the DeepLab model. If you need to run inference with the whole model, please use TF operations to finish the remaining pre/post-processing; you can refer to my repo on GitHub for DeepLabV3-MobileNetV2 (a rough sketch follows below the link):

https://github.com/FionaZZ92/OpenVINO
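For reference, a rough sketch of stitching that pre/post-processing around the converted IR in Python (assuming the cut points from the command above, a 2019-or-later OpenVINO Python API, and OpenCV; the file names and the MobileNetV2 scaling shown here are illustrative, not taken from the repo):

import cv2
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model='test.xml', weights='test.bin')
input_name = next(iter(net.inputs))
output_name = next(iter(net.outputs))
exec_net = ie.load_network(network=net, device_name='CPU')

# Pre-processing that was cut out of the graph: resize to 513x513 and
# apply the usual MobileNetV2 input scaling, 2/255 * x - 1.
img = cv2.cvtColor(cv2.imread('input.jpg'), cv2.COLOR_BGR2RGB)
h, w = img.shape[:2]
blob = cv2.resize(img, (513, 513)).transpose(2, 0, 1)[np.newaxis]
blob = blob.astype(np.float32) * (2.0 / 255.0) - 1.0

# Post-processing: the IR already ends at ArgMax, so the output is a class
# map; just resize it back to the original image size.
res = exec_net.infer({input_name: blob})
segmap = res[output_name].squeeze().astype(np.uint8)
segmap = cv2.resize(segmap, (w, h), interpolation=cv2.INTER_NEAREST)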

Savsunenko__Alex
Beginner

Hello Fiona,

Thanks for your help and the GitHub repo - it makes much more sense now.

Can you please also explain one more thing: I'm currently getting a "libcpu_extension.so: cannot open shared object file: No such file or directory" error, and there is no such file anywhere under /opt/intel/. Do I have to run the build myself? Isn't the compiled file part of the distribution?

Thanks, Alex.

Zhen_Z_Intel
Employee

Hello Alex,

In the OpenVINO release, libcpu_extension.so is built from our inference engine sample "extension". You have to build this sample first; after that you can pick up the dynamic library from ${INTEL_CVSDK_DIR}/deployment_tools/inference_engine/samples/intel64/Release/lib/libcpu_extension.so.
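A typical out-of-source build with CMake looks like this (a sketch; the exact paths and build scripts can vary between OpenVINO releases):

cd ${INTEL_CVSDK_DIR}/deployment_tools/inference_engine/samples
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make

Once built, pass the absolute path of libcpu_extension.so to the samples via their -l option, or load it from Python with IECore.add_extension in releases that provide IECore.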
