Hello,
I am trying to run the Model Optimizer (MO) on my DeepLab/MobileNetV2 model. I am aware of this thread, but the solution doesn't help me. Intel, please, assist ;-)
The model is untrained, exported with the export script from the official DeepLab repo; the input node is input:0, the output is segmap:0. Link
python mo.py --input_model /data/1.pb --input_shape "(1,513,513,3)" --log_level=DEBUG --data_type FP32 --output segmap --input input --scale 1 --model_name test --framework tf --output_dir ./
And the error:
[ ERROR ] Stopped shape/value propagation at "GreaterEqual" node.
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input 0 of node GreaterEqual was passed int64 from add_1_port_0_ie_placeholder:0 incompatible with expected int32.
Thanks a lot!
Alex
Hello Alex,
Please use the command below to generate the IR file:
python mo_tf.py --input_model 1.pb --input 0:MobilenetV2/Conv/Conv2D --output ArgMax --input_shape [1,513,513,3]
This cuts the graph at the backbone's first convolution and at the ArgMax output, dropping the pre-processing sub-graph where shape propagation stopped (the GreaterEqual node). Note that the IR file then contains only the main workload of the DeepLab model; if you need to infer the whole model, please perform the remaining pre/post-processing with TF operations yourself. You can refer to my repo on GitHub for DeepLabV3-MobileNetV2:
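For illustration, here is a minimal sketch of that external pre/post-processing in Python, assuming the usual DeepLab/MobileNetV2 convention (resize keeping aspect ratio, pad to 513x513, scale pixels to [-1, 1]). The function names and the OpenCV dependency are mine, not part of the official scripts; adapt them to your pipeline.

# Minimal sketch of the pre/post-processing left outside the IR.
# Assumes DeepLab's usual MobileNetV2 convention: resize keeping aspect
# ratio, pad to 513x513, scale pixels to [-1, 1]. Illustrative only.
import cv2
import numpy as np

INPUT_SIZE = 513  # matches --input_shape [1,513,513,3]

def preprocess(image):
    h, w = image.shape[:2]
    scale = float(INPUT_SIZE) / max(h, w)
    resized = cv2.resize(image, (int(round(w * scale)), int(round(h * scale))))
    # Pad the short side with zeros up to INPUT_SIZE x INPUT_SIZE.
    padded = np.zeros((INPUT_SIZE, INPUT_SIZE, 3), dtype=np.float32)
    padded[:resized.shape[0], :resized.shape[1]] = resized
    # MobileNetV2 expects inputs in [-1, 1].
    normalized = padded * (2.0 / 255.0) - 1.0
    # NHWC -> NCHW batch of 1, the layout the Inference Engine expects.
    blob = normalized.transpose(2, 0, 1)[np.newaxis, ...]
    return blob, resized.shape[:2]

def postprocess(argmax_out, resized_hw, original_hw):
    # ArgMax gives a per-pixel class-id map for the padded 513x513 input;
    # crop away the padding, then resize back to the original image size
    # with nearest-neighbour so the label ids stay intact.
    seg_map = np.squeeze(argmax_out).astype(np.uint8)
    seg_map = seg_map[:resized_hw[0], :resized_hw[1]]
    return cv2.resize(seg_map, (original_hw[1], original_hw[0]),
                      interpolation=cv2.INTER_NEAREST)

Feed the returned blob to the IR's input, then pass the ArgMax result back through postprocess() to get a segmentation map at the original resolution.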
Hello Fiona,
Thanks for your help and the GitHub repo - it makes much more sense now.
Can you please also explain: I'm currently getting a "libcpu_extension.so: cannot open shared object file: No such file or directory" error, and there's no such file anywhere under /opt/intel/. Do I have to run the build myself? Isn't the compiled file part of the distribution?
Thanks, Alex.
Hello Alex,
In the OpenVINO release, libcpu_extension.so is built from the inference samples' "extension" source. You have to build the samples first, then pick up the dynamic library from ${INTEL_CVSDK_DIR}/deployment_tools/inference_engine/samples/intel64/Release/lib/libcpu_extension.so
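For example, the generic CMake flow for the samples directory looks like this (exact paths can differ between releases, so check your installation):

cd ${INTEL_CVSDK_DIR}/deployment_tools/inference_engine/samples
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make -j$(nproc)

After the build finishes, pass the library path to your application; most of the bundled samples accept it through their -l option.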