Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Cannot convert TensorFlow YOLOv3 Tiny .pb file to IR format for RPi

Mohammed__Nadeem
When I run this command,


sudo python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py --input_model pbmodels/irim.pb --output_dir lrmodels/tiny-YoloV3/FP16/ --data_type FP16 --batch 1 --tensorflow_use_custom_operations_config yolo_v3_tiny_changed.json

this is the error I get.

My environment: OpenVINO 2019 R1, Ubuntu 18.04, TensorFlow 1.12, CUDA 10.1
 

Model Optimizer arguments:
Common parameters:
    - Path to the Input Model:     /home/nadeem/OpenVINO-YoloV3-master/pbmodels/irim.pb
    - Path for generated IR:     /home/nadeem/OpenVINO-YoloV3-master/lrmodels/tiny-YoloV3/FP16/
    - IR output name:     irim
    - Log level:     ERROR
    - Batch:     1
    - Input layers:     Not specified, inherited from the model
    - Output layers:     Not specified, inherited from the model
    - Input shapes:     Not specified, inherited from the model
    - Mean values:     Not specified
    - Scale values:     Not specified
    - Scale factor:     Not specified
    - Precision of IR:     FP16
    - Enable fusing:     True
    - Enable grouped convolutions fusing:     True
    - Move mean values to preprocess section:     False
    - Reverse input channels:     False
TensorFlow specific parameters:
    - Input model in text protobuf format:     False
    - Path to model dump for TensorBoard:     None
    - List of shared libraries with TensorFlow custom layers implementation:     None
    - Update the configuration file with input/output node names:     None
    - Use configuration file used to generate the model with Object Detection API:     None
    - Operations to offload:     None
    - Patterns to offload:     None
    - Use the config file:     /home/nadeem/OpenVINO-YoloV3-master/yolo_v3_tiny_changed.json
Model Optimizer version:     2019.1.0-341-gc9b66a2
/usr/lib/python3/dist-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
[ ERROR ]  Cannot infer shapes or values for node "detector/yolo-v3-tiny/Conv/LeakyRelu".
[ ERROR ]  Op type not registered 'LeakyRelu' in binary running on Nadeem-XPS. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <function tf_native_tf_node_infer at 0x7fc876610048>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "detector/yolo-v3-tiny/Conv/LeakyRelu" node.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.
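
As far as I can tell, the error means the 'LeakyRelu' op is not registered in the TensorFlow 1.12 build that Model Optimizer falls back to for native shape inference on that node (the tf_native_tf_node_infer function mentioned above). For what it's worth, these are the checks I would run to confirm which TensorFlow gets imported and whether it registers LeakyRelu (op_def_registry is a TensorFlow 1.x internal module, so this assumes TF 1.x):

# print the TensorFlow version and the installation that python3 actually picks up
python3 -c "import tensorflow as tf; print(tf.__version__, tf.__file__)"
# check whether the LeakyRelu op is registered in this TensorFlow build (TF 1.x only)
python3 -c "from tensorflow.python.framework import op_def_registry as reg; print('LeakyRelu' in reg.get_registered_ops())"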

 

Shubha_R_Intel
Employee

Dear Mohammed, Nadeem,

Is this a custom-trained model or a pre-trained one?

If you carefully followed these instructions for a pre-trained model, it should work. A custom-trained model should also work, but there may be a bug in the custom-trained path.
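
For reference, the documented flow for a pre-trained tiny YOLOv3 looks roughly like this. I am sketching it from memory, so the repository, script names, and config path below may differ slightly from the exact docs for your release:

# clone the converter repo referenced by the OpenVINO docs (repo name from memory)
git clone https://github.com/mystic123/tensorflow-yolo-v3.git
cd tensorflow-yolo-v3
# freeze the Darknet weights into a TensorFlow .pb (tiny variant)
python3 convert_weights_pb.py --class_names coco.names --weights_file yolov3-tiny.weights --data_format NHWC --tiny
# convert the frozen graph to IR using the tiny-YOLOv3 config shipped with Model Optimizer
python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py \
  --input_model frozen_darknet_yolov3_model.pb \
  --tensorflow_use_custom_operations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/yolo_v3_tiny.json \
  --batch 1 --data_type FP16

For a custom-trained model you would point --class_names at your own names file, swap in your own weights, and adjust the classes value in a copy of yolo_v3_tiny.json, which it looks like you have already done.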

Please report back here regarding your status.

Thanks,

Shubha

 

Mohammed__Nadeem

Hi Shubha,

Yes, it is a custom-trained model for just one class. I'm able to import the same model using Caffe but not TensorFlow.

Best,

Nadeem

Shubha_R_Intel
Employee

Dear Mohammed, Nadeem,

If it's custom-trained, then it's probably an MO bug. Can you kindly attach your custom-trained tiny .pb file as a *.zip? Please allow me to reproduce this issue.
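
Something along these lines should be enough, assuming the .pb is still at the path shown in your log (the archive name is just an example):

# zip the frozen graph before attaching it to this thread
zip irim_pb.zip /home/nadeem/OpenVINO-YoloV3-master/pbmodels/irim.pb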

Thanks kindly,

Shubha
