Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

TensorFlow model to OpenVINO IR conversion fails with errors

Hung__Edwin
Beginner

Hi, I am converting my TensorFlow model to OpenVINO IR but get the errors below:

(testAI) C:\TF_to_IR>python "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo_tf.py" --input_model frozen_inference_graph.pb --output_dir C:\openvino_IR --data_type FP32 --batch 1


Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      C:\TF_to_IR\frozen_inference_graph.pb
        - Path for generated IR:        C:\openvino_IR
        - IR output name:       frozen_inference_graph
        - Log level:    ERROR
        - Batch:        1
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         Not specified, inherited from the model
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       False
TensorFlow specific parameters:
        - Input model in text protobuf format:  False
        - Path to model dump for TensorBoard:   None
        - List of shared libraries with TensorFlow custom layers implementation:        None
        - Update the configuration file with input/output node names:   None
        - Use configuration file used to generate the model with Object Detection API:  None
        - Operations to offload:        None
        - Patterns to offload:  None
        - Use the config file:  None
Model Optimizer version:        2019.2.0-436-gf5827d4
C:\Users\88691\Anaconda3\envs\testAI\lib\site-packages\tensorflow\python\framework\dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
C:\Users\88691\Anaconda3\envs\testAI\lib\site-packages\tensorflow\python\framework\dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
C:\Users\88691\Anaconda3\envs\testAI\lib\site-packages\tensorflow\python\framework\dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
C:\Users\88691\Anaconda3\envs\testAI\lib\site-packages\tensorflow\python\framework\dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
C:\Users\88691\Anaconda3\envs\testAI\lib\site-packages\tensorflow\python\framework\dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
C:\Users\88691\Anaconda3\envs\testAI\lib\site-packages\tensorflow\python\framework\dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
C:\Users\88691\Anaconda3\envs\testAI\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
C:\Users\88691\Anaconda3\envs\testAI\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
C:\Users\88691\Anaconda3\envs\testAI\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
C:\Users\88691\Anaconda3\envs\testAI\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
C:\Users\88691\Anaconda3\envs\testAI\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
C:\Users\88691\Anaconda3\envs\testAI\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
WARNING: Logging before flag parsing goes to stderr.
E1212 12:23:34.616821  2512 infer.py:160] Shape [ 1 -1 -1  3] is not fully defined for output 0 of "image_tensor". Use --input_shape with positive integers to override model input shapes.
E1212 12:23:34.618852  2512 infer.py:180] Cannot infer shapes or values for node "image_tensor".
E1212 12:23:34.618852  2512 infer.py:181] Not all output shapes were inferred or fully defined for node "image_tensor".
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #40.
E1212 12:23:34.618852  2512 infer.py:182]
E1212 12:23:34.619849  2512 infer.py:183] It can happen due to bug in custom shape infer function <function Parameter.__init__.<locals>.<lambda> at 0x0000027CF28D4EA0>.
E1212 12:23:34.619849  2512 infer.py:184] Or because the node inputs have incorrect values/shapes.
E1212 12:23:34.619849  2512 infer.py:185] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
E1212 12:23:34.620846  2512 infer.py:194] Run Model Optimizer with --log_level=DEBUG for more information.
E1212 12:23:34.620846  2512 main.py:307] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "image_tensor" node.
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38.

What does this mean?
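For reference, the shape in the first error line comes from the model itself: the image_tensor placeholder leaves some dimensions undefined. You can confirm this by dumping the placeholder shapes from the frozen graph; a minimal sketch using the TF 1.x API (the file name is the one from the command above):

import tensorflow as tf  # TF 1.x, as in the environment above

# Load the frozen graph and print the declared shape of every Placeholder.
graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    if node.op == "Placeholder":
        dims = [d.size for d in node.attr["shape"].shape.dim]
        print(node.name, dims)  # -1 entries are undefined dimensions

Any -1 entries are the undefined dimensions that Model Optimizer flags; the error message itself suggests overriding them by passing --input_shape with positive integers.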

3 Replies
Luis_at_Intel
Moderator

Hi Hung, Edwin

Thanks for reaching out. May I ask which model you trained and which TensorFlow version you used? If by any chance you used a TensorFlow* Object Detection API model, you have to pass the required parameters specified in the guide here.

For example, if you trained YOLOv3, the command would look something like this:

python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py \
--input_model frozen_inference_graph.pb \
--tensorflow_use_custom_operations_config <>/openvino/deployment_tools/model_optimizer/extensions/front/tf/yolo_v3.json \
--input_shape [1,416,416,3] \
--data_type FP32 \
--reverse_input_channels \
--output_dir C:\openvino_IR \ 
--log_level DEBUG 
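On the Windows install from the first post, an equivalent command would look roughly like this (a sketch only: the paths are the defaults from the original command, and the JSON config and input shape are the YOLOv3 ones from the example above, so substitute the ones matching your model):

python "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo_tf.py" ^
  --input_model frozen_inference_graph.pb ^
  --tensorflow_use_custom_operations_config "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\extensions\front\tf\yolo_v3.json" ^
  --input_shape [1,416,416,3] ^
  --data_type FP32 ^
  --reverse_input_channels ^
  --output_dir C:\openvino_IR ^
  --log_level DEBUG

Note that --input_shape with all-positive dimensions directly addresses the "Shape [ 1 -1 -1  3] is not fully defined" error from the first post.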


If you'd like, please share your model so we can attempt to convert it to IR; you can send it via private message if you don't want to make it public.

Regards,

Luis

Hung__Edwin
Beginner

Hi Luis,

TensorFlow version: 1.5.0

Python version: 3.5.6

TensorFlow trained model: faster_rcnn_inception_v2_coco_2018_01_28

Can I provide my frozen_inference_graph.pb to you, and can you help me convert it to an IR file?

Luis_at_Intel
Moderator

Hi Hung, Edwin

Thanks for the info. I can see that you are using TF v1.15.0, is that correct? We have seen issues with this TF version; if possible, could you try downgrading to TF version 1.14.0? You can also share your frozen model and files for us to try converting to IR; feel free to send me a PM in case you don't want to share them publicly.

You can try converting using a command similar to the following:

python3 ~/intel/openvino/deployment_tools/model_optimizer/mo_tf.py --input_model frozen_inference_graph.pb --tensorflow_use_custom_operations_config faster_rcnn_support_api_v1.14.json --tensorflow_object_detection_api_pipeline_config pipeline.config --reverse_input_channels
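If the conversion succeeds, you can sanity-check the generated IR from Python; a minimal sketch, assuming the 2019 R2 Inference Engine Python API (matching the Model Optimizer version in the log) and the default output names derived from the input model:

from openvino.inference_engine import IECore, IENetwork

# IR file names default to the input model name, per the Model Optimizer log.
model_xml = r"C:\openvino_IR\frozen_inference_graph.xml"
model_bin = r"C:\openvino_IR\frozen_inference_graph.bin"

ie = IECore()
net = IENetwork(model=model_xml, weights=model_bin)
print("Inputs:", {name: info.shape for name, info in net.inputs.items()})

# Loading the network onto a device catches unsupported-layer issues early.
exec_net = ie.load_network(network=net, device_name="CPU")
print("IR loaded successfully")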


Regards,

Luis
