Townsend__Ryan
Beginner
103 Views

Bug report when running mo_tf.py to convert pb file to IR

I'm trying to convert my TensorFlow 2.0 .pb file for a custom model into IR for use on the NCS2.

First I was getting an error saying that an input shape was incorrect, so I tried using both the -b 1 and --input_shape="[1,360,480,3]" options, but in both cases I get the following error:

(pyguy2) C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer>python mo_tf.py --input_model tf_modelv2.pb --input_shape="[1, 360, 480, 3]"
Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\tf_modelv2.pb
        - Path for generated IR:        C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\.
        - IR output name:       tf_modelv2
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         [1, 360, 480, 3]
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       False
TensorFlow specific parameters:
        - Input model in text protobuf format:  False
        - Path to model dump for TensorBoard:   None
        - List of shared libraries with TensorFlow custom layers implementation:        None
        - Update the configuration file with input/output node names:   None
        - Use configuration file used to generate the model with Object Detection API:  None
        - Operations to offload:        None
        - Patterns to offload:  None
        - Use the config file:  None
Model Optimizer version:        2019.3.0-408-gac8584cb7
2019-11-18 14:41:18.260189: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_100.dll'; dlerror: cudart64_100.dll not found
2019-11-18 14:41:18.264654: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
[ ERROR ]  -------------------------------------------------
[ ERROR ]  ----------------- INTERNAL ERROR ----------------
[ ERROR ]  Unexpected exception happened.
[ ERROR ]  Please contact Model Optimizer developers and forward the following information:
[ ERROR ]  shapes (32,16) and (0,) not aligned: 16 (dim 1) != 0 (dim 0)
[ ERROR ]  Traceback (most recent call last):
  File "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo\main.py", line 298, in main
    return driver(argv)
  File "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo\main.py", line 247, in driver
    is_binary=not argv.input_model_is_text)
  File "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo\pipeline\tf.py", line 163, in tf2nx
    for_graph_and_each_sub_graph_recursively(graph, fuse_linear_ops)
  File "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo\middle\pattern_match.py", line 58, in for_graph_and_each_sub_graph_recursively
    func(graph)
  File "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo\middle\passes\fusing\fuse_linear_ops.py", line 267, in fuse_linear_ops
    is_fused = _fuse_add(graph, node, fuse_nodes, False)
  File "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo\middle\passes\fusing\fuse_linear_ops.py", line 206, in _fuse_add
    fuse_node.in_port(2).data.set_value(bias_value + np.dot(fuse_node.in_port(1).data.get_value(), value))
  File "<__array_function__ internals>", line 6, in dot
ValueError: shapes (32,16) and (0,) not aligned: 16 (dim 1) != 0 (dim 0)

[ ERROR ]  ---------------- END OF BUG REPORT --------------
[ ERROR ]  -------------------------------------------------
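For what it's worth, the failing operation at the bottom of the traceback can be reproduced in isolation. The fusing pass is apparently trying to fold a bias into the preceding weights, but one of the values it reads from the graph comes back empty (the shapes below are just the ones from the error message, not anything I know about my model):

```python
import numpy as np

# Shapes taken from the error message: a (32, 16) weight matrix
# and an empty (0,) value read from the graph.
weights = np.zeros((32, 16))
value = np.zeros((0,))

try:
    np.dot(weights, value)
except ValueError as e:
    print(e)  # shapes (32,16) and (0,) not aligned: 16 (dim 1) != 0 (dim 0)
```

So whatever node feeds that fuse step has an empty constant, which suggests something in the graph itself rather than the command line.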

 

I've found similar problems on other forums, but none of them were quite the same and none of the solutions seemed to work.

I am running openvino_2019.3.379, which I reinstalled today.

Any help?

Ryan

4 Replies
Gouveia__César
New Contributor I

Hi Ryan,

Have you tried using an input shape of --input_shape="[1,3,360,480]"? Can you provide the TensorFlow .pb file? You should also provide the full log, which shows more information and is enabled with the flag --log_level DEBUG.
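To illustrate why the dimensions are reordered: TensorFlow models are usually NHWC (channels last), while the shape above is the NCHW (channels first) layout of the same tensor. A quick sanity check, assuming your input really is a 360x480 RGB image:

```python
import numpy as np

# A dummy NHWC input: batch 1, height 360, width 480, 3 channels.
nhwc = np.zeros((1, 360, 480, 3))

# The same data reordered into NCHW layout.
nchw = np.transpose(nhwc, (0, 3, 1, 2))
print(nchw.shape)  # (1, 3, 360, 480)
```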

Hope it helps,

César.

Townsend__Ryan
Beginner

Thank you for responding César.

I tried using your input shape, and it gave me the following new error:

[ ERROR ]  Shape [  1  -1 177  32] is not fully defined for output 0 of "conv2d_1/Conv2D". Use --input_shape with positive integers to override model input shapes.
[ ERROR ]  Cannot infer shapes or values for node "conv2d_1/Conv2D".
[ ERROR ]  Not all output shapes were inferred or fully defined for node "conv2d_1/Conv2D".
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #40.
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <function Convolution.infer at 0x7f12abbfabf8>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Not all output shapes were inferred or fully defined for node "conv2d_1/Conv2D".

 

I'm having a hard time understanding this new error, because that input shape ( [ 1 -1 177 32 ] ) is not what I entered. What I entered was what you mentioned above.

Furthermore, I cannot attach the .pb file because it is far too big (over 300 MB), and I cannot attach the debug log files because they are similarly too large (several thousand lines).
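Since the full debug log won't attach, I pulled out just its tail with a small script along these lines (the file name is only a placeholder for wherever the log was redirected):

```python
from collections import deque

def tail(path, n=20):
    """Return the last n lines of a (possibly huge) text file."""
    with open(path, encoding="utf-8", errors="replace") as f:
        # deque with maxlen keeps only the last n lines while streaming,
        # so the whole multi-thousand-line log never sits in memory.
        return list(deque(f, maxlen=n))

# e.g. after running: python mo_tf.py ... --log_level DEBUG > mo_debug.log 2>&1
# print("".join(tail("mo_debug.log")))
```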

These are the last few lines before the error occurs:

[ 2019-11-20 13:59:58,963 ] [ DEBUG ] [ fuse_linear_ops:126 ]  Fused: Mul1_13485/Fused_Mul_ to conv2d_90/Conv2D
[ 2019-11-20 13:59:58,968 ] [ DEBUG ] [ fuse_linear_ops:210 ]  Fused: Add1_13486/Fused_Add_ to conv2d_90/Conv2D
[ 2019-11-20 13:59:58,979 ] [ DEBUG ] [ fuse_linear_ops:126 ]  Fused: Mul1_13533/Fused_Mul_ to conv2d_92/Conv2D
[ 2019-11-20 13:59:58,984 ] [ DEBUG ] [ fuse_linear_ops:210 ]  Fused: Add1_13534/Fused_Add_ to conv2d_92/Conv2D
[ 2019-11-20 13:59:58,994 ] [ DEBUG ] [ fuse_linear_ops:126 ]  Fused: Mul1_13521/Fused_Mul_ to conv2d_91/Conv2D
[ 2019-11-20 13:59:58,999 ] [ DEBUG ] [ fuse_linear_ops:210 ]  Fused: Add1_13522/Fused_Add_ to conv2d_91/Conv2D
[ 2019-11-20 13:59:59,005 ] [ DEBUG ] [ fuse_linear_ops:210 ]  Fused: dense/BiasAdd/Add to dense/MatMul
[ 2019-11-20 13:59:59,012 ] [ DEBUG ] [ fuse_linear_ops:210 ]  Fused: dense_1/BiasAdd/Add to dense_1/MatMul
[ 2019-11-20 13:59:59,017 ] [ DEBUG ] [ fuse_linear_ops:210 ]  Fused: predictions/BiasAdd/Add to predictions/MatMul
[ ERROR ]  -------------------------------------------------
[ ERROR ]  ----------------- INTERNAL ERROR ----------------

 

Is there anything else I can try?

Thank you,

Ryan

Gouveia__César
New Contributor I

Hi Ryan,

Please attach the logs as a .txt file.

César.

Townsend__Ryan
Beginner

Here is the log file.