Deng__ChangJian

Why is there loss of precision when converting a tensorflow model by mo_tf.py?

Environment:

  • Ubuntu 16.04.5 LTS, 8 GB RAM, i5-8350HQ
  • OpenVINO toolkit 2018 R4 (R5 has also been tested; the same issue exists)
  • TensorFlow 1.9.0 (CPU)
  • Python 3.5.2, Jupyter Notebook

I created a simple convolutional network with TensorFlow, whose structure is shown below:

import tensorflow as tf

# Input is a single 6x10 feature map with 3 channels; the batch dimension is added below.
tf_input_data = tf.placeholder(tf.float32, shape=[6, 10, 3], name='tf_input_data')
first_depth = 3

def original_tf_net(net, first_depth):
    # Ten conv -> ReLU -> max-pool blocks; the channel depth doubles in each block.
    for i in range(10):
        with tf.variable_scope('conv_pool_{}'.format(i)):
            weight = tf.get_variable('weight', shape=[3, 3, first_depth, first_depth * 2],
                                     initializer=tf.truncated_normal_initializer(stddev=0.1))
            biases = tf.get_variable('biases', shape=[first_depth * 2],
                                     initializer=tf.truncated_normal_initializer(stddev=0.1))
            net = tf.nn.conv2d(net, weight, strides=[1, 1, 1, 1], padding='SAME')
            net = tf.nn.relu(tf.nn.bias_add(net, biases))
            net = tf.nn.max_pool(net, ksize=[1, 2, 2, 1], strides=[1, 1, 1, 1], padding='SAME')
            first_depth *= 2
    return net

tf_output_data = original_tf_net(tf.expand_dims(tf_input_data, axis=0), first_depth)
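Since the depth doubles in each of the ten blocks, the final channel depth should be 3 × 2^10 = 3072, which matches the reported output shape (1, 6, 10, 3072). A quick sanity check in plain Python (no TensorFlow needed):

```python
# Sanity check: starting depth 3, doubled once per conv_pool block (10 blocks).
depth = 3
for _ in range(10):
    depth *= 2

print(depth)  # 3072, the channel depth of tf_output_data
```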


I saved this model as a .meta file inside a tf.Session(), converted it to an OpenVINO IR, ran inference with the same input data, and compared the results from TensorFlow and openvino.inference_engine. Part of the results is shown below:
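For reference, the element-wise comparison between the two outputs can be done with NumPy along these lines (a minimal sketch; the toy arrays here only stand in for the real `tf_output_data` and `ir_output_data`, which in practice come from `sess.run(...)` and the Inference Engine respectively):

```python
import numpy as np

# Toy stand-ins for the real outputs, using a few values from the dump below.
tf_output_data = np.array([[0.0, 30.921452, 82.065575]])
ir_output_data = np.array([[0.0, 39.849155, 91.38842]])

# Maximum absolute difference and a tolerance check.
max_abs_diff = np.max(np.abs(tf_output_data - ir_output_data))
print(max_abs_diff)
print(np.allclose(tf_output_data, ir_output_data, atol=1e-4))
```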

================================================================================

tf_output_data.shape: (1, 6, 10, 3072)
[[  0.         0.         0.         0.         0.         0.
    0.        30.921452  82.065575  82.065575]
 [  0.         0.         0.         0.         0.         0.
   28.27745   57.495445 106.27278  106.27278 ]
 [  0.         0.         0.         0.         0.        10.490319
   94.11426   98.799255 120.11466  120.11466 ]
 [  0.        49.01517   64.75061   67.82081   67.82081   78.42449
  123.26729  123.26729  129.36755  129.36755 ]
 [ 81.17451  110.32983  110.32983  102.462364 102.06022  121.95374
  152.20992  152.20992  134.12062  129.36755 ]
 [ 81.17451  110.32983  110.32983  102.462364 102.06022  121.95374
  152.20992  152.20992  134.12062  110.37463 ]]
--------------------------------------------------------------------------------
ir_output_data.shape: (1, 6, 10, 3072)
[[  0.         0.         0.         0.         0.         0.
    0.        39.849155  91.38842   91.38842 ]
 [  0.         0.         0.         0.         0.         0.
    7.067332  51.827957 105.46957  105.46957 ]
 [  0.         0.         0.         0.         0.        15.302341
   78.706894  95.9026   122.1913   122.1913  ]
 [  0.        36.32107   62.048756  63.11119   63.11119   73.7578
  117.59961  117.59961  122.68975  122.68975 ]
 [ 73.45404   90.483696  90.483696  92.27614  101.98306  115.726616
  142.83017  142.83017  125.39823  122.68975 ]
 [ 73.45404   90.483696  90.483696  92.27614  101.98306  115.726616
  142.83017  142.83017  125.39823  101.81243 ]]

================================================================================

It can be clearly observed that the results from openvino.inference_engine are not equal to those from TensorFlow, which will seriously affect the accuracy of the model. Is there a problem with my settings? I also tried freezing the model to a .pb file, but the same issue still exists.


The conversion command is:

mo_tf.py --input_meta_graph ./cpkt/model.cpkt.meta --input tf_input_data --output conv_pool_9/MaxPool --output_dir ./ir --data_type FP32


Regards,

    Deng ChangJian
