Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

TensorFlow accuracy issue when using a custom model

idata
Employee

I am running into problems using a custom TensorFlow model with the NCS. The main problem is that the NCS inference results differ significantly from the TensorFlow test results. As shown below, the first figure compares the label values with the values predicted by TensorFlow; the second figure shows the corresponding results from the NCS. We use the same test data but get different inference results.

 

 

 

The images don't show up, so I have posted the links below.

 

TensorFlow test result: https://www.dropbox.com/s/e3wkg2k9ckxrjdk/0-test.png

 

NCS inference result: https://www.dropbox.com/s/2ii2n39dw8fyh25/0-ref.png

 

The network model is included below for your reference:

 

def deepnn(x):
    # weight_variable, bias_variable, conv2d, max_pool_2x2 and is_training are
    # defined elsewhere in the training script.
    with tf.name_scope('reshape'):
        x_image = tf.reshape(x, [-1, 120, 160, 3])

    # First convolutional layer - maps the RGB input image to 8 feature maps.
    with tf.name_scope('conv1'):
        W_conv1 = weight_variable([3, 3, 3, 8])
        b_conv1 = bias_variable([8])
        h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
        print("h_conv1", h_conv1.shape)

    # First pooling layer - downsamples by 2X.
    with tf.name_scope('pool1'):
        h_pool1 = max_pool_2x2(h_conv1)

    # Second convolutional layer.
    with tf.name_scope('conv2'):
        W_conv2 = weight_variable([3, 3, 8, 16])
        b_conv2 = bias_variable([16])
        h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)

    # Second pooling layer.
    with tf.name_scope('pool2'):
        h_pool2 = max_pool_2x2(h_conv2)

    # Third convolutional layer.
    with tf.name_scope('conv3'):
        W_conv3 = weight_variable([3, 3, 16, 32])
        b_conv3 = bias_variable([32])
        h_conv3 = tf.nn.relu(conv2d(h_pool2, W_conv3) + b_conv3)

    # Third pooling layer.
    with tf.name_scope('pool3'):
        h_pool3 = max_pool_2x2(h_conv3)

    # Fourth convolutional layer.
    with tf.name_scope('conv4'):
        W_conv4 = weight_variable([3, 3, 32, 64])
        b_conv4 = bias_variable([64])
        h_conv4 = tf.nn.relu(conv2d(h_pool3, W_conv4) + b_conv4)

    # Fourth pooling layer.
    with tf.name_scope('pool4'):
        h_pool4 = max_pool_2x2(h_conv4)

    # Fully connected layer.
    with tf.name_scope('fc1'):
        W_fc1 = weight_variable([8 * 10 * 64, 1024])
        b_fc1 = bias_variable([1024])
        h_pool4_flat = tf.reshape(h_pool4, [-1, 8 * 10 * 64])
        h_fc1 = tf.nn.relu(tf.matmul(h_pool4_flat, W_fc1) + b_fc1)
        print("h_fc1", h_fc1.shape)

    # Dropout - controls the complexity of the model, prevents co-adaptation of
    # features.
    with tf.name_scope('dropout'):
        if is_training:
            keep_prob = tf.placeholder(tf.float32)
            h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
        else:
            h_fc1_drop = tf.nn.dropout(h_fc1, 1.0)

    # Output layer - maps the 1024 features to a single output.
    with tf.name_scope('fc2'):
        W_fc2 = weight_variable([1024, 1])
        b_fc2 = bias_variable([1])
        y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2

    if is_training:
        return y_conv, keep_prob
    else:
        return y_conv
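
For reference, the 8 * 10 * 64 size used for fc1 follows from four stride-2 poolings of the 120 x 160 input (assuming max_pool_2x2 uses 'SAME' padding), which a quick check confirms:

import math

h, w = 120, 160
for _ in range(4):                  # pool1 .. pool4, each halves height and width (rounding up)
    h, w = math.ceil(h / 2), math.ceil(w / 2)
print(h, w)                         # -> 8 10, so the flattened size is 8 * 10 * 64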
6 Replies
idata
Employee

@hasan Thanks for bringing this to our attention. Problems like this can arise from pre-processing issues (how you read the image in, color space (BGR vs RGB), mean normalization, etc.). See https://movidius.github.io/ncsdk/configure_network.html for more information. Can you provide your complete program for further analysis?
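
As a rough illustration (not the poster's actual code), the preprocessing on the NCS side needs to match whatever the TensorFlow test script does; the file name and the mean/scale values below are placeholders:

import cv2
import numpy as np

img = cv2.imread('test.png')                  # OpenCV loads images as BGR
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)    # convert to RGB if the model was trained on RGB
img = cv2.resize(img, (160, 120))             # width x height expected by the network
img = img.astype(np.float32)
img = (img - 127.5) / 127.5                   # placeholder normalization; use the training values
input_tensor = img.astype(np.float16)         # NCS graphs take FP16 input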

idata
Employee

@Tome_at_Intel I have a similar problem to the one hasan had: when running on the NCS, the accuracy of my very simple model drops from around 86% to 80%.

 

I have checked the documentation on the preprocessing issues that you recommended, but I still do not see the problem. Any suggestions?

 

The code is available here: https://www.dropbox.com/s/mgw0xab8urnoimh/tf_mnist.tar.gz?dl=0
idata
Employee

@elzkit I'm trying to reproduce the issue you're seeing, and I assume that you used mvNCCompile to convert your model and then found that the accuracy was lower than expected.

 

If you did follow this workflow, I was wondering if you could provide more information regarding the command you used to convert the model (mvNCCompile). Thanks!

idata
Employee

@Tome_at_Intel The command to convert the model is contained in comp_model.sh. It is basically:

 

mvNCCompile tf_model/mnist_model.ckpt.meta -s 12 -in input -on Add_MulxW_b -is 784 1 -o ncs_model/mnist.graph

 

It seems that on the NCS the model behaves as if it were ignoring the value of the tensor "b" in the last tf.add.
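
A minimal way to check this hypothesis, sketched here assuming the NCSDK v1 Python API (mvnc) and the graph file produced by the command above: with an all-zero input, W*x is zero, so the output should equal the bias b; if the NCS returns zeros instead, the bias is indeed being dropped.

import numpy as np
from mvnc import mvncapi as mvnc

devices = mvnc.EnumerateDevices()
device = mvnc.Device(devices[0])
device.OpenDevice()

with open('ncs_model/mnist.graph', 'rb') as f:
    graph = device.AllocateGraph(f.read())

x = np.zeros(784, dtype=np.float16)      # all-zero input: the output should equal b
graph.LoadTensor(x, 'zero-input test')
ncs_out, _ = graph.GetResult()
print('NCS output for zero input:', ncs_out)

graph.DeallocateGraph()
device.CloseDevice()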

 

Thank you for looking into the issue!
idata
Employee

@elzkit Maybe the issue you're observing is related to another thread here. Refer to https://ncsforum.movidius.com/discussion/comment/1245/#Comment_1245

 

A good test would be to do the same with a Caffe model and see whether the same inconsistency appears on the NCS, i.e. compare the behavior of the NCS with both of the frameworks it supports.
idata
Employee

We tested the same model with Caffe and it works; the inference result from the NCS is acceptable.
