Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

MNIST NCS Implementation: [Error 5] Toolkit Error: Stage Details Not Supported: fc1/add

idata
Employee

Hi Intel,

 

I am following https://movidius.github.io/ncsdk/tf_compile_guidance.html to generate a model, and I got model_inference.meta as described in that guide.

 

But when I try to compile it, I get the following error: [Error 5] Toolkit Error: Stage Details Not Supported: fc1/add.

 

Can you please help me resolve this error?
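
For reference, the compile step looks like the following (the input and output node names match the placeholders defined in the code below; the exact command I ran may have differed slightly in file names):

mvNCCompile model_inference.meta -s 12 -in input -on output -o model_inference.graph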

 

TensorFlow code used to generate the model:

 

# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""A deep MNIST classifier using convolutional layers.

See extensive documentation at
https://www.tensorflow.org/get_started/mnist/pros
"""
# Disable linter warnings to maintain consistency with tutorial.
# pylint: disable=invalid-name
# pylint: disable=g-bad-import-order

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import argparse
import sys
import tempfile

from tensorflow.examples.tutorials.mnist import input_data

import tensorflow as tf

FLAGS = None


def deepnn(x):
  """deepnn builds the graph for a deep net for classifying digits.

  Args:
    x: an input tensor with the dimensions (N_examples, 784), where 784 is the
      number of pixels in a standard MNIST image.

  Returns:
    y_conv, a tensor of shape (N_examples, 10), with values equal to the
    logits of classifying the digit into one of 10 classes (the digits 0-9).
  """
  # Reshape to use within a convolutional neural net.
  # Last dimension is for "features" - there is only one here, since images are
  # grayscale -- it would be 3 for an RGB image, 4 for RGBA, etc.
  with tf.name_scope('reshape'):
    x_image = tf.reshape(x, [-1, 28, 28, 1])

  # First convolutional layer - maps one grayscale image to 32 feature maps.
  with tf.name_scope('conv1'):
    W_conv1 = weight_variable([5, 5, 1, 32])
    b_conv1 = bias_variable([32])
    h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)

  # Pooling layer - downsamples by 2X.
  with tf.name_scope('pool1'):
    h_pool1 = max_pool_2x2(h_conv1)

  # Second convolutional layer -- maps 32 feature maps to 64.
  with tf.name_scope('conv2'):
    W_conv2 = weight_variable([5, 5, 32, 64])
    b_conv2 = bias_variable([64])
    h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)

  # Second pooling layer.
  with tf.name_scope('pool2'):
    h_pool2 = max_pool_2x2(h_conv2)

  # Fully connected layer 1 -- after 2 rounds of downsampling, our 28x28 image
  # is down to 7x7x64 feature maps -- maps this to 1024 features.
  with tf.name_scope('fc1'):
    W_fc1 = weight_variable([7 * 7 * 64, 1024])
    b_fc1 = bias_variable([1024])
    h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])
    h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

  # Dropout - controls the complexity of the model, prevents co-adaptation of
  # features.
  with tf.name_scope('dropout'):
    keep_prob = tf.placeholder(tf.float32)
    h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

  # Map the 1024 features to 10 classes, one for each digit
  with tf.name_scope('fc2'):
    W_fc2 = weight_variable([1024, 10])
    b_fc2 = bias_variable([10])
    y_conv = tf.matmul(h_fc1, W_fc2) + b_fc2
  return y_conv


def conv2d(x, W):
  """conv2d returns a 2d convolution layer with full stride."""
  return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')


def max_pool_2x2(x):
  """max_pool_2x2 downsamples a feature map by 2X."""
  return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                        strides=[1, 2, 2, 1], padding='SAME')


def weight_variable(shape):
  """weight_variable generates a weight variable of a given shape."""
  initial = tf.truncated_normal(shape, stddev=0.1)
  return tf.Variable(initial)


def bias_variable(shape):
  """bias_variable generates a bias variable of a given shape."""
  initial = tf.constant(0.1, shape=shape)
  return tf.Variable(initial)


def main(_):
  # Import data
  mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True)

  # Create the model
  # x = tf.placeholder(tf.float32, [None, 784])
  x = tf.placeholder(tf.float32, [None, 784], name="input")

  # Define loss and optimizer
  y_ = tf.placeholder(tf.float32, [None, 10])

  # Build the graph for the deep net
  y_conv = deepnn(x)
  output = tf.nn.softmax(y_conv, name='output')

  with tf.name_scope('loss'):
    cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=y_,
                                                            logits=y_conv)
    cross_entropy = tf.reduce_mean(cross_entropy)

  with tf.name_scope('adam_optimizer'):
    train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)

  with tf.name_scope('accuracy'):
    correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
    correct_prediction = tf.cast(correct_prediction, tf.float32)
    accuracy = tf.reduce_mean(correct_prediction)

  graph_location = tempfile.mkdtemp()
  print('Saving graph to: %s' % graph_location)
  train_writer = tf.summary.FileWriter(graph_location)
  train_writer.add_graph(tf.get_default_graph())

  saver = tf.train.Saver()

  with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(5000):
      batch = mnist.train.next_batch(50)
      if i % 100 == 0:
        train_accuracy = accuracy.eval(feed_dict={
            x: batch[0], y_: batch[1]})
        print('step %d, training accuracy %g' % (i, train_accuracy))
      train_step.run(feed_dict={x: batch[0], y_: batch[1]})

    print('test accuracy %g' % accuracy.eval(feed_dict={
        x: mnist.test.images, y_: mnist.test.labels}))

    graph_location = "."
    save_path = saver.save(sess, graph_location + "/mnist_model")


if __name__ == '__main__':
  parser = argparse.ArgumentParser()
  parser.add_argument('--data_dir', type=str,
                      default='/tmp/tensorflow/mnist/input_data',
                      help='Directory for storing input data')
  FLAGS, unparsed = parser.parse_known_args()
  tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)

 

And you can download the model_inference files from https://1drv.ms/f/s!AioA6iXbzJf_gQxaLQOWszmvMeN1
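
For completeness, the guidance's separate re-save step that produces model_inference.meta (rebuilding the network for inference only, restoring the trained weights, and saving again) is roughly the sketch below; the module name mnist_deep and the exact checkpoint paths are assumptions, since that script is not pasted here:

import tensorflow as tf

from mnist_deep import deepnn  # hypothetical module name for the training script above

# Rebuild the inference-only graph with the named "input" and "output" nodes.
x = tf.placeholder(tf.float32, [None, 784], name="input")
y_conv = deepnn(x)
output = tf.nn.softmax(y_conv, name='output')

saver = tf.train.Saver()
with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  saver.restore(sess, './mnist_model')    # checkpoint written by the training script above
  saver.save(sess, './model_inference')   # writes model_inference.meta for mvNCCompile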

 

Please help me solve this error as soon as possible. I will be waiting for your reply.

 

Thanks in advance.

idata
Employee

Hi @saikrishnaTheGreat

 

Which Neural Compute SDK (NCSDK) are you using? I was able to compile your model into a graph with the NCSDK v2.08.

 

mvNCCompile mnist_inference.meta -s 12 -in input -on output -o mnist_inference.graph

 

Also, there is an example in the Neural Compute App Zoo (NCAPPZOO) for handwritten digits 0-9.

 

https://github.com/movidius/ncappzoo/tree/master/tensorflow/mnist
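
If you want to try it, that app can be fetched and built like the other NCAPPZOO examples (assuming the repository's standard per-app Makefile targets):

git clone https://github.com/movidius/ncappzoo.git
cd ncappzoo/tensorflow/mnist
make all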

 

Regards,

 

Jesus
idata
Employee

Hi Jesus,

 

Still the same problem. The NCSDK version is: "mvNCCompile v02.00, Copyright @ Intel Corporation 2017".

 

And when I use the mnist example from the ncappzoo, I get the same problem.

 

Regards,

 

Sai Krishna.
idata
Employee

@Jesus_at_Intel ,

 

FYI, I am using NCSDK v2.10.0.1
idata
Employee

Hi @saikrishnaTheGreat

 

You are correct; I ran into the same issue with the NCSDK v2.10. We have opened a ticket with the development team to take a look.

 

In the meantime, I recommend using the NCSDK v2.08 as it doesn't seem to have the same issue.

 

Regards,

 

Jesus
idata
Employee

Hi,

 

Is there any update on this issue?

 

Yung-Sheng Lu

idata
Employee

Hi @yungshenglu

 

There are no updates on this issue; the workaround was to use the Intel® Movidius™ Neural Compute SDK version 2.08. Due to the Intel® Movidius™ Neural Compute Stick discontinuation, there are no further software updates planned for the Intel® Movidius™ Neural Compute SDK.

 

Take a look at the discontinuation notice for additional information:

 

https://www.intel.com/content/www/us/en/support/articles/000033257/boards-and-kits/neural-compute-sticks.html

 

Regards,

 

Jesus
idata
Employee

Hi @Jesus_at_Intel ,

 

I have already used Neural Compute SDK version 2.08 but still hit the same problem. Can you help?

 

Thanks.

 

Yung-Sheng Lu

idata
Employee

Hi @yungshenglu

 

I would like to replicate your issue; could you provide the model and command you are using?

 

Regards,

 

Jesus