@GoldenWings, in order to run your own model on the NCS, you have to follow these steps:
I have explained this process in my webinar (see timestamp 00:26:34) - https://software.seek.intel.com/Edge_Devices_Webinar_Reg
Here's an example of a custom CNN deployed on NCS - https://github.com/ashwinvijayakumar/ncappzoo/tree/single-conv/tensorflow/single-conv
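For reference, the usual flow after training is to save the TensorFlow graph and compile it for the NCS with the NCSDK's mvNCCompile tool. A rough sketch of the command (the checkpoint filename and the input/output node names below are placeholders — substitute your own, e.g. 'x' and 'logits' from your network):

```shell
# Compile a saved TensorFlow checkpoint (model.meta) into an NCS graph file.
# -in / -on name the input and output nodes of your graph,
# -s 12 requests 12 SHAVE cores on the Myriad VPU,
# -o names the compiled graph file deployed to the NCS.
mvNCCompile model.meta -s 12 -in input -on output -o graph
```

The compiled `graph` file is what the NCS API loads at inference time.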
@AshwinVijayakumar Thanks so much! I will try to run my own model through the steps you provided and will let you know how it goes. To keep you updated, here is my TF model along with the training code. Do I have to change anything to make it work with the NCS?
sess = tf.InteractiveSession(config=tf.ConfigProto())
x = tf.placeholder(tf.float32, shape=[None, 240, 320, 3], name='x')
y_ = tf.placeholder(tf.float32, shape=[None, 3], name='y_')
phase = tf.placeholder(tf.bool, name='phase')
conv1 = batch_norm_pool_conv_layer('layer1', x, [6, 6, 3, 24], phase)
conv2 = batch_norm_conv_layer('layer2', conv1, [6, 6, 24, 24], phase)
conv3 = batch_norm_pool_conv_layer('layer3', conv2, [6, 6, 24, 36], phase)
conv4 = batch_norm_conv_layer('layer4', conv3, [6, 6, 36, 36], phase)
conv5 = batch_norm_pool_conv_layer('layer5', conv4, [6, 6, 36, 48], phase)
conv6 = batch_norm_conv_layer('layer6', conv5, [6, 6, 48, 64], phase)
conv7 = batch_norm_pool_conv_layer('layer7', conv6, [6, 6, 64, 64], phase)
h_pool7_flat = tf.reshape(conv7, [-1, 15 * 20 * 64])
h8 = batch_norm_fc_layer('layer8', h_pool7_flat, [15 * 20 * 64, 512], phase)
h9 = batch_norm_fc_layer('layer9', h8, [512, 256], phase)
h10 = batch_norm_fc_layer('layer10', h9, [256, 128], phase)
h11 = batch_norm_fc_layer('layer11', h10, [128, 64], phase)
W_final = weight_variable('layer12', [64, 3])
b_final = bias_variable('layer12', [3])
logits = tf.add(tf.matmul(h11, W_final), b_final, name='logits')
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_))
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32), name='accuracy')
# Run the batch-norm moving-average updates together with the training step
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_step = tf.train.AdamOptimizer(1e-5, name='train_step').minimize(cross_entropy)
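As a quick sanity check on the `tf.reshape(conv7, [-1, 15 * 20 * 64])` line above: assuming each `batch_norm_pool_conv_layer` (layers 1, 3, 5, and 7) applies a 2x2 pooling that halves the spatial dimensions, the 240x320 input is reduced by a factor of 16 in each dimension before flattening:

```python
# Input is 240 x 320; layers 1, 3, 5, 7 each halve height and width (2x2 pool),
# and layer7 produces 64 channels, so the flattened feature size should be
# 15 * 20 * 64, matching the reshape in the model above.
height, width, channels = 240, 320, 64
num_poolings = 4  # layers 1, 3, 5, 7
for _ in range(num_poolings):
    height //= 2
    width //= 2
flat_size = height * width * channels
print(height, width, flat_size)  # 15 20 19200
```

This matters for the NCS because the compiler needs the graph's shapes to be fully consistent from input to output.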
Also, I have another model written in Keras with a TensorFlow backend. I wanted to know whether it is possible to run it on the NCS?
@GoldenWings I don't see anything that would cause it not to run on the NCS. If the NCSDK complains about the placeholders, you may have to change the None batch dimension in your code, so that
x = tf.placeholder(tf.float32, shape=[None, 240, 320, 3], name='x')
y_ = tf.placeholder(tf.float32, shape=[None, 3], name='y_')
becomes
x = tf.placeholder(tf.float32, shape=[1, 240, 320, 3], name='x')
y_ = tf.placeholder(tf.float32, shape=[1, 3], name='y_')
Let me know if there are any problems you face when running your network.
@Tome_at_Intel I will keep you updated once I run it. I have another model written in Keras with a TensorFlow backend; I would like to know if it is supported by the NCS and, if it is, whether there is any example out there?