
Movidius gives wrong inference values with batchnormalization

Hi,

 

I have the following neural network architecture.

 

model = models.Sequential()
model.add(layers.Conv2D(16, (3, 3), activation='relu', padding='same', input_shape=(512, 512, 3)))
model.add(layers.BatchNormalization())
model.add(layers.Conv2D(16, (3, 3), activation='relu', padding='same'))
model.add(layers.BatchNormalization())
model.add(layers.MaxPool2D(pool_size=pool_size, strides=pool_stride))
model.add(layers.Conv2D(32, (3, 3), activation='relu', padding='same'))
model.add(layers.BatchNormalization())
model.add(layers.Conv2D(32, (3, 3), activation='relu', padding='same'))
model.add(layers.BatchNormalization())
model.add(layers.MaxPooling2D(pool_size=pool_size, strides=pool_stride))
model.add(layers.Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(layers.BatchNormalization())
model.add(layers.Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(layers.BatchNormalization())
model.add(layers.MaxPooling2D(pool_size=pool_size, strides=pool_stride))
model.add(layers.Conv2D(96, (3, 3), activation='relu', padding='same'))
model.add(layers.BatchNormalization())
model.add(layers.MaxPooling2D(pool_size=pool_size, strides=pool_stride))
model.add(layers.Conv2D(96, (3, 3), activation='relu', padding='same'))
model.add(layers.BatchNormalization())
model.add(layers.MaxPooling2D(pool_size=pool_size, strides=pool_stride))
model.add(layers.Conv2D(128, (3, 3), activation='relu', padding='same'))
model.add(layers.BatchNormalization())
model.add(layers.MaxPooling2D(pool_size=pool_size, strides=pool_stride))
model.add(layers.Flatten())
model.add(layers.Dense(96, activation='relu'))
model.add(layers.Dense(5, activation='softmax'))
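Since the title points at BatchNormalization, it may help to recall what those layers compute at inference time. A minimal NumPy sketch (the function name and the sample values are illustrative, not from the model above; Keras's default epsilon is 1e-3). At inference the layer is a fixed per-channel affine transform, so a large output mismatch cannot come from the math itself, only from how a runtime folds or evaluates it, e.g. in float16:

```python
import numpy as np

# At inference, Keras BatchNormalization applies a fixed per-channel affine
# transform using the moving statistics accumulated during training:
#   y = gamma * (x - moving_mean) / sqrt(moving_var + eps) + beta
def batchnorm_inference(x, gamma, beta, moving_mean, moving_var, eps=1e-3):
    return gamma * (x - moving_mean) / np.sqrt(moving_var + eps) + beta

# Illustrative values for a single channel (hypothetical, not from the model)
x = np.array([0.5, 1.0, 2.0], dtype=np.float32)
y32 = batchnorm_inference(x, gamma=1.2, beta=0.1,
                          moving_mean=0.8, moving_var=0.25)

# The same computation carried out in float16 (the precision the NCS runs
# at) drifts only slightly from the float32 result:
y16 = batchnorm_inference(x.astype(np.float16),
                          np.float16(1.2), np.float16(0.1),
                          np.float16(0.8), np.float16(0.25))
print(y32)
print(y16.astype(np.float32))
```

Running this shows the float16 drift is on the order of 1e-3 per element, far smaller than the gap between the two softmax outputs reported below, which suggests the problem is in graph conversion rather than raw precision.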

 

     

  • I created a graph file opt_keras_frozen.pb to be used for inference on the Movidius stick.

  • I compiled the graph for the Movidius stick:

     

    mvNCCompile opt_keras_frozen.pb -in=conv2d_1_input -on=dense_2/Softmax

    /usr/local/bin/ncsdk/Controllers/Parsers/TensorFlowParser/Convolution.py:46: SyntaxWarning: assertion is always true, perhaps remove parentheses? assert(False, "Layer type not supported by Convolution: " + obj.type)
    /usr/local/bin/ncsdk/Controllers/Parsers/Phases.py:322: SyntaxWarning: assertion is always true, perhaps remove parentheses? assert(len(pred) == 1, "Slice not supported to have >1 predecessors")
    mvNCCompile v02.00, Copyright @ Intel Corporation 2017
    shape: [1, 512, 512, 3]
    res.shape: (1, 5)
    TensorFlow output shape: (1, 1, 5)
    /usr/local/bin/ncsdk/Controllers/FileIO.py:65: UserWarning: You are using a large type. Consider reducing your data sizes for best performance
    Blob generated

  • Here is the input file on which the prediction was made.


  • I see that the softmax values and the final prediction (i.e. argmax) are completely different between the Movidius and my laptop.


  • Prediction on my laptop:

     

    import tensorflow as tf
    import numpy as np
    import os
    import sys
    from tensorflow.python.platform import gfile

    sess = tf.InteractiveSession()
    f = gfile.FastGFile("opt_keras_frozen.pb", 'rb')
    graph_def = tf.GraphDef()
    # Parses a serialized binary message into the current message.
    graph_def.ParseFromString(f.read())
    f.close()
    sess.graph.as_default()
    # Import a serialized TensorFlow `GraphDef` protocol buffer
    # and place into the current default `Graph`.
    tf.import_graph_def(graph_def)

    r_1 = np.load('input.npy')
    softmax_tensor = sess.graph.get_tensor_by_name('import/dense_2/Softmax:0')
    predictions = sess.run(softmax_tensor, {'import/conv2d_1_input:0': r_1})
    print(predictions)
    [p.argmax() for p in predictions]

     

    Output -

     

    [[3.8008511e-03 2.5446274e-04 4.8956516e-01 6.0401794e-02 4.4597772e-01]]

     

    predicted [2]

  • Prediction on the Movidius:

     

    from mvnc import mvncapi as mvnc
    import numpy as np
    import cv2

    # Now the NCSDK part
    # get a list of names for all the devices plugged into the system
    devices = mvnc.enumerate_devices()
    if len(devices) == 0:
        print('No devices found')
        quit()

    # get the first NCS device by its name.
    # For this program we will always open the first NCS device.
    dev = mvnc.Device(devices[0])

    # try to open the device. this will throw an exception if someone else has it open already
    try:
        dev.open()
    except:
        print("Error - Could not open NCS device.")
        quit()

    graph_filepath = 'graph'
    # Read a compiled network graph from file (set the graph_filepath correctly for your graph file)
    with open(graph_filepath, mode='rb') as f:
        graphFileBuff = f.read()

    graph = mvnc.Graph('graph1')
    # Allocate the graph on the device and create input and output Fifos
    in_fifo, out_fifo = graph.allocate_with_fifos(dev, graphFileBuff)

    r_1 = np.load('input.npy')
    # Write the input to the input_fifo buffer and queue an inference in one call
    graph.queue_inference_with_fifo_elem(in_fifo, out_fifo, r_1.astype(np.float32), 'user object')
    # Read the result from the output Fifo
    output, userobj = out_fifo.read_elem()

    # Deallocate and destroy the fifo and graph handles, close the device, and destroy the device handle
    try:
        in_fifo.destroy()
        out_fifo.destroy()
        graph.destroy()
        dev.close()
        dev.destroy()
    except:
        print("Error - could not close/destroy Graph/NCS device.")
        quit()

    print("NCS \r\n", output, '\r\nPredicted:', output.argmax())

     

    Output -

     

    NCS

     

    [1.3380051e-03 1.7547607e-04 1.2988281e-01 1.2351990e-02 8.5595703e-01]

     

    Predicted: 4
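To quantify the mismatch, the two softmax vectors can be compared directly; mild float16 rounding on the device would give small per-element differences, whereas here even the argmax flips. A quick NumPy check using the two outputs above:

```python
import numpy as np

# Softmax outputs copied from the two runs above
laptop = np.array([3.8008511e-03, 2.5446274e-04, 4.8956516e-01,
                   6.0401794e-02, 4.4597772e-01])
ncs = np.array([1.3380051e-03, 1.7547607e-04, 1.2988281e-01,
                1.2351990e-02, 8.5595703e-01])

print('argmax laptop:', laptop.argmax())           # -> 2
print('argmax NCS   :', ncs.argmax())              # -> 4
print('max abs diff :', np.abs(laptop - ncs).max())
```

The largest per-class gap is about 0.41, orders of magnitude beyond precision noise, which is what makes this look like a conversion problem (e.g. in how the BatchNormalization layers were parsed) rather than ordinary FP16 rounding.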

 

As you can see, the predictions from the laptop and the Movidius are completely different. Can anyone please help here?

 

Thanks,

 

Ankit