Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Help me get started with turning a Keras network into TensorFlow so I can run it on the NCS

idata
Employee
1,174 Views

Given this network as defined for Keras in Python:

 

def default_categorical():
    from keras.models import Model
    from keras.layers import Input, Dense, Convolution2D, MaxPooling2D, Reshape, BatchNormalization
    from keras.layers import Activation, Dropout, Flatten

    img_in = Input(shape=(120, 160, 3), name='img_in')  # First layer, input layer; shape comes from camera.py resolution, RGB
    x = img_in
    x = Convolution2D(24, (5, 5), strides=(2, 2), activation='relu')(x)  # 24 features, 5x5 pixel kernel (convolution/feature) window, 2wx2h stride, relu activation
    x = Convolution2D(32, (5, 5), strides=(2, 2), activation='relu')(x)  # 32 features, 5x5 kernel window, 2wx2h stride, relu activation
    x = Convolution2D(64, (5, 5), strides=(2, 2), activation='relu')(x)  # 64 features, 5x5 kernel window, 2wx2h stride, relu
    x = Convolution2D(64, (3, 3), strides=(2, 2), activation='relu')(x)  # 64 features, 3x3 kernel window, 2wx2h stride, relu
    x = Convolution2D(64, (3, 3), strides=(1, 1), activation='relu')(x)  # 64 features, 3x3 kernel window, 1wx1h stride, relu
    # Possibly add MaxPooling (will make it less sensitive to position in image). Camera angle is fixed, so it may not be needed.

    x = Flatten(name='flattened')(x)      # Flatten to 1D (fully connected)
    x = Dense(100, activation='relu')(x)  # Classify the data into 100 features, make all negatives 0
    x = Dropout(.1)(x)                    # Randomly drop out (turn off) 10% of the neurons (prevent overfitting)
    x = Dense(50, activation='relu')(x)   # Classify the data into 50 features, make all negatives 0
    x = Dropout(.1)(x)                    # Randomly drop out 10% of the neurons (prevent overfitting)

    # Categorical output of the angle
    angle_out = Dense(15, activation='softmax', name='angle_out')(x)  # Connect every input with every output and output 15 hidden units. Softmax gives a score per category; pick the best of the 15 categories based on the 0.0-1.0 score.
    # Continuous output of the throttle
    throttle_out = Dense(1, activation='relu', name='throttle_out')(x)  # Reduce to 1 number, positive numbers only

    model = Model(inputs=[img_in], outputs=[angle_out, throttle_out])
    model.compile(optimizer='adam',
                  loss={'angle_out': 'categorical_crossentropy', 'throttle_out': 'mean_absolute_error'},
                  loss_weights={'angle_out': 0.9, 'throttle_out': .001})
    return model

 

Where do I start in trying to convert this so I can train it and run it on my NCS?

 

I'm guessing TensorFlow would have similarly named layers, but any tips are appreciated.

 

(The example code is from the autonomous RC car project called donkeycar.)

0 Kudos
6 Replies
idata
Employee
883 Views

I think the first big step is to convert each layer to TensorFlow syntax..

 

sorta like this..

 

img_in = Input(shape=(120, 160, 3), name='img_in')

 

becomes

 

img_in = tf.shape(img_in, [120, 360, 3])

 

.. repeat till I build the network in TensorFlow??
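
 

Something like this, maybe (a rough sketch, just my guess and not verified, assuming TensorFlow 1.x and the tf.layers API; I think the Keras Input actually maps to a placeholder rather than tf.shape):

 

import tensorflow as tf

# Rough guess at a TF 1.x equivalent of the first two Keras layers above.
# NHWC placeholder matching the 120x160 RGB camera frames (batch size left open).
img_in = tf.placeholder(tf.float32, shape=[None, 120, 160, 3], name='img_in')

# 24 features, 5x5 kernel, 2x2 stride, relu -- mirrors the first Convolution2D.
x = tf.layers.conv2d(img_in, filters=24, kernel_size=(5, 5), strides=(2, 2),
                     activation=tf.nn.relu)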

0 Kudos
idata
Employee
883 Views

@wheatgrinder, here's an example TensorFlow network that uses the same layers defined in the network you referenced - https://github.com/ashwinvijayakumar/ncappzoo/tree/single-conv/tensorflow/single-conv. I converted this network from http://danialk.github.io/blog/2017/09/29/range-of-convolutional-neural-networks-on-fashion-mnist-dataset/.

 

I'm exploring methods to export meta/pb files from Keras, so that I can compile them into an NCS binary graph. No luck so far; let me know if you find a way.

0 Kudos
idata
Employee
883 Views

I have been trying the same thing for two days; it is now working with the standard Inception-v3 model from Keras. I also tested the predictions and they are what they are supposed to be.

 

The main problem was in the Movidius compiler, in TensorFlowParser.py.

 

After a mean operation on a 1x8x8x2048 tensor it expects an output of 1x1x1x2048, but I had an output of 1x2048.

 

For test purposes I changed this part and it worked for Inception-v3.
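
 

To illustrate the mismatch (a minimal sketch of the idea, not the actual TensorFlowParser.py code; with TF 1.x the argument is keep_dims, newer versions call it keepdims):

 

import tensorflow as tf

# Global average pool of a 1x8x8x2048 feature map, as at the end of Inception-v3.
features = tf.placeholder(tf.float32, shape=[1, 8, 8, 2048])

# Keeping the reduced axes gives 1x1x1x2048 -- the shape the compiler expects.
pooled_4d = tf.reduce_mean(features, axis=[1, 2], keep_dims=True)

# Dropping them gives 1x2048 -- the shape my exported graph actually produced.
pooled_2d = tf.reduce_mean(features, axis=[1, 2], keep_dims=False)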
0 Kudos
idata
Employee
883 Views

 

I have been trying the same thing for two days; it is now working with the standard Inception-v3 model from Keras.

 

@aaron@orc4.de, did you mean "export meta/pb files from Keras"? If so, can you please share your instructions? It'll help our community.

 

The main problem was in the Movidius compiler, in TensorFlowParser.py. After a mean operation on a 1x8x8x2048 tensor it expects an output of 1x1x1x2048, but I had an output of 1x2048. For test purposes I changed this part and it worked for Inception-v3.

 

Can you explain more about this?

 

0 Kudos
idata
Employee
883 Views

@AshwinVijayakumar

 

Yes, the things I did were very simple (see the sketch after this list):

 

  1. Create a normal Inception v3 model.
  2. Find the output node (e.g. 'top_dense_2/Softmax').
  3. Get the TensorFlow session.
  4. Export a frozen graph_def (tf.graph_util.convert_variables_to_constants).
  5. Compile with mvNCCompile.
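
 

To make those steps concrete, here is a minimal sketch of the flow, assuming Keras 2.x on the TensorFlow 1.x backend; the file names are illustrative and the actual mvNCCompile call may need extra flags (input shape, etc.) depending on your setup:

 

import tensorflow as tf
from keras import backend as K
from keras.applications.inception_v3 import InceptionV3

# 1. Create a normal Inception v3 model (inference mode, so no training-only ops).
K.set_learning_phase(0)
model = InceptionV3(weights='imagenet')

# 2. Find the output node name (here taken from the model itself).
output_node = model.output.op.name

# 3. Get the TensorFlow session that Keras is using.
sess = K.get_session()

# 4. Export a frozen graph_def with the variables folded into constants.
frozen = tf.graph_util.convert_variables_to_constants(
    sess, sess.graph_def, [output_node])
tf.train.write_graph(frozen, '.', 'inception_v3_frozen.pb', as_text=False)

# 5. Compile with mvNCCompile (run from a shell), for example:
#    mvNCCompile inception_v3_frozen.pb -in <input_node> -on <output_node> -o graph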

 

My problems with mvNCCompile were in multiple places in TensorFlowParser.py:

 

the Mean operation expects a 4-dimensional output, but the output is only two-dimensional;

 

the Reshape operation also expects different outputs.

 

Some other problems were with missing operations, because Keras implements some things on its own even though TensorFlow also has an operation for them (like relu6).

 

In those places, some changes in the model helped me to work around these problems.
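
 

For example, one such change (just a sketch of the idea, assuming the TensorFlow backend, not necessarily exactly what I did) is to pass TF's native relu6 op directly as the Keras activation, so the exported graph contains a standard Relu6 node instead of a Keras-side composite:

 

import tensorflow as tf
from keras.layers import Input, Conv2D, Activation
from keras.models import Model

# Toy model: use tf.nn.relu6 directly as the activation instead of a Keras-side
# implementation, so the frozen graph contains TensorFlow's own Relu6 op.
img_in = Input(shape=(120, 160, 3))
x = Conv2D(24, (5, 5), strides=(2, 2))(img_in)
x = Activation(tf.nn.relu6)(x)
model = Model(inputs=img_in, outputs=x)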

 

The implementation of the Cast and Max operations would be very helpful.

 

With a bit of time, I could run my own VGG16, Inception-v3 and MobileNet on the Movidius stick.

0 Kudos
idata
Employee
883 Views

@AshwinVijayakumar, can you please provide your Keras Inception v3 model that compiles on the NCS? I have issues with the Keras batch norm implementation.

0 Kudos
Reply