Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision-related on Intel® platforms.

Problem with BatchNormalization (trying to build a custom MobileNet v1-based network using Keras)

idata
Employee

Hi!

 

I've been working with Keras and TensorFlow for about a year now, but I'm fairly new to the NCS and have basically only been building and running the examples on the stick. I have a simple MNIST example built and trained using Keras up and running, but as soon as I start adding more complex layers, such as BatchNormalization, it breaks. I have seen many tricks involving manually traversing the graph and removing offending nodes or changing types of nodes, but none of those solutions have worked for me. (To be fair, I really don't know what I'm doing…) I built a super simple example project that demonstrates the problem: https://github.com/kajohansson/keras_ncs_batchnorm_fail
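
To illustrate the kind of setup I mean (this is only an illustrative sketch, not the exact code from the repo):

    # Simple MNIST-style Keras model; adding the BatchNormalization layer is what breaks the compile for me
    from keras.models import Sequential
    from keras.layers import Conv2D, BatchNormalization, Activation, Flatten, Dense

    model = Sequential()
    model.add(Conv2D(32, (3, 3), input_shape=(28, 28, 1)))
    model.add(BatchNormalization())
    model.add(Activation('relu'))
    model.add(Flatten())
    model.add(Dense(10, activation='softmax'))
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])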

 

If anyone can point me in the right direction I'd be very happy! Perhaps an end-to-end implementation of MobileNet v1 in Keras that is known to run on the NCS? Or should I learn Caffe? Any pointers welcome!

 

My end goal is to have a network based on MobileNet v1, pretrained on ImageNet, with the output completely replaced by something other than the normal classifier.

 

Thanks in advance!

 

Cheers,

 

Karl-Anders Johansson
idata
Employee

@kajoh Hi, thanks for the intro and for sharing your experience with us. I can give you a quick intro to the NCS device. The NCS is designed to be a low-power solution for running inferences at the edge. We have several out-of-the-box models, including object detection and classification models. The NCS device can be used with the Intel Movidius Neural Compute SDK or with the Intel OpenVINO toolkit and supports Caffe and some TensorFlow networks. The Neural Compute SDK comprises tools you can use to check/profile/compile your deep learning models, plus a Python and C++ API.
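
For reference (the check/profile/compile tools are mvNCCheck, mvNCProfile and mvNCCompile), a minimal inference sketch with the NCSDK v1 Python API looks roughly like this; the graph file would come from mvNCCompile beforehand, and the file name and input shape below are placeholders:

    # Open the NCS device, load a compiled Movidius graph file, and run one inference
    import numpy
    from mvnc import mvncapi as mvnc

    devices = mvnc.EnumerateDevices()
    device = mvnc.Device(devices[0])
    device.OpenDevice()

    with open('graph', mode='rb') as f:
        graph_buffer = f.read()
    graph = device.AllocateGraph(graph_buffer)

    input_tensor = numpy.zeros((28, 28, 1), dtype=numpy.float16)  # must match the compiled input size
    graph.LoadTensor(input_tensor, 'user object')
    output, userobj = graph.GetResult()

    graph.DeallocateGraph()
    device.CloseDevice()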

 

You can find the NCSDK doc site at https://developer.movidius.com/docs and more information on OpenVINO can be found at https://software.intel.com/en-us/openvino-toolkit. OpenVINO is a tool that can be used with various types of hardware. Think of OpenVINO like the Neural Compute SDK, but for all Intel hardware.
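
For a rough idea of the OpenVINO flow on the NCS (treat this as a sketch; the exact Python API can differ between releases, and the .xml/.bin IR files would first be produced by the Model Optimizer, e.g. mo_tf.py with --data_type FP16):

    # Load an IR model onto the NCS (exposed as the MYRIAD device) and run an inference
    from openvino.inference_engine import IENetwork, IEPlugin

    plugin = IEPlugin(device='MYRIAD')
    net = IENetwork(model='model.xml', weights='model.bin')   # placeholder IR file names
    exec_net = plugin.load(network=net)
    # results = exec_net.infer({'input_name': input_blob})    # placeholder input name/blob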

 

We don't have an end-to-end Keras guide for the NCS right now. Looking at the logs from your link, it seems your model uses some TensorFlow ops that the NCSDK currently doesn't support. Regarding the different input size: once you use the mvNCCompile tool to compile a Movidius graph file, the input size (along with other attributes of the model) is baked into the generated graph file and cannot be changed. If you want to make any changes to the model, you will have to compile a new Movidius graph file. That's why you're getting the shape mismatch error.
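
In practice that means preprocessing every input to the size the compiled graph expects, continuing the NCSDK sketch above (the size and file name here are placeholders):

    # Resize and convert the input to match the fixed input size the graph was compiled with
    import cv2
    import numpy

    img = cv2.imread('image.jpg')
    img = cv2.resize(img, (160, 160))             # the size baked into the compiled graph
    img = img.astype(numpy.float16) / 255.0       # NCSDK v1 typically expects half-precision input
    graph.LoadTensor(img, 'user object')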

 

Welcome, and feel free to browse the forums (NCSDK and OpenVINO) and ask any questions you have.

idata
Employee

Hi, thanks @Tome_at_Intel for the info on the input size/shape being fixed! That makes a lot of sense!

 

What would be much appreciated is an end-to-end example showing how to go from building a network from scratch, through training it (possibly using pretraining / transfer learning), to deploying it on the NCS device. All (?) the examples show already-generated networks being downloaded, patched and run, but nothing shows the entire pipeline. I would very much appreciate getting some insight into how this should be done. I'm currently downloading OpenVINO to see if it can help me out somehow!

 

Looking into the Makefile for the ncappzoo mobilenets project (https://github.com/movidius/ncappzoo/blob/master/tensorflow/mobilenets/Makefile) I get some insight into the steps needed, but unfortunately the downloaded checkpoint tar files (e.g. http://download.tensorflow.org/models/mobilenet_v1_1.0_160_2017_06_14.tar.gz) differ far too much from the ones generated by Keras and are not compatible with the rest of the procedure.
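
To make concrete what I mean by "the ones generated by Keras": the closest I can imagine to producing a comparable checkpoint would be saving the TensorFlow session behind the Keras model, roughly like this (just a sketch of the idea, not something verified against the rest of the Makefile):

    # Save the TensorFlow session that backs the Keras model as a checkpoint (+ .meta graph)
    import tensorflow as tf
    from keras import backend as K

    saver = tf.train.Saver()
    saver.save(K.get_session(), 'keras_mobilenet.ckpt')   # placeholder output path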

 

Given my situation, where I want to rip out the last layers following the last depthwise convolution block, replace them with a couple of custom layers, _and_ then train the model (using pre-trained weights for the original MobileNet parts), what would be a good way forward?
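
Roughly what I have in mind on the Keras side (just a sketch; the custom head and sizes are placeholders):

    # Pretrained MobileNet base with the classifier replaced by a small custom head
    from keras.applications.mobilenet import MobileNet
    from keras.layers import GlobalAveragePooling2D, Dense
    from keras.models import Model

    base = MobileNet(weights='imagenet', include_top=False, input_shape=(160, 160, 3))
    for layer in base.layers:
        layer.trainable = False                    # keep the pretrained MobileNet weights frozen

    x = GlobalAveragePooling2D()(base.output)
    x = Dense(128, activation='relu')(x)           # placeholder custom layers
    predictions = Dense(10, activation='softmax')(x)

    model = Model(inputs=base.input, outputs=predictions)
    model.compile(optimizer='adam', loss='categorical_crossentropy')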

 

Cheers,

 

Karl-Anders
idata
Employee

Hi Kajoh,

 

Were you able to run BatchNormalization with Keras on the Neural Compute Stick? I am also facing a similar issue; any help would be appreciated.

 

Thanks,

 

Ankit
idata
Employee

Hi @ankit,

 

Sorry, no luck! In the end I gave up on my MobileNet track in Keras and switched to SqueezeNet. The funny thing is that MobileNet (including BatchNorm) works like a charm when importing from a Caffe prototxt/model… That's a shame, since I really like Keras…

 

Cheers,

 

K-A
idata
Employee

Hi @kajoh,

 

I am facing similar issues to the ones you are having. My end goal is to run Keras Inception, which also includes BatchNorm layers. The error I get:

 

[Error 5] Toolkit Error: Stage Details Not Supported: IsVariableInitialized

 

I managed to debug and fix this compile error by setting all layers in Keras to Training=True. The output TensorFlow model works fine; however, when compiling for the NCS, the BatchNorm layers are not being included.
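
For anyone else hitting this, the workaround I have seen suggested (I cannot say it is the official way) is to put Keras into inference mode before the model is built and then freeze the graph, so the training-only ops never end up in the exported model. A rough sketch with placeholder names:

    # Export a Keras model as a frozen TensorFlow graph for mvNCCompile.
    # Setting the learning phase to 0 (inference) before building the model is meant to
    # keep training-only ops such as IsVariableInitialized out of the graph.
    import tensorflow as tf
    from keras import backend as K

    K.set_learning_phase(0)            # inference mode
    model = build_model()              # placeholder: construct/load your Keras model here

    sess = K.get_session()
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, [model.output.op.name])
    tf.train.write_graph(frozen, '.', 'frozen_model.pb', as_text=False)
    # then e.g.: mvNCCompile frozen_model.pb -in <input_node> -on <output_node> -o graph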

 

Best Regards
