Intel® Distribution of OpenVINO™ Toolkit
Community support and discussions about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision-related on Intel® platforms.

Output tensor indexing

idata
Community Manager

Hi all,

 

I'm currently working with Keras (2.1.3) and TensorFlow (1.4.0) as the backend, and I'm trying to export my own network to the NCS. I have already designed, exported, and successfully run some basic MLPs on the NCS.

 

Now, I'm trying to do the same with a larger CNN (such as SqueezeNet). It seems that the global average pooling layer from Keras is not handled properly, as it throws this error:

 

"IndexError: list index out of range"

 

I've tried several workarounds, such as average pooling with the input tensor's spatial size as the kernel shape, without any success. I've also tried implementing it directly with the TensorFlow API, but then I get the following error:
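For reference, the workaround above is numerically sound: global average pooling and average pooling with a kernel covering the full spatial extent produce the same values. A minimal NumPy sketch (assuming an NHWC tensor; the 13×11×4 shape is taken from later in this post):

```python
import numpy as np

# Toy NHWC activation map with the spatial shape mentioned in the post
x = np.random.rand(1, 13, 11, 4).astype(np.float32)

# Keras GlobalAveragePooling2D: mean over the two spatial axes -> shape (1, 4)
gap = x.mean(axis=(1, 2))

# Workaround: AveragePooling2D with pool_size covering the whole 13x11 map,
# then Flatten -- computed here by hand to show it yields the same numbers
ap = x.reshape(1, 13 * 11, 4).mean(axis=1)

assert np.allclose(gap, ap)
```

So if the compiler accepts the explicit `AveragePooling2D` + `Flatten` form, the result should match the original network.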

 

"ValueError: negative dimensions are not allowed"

 

I assume I have made some mistake in handling the dimensions, but that's not my point.

 

My thought was then to set the output tensor right before the average pooling, since it's the last stage. It seems to compile, as both mvNCCompile and mvNCProfile run properly. My problem is that the output tensor shape is [1, 13, 11, 4] with Keras/TensorFlow but [572,] on the NCS (using the GetResult function). I couldn't find how to reshape the flat vector so that the values match on both sides. Is there a way to know how the scalars are ordered in this case? Is this workaround safe to use?
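Note that 13 × 11 × 4 = 572, so the flat buffer does contain all the values; the open question is only the memory order. A hedged sketch of the two most likely layouts (which one the device actually uses would need to be verified against a known reference output):

```python
import numpy as np

# Stand-in for the flat buffer returned by GetResult(): 13 * 11 * 4 = 572 values
flat = np.arange(13 * 11 * 4, dtype=np.float32)

# Candidate 1: row-major NHWC, the native Keras/TensorFlow layout [1, 13, 11, 4]
nhwc = flat.reshape(1, 13, 11, 4)

# Candidate 2: channel-first (NCHW) storage, transposed back to NHWC to compare
nchw_as_nhwc = flat.reshape(1, 4, 13, 11).transpose(0, 2, 3, 1)

# Compare each candidate against the host-side tensor to see which layout matches
```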

 

I'm also worried about the NCS's numerical precision. Since my model was trained in float32 and the NCS seems to use float16, this could introduce approximation error. In my MLP tests, I trained MLPs with several data formats, and I always get some differences between the Keras/TensorFlow and NCS activations (from 2.2e-4 +/- 2.7 to 8.3e-5 +/- 15), and my MLP has only 100 neurons. Could this error scale up with network size, and could it lead to completely wrong classifications?
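For a rough sense of scale: casting float32 values to float16 and back bounds the per-value relative error by the float16 unit roundoff, 2^-11 ≈ 4.9e-4, which is in the same ballpark as the activation differences quoted above. A small NumPy sketch (values chosen inside the float16 normal range, to avoid subnormal edge cases):

```python
import numpy as np

rng = np.random.default_rng(0)
# Values safely inside the float16 normal range
w = rng.uniform(0.5, 2.0, size=100).astype(np.float32)

# Round-trip through float16, mimicking a float32 -> float16 conversion
w16 = w.astype(np.float16).astype(np.float32)

rel_err = np.abs(w - w16) / np.abs(w)
# Rounding to nearest keeps the relative error below 2**-11 (~4.9e-4) per value
print(rel_err.max())
```

Whether such per-value errors accumulate into a wrong classification depends on network depth and on how close the top logits are, so it can grow with network size.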

 

With this idea in mind, I replaced the global average pooling stage in my CNN with an MLP, which also compiles properly. There is no dimension issue, as the output vector shape is [4,] in both cases. But this time I get wrong classifications for almost all of my test dataset (which is even worse than the activation error). Could this mean that the Conv2D layer is not handled properly when going from Keras to the NCS?

 

Any help or thought is welcome! :smile:

 

Axel

6 Replies
idata
Community Manager

@axeldutr Unfortunately we don't officially support Keras at the moment, and you may have to experiment with manipulating the parser if you want to use your custom Keras network.

 

Regarding float16 precision, we have performed tests with various models and found the classification difference between float32 and float16 to be minimal. The difference you see could be due to a number of things, such as preprocessing with mvNCCheck (mean subtraction and scaling with the -S and -M options).
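For what it's worth, a typical invocation might look like the following (the file name, node names, and values are placeholders; only the -S/-M options come from the reply above, and the exact flags should be checked against your NCSDK version's documentation):

```shell
# Hypothetical check of a TensorFlow meta graph; substitute your own names.
# -S scales the input and -M subtracts a mean, matching whatever
# preprocessing was applied at training time.
mvNCCheck network.meta -in input -on output -S 255 -M 128
```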

idata
Community Manager

@Tome_at_Intel Is there any plan to support Keras?

idata
Community Manager

@Tome_at_Intel I understand that Keras is not supported on the NCS. Is there any documentation that could help me understand how mvNCCompile works, and perhaps where my issue lies?

 

Still, if I create a TensorFlow network whose output tensor has more than one dimension, how will mvNCCompile handle it?

 

Regarding precision, this is precisely what I meant. I designed and trained an MLP with the float16 dtype (in TensorFlow), and then ran my own comparison between the computer and NCS activation values. Most of the values match (which makes sense), but some don't … I don't understand where these differences can come from, since I use float16 everywhere in the process.
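One possible source of such mismatches, even with float16 used everywhere, is that the device may accumulate sums in a different order (or at a different internal precision) than the CPU, and float16 addition is not associative. A minimal NumPy illustration:

```python
import numpy as np

# One large value followed by eight small ones, all exactly representable
vals = np.array([1000.0] + [0.25] * 8, dtype=np.float16)

def accumulate(seq):
    total = np.float16(0.0)
    for v in seq:
        total = np.float16(total + v)  # round after every addition
    return total

fwd = accumulate(vals)        # 1000 + 0.25 rounds back to 1000 each time
rev = accumulate(vals[::-1])  # the 0.25s first sum to 2.0, then 1000 + 2 = 1002

print(fwd, rev)  # 1000.0 1002.0
```

So two bit-identical float16 networks can still disagree in the low-order bits if their summation orders differ.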

idata
Community Manager

@axeldutr If you have an output tensor with more than one dimension, then you will receive an output result with more than one dimension. I agree that there can be some rounding and accumulation issues; if you can provide the models for testing, we would be happy to take a look and check them out. Thanks.

idata
Community Manager

@Tome_at_Intel Any help is welcome! :smile:

 

How can I share a code sample with you?
idata
Community Manager

@axeldutr A PM with a download link would work.
