
NCSDK v2.05 convolutional op shows wrong results

idata

I have a minimal example with a caffe model that consists of one convolutional operation.

 

The result the NCS stick returns differs completely from the expected Caffe result.

 

I prepared a minimal example with the model, graph file, and Python code in this GitHub repository:

 

https://github.com/jofk/mvnc-conv-minimal

 

This is what the model looks like:

 

name: "PNet" input: "data" input_shape { dim:1 dim:3 dim:15 dim:15 } layer { name: "conv1" type: "Convolution" bottom: "data" top: "conv1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 10 kernel_size: 3 stride: 1 weight_filler { type: "xavier" } bias_filler { type: "constant" value: 0 } } }
idata

@johfk Which NCSDK version are you using? I tested your model and did not observe any discrepancy between Caffe and the NCSDK. For my test I used NCSDK v2.05.

 

Here are my results from mvNCCheck:

 

mvNCCheck det1_conv1.prototxt -w det1_conv1.caffemodel -s 12
/usr/local/bin/ncsdk/Controllers/Parsers/TensorFlowParser/Convolution.py:44: SyntaxWarning: assertion is always true, perhaps remove parentheses?
  assert(False, "Layer type not supported by Convolution: " + obj.type)
mvNCCheck v02.00, Copyright @ Intel Corporation 2017
/usr/local/bin/ncsdk/Controllers/FileIO.py:65: UserWarning: You are using a large type. Consider reducing your data sizes for best performance
Blob generated
USB: Transferring Data...
USB: Myriad Execution Finished
USB: Myriad Connection Closing.
USB: Myriad Connection Closed.
Result: (10, 13, 13)
1) 1498 1.8584
2) 1513 1.6865
3) 726 1.6729
4) 1176 1.626
5) 778 1.5938
Expected: (10, 13, 13)
1) 1498 1.8584
2) 1513 1.6855
3) 726 1.6738
4) 1176 1.625
5) 778 1.5938
------------------------------------------------------------
Obtained values
------------------------------------------------------------
Obtained Min Pixel Accuracy: 0.09808730101212859% (max allowed=2%), Pass
Obtained Average Pixel Accuracy: 0.015084019105415791% (max allowed=1%), Pass
Obtained Percentage of wrong values: 0.0% (max allowed=0%), Pass
Obtained Pixel-wise L2 error: 0.02158212047471063% (max allowed=1%), Pass
Obtained Global Sum Difference: 0.5075993537902832
------------------------------------------------------------
idata

@Tome_at_Intel thanks for pointing me to mvNCCheck. After studying the mvNCCheck code, I found out why I was getting different results in my example. Before running inference on the stick, the input needs to be reshaped and its storage order changed, as done by the function convert_network_input_to_yxz in Network.py. After inference, the storage order of the result needs to be converted back again, as done by the function storage_order_convert in DataTransforms.py.
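
In NumPy terms, my understanding of what these two functions amount to for this model is roughly the following (a sketch of the conversions, not the exact NCSDK code; caffe_input and output stand for the network input blob and the raw result read back from the stick, and the shapes are those of my PNet example):

import numpy as np

# Caffe stores blobs in CHW order; the stick expects HWC ("YXZ").
# Rough equivalent of convert_network_input_to_yxz:
input_chw = caffe_input[0]                         # drop batch dim -> (3, 15, 15)
input_hwc = np.transpose(input_chw, (1, 2, 0))     # -> (15, 15, 3)

# ... run inference on input_hwc ...

# The raw result comes back in HWC order; rough equivalent of
# storage_order_convert to get back to Caffe's CHW layout:
output_hwc = output.reshape(13, 13, 10)            # H, W, C of conv1's output
output_chw = np.transpose(output_hwc, (2, 0, 1))   # -> (10, 13, 13), matches Caffe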

 

However, I tested it only for the Caffe model. Do I need to do the same storage order conversion steps for a TensorFlow model?

 

A hint in the documentation regarding storage orders would be helpful.

idata

@johfk I believe it is the same for TensorFlow. Take a look at the image-classifier example in the ncappzoo (the app works with both Caffe and TensorFlow models); you can use it as a reference: https://github.com/movidius/ncappzoo/blob/ncsdk2/apps/image-classifier/image-classifier.py.
