Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Incorrect inference results from a minimal tensorflow model

idata
Employee

Hi,

 

I have a minimal example of a trivial TensorFlow (v1.4) conv net that I train (to overfitting) with only two examples, freeze, convert with mvNCCompile, and then test on a compute stick.

 

The code and steps are fully described in the GitHub repo movidius_minimal_example.
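For quick reference, the freeze and compile steps boil down to roughly this (a simplified sketch; the node names match the mvNCCheck call below, and the tiny stand-in graph is just a placeholder for the repo's actual net):

import tensorflow as tf
from tensorflow.python.framework import graph_util

# Minimal stand-in graph; the real conv net lives in the repo.
imgs = tf.placeholder(tf.float32, shape=(1, 384, 512, 1), name='imgs')
w = tf.Variable(tf.truncated_normal((3, 3, 1, 8), stddev=0.1))
net = tf.nn.conv2d(imgs, w, strides=[1, 2, 2, 1], padding='SAME')
logits = tf.reduce_mean(net, axis=[1, 2, 3])
output = tf.identity(tf.sigmoid(logits), name='output')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... training to overfit the two examples would happen here ...
    # Fold the variables into constants so the GraphDef is self-contained.
    frozen = graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), ['output'])

with tf.gfile.GFile('graph.frozen.pb', 'wb') as f:
    f.write(frozen.SerializeToString())

# Then compile and sanity-check for the stick (shell):
#   mvNCCompile graph.frozen.pb -in imgs -on output -o graph
#   mvNCCheck   graph.frozen.pb -in imgs -on output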

 

None of the steps produce warnings or errors, but the inference results I get on the stick are incorrect.

 

What should be my next debugging step?

 

Thanks,

 

Mat

 

Note: mvNCCheck does fail, but I'm unsure if it's because of the structure of my minimal example…

 

$ mvNCCheck graph.frozen.pb -in imgs -on output
mvNCCheck v02.00, Copyright @ Movidius Ltd 2016
/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py:766: DeprecationWarning: builtin type EagerTensor has no __module__ attribute
  EagerTensor = c_api.TFE_Py_InitEagerTensor(_EagerTensorBase)
/usr/local/lib/python3.5/dist-packages/tensorflow/python/util/tf_inspect.py:45: DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() instead
  if d.decorator_argspec is not None), _inspect.getargspec(target))
USB: Transferring Data...
USB: Myriad Execution Finished
USB: Myriad Connection Closing.
USB: Myriad Connection Closed.
Result: (1, 1)
1) 0 0.46216
Expected: (1, 1)
1) 0 0.79395
------------------------------------------------------------
Obtained values
------------------------------------------------------------
Obtained Min Pixel Accuracy: 41.789668798446655% (max allowed=2%), Fail
Obtained Average Pixel Accuracy: 41.789668798446655% (max allowed=1%), Fail
Obtained Percentage of wrong values: 100.0% (max allowed=0%), Fail
Obtained Pixel-wise L2 error: 41.789667896678964% (max allowed=1%), Fail
Obtained Global Sum Difference: 0.331787109375
------------------------------------------------------------
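For completeness, my on-stick test is roughly the following (a minimal sketch against the NCSDK v1 Python API; the graph file name and input shape are placeholders for what's in the repo):

import numpy as np
from mvnc import mvncapi as mvnc

# Open the first attached compute stick.
devices = mvnc.EnumerateDevices()
device = mvnc.Device(devices[0])
device.OpenDevice()

# Load the graph file produced by mvNCCompile.
with open('graph', 'rb') as f:
    graph = device.AllocateGraph(f.read())

# Push one input and read the result back; the stick works in fp16.
img = np.zeros((384, 512, 1), dtype=np.float16)  # placeholder input
graph.LoadTensor(img, 'example')
ncs_output, _ = graph.GetResult()
print('ncs prediction', ncs_output)

graph.DeallocateGraph()
device.CloseDevice()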
22 Replies
idata
Employee

I have been working on the conv_with_regression bug and it looks like the input to the final fully connected layer is too big and the NCS is running out of memory. The outputs match perfectly up to the final layer, so it doesn't look like there is any NCS weirdness going on. If I reduce the input from 512x384 to 128x96 the results match (there's some rough size arithmetic after the numbers below):

 

512x384:

expected positive_prediction [10]
expected negativee_prediction [5]
host_positive_prediction (1,) [10.]
host_negative_prediction (1,) [4.9999995]
ncs_positive_prediction (1,) [7.7265625]
ncs_negative_prediction (1,) [4.1445312]

256x192:

expected positive_prediction [10]
expected negativee_prediction [5]
host_positive_prediction (1,) [10.]
host_negative_prediction (1,) [5.]
ncs_positive_prediction (1,) [8.671875]
ncs_negative_prediction (1,) [4.8515625]

128x96:

expected positive_prediction [10]
expected negativee_prediction [5]
host_positive_prediction (1,) [9.999998]
host_negative_prediction (1,) [5.000002]
ncs_positive_prediction (1,) [9.8828125]
ncs_negative_prediction (1,) [4.9179688]
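To put numbers on "too big": assuming, say, a single stride-2 conv with 8 filters in front of the flatten (the real architecture is in the repo, so treat these as illustrative), the fully connected input scales like this:

# Illustrative only: how the flatten feeding the fully connected layer
# grows with input resolution, assuming one stride-2 conv with 8 filters.
for w, h in [(512, 384), (256, 192), (128, 96)]:
    flat = (w // 2) * (h // 2) * 8
    print('%dx%d -> %d values into the fully connected layer' % (w, h, flat))
# 512x384 -> 393216 values into the fully connected layer
# 256x192 -> 98304 values into the fully connected layer
# 128x96  -> 24576 values into the fully connected layer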

 

Why this is happening… I have no idea :)

 

But the simplest fix for this is to add more convolutional layers (so the flatten feeding the fully connected layer is much smaller) or to process the image in patches; a rough sketch of the first option is below.
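Something along these lines (a sketch only; layer names, filter counts, and shapes are placeholders, not the repo's actual values):

import tensorflow as tf

# Each extra stride-2 conv quarters the number of values hitting the flatten,
# so the fully connected layer stays small even at 512x384 input.
imgs = tf.placeholder(tf.float32, shape=(1, 384, 512, 1), name='imgs')
net = tf.layers.conv2d(imgs, filters=8, kernel_size=3, strides=2,
                       padding='same', activation=tf.nn.relu)   # 192x256
net = tf.layers.conv2d(net, filters=8, kernel_size=3, strides=2,
                       padding='same', activation=tf.nn.relu)   # 96x128 (extra)
net = tf.layers.conv2d(net, filters=8, kernel_size=3, strides=2,
                       padding='same', activation=tf.nn.relu)   # 48x64 (extra)
flat = tf.reshape(net, (1, -1))   # 48*64*8 = 24576 values instead of 393216
logits = tf.layers.dense(flat, 1)
output = tf.identity(logits, name='output')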

 

I can't seem to get the other examples working, but if you can show me I will have a look at them.

idata
Employee

Thanks Jon for looking at all these!

 

For the record, if anyone else is reviewing this, we've made great progress in the GitHub repo.
