Hi,
I have a minimal example of a trivial TensorFlow (v1.4) conv net that I train (to overfitting) with only two examples, freeze, convert with mvNCCompile, and then test on a compute stick.
The code and steps are fully described in the GitHub repo movidius_minimal_example.
None of the steps produce warnings or errors, but the inference results I get on the stick are incorrect.
What should be my next debugging step?
Thanks,
Mat
Note: mvNCCheck does fail, but I'm unsure whether it's because of the structure of my minimal example…
$ mvNCCheck graph.frozen.pb -in imgs -on output
mvNCCheck v02.00, Copyright @ Movidius Ltd 2016
/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py:766: DeprecationWarning: builtin type EagerTensor has no __module__ attribute
EagerTensor = c_api.TFE_Py_InitEagerTensor(_EagerTensorBase)
/usr/local/lib/python3.5/dist-packages/tensorflow/python/util/tf_inspect.py:45: DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() instead
if d.decorator_argspec is not None), _inspect.getargspec(target))
USB: Transferring Data...
USB: Myriad Execution Finished
USB: Myriad Connection Closing.
USB: Myriad Connection Closed.
Result: (1, 1)
1) 0 0.46216
Expected: (1, 1)
1) 0 0.79395
------------------------------------------------------------
Obtained values
------------------------------------------------------------
Obtained Min Pixel Accuracy: 41.789668798446655% (max allowed=2%), Fail
Obtained Average Pixel Accuracy: 41.789668798446655% (max allowed=1%), Fail
Obtained Percentage of wrong values: 100.0% (max allowed=0%), Fail
Obtained Pixel-wise L2 error: 41.789667896678964% (max allowed=1%), Fail
Obtained Global Sum Difference: 0.331787109375
------------------------------------------------------------
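For anyone reading the report: the error metrics appear to be derived from the single obtained/expected pair printed above. A rough reconstruction of how they could be computed (my guess at the formulas, not the toolkit's actual code; small differences from the reported figures would come from fp16 rounding on the device):

```python
import numpy as np

# Values copied from the mvNCCheck report above.
obtained = np.array([0.46216])
expected = np.array([0.79395])

# Global Sum Difference: total absolute difference over all outputs.
global_sum_diff = np.abs(obtained - expected).sum()

# Pixel "accuracy" (really an error percentage): absolute error
# relative to the largest expected magnitude.
pixel_err = np.abs(obtained - expected) / np.abs(expected).max() * 100

# Pixel-wise L2 error relative to the L2 norm of the expected output.
l2_err = np.linalg.norm(obtained - expected) / np.linalg.norm(expected) * 100

print(f"global sum difference: {global_sum_diff:.5f}")   # ~0.33179
print(f"max pixel error:       {pixel_err.max():.2f}%")  # ~41.79%
print(f"relative L2 error:     {l2_err:.2f}%")           # ~41.79%
```

With a single scalar output the min, average, and L2 errors all collapse to the same ~41.79% the checker reports, which is consistent with the 100% wrong-values figure.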
- Tags:
- Tensorflow
I have been working on the conv_with_regression bug, and it looks like the input to the final fully connected layer is too big, so the NCS is running out of memory. The output matches the final layer perfectly, so it doesn't look like there is any NCS weirdness going on. If I reduce it from 512x384 to 128x96 the results match:
512x384:
expected positive_prediction [10]
expected negative_prediction [5]
host_positive_prediction (1,) [10.]
host_negative_prediction (1,) [4.9999995]
ncs_positive_prediction (1,) [7.7265625]
ncs_negative_prediction (1,) [4.1445312]
256x192:
expected positive_prediction [10]
expected negative_prediction [5]
host_positive_prediction (1,) [10.]
host_negative_prediction (1,) [5.]
ncs_positive_prediction (1,) [8.671875]
ncs_negative_prediction (1,) [4.8515625]
128x96:
expected positive_prediction [10]
expected negative_prediction [5]
host_positive_prediction (1,) [9.999998]
host_negative_prediction (1,) [5.000002]
ncs_positive_prediction (1,) [9.8828125]
ncs_negative_prediction (1,) [4.9179688]
Why this is happening… I have no idea :)
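A back-of-envelope calculation at least makes the memory pressure plausible. The channel count and FC output width below are illustrative assumptions, not values taken from the repo:

```python
# Estimate fp16 weight storage for a fully connected layer fed by an
# H x W x C feature map (the NCS stores weights as 2-byte fp16).
def fc_weight_bytes(h, w, channels=8, fc_out=1, bytes_per_weight=2):
    return h * w * channels * fc_out * bytes_per_weight

for h, w in [(512, 384), (256, 192), (128, 96)]:
    mib = fc_weight_bytes(h, w) / 2**20
    print(f"{h}x{w}: {mib:.2f} MiB of FC weights")
```

Each halving of both spatial dimensions cuts the FC weight matrix by 4x, which lines up with the NCS results getting progressively closer to the host values as the input shrinks.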
But the simplest fix for this is to add more convolutional layers (shrinking the feature map before the fully connected layer) or to process the image in patches.
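The patch idea can be sketched with plain NumPy. The 128x96 patch size matches the case above that worked; `run_network` is a placeholder, not the repo's actual inference call:

```python
import numpy as np

def iter_patches(img, patch_h=128, patch_w=96):
    """Yield non-overlapping patches (assumes dims divide evenly)."""
    h, w = img.shape[:2]
    for y in range(0, h, patch_h):
        for x in range(0, w, patch_w):
            yield img[y:y + patch_h, x:x + patch_w]

def predict_in_patches(img, run_network):
    # run_network stands in for the NCS inference call. Summing the
    # per-patch predictions is only appropriate if the FC layer is a
    # sum over the feature map; otherwise average or stitch instead.
    return sum(run_network(p) for p in iter_patches(img))

img = np.zeros((512, 384), dtype=np.float32)
print(len(list(iter_patches(img))))  # 4 x 4 = 16 patches
```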
I can't seem to get the other examples working, but if you can show me, I will have a look at them.
Thanks, Jon, for looking at all these!
For the record, if anyone else is reviewing this: we've made great progress in the GitHub repo.
