Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Model converted from Caffe VGG SSD produces bounding boxes with very different confidences?

idata
Employee

Hello,

 

I have had problems getting models converted from Caffe to the NCS to work as expected.

 

I used this fork of Caffe, which is where the model comes from: https://github.com/weiliu89/caffe

 

As an example, I used the model available at this link: https://drive.google.com/open?id=0BzKzrI_SkD1_TkFPTEQ1Z091SUE

 

I ran the example script https://github.com/weiliu89/caffe/blob/ssd/examples/ssd_detect.ipynb, changing the Caffe root, labels, model, and weights paths to the new ones.

 

I got the expected outputs when I ran this script: the detections have length 1400 when flattened.

 

I then compiled this Caffe model with the command "mvNCCompile -w VGG_VOC0712Plus_SSD_300x300_ft_iter_160000.caffemodel -s 12 deploy.prototxt".

 

I modified the Python run script of the MobileNet SSD example, as seen here: https://pastebin.com/3u5KQmgb

 

When I run this script I see two major differences:

 

1) The output of the graph has 1407 elements, not 1400.

 

Where did the extra 7 elements come from?
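One guess on my part (an assumption, not something confirmed by the SDK docs): the NCS buffer may prepend one extra 7-element row, e.g. a header/count row, to the 200×7 DetectionOutput block, giving 201 × 7 = 1407. A minimal numpy sketch of parsing under that assumption:

```python
import numpy as np

def parse_ncs_ssd(flat, rows=201, cols=7, conf_thresh=0.5):
    """Parse a flat NCS SSD buffer under the ASSUMPTION that it holds
    201 rows of 7 values, where row 0 is an extra header/count row and
    each remaining row is [image_id, label, conf, xmin, ymin, xmax, ymax]."""
    buf = np.asarray(flat, dtype=np.float32).reshape(rows, cols)
    dets = buf[1:]                       # drop the hypothetical extra row
    return dets[dets[:, 2] > conf_thresh]

# Toy buffer: header row + one confident detection + 199 empty rows.
toy = np.zeros((201, 7), dtype=np.float32)
toy[1] = [0, 12, 0.9, 0.1, 0.1, 0.5, 0.5]
print(parse_ncs_ssd(toy.ravel()).shape)   # (1, 7)
```

If the extra 7 values instead sit at the end of the buffer, the same idea applies with `buf[:-1]`.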

 

2) The confidences of the bounding boxes are very different.

 

I have also observed similar problems when converting a darknet model to NCS.

6 Replies
idata
Employee

@ashwinnair14 If you could provide a log of the error message, it would be helpful in debugging the issue. Where exactly are you seeing the output discrepancies (during mvNCProfile/while running the network)? You can also try running mvNCCheck and see if you get the same result as Caffe.

 

Regarding object detection, we have support for Mobilenet SSD on Caffe. If you are interested, you can find some sample code @ https://github.com/movidius/ncappzoo/tree/master/caffe/SSD_MobileNet

idata
Employee

Hello,

 

This is the output I get when I run mvNCCheck.

 

I have also attached the report of the Profile. https://svgshare.com/s/56j

 

Here the output of the graph is a [7, 201, 1]-dimensional tensor, but the output of the Caffe model is a [1, 200, 7] tensor.

 

I don't really know why this happens.
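If the two buffers hold the same values in transposed layouts with one extra leading row (my assumption, not confirmed anywhere), the NCS output could be reordered to match the Caffe shape like this:

```python
import numpy as np

# Toy stand-in for the NCS buffer; the real one would come from the graph result.
ncs_out = np.arange(7 * 201, dtype=np.float32).reshape(7, 201, 1)

# Assumption: same data, axes transposed, plus one extra leading row.
caffe_like = ncs_out.squeeze(-1).T        # (201, 7)
caffe_like = caffe_like[1:][np.newaxis]   # drop the extra row -> (1, 200, 7)
print(caffe_like.shape)                   # (1, 200, 7)
```

Whether the transpose and the dropped row are the right reordering would need to be verified against known detections.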

 

anilas1@anilas1-WS:/media/anilas1/data/Ncs/caffe/models/SSD_300x300_ft$ mvNCCheck deploy.prototxt -w VGG_VOC0712Plus_SSD_300x300_ft_iter_160000.caffemodel -s 12
mvNCCheck v02.00, Copyright @ Movidius Ltd 2016
/usr/local/bin/ncsdk/Controllers/FileIO.py:52: UserWarning: You are using a large type. Consider reducing your data sizes for best performance
  "Consider reducing your data sizes for best performance\033[0m")
USB: Transferring Data...
USB: Myriad Execution Finished
USB: Myriad Connection Closing.
USB: Myriad Connection Closed.
Result: (1, 200, 7)
1) 1394 18.0
2) 1387 16.0
3) 1359 16.0
4) 1380 16.0
5) 1373 16.0
Expected: (1, 200, 7)
1) 1394 18.0
2) 1380 16.0
3) 1373 16.0
4) 1366 16.0
5) 1387 16.0
------------------------------------------------------------
Obtained values
------------------------------------------------------------
Obtained Min Pixel Accuracy: 22.22222238779068% (max allowed=2%), Fail
Obtained Average Pixel Accuracy: 1.4000526629388332% (max allowed=1%), Fail
Obtained Percentage of wrong values: 25.928571428571427% (max allowed=0%), Fail
Obtained Pixel-wise L2 error: 2.6753161798218112% (max allowed=1%), Fail
Obtained Global Sum Difference: 352.81329345703125
------------------------------------------------------------

idata
Employee

@ashwinnair14 There seems to be a bug with the profiler's visualizer and the output svg file is not showing the correct output dimensions with NCSDK version 1.12. Thanks for bringing this to our attention.

idata
Employee

@Tome_at_Intel I understand that. But what about the failures on Obtained Min Pixel Accuracy, Obtained Average Pixel Accuracy, Obtained Percentage of wrong values, and Obtained Pixel-wise L2 error?

 

Because even if the profiler's visualizer is buggy, that does not explain the differences between the values obtained from the stick and from the PC version.

 

Or will this also be fixed in the next release?

idata
Employee

@ashwinnair14 It looks like the inferences for results 2-4 all have a confidence of 16.0. I wonder if this is just a sorting difference, since those results have equal confidences. The top result matches the PC version, so results 2-4 may simply appear in an arbitrary order because each has the same confidence value, which could come down to how the model is designed.
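To illustrate the point above (a toy sketch, not the SDK's actual sort): when several detections share a confidence, the relative order of the ties depends on the sort implementation, so two frameworks can legitimately disagree on everything except the top entry.

```python
import numpy as np

# Confidences and (hypothetical) flat-buffer indices from the mvNCCheck log.
conf = np.array([18.0, 16.0, 16.0, 16.0, 16.0])
ids = np.array([1394, 1387, 1359, 1380, 1373])

# A stable sort preserves the input order of equal keys ...
stable = ids[np.argsort(-conf, kind="stable")]
# ... while an unstable sort (e.g. quicksort) may permute them.
unstable = ids[np.argsort(-conf, kind="quicksort")]

# The top-1 entry agrees regardless of how ties are broken.
print(stable[0], unstable[0])   # 1394 1394
```

Both orderings contain the same set of tied entries; only their order may differ.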

idata
Employee

Was this resolved? If so how?
