The results from CPU (OpenVINO), CPU (Caffe), and NCS are different

Hello,

I have pictures of five classes and trained a model on them with CaffeNet.

My setup:

a. Ubuntu 16.04

b. OpenVINO R5

c. Movidius NCS

 

1. ./classification_sample -i /home/action/code/caffe/data_two/a.BMP -m /home/action/code/caffe/data_two/optmi_FP16/caffenet_train_iter_300.xml -d MYRIAD

4 0.6923828 label #4
1 0.1995850 label #1
0 0.1069336 label #0
3 0.0008817 label #3
2 0.0000000 label #2


2. ./classification_sample -i /home/action/code/caffe/data_two/a.BMP -m /home/action/code/caffe/data_two/optmi_FP32/caffenet_train_iter_300.xml -d CPU

4 0.8377678 label #4
1 0.1036089 label #1
0 0.0577552 label #0
3 0.0008420 label #3
2 0.0000262 label #2

 

3. /home/action/code/caffe/caffe-1.0/bui/examples/cpp_classification/classification ./model_guai/deploy.prototxt ./caffenet_train_iter_300.caffemodel ./mean/mean.binaryproto ./test.txt ./a.BMP

 

---------- Prediction for ./a.BMP ----------
0.6189 - "horse"
0.2696 - "daxiang"
0.1098 - "bus"
0.0017 - "flower"
0.0000 - "dynamic"

 

The results of the three runs are different. Can you give me a hand? Thank you!


Hello Yang,

Are you seeing the issue with other test inputs? One sample may not be statistically enough to draw conclusions; perhaps it is an outlier. What results do you see if you use the validation workflow described in

computer_vision_sdk_2018.5.445/deployment_tools/documentation/_samples_validation_app_README.html

You could try it with -t "C" for classification and a subset of your validation data to get a better picture; a rough sketch of such a run is below.
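For example (a minimal sketch only; the validation_app binary name, the -t/-i/-m/-d options, and the image-list placeholder are my assumptions here, so please check the README above for the exact options in your release):

# compare FP32 IR on CPU against FP16 IR on MYRIAD over the same labeled images
./validation_app -t C -i <folder or .txt list of labeled validation images> -m /home/action/code/caffe/data_two/optmi_FP32/caffenet_train_iter_300.xml -d CPU
./validation_app -t C -i <same labeled images> -m /home/action/code/caffe/data_two/optmi_FP16/caffenet_train_iter_300.xml -d MYRIAD

Comparing the accuracy numbers of the two runs over many images will tell you whether the MYRIAD path is really off, or whether a.BMP is simply an outlier.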

In some cases I had similar issues and realized I was not scaling the input properly. The scale and mean_values can cause problems if they are wrong when you run python3 mo_caffe.py. For example:

--mean_values [123.68,116.779,103.939] --scale 255
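To make that concrete, here is a sketch of a full conversion command, assuming the deploy.prototxt / caffemodel paths from your post and output directories matching your optmi_FP32 / optmi_FP16 folders; the mean and scale numbers are only illustrative and must match the preprocessing (your mean.binaryproto) that Caffe applies at inference time:

# FP32 IR for the CPU run (paths and mean/scale values are placeholders, adjust to your setup)
python3 mo_caffe.py --input_proto /home/action/code/caffe/model_guai/deploy.prototxt --input_model /home/action/code/caffe/caffenet_train_iter_300.caffemodel --data_type FP32 --output_dir /home/action/code/caffe/data_two/optmi_FP32 --mean_values [123.68,116.779,103.939] --scale 255

# FP16 IR for the MYRIAD run: same command with --data_type FP16 and the optmi_FP16 output dir

If the mean/scale baked into the IR differ from what cpp_classification applies via mean.binaryproto, the OpenVINO and Caffe results will not match even on CPU.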

regards,

nikos

 

 
