Intel® Distribution of OpenVINO™ Toolkit

MNIST network result is different from Movidius's

idata
Employee

Hi,

 

I am now running the MNIST network on the Movidius stick, but I get a different result from pycaffe's.

 

The Python code used to set up and run the graph is the following:

from mvnc import mvncapi as mvnc   # imports added for completeness; not shown in the original post
import cv2
import numpy

# 'device' is assumed to be an already-opened mvnc.Device (device setup omitted in the original post)
network_blob = '../networks/mnist/graph'  # compile succeeded
dim = (28, 28)

with open(network_blob, mode='rb') as f:
    blob = f.read()

graph = device.AllocateGraph(blob)
graph.SetGraphOption(mvnc.GraphOption.ITERATIONS, 1)
iterations = graph.GetGraphOption(mvnc.GraphOption.ITERATIONS)

img = cv2.imread('../images/two-0.jpg')
img = cv2.resize(img, dim)
inp = (255 - img.astype(numpy.float16)) / 255

graph.LoadTensor(inp, 'user object')
output, userobj = graph.GetResult()
print(output)
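
As a side note, cv2.imread returns a 3-channel BGR array by default, while the prototxt below declares a single-channel 1 x 28 x 28 input. A quick shape/dtype check along the following lines may help confirm what is actually being handed to LoadTensor; this is only a sketch, and the cv2.IMREAD_GRAYSCALE flag is a suggestion rather than something used in the original code:

import cv2
import numpy

dim = (28, 28)
# cv2.imread defaults to a 3-channel BGR image; IMREAD_GRAYSCALE gives a
# single channel, matching the 1 x 28 x 28 input declared in the prototxt.
img = cv2.imread('../images/two-0.jpg', cv2.IMREAD_GRAYSCALE)
img = cv2.resize(img, dim)
inp = (255 - img.astype(numpy.float16)) / 255
print(inp.shape, inp.dtype)  # expect (28, 28) and float16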

 

Here is the prototxt for the network:

name: "LeNet"
input: "data"
input_shape {
  dim: 1
  dim: 1
  dim: 28
  dim: 28
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "prob"
  type: "Softmax"
  bottom: "ip2"
  top: "prob"
}
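
For context, a graph file like '../networks/mnist/graph' is produced by the NCSDK compiler from this prototxt and its caffemodel. A compile command along the following lines would match the files mentioned in this post (the exact file names and shave count here are assumptions, not taken from the original compile step):

mvNCCompile lenet.prototxt -w lenet_iter_10000.caffemodel -s 12 -in data -on prob -o ../networks/mnist/graph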

 

"inp" values from movidius code are almost the same as pycaffe's which is read by "a" in following code;

 

 

#!/usr/bin/env python
import numpy as np
import pandas as pd
import os
import argparse
import time
import caffe

net = caffe.Classifier('lenet.prototxt', 'lenet_iter_10000.caffemodel', image_dims=(28, 28))
caffe._caffe.set_mode_cpu()
net.transformer.set_raw_scale('data', 1)

a = 1 - caffe.io.load_image('three-0.jpg', color=False)  # maximum 1
print(a)

scores = net.predict([a], oversample=False)
print(scores)
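
One way to make the "almost the same" comparison concrete is to dump both input arrays to disk and compare them numerically. This is only a minimal sketch; the .npy file names are made up for illustration, and it assumes an np.save(...) call has been added to each script just before inference:

import numpy as np

# Assumed to have been saved beforehand:
#   Movidius script: np.save('inp_movidius.npy', inp)
#   pycaffe script:  np.save('inp_pycaffe.npy', np.squeeze(a))
inp_mvnc = np.load('inp_movidius.npy').astype(np.float32)
inp_caffe = np.load('inp_pycaffe.npy').astype(np.float32)

# Check that the shapes actually match (the prototxt declares a 1 x 28 x 28 input)
print(inp_mvnc.shape, inp_caffe.shape)
print('max abs diff:', np.abs(inp_mvnc - inp_caffe).max())
print('close:', np.allclose(inp_mvnc, inp_caffe, atol=1e-2))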

 

 

If somebody can give me good advice, I can send the trained parameters.

 

Could somebody explain why I get such a different result from pycaffe?

 

Thanks

idata
Employee

@KKK Regarding your results from LeNet, could you share what results you got and what results you expected? If you can also provide the caffemodel file, that would be great. Thanks.

idata
Employee

@KKK Additionally, the Movidius NCSDK includes an example called top5_over_a_dataset.py that uses MNIST and LeNet. It may be helpful to check it out.
