Intel® Distribution of OpenVINO™ Toolkit

Wrong Results with Graph Translated from Caffe Model

idata
Employee

I'm working on translating ONet (the final stage of MTCNN) into an NCS graph.

 

The model was translated into an NCS graph without any errors, but when I test it, the results are far from those of the original Caffe model. Something in the computation must be getting changed during translation, but I don't know how to track it down.

 

The original Caffe model files are 48net.caffemodel and 48net.prototxt.

 

I'm not sure whether the following compiler messages point to the problem:

 

mvNCCompile v02.00, Copyright @ Intel Corporation 2017

 

Fusing Pad and Convolution2D
Fusing BatchNorm and Scale after Convolution
Replacing BN with Bias&Scale
Fusing Permute and Flatten
Fusing Eltwise and Relu
Eliminate layers that have been parsed as NoOp
Evaluating input and weigths for each hw layer
-------------------------------------- ---------------------- ----------------------
# Network Input tensors ['data#21']
# Network Output tensors ['conv6-3#30']
Blob generated

 

It seems that some ops are fused. Would that change the computation compared to the original Caffe model?
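To answer that for myself, I'm planning to compare the two backends directly on the same normalized input and look at the raw difference. Here is a minimal sketch of that check (paths and blob names match my test code below; the random input and the channel-first layout fed to the NCS are my assumptions, so treat that part with care):

import numpy as np
import caffe
from mvnc import mvncapi

# Reference model in Caffe (CPU is enough for a numeric check).
net = caffe.Net("tmp/48net.prototxt", "tmp/48net.caffemodel", caffe.TEST)

# Compiled graph on the first NCS device.
device = mvncapi.Device(mvncapi.enumerate_devices()[0])
device.open()
with open("tmp/graph", "rb") as f:
    blob = f.read()
graph = mvncapi.Graph("check")
input_fifo, output_fifo = graph.allocate_with_fifos(device, blob)

# Same already-normalized input for both backends.
# Assumption: channel-first (3, 48, 48); if the NCS expects channel-last,
# a transpose would be needed before queueing.
inp = np.random.uniform(-1.0, 1.0, (3, 48, 48)).astype(np.float32)

net.blobs['data'].data[...] = inp
caffe_out = net.forward()['conv6-3'][0].flatten()

graph.queue_inference_with_fifo_elem(input_fifo, output_fifo, inp, None)
ncs_out, _ = output_fifo.read_elem()

print("max abs diff:", np.abs(caffe_out - np.asarray(ncs_out).flatten()).max())

input_fifo.destroy()
output_fifo.destroy()
graph.destroy()
device.close()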

 

Here is my test code.

 

import cv2
import numpy as np
import caffe

# Reference model in Caffe.
caffe.set_device(0)
caffe.set_mode_gpu()
ONet = caffe.Net("tmp/48net.prototxt", "tmp/48net.caffemodel", caffe.TEST)

# Compiled graph on the NCS.
from mvnc import mvncapi

devices = mvncapi.enumerate_devices()
if len(devices) > 0:
    device = mvncapi.Device(devices[0])
    device.open()
    with open("tmp/graph", mode="rb") as f:
        data = f.read()
    graph = mvncapi.Graph('graph')
    input_fifo, output_fifo = graph.allocate_with_fifos(device, data)
    print("inside")
else:
    print("no device")


def test():
    with open("../align_label.txt") as f:
        lines = f.readlines()
        for line in lines:
            words = line.strip().split()
            img_path = "../Images/" + words[0]
            raw_img = cv2.imread(img_path)
            showvalidate(raw_img, words)
            process(raw_img)
            process_ncs(raw_img)


def testwithoutvalidate():
    img_path = "test.jpg"
    img = cv2.imread(img_path)
    process(img)
    process_ncs(img)


def process(raw_img):
    # Run the original Caffe model and draw its 5 landmark points.
    (h, w, c) = raw_img.shape
    test_img = raw_img.copy()
    inp_img = cv2.resize(test_img.copy(), (48, 48))
    inp_img = np.swapaxes(inp_img, 0, 2)
    inp_img = (inp_img - 127.5) / 127.5
    ONet.blobs['data'].data[...] = inp_img
    out = ONet.forward()
    # test_pts = out['conv6-3'][0]
    test_pts = ONet.blobs['conv6-3'].data[0]
    print("test_shape:", test_pts.shape)
    print("test:", test_pts)
    for i in range(5):
        cv2.circle(test_img, (int(test_pts[i + 0] * w), int(test_pts[i + 5] * h)), 2, (0, 255, 0))
    cv2.imshow("test", test_img)
    cv2.waitKey(0)


def process_ncs(raw_img):
    # Run the compiled graph on the NCS and draw its 5 landmark points.
    (h, w, c) = raw_img.shape
    test_img = raw_img.copy()
    inp_img = cv2.resize(test_img.copy(), (48, 48))
    inp_img = (inp_img - 127.5) / 127.5
    inp_img = inp_img.astype(np.float32)
    graph.queue_inference_with_fifo_elem(input_fifo, output_fifo, inp_img, None)
    test_pts, _ = output_fifo.read_elem()
    print("test2:", test_pts)
    for i in range(5):
        cv2.circle(test_img, (int(test_pts[i + 0] * w), int(test_pts[i + 5] * h)), 2, (0, 255, 0))
    cv2.imshow("test2", test_img)
    cv2.waitKey(0)


def showvalidate(raw_img, words):
    # Draw the ground-truth landmarks from the label file.
    (h, w, c) = raw_img.shape
    validate_pts = words[1:]
    print("validate:", validate_pts)
    validate_img = raw_img.copy()
    for i in range(5):
        cv2.circle(validate_img, (int(float(validate_pts[i * 2 + 0]) * w), int(float(validate_pts[i * 2 + 1]) * h)), 2, (0, 255, 0))
    cv2.imshow("validate", validate_img)
    cv2.waitKey(0)


try:
    # test()
    testwithoutvalidate()
except Exception as e:
    print(e)
finally:
    input_fifo.destroy()
    output_fifo.destroy()
    graph.destroy()
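One detail I want to flag for anyone reproducing this: the two preprocessing paths above are not identical (the Caffe path swaps axes, the NCS path does not), and I'm not sure which layout the compiled graph expects. A standalone sketch of just that part, pulled straight from the two functions above, so the shapes are easy to compare:

import cv2
import numpy as np

raw_img = cv2.imread("test.jpg")

# Preprocessing as in process(): resize, swap to channel-first, normalize.
caffe_inp = cv2.resize(raw_img, (48, 48))
caffe_inp = np.swapaxes(caffe_inp, 0, 2)
caffe_inp = (caffe_inp - 127.5) / 127.5

# Preprocessing as in process_ncs(): resize and normalize only, channel-last.
ncs_inp = cv2.resize(raw_img, (48, 48))
ncs_inp = ((ncs_inp - 127.5) / 127.5).astype(np.float32)

print("caffe input shape:", caffe_inp.shape)  # (3, 48, 48)
print("ncs   input shape:", ncs_inp.shape)    # (48, 48, 3)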
1 Reply
idata
Employee

OK, I found the information in the NCSDK 2 release notes.

 

So PReLU is supported now, but MTCNN still gives unexpected results.

 

I still want to ask: @Tome_at_Intel, have you found out what's wrong with MTCNN? It seems the NCS fuses some ops; could that be the problem? All the layers in ONet are supported (the sketch below is how I checked).
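For reference, this is roughly how I verified that ONet only uses supported layers: listing every layer type via pycaffe and comparing it against the NCSDK supported-layer list (paths are from my setup):

import caffe

# Print every layer name and type in ONet so they can be checked
# against the NCSDK supported-layer list.
net = caffe.Net("tmp/48net.prototxt", "tmp/48net.caffemodel", caffe.TEST)
for name, layer in zip(net._layer_names, net.layers):
    print(name, layer.type)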