Intel® Distribution of OpenVINO™ Toolkit

MvncStatus Error

idata
Employee
767 Views

Hello all,

 

I am experimenting with the application Video_face_matcher_multiFace which uses Facenet.

 

I made some changes in the code which reads images. And I'm having the following error:

 

Traceback (most recent call last):
  File "./matcher.py", line 349, in <module>
    sys.exit(main())
  File "./matcher.py", line 322, in main
    out, _ = run_inference(validated_image, graph)
  File "./matcher.py", line 57, in run_inference
    out, userobj = get_graph_result(facenet_graph, im.astype(numpy.float16))
  File "./matcher.py", line 34, in get_graph_result
    graph.LoadTensor(img, None)
  File "/usr/local/lib/python3.5/dist-packages/mvnc/mvncapi.py", line 253, in LoadTensor
    raise Exception(Status(status))
Exception: mvncStatus.ERROR

 

Graph recompilation didn't help. Any ideas what could have caused this error?

 

Here is the part of main() which I changed:

 

def main():
    use_camera = True

    # Get a list of ALL the sticks that are plugged in
    # we need at least one
    devices = mvnc.EnumerateDevices()
    if len(devices) == 0:
        print('No NCS devices found')
        quit()

    # Pick the first stick to run the network
    device = mvnc.Device(devices[0])

    # Open the NCS
    device.OpenDevice()

    # The graph file that was created with the ncsdk compiler
    graph_file_name = GRAPH_FILENAME

    # read in the graph file to memory buffer
    with open(graph_file_name, mode='rb') as f:
        graph_in_memory = f.read()

    # create the NCAPI graph instance from the memory buffer containing the graph file.
    graph = device.AllocateGraph(graph_in_memory)

    face_vectors = []

    for person in white_list:
        person_imgs = os.listdir(os.path.join('./validated_images/', person))
        person_vectors = []
        tmp = []

        for i in person_imgs:
            validated_image = cv2.imread(os.path.join("./validated_images/", person, i))
            out, _ = run_inference(validated_image, graph)
            if len(out) != 0:
                person_vectors.append(out)
                tmp.append(out)  # numpy.ndarray.flatten(out))
        tmp = numpy.array(tmp).astype('float32')

        # Use k-means to separate data into clusters:
        k = round(max(1, len(tmp) / 4))
        compactness, labels, centers = cv2.kmeans(
            tmp, k, None,
            (cv2.TERM_CRITERIA_COUNT | cv2.TERM_CRITERIA_EPS, 1000, 0.001),
            10, cv2.KMEANS_PP_CENTERS)
        face_vec = get_kmeans_clusters(person_vectors, labels, len(centers))
        face_vectors.append(face_vec)
        # face_vector = numpy.zeros(valid_output[0].shape)

 

In fact, the graph loads any two tensors without a problem and then fails to load any third tensor, no matter which pictures I give it.
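A minimal way to narrow this down (a hypothetical debug wrapper, not part of the posted matcher.py, relying only on the LoadTensor/GetResult calls already used there) is to log the shape and dtype of every tensor right before LoadTensor, since a tensor that does not match the compiled graph's expected input is a common cause of mvncStatus.ERROR:

# Hypothetical debug wrapper; assumes matcher.py's existing numpy import.
# It only adds a print of what is actually handed to LoadTensor.
def get_graph_result_debug(graph, img):
    tensor = img.astype(numpy.float16)
    print("LoadTensor input: shape =", tensor.shape, "dtype =", tensor.dtype)
    graph.LoadTensor(tensor, None)
    return graph.GetResult()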

 

Thank you very much in advance.

6 Replies
idata
Employee
431 Views

@jelena It seems like there isn't an img tensor, based on your log (below). get_graph_result is called with im, but img is what is being passed to LoadTensor()? I don't see your entire code, so I am just assuming this is the error.

 

out, userobj = get_graph_result(facenet_graph, im.astype(numpy.float16))
  File "./matcher.py", line 34, in get_graph_result
    graph.LoadTensor(img, None)
idata
Employee
431 Views

@Tome_at_Intel thanks for your response,

 

the image is being passed to LoadTensor like this:

 

def get_graph_result(graph, img):
    graph.LoadTensor(img, None)
    print("Loaded")
    return graph.GetResult()

 

The function get_graph_result is called in a loop: any first two images (the first two iterations) are processed successfully, and any third image causes this error, no matter which images I put in my data folder.

idata
Employee
431 Views

@jelena Please make sure you are running the preprocessing steps on every image you read in, just like the original app does.

idata
Employee
431 Views

@Tome_at_Intel thank you, and I apologize for taking so long to respond. I do all the preprocessing steps just like the original app:

 

# Detect a face and crop the rectangle:

108 def get_face_rect(image):
109     detector = dlib.get_frontal_face_detector()
110     dets = detector(image, 1)
111     roi_color = []
112     faces = []
113
114     for d in dets:
115         cv2.rectangle(image, (d.left(), d.top()), (d.right(), d.bottom()), (255, 0, 0), 2)
116         # roi_gray = gray[y:y+h, x:x+w]
117         roi = image[d.top():d.bottom(), d.left():d.right()]
118         faces.append((d.left(), d.top(), d.right()-d.left(), d.bottom()-d.top()))
119         roi_color.append(roi)
120     return roi_color, faces
121

# Do the preprocessing:

125 def preprocess_image(src):
126     # scale the image
127     NETWORK_WIDTH = 160
128     NETWORK_HEIGHT = 160
129     preprocessed_images, face_rect = get_face_rect(src)
130
131     for im in preprocessed_images:
132         im = cv2.resize(im, (NETWORK_WIDTH, NETWORK_HEIGHT))
133
134         # convert to RGB
135         im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB)
136
137         # whiten
138         im = whiten_image(im)
139     # return the preprocessed image
140     return preprocessed_images, face_rect
141

33 def get_graph_result(graph, img):
34     graph.LoadTensor(img, None)
35     print("Loaded")
36     return graph.GetResult()
37

44 def run_inference(image_to_classify, facenet_graph):
45
46     # get a resized version of the image that is the dimensions
47     # SSD Mobile net expects
48     resized_images, face_rect = preprocess_image(image_to_classify)
49
50     # ___________________________
51     # Send the image to the NCS
52     # ___________________________
53     output = []
54     print("size = ", len(resized_images))
55     for im in resized_images:
56         cv2.imshow('', im)
57         cv2.waitKey(0)
58         out, userobj = get_graph_result(facenet_graph, im.astype(numpy.float16))
59         print('OK')
60         output.append(out)
61     return output, face_rect
62

 

Thank you in advance for any help!

idata
Employee
431 Views

@jelena I see that you are running face detection (get_face_rect()) on the source frame, cropping and saving the detected faces to preprocessed_images, and then trying to perform preprocessing on the detected faces in your for loop. You aren't actually doing any preprocessing, because in lines 132-138 you assign the preprocessed data to the loop variable (im) without writing it back anywhere, and then return the original preprocessed_images array.
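One possible fix (a sketch based on the code above, not necessarily the original app's exact version) is to collect the resized, RGB-converted, and whitened crops into a new list instead of reassigning the loop variable:

# Sketch of a corrected preprocess_image(); assumes the get_face_rect and
# whiten_image helpers already defined in matcher.py.
def preprocess_image(src):
    NETWORK_WIDTH = 160
    NETWORK_HEIGHT = 160
    detected_faces, face_rect = get_face_rect(src)

    preprocessed_images = []
    for im in detected_faces:
        im = cv2.resize(im, (NETWORK_WIDTH, NETWORK_HEIGHT))
        im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB)  # convert BGR -> RGB
        im = whiten_image(im)                     # whiten
        preprocessed_images.append(im)            # keep the processed copy

    # return the processed crops, not the raw ones
    return preprocessed_images, face_rect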

 

Keep in mind that the NCS can only perform inference on one image at a time. If you are using NCSDK v2.xx.xx, you can queue up inferences and read the results back in a FIFO manner.
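For reference, queued inference with the NCSDK v2 Python API looks roughly like the sketch below (mvncapi 2.x calls; the graph file name 'facenet.graph' and the preprocessed_images list are assumptions for illustration, not code from this thread):

# Hedged NCSDK v2 sketch: open a device, allocate the graph with FIFOs,
# then queue one inference per image and read each result back.
from mvnc import mvncapi
import numpy

device = mvncapi.Device(mvncapi.enumerate_devices()[0])
device.open()

with open('facenet.graph', mode='rb') as f:
    graph_buffer = f.read()

graph = mvncapi.Graph('facenet')
input_fifo, output_fifo = graph.allocate_with_fifos(device, graph_buffer)

results = []
for im in preprocessed_images:
    # queue the inference, then read its result; the default FIFOs only
    # hold a couple of elements, so read back as you go
    graph.queue_inference_with_fifo_elem(
        input_fifo, output_fifo, im.astype(numpy.float32), None)
    output, _ = output_fifo.read_elem()
    results.append(output)

# clean up
input_fifo.destroy()
output_fifo.destroy()
graph.destroy()
device.close()
device.destroy()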

idata
Employee
431 Views

@Tome_at_Intel thank you, you are right! I was absolutely sure that such a loop gave me a reference to each element rather than a copy in Python…

 

Thanks for the note about image inference, too.
