idata
Community Manager
372 Views

about mvnc v2 read_elem

Hi,

In mvnc v2, I found that fifo.read_elem() replaces graph.GetResult(), but the result is not the same.

 

In my Python program with v2, I tried code like this:

graph.queue_inference_with_fifo_elem(fifo_in, fifo_out, img.astype(numpy.float32), None)
output, userobj = fifo_out.read_elem()

 

When I print output[0] through output[6], every value is 0.

 

But in a program with v1, I saw an example in ncappzoo that used this:

ssd_mobilenet_graph.LoadTensor(resized_image.astype(numpy.float16), None)
output, userobj = ssd_mobilenet_graph.GetResult()

 

There, output[0] is the number of boxes.

 

So my problem is: how can I get the number of boxes, or the number of items, in my image?

 

thx

10 Replies
idata
Community Manager

@zhaoanguo222 GetResult() and read_elem() are essentially the same functionality-wise. That being said, the output you receive from GetResult() or read_elem() is based on the output defined by the model. Which model are you working with? Can you share your code so that I can help you debug the issue? Thanks.

idata
Community Manager

# coding=utf-8

from mvnc import mvncapi as mvnc
import cv2
import numpy

Graph_buffer = None
Graph_Path = './tensorflow_graph/inception_v1/graph'
Image_Name = 'test.jpg'
Label = None
NUM_PREDICTIONS = 5
CONFIDANCE_THRESHOLD = 0.50

def open_device():
    device_list = mvnc.enumerate_devices()
    if len(device_list) > 0:
        device = mvnc.Device(device_list[0])
        print(device.get_option(mvnc.DeviceOption.RO_DEVICE_NAME))
        device.open()
        print('open device')
        return device
    else:
        print('no device')
        quit()

def close_device(device):
    device.close()
    device.destroy()
    print('close device')

def close_graph_v2(graph):
    graph.destroy()

def close_fifo(fifo_in, fifo_out):
    fifo_in.destroy()
    fifo_out.destroy()

def open_graph_v2(device):
    # graph, fifo_in, fifo_out
    with open(Graph_Path, 'rb') as f:
        blob = f.read()
    print('open graph')
    graph = mvnc.Graph(Graph_Path)
    fifo_in, fifo_out = graph.allocate_with_fifos(device, blob)
    return graph, fifo_in, fifo_out

def infer_img_v2(graph, img, fifo_in, fifo_out):
    Label = getlabels()
    graph.queue_inference_with_fifo_elem(fifo_in, fifo_out, img.astype(numpy.float32), None)
    output, userobj = fifo_out.read_elem()
    graph.queue_inference_with_fifo_elem(fifo_in, fifo_out, img.astype(numpy.float32), None)
    output, userobj = fifo_out.read_elem()
    itemnum = int(output[0])
    order = output.argsort()[::-1][:NUM_PREDICTIONS]
    print(itemnum)
    print("================")
    inference_time = graph.get_option(mvnc.GraphOption.RO_TIME_TAKEN)
    print("Execution time: " + str(numpy.sum(inference_time)) + "ms")
    print("--------------------------------------------------------------")
    for i in range(0, NUM_PREDICTIONS):
        print("%3.1f%%\t" % (100.0 * output[order[i]]) + Label[order[i]])
    print("==============================================================")

def whiten_image(source_image):
    source_mean = numpy.mean(source_image)
    source_standard_deviation = numpy.std(source_image)
    std_adjusted = numpy.maximum(source_standard_deviation, 1.0 / numpy.sqrt(source_image.size))
    whitened_image = numpy.multiply(numpy.subtract(source_image, source_mean), 1 / std_adjusted)
    return whitened_image

def getlabels():
    categories = []
    with open('./categories.txt', 'r') as f:
        for line in f:
            cat = line.split('\n')[0]
            if cat != 'classes':
                categories.append(cat)
    print('Number of categories:', len(categories))
    return categories

def main():
    print('main begin')
    device = open_device()
    Graph, fifo_in, fifo_out = open_graph_v2(device)
    # get img
    img = cv2.imread(Image_Name, cv2.IMREAD_COLOR)
    # set img
    mean = 128
    std = 1.0 / 128.0
    img = cv2.resize(img, (224, 224))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = whiten_image(img)
    for i in range(3):
        img[:, :, i] = (img[:, :, i] - mean) * std
    # infer img
    infer_img_v2(Graph, img, fifo_in, fifo_out)
    # clean up
    close_fifo(fifo_in, fifo_out)
    close_graph_v2(Graph)
    close_device(device)

if __name__ == '__main__':
    main()

 

@Tome_at_Intel

I used the TensorFlow inception_v1 model.

This is my code, and there are four animals in my test image.

 

thx

idata
Community Manager

@zhaoanguo222 The output you are receiving from inception_v1 is the set of confidence values for all 1000 categories. These confidence values are floating point numbers from 0.0 to 0.99, so if you cast one to an int, you will get a zero value. To get the item number (category number), you can add a print("top 5 categories: ", order) after the order = output.argsort()[::-1][:NUM_PREDICTIONS] line in the infer_img_v2() function, and you will see the category/item numbers of the top 5 detected items.
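To illustrate the post-processing described above, here is a minimal sketch that needs no NCS hardware: the 1000-element score vector is hand-built with made-up scores, standing in for what fifo_out.read_elem() would return, and the variable names mirror the code earlier in the thread.

```python
import numpy

NUM_PREDICTIONS = 5

# Stand-in for the 1000-element confidence vector from fifo_out.read_elem();
# the indices and scores below are made up for illustration.
output = numpy.zeros(1000, dtype=numpy.float32)
output[281] = 0.62
output[285] = 0.21
output[282] = 0.08

# Casting a confidence in [0.0, 1.0) to int always yields 0 -- this is why
# int(output[0]) printed 0 in the original program.
print(int(output[281]))

# Sort indices by descending score; these indices are the category numbers
# that line up with categories.txt.
order = output.argsort()[::-1][:NUM_PREDICTIONS]
print("top 5 categories: ", order)
for i in order[:3]:
    print("category %d, confidence %.2f" % (i, output[i]))
```

The key point is that output holds scores indexed by category, so the category/item number comes from the array index, not from the value stored in the array.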

idata
Community Manager

@zhaoanguo222 Also, I had to remove the line img = whiten_image(img) in main() to get more accurate results.

idata
Community Manager

@Tome_at_Intel In my mind, output[0] is the number of items (boxes) the program found. Am I right? Or do I need to find the top 5 items first, and then get their item numbers?

 

thank you very much
idata
Community Manager

@zhaoanguo222 For inception_v1, the output array holds the confidence values for all 1000 categories, so it is a 1000-element array: output[0] holds the confidence score for category #0 (in categories.txt), output[1] holds the confidence score for category #1, and so on. The item numbers correspond to the indices of the output array.

 

It makes sense to sort the array by highest score if you want to know what you are detecting. If you want to know the category or item number for the top 5 scores, you just have to do some minor post-processing: sort the results by score (order = output.argsort()[::-1][:5]). After doing this, the order list holds the indices of the 5 highest scores, and these values coincide with the category/item numbers from categories.txt.

idata
Community Manager

@Tome_at_Intel OK, I get it. And there is another question: in ncappzoo, I found this:

 

Get the result from the NCS

 

output, userobj = ssd_mobilenet_graph.GetResult()
# a. First fp16 value holds the number of valid detections = num_valid.
# b. The next 6 values are unused.
# c. The next (7 * num_valid) values contain the valid detections data.
#    Each group of 7 values will describe an object/box. These 7 values in order are:
#    0: image_id (always 0)
#    1: class_id (this is an index into labels)
#    2: score (this is the probability for the class)
#    3: box left location within image as number between 0.0 and 1.0
#    4: box top location within image as number between 0.0 and 1.0
#    5: box right location within image as number between 0.0 and 1.0
#    6: box bottom location within image as number between 0.0 and 1.0
# number of boxes returned

 

So if I want to get results like this, should I change my model? I think this format is easy to use.

idata
Community Manager

@zhaoanguo222 If you need to detect and localize one or more objects in an image, then yes, you would need to use an object detector like SSD MobileNet or Tiny Yolo. Object detection models usually return bounding box coordinates along with a class id and score, like the example you provided above.
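For reference, parsing the SSD MobileNet output layout quoted earlier can be sketched like this; the output array below is hand-built for demonstration (two fake detections) rather than read from the NCS, but the slicing follows the 7-values-per-box layout described in the ncappzoo comments.

```python
import numpy

# Hand-built stand-in for ssd_mobilenet_graph.GetResult() output:
# output[0] = number of valid detections, output[1:7] unused, then
# 7 values per box: image_id, class_id, score, left, top, right, bottom.
output = numpy.zeros(7 + 7 * 2, dtype=numpy.float32)
output[0] = 2                                            # two boxes
output[7:14]  = [0, 12, 0.91, 0.10, 0.20, 0.50, 0.60]    # fake detection 1
output[14:21] = [0,  7, 0.76, 0.55, 0.30, 0.90, 0.80]    # fake detection 2

num_valid = int(output[0])   # number of boxes returned
boxes = []
for i in range(num_valid):
    base = 7 + i * 7
    image_id, class_id, score, left, top, right, bottom = output[base:base + 7]
    boxes.append((int(class_id), float(score), (left, top, right, bottom)))

for class_id, score, box in boxes:
    print("class %d, score %.2f, box %s" % (class_id, score, box))
```

Here output[0] really is a count (unlike the inception_v1 classifier output), so casting it to int is the right way to get the number of boxes.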

idata
Community Manager

@Tome_at_Intel I see, I'll try this. You answered my question perfectly. Thank you very much!

idata
Community Manager

@zhaoanguo222 You're very welcome.