Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Exception: Status.INVALID_DATA_LENGTH

idata
Employee
944 Views

@Tome_at_Intel @PINTO Hi, when I try to run live-image-classifier.py with GoogLeNet, it returns the error below:

 

python3 live-image-classifier.py --graph ../../caffe/GoogLeNet/graph --labels ../../data/ilsvrc12/synset_words.txt

E: [ 0] ncFifoWriteElem:2570 input tensor length (618348) doesnt match expected value (602112)
Traceback (most recent call last):
  File "live-image-classifier.py", line 191, in <module>
    main()
  File "live-image-classifier.py", line 137, in main
    infer_image( graph, img, frame, fifo_in, fifo_out )
  File "live-image-classifier.py", line 97, in infer_image
    graph.queue_inference_with_fifo_elem( fifo_in, fifo_out, img.astype(numpy.float32), None )
  File "/usr/local/lib/python3.5/dist-packages/mvnc/mvncapi.py", line 769, in queue_inference_with_fifo_elem
    raise Exception(Status(status))
Exception: Status.INVALID_DATA_LENGTH

 

By the way, it runs well in the directory ncappzoo/caffe/GoogLeNet.
11 Replies

@luna When running GoogLeNet with live-image-classifier, be sure to change the default input dimensions to 224x224 using the option --dim 224 224. Most of the Caffe networks like AlexNet, SqueezeNet and the Age/GenderNets use 227x227, so the app's default dimensions are set to 227x227. The command you should try is:

python3 live-image-classifier.py --graph ../../caffe/GoogLeNet/graph --labels ../../data/ilsvrc12/synset_words.txt --dim 224 224
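(Editor's note, not from the original reply: the two byte counts in the error message can be reproduced with NumPy, assuming a 3-channel float32 tensor, which is what the app sends to the FIFO.)

```python
import numpy as np

# The error's byte counts line up with float32 BGR tensors
# (3 channels x 4 bytes per float32 element).
sent = np.zeros((227, 227, 3), dtype=np.float32)      # app default: 227x227
expected = np.zeros((224, 224, 3), dtype=np.float32)  # GoogLeNet input: 224x224

print(sent.nbytes)      # 618348 -- the "input tensor length" in the error
print(expected.nbytes)  # 602112 -- the "expected value"
```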


@Tome_at_Intel thanks very much, now it works very well!


@Tome_at_Intel

 

I used SSD_MobileNet with an input dimension of 300x300 on V1 and it worked fine. After switching to V2 (2.05), I got the same error:

 

ncFifoWriteElem:2570 input tensor length (540000) doesnt match expected value (1080000). I've tried different dimensions, such as 425x425, but cannot hit the exact value 1080000.

 

Thanks

@Sramctc

 

You are using a graph created with "input size = 300x300 and shave cores = 12".

 

Probably,

 

Your graph = 300 x 300 x 12 (shave cores) = 1,080,000

 

Please try recreating the graph file in the v2 environment.

 

$ mvNCCompile aaa.prototxt -w bbb.caffemodel -s 12

 

Or you need to review the vertical and horizontal sizes of the input.

 

For example:

input size = 200x200 x 12 = 480,000
input size = 300x300 x 6 = 540,000
input size = 400x400 x 1 = 160,000
input size = 224x224 x 12 = 602,112
input size = 224x224 x 6 = 301,056
input size = 227x227 x 12 = 618,348
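(Editor's note: the products in the list above can be verified directly; this quick arithmetic check is not part of the original reply.)

```python
# Sanity-check the size products listed above: width * height * factor.
sizes = [
    (200, 200, 12, 480_000),
    (300, 300, 6, 540_000),
    (400, 400, 1, 160_000),
    (224, 224, 12, 602_112),
    (224, 224, 6, 301_056),
    (227, 227, 12, 618_348),
]
for w, h, factor, expected in sizes:
    assert w * h * factor == expected
print("all size products check out")
```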

@PINTO

 

Dear PINTO, thanks. I used SDK version 1 before (graph.LoadTensor(image, None), where image is 300x300) and it worked fine. Now I would like to use version 2 (2.05) with graph.queue_inference_with_fifo_elem(input_fifo, output_fifo, image, None) (?) according to the V2 documentation ("convenience function"), but it fails :(


@Sramctc

 

The v1 graph file and v2 graph file are incompatible.

 

You must recreate the graph file in the v2 environment.

 

 

using graph.queue_inference_with_fifo_elem(input_fifo,output_fifo,image,None) (?)

 

 

That usage is correct.


@PINTO

 

Thanks, I did recreate the graph from scratch, because an error occurred when allocating the previously used V1 graph with graph.allocate_with_fifos. I am not sure whether the issue is related to image preprocessing.

 

processedImage = image.astype(np.float16) => input tensor length mismatched (504000 vs 1080000)

processedImage = image.astype(np.float32) => DeprecationWarning: The binary mode of fromstring is deprecated, as it behaves surprisingly on unicode inputs. Use frombuffer instead. tensor = numpy.fromstring(tensor.raw, dtype=numpy.float32)

processedImage = numpy.fromstring(image, dtype=numpy.float32) => input tensor length mismatched (2160000 vs 1080000)

processedImage = numpy.fromstring(image, dtype=numpy.float16) => input tensor length mismatched (2160000 vs 1080000)

 

:(
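(Editor's note, not from the thread: the 540,000 vs 1,080,000 mismatch reported earlier is exactly a float16 vs float32 difference for a 300x300 3-channel tensor, which matches Tome_at_Intel's later advice that the FIFOs default to float32.)

```python
import numpy as np

# A 300x300 BGR image has 300 * 300 * 3 = 270,000 elements.
img = np.zeros((300, 300, 3))

print(img.astype(np.float16).nbytes)  # 540000  (2 bytes per element)
print(img.astype(np.float32).nbytes)  # 1080000 (4 bytes per element)
```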


@Sramctc

 

def inferencer(results, frameBuffer):
    graph = None
    graphHandle0 = None
    graphHandle1 = None
    mvnc.global_set_option(mvnc.GlobalOption.RW_LOG_LEVEL, 4)
    devices = mvnc.enumerate_devices()
    if len(devices) == 0:
        print("No NCS devices found")
        sys.exit(1)
    print(len(devices))
    with open(join(graph_folder, "graph"), mode="rb") as f:
        graph_buffer = f.read()
    graph = mvnc.Graph('MobileNet-SSD')
    devopen = False
    for devnum in range(len(devices)):
        try:
            device = mvnc.Device(devices[devnum])
            device.open()
            graphHandle0, graphHandle1 = graph.allocate_with_fifos(device, graph_buffer)
            devopen = True
            break
        except:
            continue
    if devopen == False:
        print("NCS Devices open Error!!!")
        sys.exit(1)
    print("Loaded Graphs!!! " + str(devnum))
    while True:
        try:
            if frameBuffer.empty():
                continue
            color_image = frameBuffer.get()
            prepimg = preprocess_image(color_image)
            graph.queue_inference_with_fifo_elem(graphHandle0, graphHandle1, prepimg.astype(np.float32), color_image)
            out, _ = graphHandle1.read_elem()
            results.put(out)
        except:
            import traceback
            traceback.print_exc()

def preprocess_image(src):
    try:
        img = cv2.resize(src, (300, 300))
        img = img - 127.5
        img = img * 0.007843
        return img
    except:
        import traceback
        traceback.print_exc()

 

https://github.com/PINTO0309/MobileNet-SSD-RealSense/blob/master/MultiStickSSDwithRealSense.py

$ sudo apt install python-pip python3-pip
$ sudo pip3 install --upgrade pip
$ sudo pip2 install --upgrade pip
$ sudo pip3 uninstall numpy
$ sudo pip3 install numpy
$ sudo pip2 uninstall numpy
$ sudo pip2 install numpy

@Sramctc In your code, try using this: graph.queue_inference_with_fifo_elem(input_fifo, output_fifo, image.astype(numpy.float32), None). By default the FIFOs are set to use a float32 tensor, so you will have to perform a float32 cast using numpy.


@PINTO

 

Thank you for your advice. After uninstalling and reinstalling numpy, everything is back on track now.

 

@Tome_at_Intel

 

Thank you, you are always helpful.