Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Running Inference on Movidius VPU

Murugappan__Indrajit

Hi

I converted my models to FP16 and tried to run inference on a Movidius VPU platform, but I get the following error:

"AssertionFailed: inputSize % 2 == 0"

Then I tried running the same FP16 models on the GPU of the same machine, simply by switching the device from "HDDL" to "GPU", and I was able to get the results.
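
For reference, the only difference between the two runs is the device string passed to the plugin; a minimal sketch of what I mean (the IR paths are placeholders):

from openvino.inference_engine import IENetwork, IEPlugin

# Same FP16 IR in both cases; only the target device string changes.
device = "GPU"   # inference works; with device = "HDDL" I hit the assertion above
plugin = IEPlugin(device=device)
net = IENetwork.from_ir(model="model.xml", weights="model.bin")   # placeholder paths
exec_net = plugin.load(network=net)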

Could you help me with getting it running on VPU?

Thanks,

Indrajit

Shubha_R_Intel
Employee

Dear Indrajit, yes, this is really strange.

What kind of model is it (i.e., TensorFlow, Caffe, MXNet, etc.)? What Model Optimizer command did you use to produce the IR? Finally, how did you get "AssertionFailed: inputSize % 2 == 0"? By running one of the OpenVINO samples?

Looking forward to hearing more,

Thanks for using OpenVINO!

Shubha

Murugappan__Indrajit

Hi Shubha

I'm using OpenVINO R5 and converted my Caffe model to FP16. I have attached a code snippet below which reads the image and runs inference.

I get the error I mentioned above while loading the network into the plugin.

The same code runs fine when I specify the device as GPU.

Also, the same code sample does not give an error when I try it with a different input size (544 x 960) on both GPU and HDDL.

device = "HDDL"
MAX_BATCH_SIZE = 20

image = cv2.imread(filename, 0)
image = cv2.resize(image, (1900, 450))
plugin = IEPlugin(device=device, plugin_dirs= "")
plugin.set_config({"DYN_BATCH_ENABLED": "YES"})
net = IENetwork.from_ir(model=xml_file, weights=bin_file)
net.batch_size = MAX_BATCH_SIZE
net.reshape({"data": (1,1,450,1900)})
exec_net = plugin.load(network=net)
del net
			
im = image.astype('float')
im_input = im[np.newaxis, np.newaxis, :, :]
out = exec_net.infer(inputs={input_blob: "data"})

del exec_net
del plugin

Thanks,

Indrajit

Murugappan__Indrajit

Hi Shubha

I was finally able to run inference on the VPU by changing my input resolution to 448 x 1920 (both multiples of 16). However, it would be good to be able to run inference on all devices without having to change the input resolution.
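
In case it is useful, here is a rough sketch of how I snap the dimensions to multiples of 16 before resizing (the helper name and the rounding direction are my own choices; for my model I ultimately used 448 x 1920):

import cv2

def snap_to_multiple(value, multiple=16):
    # Round a dimension to the nearest multiple of 16, which the HDDL plugin accepts
    return int(round(value / multiple)) * multiple

height = snap_to_multiple(450)    # -> 448
width = snap_to_multiple(1900)    # -> 1904 (I went with 1920, which also works)
image = cv2.imread("frame.png", 0)             # placeholder filename
image = cv2.resize(image, (width, height))     # cv2.resize expects (width, height)
# ...then reshape the network input to match, e.g. net.reshape({"data": (1, 1, height, width)})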

Thanks

Indrajit

Shubha_R_Intel
Employee

Hi Indrajit. I have PM'd you with a request for you to send me your model and code via *.zip. Please allow me to debug this for you.

Thanks,

Shubha

Shubha_R_Intel
Employee

Dear Indrajit,

As discussed over PM, did you try downloading the 2019 R1 version?

Thanks,

Shubha

Murugappan__Indrajit

Hi Shubha

I have downloaded the 2019 R1 version and I don't face this issue anymore.

I'm able to run inference at any input resolution on the HDDL device.

Thanks

Indrajit
