Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

OpenVINO does not give the same results as my ONNX/PyTorch model

Daga__Pankaj
Beginner

I have a strange problem in trying to use OpenVino.

I have exported my PyTorch model to ONNX and then imported it into OpenVINO using the following command:

python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model ~/Downloads/unet2d.onnx --disable_resnet_optimization --disable_fusing --disable_gfusing --data_type=FP32

So for the test case, I have disabled the optimizations.
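For context, the export on the PyTorch side would look roughly like this (a minimal sketch: the stand-in model and the opset version are assumptions, not my exact script):

    import torch
    import torch.nn as nn

    # Stand-in for the real 2D UNet; any nn.Module taking a
    # (1, 2, 256, 256) tensor illustrates the export call.
    model = nn.Sequential(nn.Conv2d(2, 1, kernel_size=3, padding=1))
    model.eval()

    # Dummy input matching the expected shape.
    dummy = torch.randn(1, 2, 256, 256)
    torch.onnx.export(model, dummy, 'unet2d.onnx', opset_version=10)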

Now, using the sample python applications, I run inference using the model as follows:

 from os import path

 from openvino.inference_engine import IENetwork, IECore
 import numpy as np

 model_xml = path.expanduser('model.xml')
 model_bin = path.expanduser('model.bin')
 ie = IECore()
 net = IENetwork(model=model_xml, weights=model_bin)

 # Take the first (and only) input and output blob names.
 input_blob = next(iter(net.inputs))
 out_blob = next(iter(net.outputs))
 net.batch_size = 1

 exec_net = ie.load_network(network=net, device_name='CPU')

 # Deterministic random input with the expected shape (1, 2, 256, 256).
 np.random.seed(0)
 x = np.random.randn(1, 2, 256, 256).astype(np.float32)
 res = exec_net.infer(inputs={input_blob: x})
 res = res[out_blob]

The problem is that this outputs something completely different from my ONNX or PyTorch model.
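To quantify the mismatch, I compare against onnxruntime on the identical tensor (a minimal sketch; assumes onnxruntime is installed and that res is the OpenVINO output from the snippet above):

    from os import path

    import numpy as np
    import onnxruntime as ort

    np.random.seed(0)
    x = np.random.randn(1, 2, 256, 256).astype(np.float32)

    sess = ort.InferenceSession(path.expanduser('~/Downloads/unet2d.onnx'))
    input_name = sess.get_inputs()[0].name
    onnx_out = sess.run(None, {input_name: x})[0]

    # A large value here confirms the outputs really diverge.
    print(np.max(np.abs(onnx_out - res)))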

Additionally, I realized that I do not even have to pass an input; if I do something like:

    x = None
    res = exec_net.infer(inputs={input_blob: x})

This still returns the same output! That seems to suggest that my input is being ignored somehow.
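A quick sanity check for this (a sketch reusing exec_net, input_blob, and out_blob from above) is to feed two different random tensors and see whether the outputs differ:

    # If the input blob were actually consumed, two different
    # inputs should give two different outputs.
    a = np.random.randn(1, 2, 256, 256).astype(np.float32)
    b = np.random.randn(1, 2, 256, 256).astype(np.float32)
    out_a = exec_net.infer(inputs={input_blob: a})[out_blob]
    out_b = exec_net.infer(inputs={input_blob: b})[out_blob]
    print(np.allclose(out_a, out_b))  # True would mean the input is ignored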

Shubha_R_Intel
Employee

Dear Daga, Pankaj,

Can you try without --disable_resnet_optimization --disable_fusing --disable_gfusing? That is, leave the optimizations enabled. Does it work then?
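Something along these lines (same input model and data type as your original command):

    python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model ~/Downloads/unet2d.onnx --data_type=FP32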

Thanks,

Shubha

