Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Torchvision transform on OpenVINO

junsoo
Beginner

Hi,

I am getting different prediction results from my AlexNet model running on PyTorch and on the OpenVINO Inference Engine, similar to the thread posted at https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/the-inference-result-is-totally-different-after-converting-onnx/td-p/1135937. In that thread, the issue was solved by transforming the test image into an input vector in PyTorch before inferring it on the OpenVINO IE, which produced identical results from both platforms.

So I am wondering: is there any other method to perform such a transformation in OpenVINO, given that torchvision is not available in the OpenVINO toolkit?
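For context, the torchvision pipeline used by most AlexNet examples is Resize(256), CenterCrop(224), ToTensor(), and Normalize() with the ImageNet statistics. That pipeline can be reproduced with OpenCV and NumPy alone; below is a minimal sketch under that assumption (note that OpenCV and PIL use slightly different resize interpolation, so the results may not be bit-identical):

```python
import cv2
import numpy as np

def preprocess(image_path):
    # Reproduce torchvision: Resize(256) + CenterCrop(224) + ToTensor()
    # + Normalize(ImageNet mean/std), using only OpenCV and NumPy.
    img = cv2.imread(image_path)                 # HWC, BGR, uint8
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)   # torchvision expects RGB

    # Resize so the shorter side becomes 256 (Resize(256) semantics)
    h, w = img.shape[:2]
    scale = 256.0 / min(h, w)
    img = cv2.resize(img, (round(w * scale), round(h * scale)))

    # Center-crop to 224 x 224
    h, w = img.shape[:2]
    top, left = (h - 224) // 2, (w - 224) // 2
    img = img[top:top + 224, left:left + 224]

    # ToTensor(): uint8 [0, 255] -> float32 [0, 1]
    img = img.astype(np.float32) / 255.0

    # Normalize() with the ImageNet statistics used by torchvision models
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    img = (img - mean) / std

    # HWC -> NCHW, shape (1, 3, 224, 224), as the IR input expects
    return img.transpose(2, 0, 1)[np.newaxis, ...]
```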

junsoo
Beginner

A similar approach was used in https://github.com/ngeorgis/pytorch_onnx_openvino to run Intel OpenVINO classification with an input vector saved from PyTorch.
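A minimal sketch of that approach, assuming the 2021.x Inference Engine Python API and placeholder file names (test.jpg, alexnet.xml/.bin): dump the already-transformed tensor on the PyTorch side, then feed it to OpenVINO unchanged:

```python
import numpy as np
from PIL import Image
from torchvision import transforms

# PyTorch side: apply the usual torchvision pipeline, then dump the tensor.
# "test.jpg" and the model file names below are placeholders.
transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
input_tensor = transform(Image.open("test.jpg").convert("RGB")).unsqueeze(0)
np.save("input_vector.npy", input_tensor.numpy())

# OpenVINO side: feed the saved vector directly, bypassing OpenCV preprocessing.
from openvino.inference_engine import IECore
ie = IECore()
net = ie.read_network(model="alexnet.xml", weights="alexnet.bin")
exec_net = ie.load_network(network=net, device_name="CPU")
input_name = next(iter(net.input_info))
result = exec_net.infer({input_name: np.load("input_vector.npy")})
```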

Iffa_Intel
Moderator

Greetings,


Starting from the 2019 R4 release, the OpenVINO™ toolkit officially supports public PyTorch* models (from the torchvision 0.2.1 and pretrainedmodels 0.7.4 packages) via ONNX conversion.


You may refer here for the supported topologies and for guidance on how to convert the ONNX model to IR before using it with OpenVINO.
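For a stock torchvision model such as AlexNet, the PyTorch->ONNX step is a standard torch.onnx.export call; a minimal sketch (the input/output names here are chosen to match the Model Optimizer command given later in this thread):

```python
import torch
import torchvision

# Export a pretrained AlexNet to ONNX; the input/output names are chosen
# to match the names used in the Model Optimizer command later in the thread.
model = torchvision.models.alexnet(pretrained=True)
model.eval()
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "alexnet.onnx",
                  input_names=["data"], output_names=["prob"])
```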



Sincerely,

Iffa


junsoo
Beginner

Thank you Iffa for your reply. I managed to convert my PyTorch model to an ONNX file and to convert the ONNX to IR for use in OpenVINO. But the inference result is totally different after converting the ONNX to OpenVINO IR.

Following the explanation given at https://github.com/ngeorgis/pytorch_onnx_openvino, the issue is caused by the image-processing steps running in OpenVINO being different from the PyTorch image transformation. On the same site, the issue was overcome by running the OpenVINO classification with an input vector saved from PyTorch.

Therefore I am wondering: is there any way I can perform the torchvision transform on OpenVINO, so that I would not need to pre-process the input image into an input vector before running inference on OpenVINO?

 

Iffa_Intel
Moderator

Another workaround you can try is to convert models that are not in the Inference Engine IR format into that format with the Model Optimizer, using converter.py (the model converter).

It's located in <openvinopath>/deployment_tools/tools/model_downloader/


You may refer here for further guidance.
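For example, assuming the alexnet model has already been fetched with downloader.py, the converter invocation would look something like the following (the exact flags may vary between releases):

python3 <openvinopath>/deployment_tools/tools/model_downloader/converter.py --name=alexnet --precisions=FP32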


Sincerely,

Iffa


junsoo
Beginner

As I said, I managed to convert my PyTorch model to IR format using the Model Optimizer. However, the inference result from OpenVINO is totally different from the result I am getting from the same model running in PyTorch.

The issue I am facing now is that the image-processing methods used in OpenVINO and PyTorch (OpenCV versus the torchvision transformation) are different, which affects the inference result I am getting.

Therefore my question is: is there any way I can perform the torchvision transformation in the OpenVINO environment?
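When comparing the two runtimes, it helps to quantify the difference rather than eyeball it; a hypothetical check, where pt_logits stands for the PyTorch output tensor and ov_result for the dict returned by exec_net.infer():

```python
import numpy as np

# Hypothetical check: pt_logits stands for the PyTorch output tensor,
# ov_result for the dict returned by exec_net.infer(), output name "prob".
pt_out = pt_logits.detach().numpy().ravel()
ov_out = ov_result["prob"].ravel()

print("max abs diff:", np.max(np.abs(pt_out - ov_out)))
print("top-1 match:", int(pt_out.argmax()) == int(ov_out.argmax()))
```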

 

Iffa_Intel
Moderator

You need to reconvert your PyTorch model using the two commands below (PyTorch->ONNX and then ONNX->IR). If you still observe discrepancies in the inference results, then please attach your original PyTorch model and the inference results for both models.

 

Conversion to ONNX command:

/usr/bin/python3 /opt/intel/openvino_2021.1.110/deployment_tools/open_model_zoo/tools/downloader/pytorch_to_onnx.py \
    --model-name=alexnet \
    --weights=/opt/intel/openvino_2021.1.110/deployment_tools/open_model_zoo/tools/downloader/public/alexnet/alexnet.pth \
    --import-module=torchvision.models \
    --input-shape=1,3,224,224 \
    --output-file=/opt/intel/openvino_2021.1.110/deployment_tools/open_model_zoo/tools/downloader/public/alexnet/alexnet.onnx \
    --input-names=data \
    --output-names=prob

 

Conversion to IR command:

/usr/bin/python3 -- /opt/intel/openvino_2021.1.110/deployment_tools/model_optimizer/mo.py \
    --framework=onnx \
    --data_type=FP32 \
    --output_dir=/opt/intel/openvino_2021.1.110/deployment_tools/open_model_zoo/tools/downloader/public/alexnet/FP32 \
    --model_name=alexnet \
    --input=data \
    '--mean_values=data[123.675,116.28,103.53]' \
    '--scale_values=data[58.395,57.12,57.375]' \
    --reverse_input_channels \
    --output=prob \
    --input_model=/opt/intel/openvino_2021.1.110/deployment_tools/open_model_zoo/tools/downloader/public/alexnet/alexnet.onnx
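Note that the --mean_values and --scale_values above are exactly the torchvision Normalize statistics (mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) scaled to the 0-255 input range, and --reverse_input_channels accounts for OpenCV's BGR channel order, so the resulting IR performs the torchvision normalization internally. A quick sanity check of that arithmetic:

```python
import numpy as np

# torchvision's ImageNet statistics are defined for inputs scaled to [0, 1];
# multiplying by 255 gives the values passed to the Model Optimizer.
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
print(mean * 255)  # [123.675 116.28  103.53 ]  -> --mean_values
print(std * 255)   # [ 58.395  57.12   57.375]  -> --scale_values
```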


Sincerely,

Iffa


Iffa_Intel
Moderator

Greetings,


Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question. 


Sincerely,

Iffa

