Intel® Distribution of OpenVINO™ Toolkit
Community support and discussions about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision-related on Intel® platforms.

Torchvision transform on OpenVINO

junsoo
Beginner

Hi,

I am getting different prediction results from my AlexNet model when it runs on PyTorch versus the OpenVINO Inference Engine, similar to the issue in this thread: https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/the-inference-result-is-totally-differ... In that thread, the issue was solved by transforming the test image into an input vector in PyTorch first and then running inference on that vector in the OpenVINO IE, which gave identical results on both platforms.

So I am wondering: is there any other way to perform such a transformation in OpenVINO, given that torchvision is not available in the OpenVINO toolkit?
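For reference, torchvision's ToTensor + Normalize steps can be reproduced with plain NumPy, without torchvision itself. A minimal sketch, assuming the standard ImageNet statistics used by most pretrained torchvision models and a BGR uint8 image as read by OpenCV's cv2.imread (the resize/crop step is omitted here, and note that OpenCV and PIL interpolation can still differ slightly):

```python
import numpy as np

# ImageNet statistics used by torchvision.transforms.Normalize
# (assumption: the model was trained with these standard values).
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(image_bgr: np.ndarray) -> np.ndarray:
    """Replicate ToTensor + Normalize on an HxWx3 uint8 BGR image."""
    rgb = image_bgr[:, :, ::-1]                    # BGR -> RGB
    x = rgb.astype(np.float32) / 255.0             # ToTensor: scale to [0, 1]
    x = (x - MEAN) / STD                           # Normalize per channel
    x = x.transpose(2, 0, 1)[np.newaxis, ...]      # HWC -> NCHW batch of 1
    return x
```

The resulting NCHW float32 blob can then be passed directly to the Inference Engine in place of a torchvision-preprocessed tensor.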

junsoo
Beginner

A similar approach was used in https://github.com/ngeorgis/pytorch_onnx_openvino, where Intel OpenVINO classification was run with an input vector saved from PyTorch.

Iffa_Intel
Moderator

Greetings,


Starting from the 2019 R4 release, the OpenVINO™ toolkit officially supports public PyTorch* models (from the torchvision 0.2.1 and pretrainedmodels 0.7.4 packages) via ONNX conversion.


You may refer here for the supported topologies and for instructions on converting the ONNX model to IR before using it with OpenVINO.



Sincerely,

Iffa


junsoo
Beginner

Thank you, Iffa, for your reply. I managed to convert my PyTorch model to an ONNX file and then convert the ONNX to IR for use in OpenVINO. However, the inference result is totally different after converting from ONNX to OpenVINO IR.

According to https://github.com/ngeorgis/pytorch_onnx_openvino, the issue is caused by the image preprocessing steps in OpenVINO differing from the PyTorch/torchvision transformation. On that site, the issue was worked around by running OpenVINO classification with an input vector saved from PyTorch.
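The workaround from that repo can be sketched as follows: serialize the exact preprocessed tensor on the PyTorch side and feed it unchanged to the Inference Engine, so both frameworks see bit-identical input. File and layer names ("input.npy", "data") are illustrative, and the IECore calls are shown as comments:

```python
import io
import numpy as np

# PyTorch side -- `tensor` would be the output of the torchvision
# pipeline; in a real run you would write it to disk with
#   np.save("input.npy", tensor.numpy())
tensor = np.zeros((1, 3, 224, 224), dtype=np.float32)   # stand-in tensor
buf = io.BytesIO()                                      # stands in for "input.npy"
np.save(buf, tensor)

# OpenVINO side -- load the saved blob and infer with it directly,
# skipping any OpenCV preprocessing (blob = np.load("input.npy")):
buf.seek(0)
blob = np.load(buf)

# from openvino.inference_engine import IECore   # OpenVINO <= 2021 API
# ie = IECore()
# net = ie.read_network(model="alexnet.xml", weights="alexnet.bin")
# exec_net = ie.load_network(network=net, device_name="CPU")
# result = exec_net.infer(inputs={"data": blob})
```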

Therefore I am wondering: is there any way to perform the torchvision transform in OpenVINO, so that I do not need to preprocess the input image into an input vector before running inference on OpenVINO?

 

Iffa_Intel
Moderator

Another workaround you can try is to convert models that are not in the Inference Engine IR format into that format with the Model Optimizer, using converter.py (the model converter).

It's located in <openvinopath>/deployment_tools/tools/model_downloader/


You may refer here for further guidance.


Sincerely,

Iffa


junsoo
Beginner

As I said, I managed to convert my PyTorch model to IR format using the Model Optimizer. However, the inference result from OpenVINO is totally different from the result I am getting from the same model running in PyTorch.

The issue I am facing now is that the image processing methods used in OpenVINO (OpenCV) and PyTorch (torchvision transforms) are different, which affects the inference result I am getting.

Therefore my question is, is there any way I can perform torchvision transformation in the OpenVINO environment?

 

Iffa_Intel
Moderator

You need to reconvert your PyTorch model using the two commands below (PyTorch -> ONNX, then ONNX -> IR). If you still observe discrepancies in the inference results, please attach your original PyTorch model and the inference results from both models.
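Before attaching results, a quick way to quantify the discrepancy is to compare the two output vectors numerically. A generic sketch with synthetic stand-in data (in a real run, dump the 1x1000 probability vectors from PyTorch and OpenVINO, e.g. with np.save, and load those instead):

```python
import numpy as np

# Synthetic stand-ins for the two frameworks' 1x1000 output vectors;
# the small offset simulates minor numeric drift between backends.
pytorch_out = np.linspace(0.0, 1.0, 1000, dtype=np.float32).reshape(1, 1000)
openvino_out = pytorch_out + np.float32(1e-6)

# Two useful metrics: worst-case elementwise difference, and whether
# the predicted class (argmax) agrees between the two runs.
max_abs_diff = float(np.abs(pytorch_out - openvino_out).max())
top1_match = int(pytorch_out.argmax()) == int(openvino_out.argmax())

print("max abs diff:", max_abs_diff)
print("top-1 match:", top1_match)
```

A tiny max_abs_diff with matching top-1 suggests only floating-point noise; a large difference points at a preprocessing or conversion mismatch.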

 

Conversion to ONNX command:

/usr/bin/python3 /opt/intel/openvino_2021.1.110/deployment_tools/open_model_zoo/tools/downloader/pytorch_to_onnx.py \
  --model-name=alexnet \
  --weights=/opt/intel/openvino_2021.1.110/deployment_tools/open_model_zoo/tools/downloader/public/alexnet/alexnet.pth \
  --import-module=torchvision.models \
  --input-shape=1,3,224,224 \
  --output-file=/opt/intel/openvino_2021.1.110/deployment_tools/open_model_zoo/tools/downloader/public/alexnet/alexnet.onnx \
  --input-names=data \
  --output-names=prob

 

Conversion to IR command:

/usr/bin/python3 -- /opt/intel/openvino_2021.1.110/deployment_tools/model_optimizer/mo.py \
  --framework=onnx \
  --data_type=FP32 \
  --output_dir=/opt/intel/openvino_2021.1.110/deployment_tools/open_model_zoo/tools/downloader/public/alexnet/FP32 \
  --model_name=alexnet \
  --input=data \
  '--mean_values=data[123.675,116.28,103.53]' \
  '--scale_values=data[58.395,57.12,57.375]' \
  --reverse_input_channels \
  --output=prob \
  --input_model=/opt/intel/openvino_2021.1.110/deployment_tools/open_model_zoo/tools/downloader/public/alexnet/alexnet.onnx
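The --mean_values and --scale_values flags above bake the torchvision normalization into the IR itself, which is what removes the need for torchvision at inference time. As a sanity check (pure arithmetic, not OpenVINO-specific), these values are exactly the standard ImageNet mean/std scaled from the [0, 1] tensor range to the [0, 255] pixel range:

```python
# Standard ImageNet statistics used by torchvision.transforms.Normalize.
imagenet_mean = [0.485, 0.456, 0.406]
imagenet_std = [0.229, 0.224, 0.225]

# mo.py operates on raw 0-255 pixels, so scale the stats by 255.
mean_values = [round(m * 255, 3) for m in imagenet_mean]
scale_values = [round(s * 255, 3) for s in imagenet_std]

print(mean_values)   # [123.675, 116.28, 103.53]
print(scale_values)  # [58.395, 57.12, 57.375]
```

(The --reverse_input_channels flag similarly accounts for OpenCV's BGR channel order versus torchvision's RGB.)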


Sincerely,

Iffa


Iffa_Intel
Moderator

Greetings,


Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question. 


Sincerely,

Iffa

