
Converting faster_rcnn pytorch model to run on NCS

Dear Experts,

I'm trying to get the pretrained object recognition model fasterrcnn_resnet50_fpn from the PyTorch framework up and running on Intel's NCS.
To that end, I exported the model from PyTorch to ONNX format. This was quite challenging, but with the nightly build of PyTorch the export was possible.
The problem is that the exported model uses opset_version=11, and I'm not able to convert the ONNX model to xml/bin format with mo_onnx.py.

I'm getting the following error message: Unexpected exception happened during extracting attributes for node Constant_111.

Perhaps someone can give me a little hint; I would very much appreciate your support.

I'm using OpenVINO 2019_R3.1 and my environment looks as follows:

OS: Ubuntu 16.04.6 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609
CMake version: version 3.5.1

Python version: 3.7
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA

Versions of relevant libraries:
[pip3] numpy==1.17.4
[pip3] torch==1.3.1
[pip3] torchvision==0.4.2
[conda] blas                      1.0                         mkl  
[conda] mkl                       2019.4                      243  
[conda] mkl-service               2.3.0            py37he904b0f_0  
[conda] mkl_fft                   1.0.14           py37ha843d7b_0  
[conda] mkl_random                1.1.0            py37hd6b4f25_0  
[conda] pytorch                   1.5.0.dev20200113 py3.7_cuda10.1.243_cudnn7.6.3_0    pytorch-nightly
[conda] torchvision               0.5.0.dev20200113      py37_cu101    pytorch-nightly
 

Kind regards,
Simon

4 Replies

Hi Simon,

 

Thank you for reaching out.

The model you are trying to use is not supported via ONNX conversion. I recommend using one of the supported PyTorch* models via ONNX conversion listed here:

https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_O...

 

Regards,

Javier A.

 


Hi Javier,

Thank you for your reply.

I was aware of the missing support via the ONNX conversion;
however, I was hoping that someone had managed to convert an object recognition model.

Actually, I'm writing my thesis and wanted to compare the runtime of an object recognition model (a pretrained PyTorch model) running on CPU, GPU, and VPU (Intel NCS).

 

Regards,

Simon 


Hi,
I'm trying to do the exact same thing. I have a Faster R-CNN ResNet-50 in PyTorch and I'm trying to run it on the NCS2. Could you point me in any direction for how I could do that? I can't seem to successfully convert my model to TensorFlow.

Thanks,
Sebastian


Hi Sebastian,
as Javier mentioned, there is no support for converting this object recognition model from PyTorch to run on OpenVINO's Inference Engine.

However, I was able to export a pretrained model (Faster R-CNN ResNet-50) to ONNX format. To do so, you have to install the newest nightly build of the PyTorch library and pass opset_version=11 to the ONNX export.

But converting that exported ONNX model into OpenVINO format is still not supported.

 

Kind regards,
Simon
