Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

OpenVino floating point exception with Fast.ai network

Corentin_C_Intel
Employee

Hi,

I have a problem running inference on a network architecture created with the fast.ai library.

The network is based on Resnet34 and has additional layers used for transfer learning:

 (1): Sequential(
    (0): AdaptiveConcatPool2d(
      (ap): AdaptiveAvgPool2d(output_size=1)
      (mp): AdaptiveMaxPool2d(output_size=1)
    )
    (1): Flatten()
    (2): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (3): Dropout(p=0.25)
    (4): Linear(in_features=1024, out_features=512, bias=True)
    (5): ReLU(inplace)
    (6): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (7): Dropout(p=0.5)
    (8): Linear(in_features=512, out_features=37, bias=True)
  )

To run the network, I first convert it to ONNX and then run the Model Optimizer. This step completes without any error.

When running with the CPU plugin (using either the Python or the C API), I get the following error:

[setupvars.sh] OpenVINO environment initialized
[ INFO ] Loading network files:
	ir/model_fastai.xml
	ir/model_fastai.bin
[ INFO ] Preparing input blobs
[ WARNING ] Image /opt/intel/computer_vision_sdk/deployment_tools/demo/car.png is resized from (259, 787) to (224, 224)
[ INFO ] Batch size is 1
[ INFO ] Loading model to the plugin
[ INFO ] Starting inference (1 iterations)
/bin/bash: line 1:  8978 Floating point exception(core dumped) python3 /opt/intel/computer_vision_sdk/inference_engine/samples/python_samples/classification_sample.py -d CPU -m ir/model_fastai.xml --labels classes.labels -nt 1 -i /opt/intel/computer_vision_sdk/deployment_tools/demo/car.png
Error code 136

I was able to reproduce the error on both an i5 and a Xeon Intel CPU.
I compared with the original Resnet34 network exported directly from PyTorch, and it runs without error.

You can find the output of the full test, and rerun it, in the following Google Colab notebook:
https://colab.research.google.com/drive/1TdjV6bSrgSAL6RcGsYzCaWMdaqaqCYID

Does this problem come from the network architecture?

Best Regards,

Cory

Shubha_R_Intel
Employee

Corentin, this is a bug; the Inference Engine should not core dump. I promise to take a look at your code, reproduce the issue, and file a bug. Should I have questions, I will post here.

Thanks for using OpenVINO!

Shubha

Shubha_R_Intel
Employee

Corentin, I just tried looking at your Jupyter notebook (very helpful), but I actually need your 1) ONNX model, 2) xml, and 3) bin files. Can you zip them up and attach them here? Or can I get them through your notebook somehow?

Thanks

Shubha

Corentin_C_Intel
Employee

Dear Shubha,

It is possible to download files from Colab, but you would need to rerun the whole notebook and download the files manually.
I have done so and copied the files directly to Google Drive, which makes them easier to share.

Here is the link; it includes the working and non-working networks in ONNX format along with their Intermediate Representations:
https://drive.google.com/open?id=1-1e9AbGKCC3KyzxfM1usszXWgHkuaAKr

Thank you,

Corentin

Truong__Dien_Hoa
New Contributor II

Hi Cheron,

I have just succeeded in running a fast.ai model (resnet18) with OpenVINO in FP16. With FP32 I got a segmentation fault, but after converting my model to FP16 it works on the MYRIAD device and the GPU (the FPS is very good: 70 on MYRIAD and 40 on GPU).

Details of my system:

fast.ai 1.0.50.post1

onnx-1.4.1
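For reference, the FP16 conversion mentioned above is selected at Model Optimizer time via its `--data_type` flag. A small sketch building that command line follows; the `mo.py` path and the model file name are assumptions for illustration.

```python
# Sketch: build the Model Optimizer command line for an FP16 IR.
# --data_type FP16 stores the weights in half precision, which is
# required for MYRIAD and also usable on GPU.
def mo_command(onnx_path, data_type="FP16"):
    return [
        "python3", "mo.py",          # assumed path to the Model Optimizer
        "--input_model", onnx_path,
        "--data_type", data_type,
    ]

cmd = mo_command("model_fastai.onnx")
# The list can then be run with subprocess.run(cmd) from the toolkit's
# deployment_tools/model_optimizer directory.
```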

Hope that helps,

Truong__Dien_Hoa
New Contributor II

By the way, may I ask what the main steps are to use a fast.ai model in OpenVINO? Plenty of things are still mysterious to me. For example, for a TensorFlow Slim model we need to supply the mean values and scale, and reverse the input channels (because the IR uses BGR rather than RGB), but you didn't do that. So I guess you preprocess the input yourself.
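For what it's worth, the mean/scale/channel-reversal preprocessing mentioned above can also be baked into the IR with Model Optimizer flags, so the application does not have to normalize inputs itself. A sketch follows; the ImageNet statistics (in the 0-255 range) are an assumption, chosen because fast.ai models are typically trained with ImageNet normalization.

```python
# Sketch: Model Optimizer flags that fold preprocessing into the IR.
def mo_preprocessing_args(mean, scale):
    return [
        # Swap channels so BGR inputs (the OpenCV default) match a
        # model that was trained on RGB images.
        "--reverse_input_channels",
        # Subtract per-channel means, then divide by per-channel scales.
        "--mean_values", "[{},{},{}]".format(*mean),
        "--scale_values", "[{},{},{}]".format(*scale),
    ]

# Assumed ImageNet statistics in 0-255 range (mean and std * 255).
args = mo_preprocessing_args(
    mean=(123.675, 116.28, 103.53),
    scale=(58.395, 57.12, 57.375),
)
```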

I am reading your code on GitHub (https://github.com/sgryco/openvino-docker) and find it very useful. Thank you a lot for sharing.

Corentin_C_Intel
Employee

Hi,


I ran more tests with the latest release (2019.1.094) and the problem is solved.


Thanks,

Cory

Shubha_R_Intel
Employee

Dearest Cheron, Corentin,

I'm relieved to hear that the problem is fixed in 2019 R1. Thank you for reporting back.

Shubha
