Hi,
I have a problem when running inference with a network architecture created using the fast.ai library.
The network is based on ResNet34 and has additional layers used for transfer learning:

(1): Sequential(
  (0): AdaptiveConcatPool2d(
    (ap): AdaptiveAvgPool2d(output_size=1)
    (mp): AdaptiveMaxPool2d(output_size=1)
  )
  (1): Flatten()
  (2): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (3): Dropout(p=0.25)
  (4): Linear(in_features=1024, out_features=512, bias=True)
  (5): ReLU(inplace)
  (6): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (7): Dropout(p=0.5)
  (8): Linear(in_features=512, out_features=37, bias=True)
)
To run the network, I first convert it to ONNX and then run the Model Optimizer. This step completes without any error.
When running inference with the CPU plugin (either through the Python or the C API), I get the following error:
[setupvars.sh] OpenVINO environment initialized
[ INFO ] Loading network files:
    ir/model_fastai.xml
    ir/model_fastai.bin
[ INFO ] Preparing input blobs
[ WARNING ] Image /opt/intel/computer_vision_sdk/deployment_tools/demo/car.png is resized from (259, 787) to (224, 224)
[ INFO ] Batch size is 1
[ INFO ] Loading model to the plugin
[ INFO ] Starting inference (1 iterations)
/bin/bash: line 1:  8978 Floating point exception (core dumped) python3 /opt/intel/computer_vision_sdk/inference_engine/samples/python_samples/classification_sample.py -d CPU -m ir/model_fastai.xml --labels classes.labels -nt 1 -i /opt/intel/computer_vision_sdk/deployment_tools/demo/car.png
Error code 136
I was able to reproduce the error on both an i5 and a Xeon Intel CPU.
I compared with the original ResNet34 network exported directly from PyTorch, and that one runs without any error.
You can find the output of the full test, and rerun it yourself, in the following Google Colab notebook:
https://colab.research.google.com/drive/1TdjV6bSrgSAL6RcGsYzCaWMdaqaqCYID
Does this problem come from the network architecture?
Best Regards,
Cory
Corentin, this is a bug: the Inference Engine should not core dump. I promise to take a look at your code, reproduce it, and file a bug. Should I have questions, I will post here.
Thanks for using OpenVINO!
Shubha
Corentin, I just tried looking at your Jupyter notebook (very helpful), but I actually need your 1) ONNX model, 2) .xml and 3) .bin files. Can you zip them up and attach them here? Or can I get them through your Jupyter notebook somehow?
Thanks
Shubha
Dear Shubha,
It is possible to download files from Colab, but you would need to rerun the whole notebook and download the files manually.
I have done so and copied the files directly to Google Drive, which makes them easier to share.
Here is the link; it includes both the working and non-working networks in ONNX format, along with their Intermediate Representations:
https://drive.google.com/open?id=1-1e9AbGKCC3KyzxfM1usszXWgHkuaAKr
Thank you,
Corentin
Hi Cheron,
I have just succeeded in running a fast.ai model (ResNet18) with OpenVINO in FP16. With FP32 I got a Segmentation Fault, but after converting my model to FP16 it works on the MYRIAD device and on the GPU (the FPS is very good: 70 with MYRIAD and 40 on GPU).
Details of my system:
fast.ai 1.0.50.post1
onnx-1.4.1
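For what it's worth, the FP16 conversion follows the usual Model Optimizer flow; the paths and output directory below are illustrative, not my exact setup:

```shell
# Assumed invocation: --data_type FP16 produces the half-precision IR
# that ran on MYRIAD and GPU (model path and output dir are illustrative).
python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo.py \
    --input_model model_fastai.onnx \
    --data_type FP16 \
    --output_dir ir_fp16
```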
Hope that helps,
- Mark as New
- Bookmark
- Subscribe
- Mute
- Subscribe to RSS Feed
- Permalink
- Report Inappropriate Content
By the way, can I ask what the main steps are to use a fast.ai model in OpenVINO? There are plenty of things that are still mysterious to me. For example, for a TensorFlow Slim model we need to supply the mean values and scale, and reverse the input channels (because the IR uses BGR rather than RGB), but you didn't do that, so I guess you preprocess the input yourself.
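In case it is useful, here is a minimal NumPy sketch of what such manual preprocessing might look like. The mean and scale values are placeholders, not the ones this model was actually trained with; alternatively, the Model Optimizer flags --mean_values, --scale and --reverse_input_channels can bake the same steps into the IR:

```python
import numpy as np

def preprocess(image_rgb, mean=(123.68, 116.78, 103.94), scale=58.8):
    """Turn an HWC uint8 RGB image into an NCHW float32 blob.

    mean is given in RGB order and scale is a single divisor; both are
    illustrative placeholders, not the model's real training statistics.
    """
    bgr = image_rgb[:, :, ::-1].astype(np.float32)     # RGB -> BGR
    bgr -= np.array(mean[::-1], dtype=np.float32)      # subtract per-channel mean
    bgr /= scale                                       # apply scale
    return bgr.transpose(2, 0, 1)[np.newaxis]          # HWC -> NCHW, add batch dim

# Example with a random 224x224 image (the network's input size).
img = np.random.randint(0, 255, (224, 224, 3), dtype=np.uint8)
blob = preprocess(img)
print(blob.shape)  # (1, 3, 224, 224)
```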
I am reading your code on GitHub (https://github.com/sgryco/openvino-docker) and find it very useful. Thank you a lot for sharing.
Hi,
I ran more tests with the latest release (2019.1.094) and the problem is solved.
Thanks,
Cory
Dearest Cheron, Corentin,
I'm relieved to hear that the problem is fixed in 2019 R1. Thank you for reporting back.
Shubha
