Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

INT8 Linear Layer Support

pdewan
Beginner

Hi,

I created INT8 IR files for my custom network using the calibration tool.

When I tried loading the model (with the INT8 IR files) into the plugin, I got an error like this:

"exec_net = ie.load_network(network=net, device_name="CPU")

File "ie_api.pyx", line 85, in openvino.inference_engine.ie_api.IECore.load_network
File "ie_api.pyx", line 92, in openvino.inference_engine.ie_api.IECore.load_network
RuntimeError: Primitive descriptor was not found for node 49"
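
For context, the surrounding load code is essentially the standard IECore sequence of this OpenVINO generation (a sketch; the model paths are placeholders):

from openvino.inference_engine import IECore, IENetwork

ie = IECore()
net = IENetwork(model=model_xml, weights=model_bin)  # the INT8 IR files
exec_net = ie.load_network(network=net, device_name="CPU")  # fails here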

So I debugged further and found that the linear (fully connected) layer is the issue. I replaced the FC layer with a 1x1 convolution, and I was then able to load the model with the plugin.

Maybe linear layer support with INT8 quantization is not there; if I am wrong, please correct me.
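
For anyone who hits the same error, here is a minimal sketch of the workaround, assuming a PyTorch source model (the layer sizes are illustrative, not from my actual network). An FC layer on a flattened feature map is equivalent to a 1x1 Conv2d whose weights are the FC weights reshaped:

import torch
import torch.nn as nn

in_features, out_features = 512, 10
fc = nn.Linear(in_features, out_features)

# The same layer expressed as a 1x1 convolution: the FC weight matrix
# is reshaped to (out_channels, in_channels, 1, 1).
conv = nn.Conv2d(in_features, out_features, kernel_size=1)
conv.weight.data = fc.weight.data.view(out_features, in_features, 1, 1)
conv.bias.data = fc.bias.data

x = torch.randn(1, in_features)
y_fc = fc(x)
y_conv = conv(x.view(1, in_features, 1, 1)).view(1, out_features)
assert torch.allclose(y_fc, y_conv, atol=1e-6)

The two layers produce identical outputs, so the swap only changes which INT8 primitive the plugin selects.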

Thanks,

Puru

 

Shubha_R_Intel
Employee

Dear pdewan,

According to the low-precision 8-bit integer inference doc, FullyConnected is supported, but on AVX-512 only. And of course, as you discovered, Convolution is supported.
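
If you want to double-check your machine, here is a quick Linux-only sketch (not an official tool) to see whether the CPU reports AVX-512; avx512f is the foundation flag:

# Read the kernel's CPU flags and look for the AVX-512 foundation bit.
with open("/proc/cpuinfo") as f:
    flags = f.read()
print("AVX-512F available:", "avx512f" in flags)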

Hope it helps,

Shubha

pdewan
Beginner

Hi Shubha,

Actually, you were right: FullyConnected is supported on AVX-512 only.

Can you help me with one more thing? I faced an issue with 3D convolution and INT8 support. The error I got is "RuntimeError: std::exception".

I used a simple network with only 3D convolution and ReLU layers, quantized it to INT8 using the calibration tool, and then got this error while loading the model into the plugin.

Can you confirm whether 3D convolution is supported or not?

Thanks,

Puru

pdewan
Beginner

Hi,

Can you help me with one more thing? Can you confirm whether any 3D operations (3D batch norm, 3D max pool, etc.) are supported with INT8 quantization when loading an INT8-quantized model with the plugins?

Thanks,

Puru

Shubha_R_Intel
Employee

Dear pdewan,

Perhaps you've seen the 8-bit Integer Inference Engine doc? If your 3D layer is one of those listed below, it shouldn't be an issue; there is nothing inherent to the INT8 quantization process that should break with 3D layers. But batch norm and max pool are not supported by OpenVINO INT8 at all, as you can see below; if those layers are encountered during the quantization process, they will fall back to FP32. Now, there's a key thing called "validation": OpenVINO INT8 may not have been fully validated on 3D layers. But if the layer is supported, then it should work, and if it doesn't, it's a bug which should be filed.

Convolution
FullyConnected (AVX-512 only)
ReLU
ReLU6
Pooling
Eltwise
Concat
Resample

Hope it helps,

Shubha

pdewan
Beginner

Hi Shubha,

Actually, I carried out a little experiment: I created a small network with two 3D convolution layers and two ReLU layers. I will share the INT8 IR files with you.
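
For illustration, the test network looked roughly like this (a sketch assuming a PyTorch source exported to ONNX for the Model Optimizer; the sizes and the file name are made up):

import torch
import torch.nn as nn

# Two Conv3d layers, each followed by ReLU.
net = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv3d(8, 8, kernel_size=3, padding=1),
    nn.ReLU(),
)
dummy = torch.randn(1, 1, 16, 32, 32)  # N, C, D, H, W
torch.onnx.export(net, dummy, "conv3d_test.onnx")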

When I tried loading the network into the plugin, I got this error; a snippet of the code is given below:

from openvino.inference_engine import IENetwork, IEPlugin

plugin = IEPlugin(device="CPU")  # CPU plugin, as in my earlier post
net = IENetwork(model=model_xml, weights=model_bin)  # the INT8 IR files

exec_net = plugin.load(network=net)  # raises the error below

ERROR: File "ie_api.pyx", line 551, in openvino.inference_engine.ie_api.IEPlugin.load
File "ie_api.pyx", line 561, in openvino.inference_engine.ie_api.IEPlugin.load
RuntimeError: std::exception

You can also run it from your side using these IR files. I think it's a bug.

I am raising this concern because my custom network uses 3D convolution and other 3D operations.

Thank you for your attention to this matter.

Thanks,

Puru

Shubha_R_Intel
Employee

Dear pdewan,

It may very well be a bug. Thanks for your attached files, which reproduce the issue! I will certainly take a look, and if I can reproduce it as well, I will file a bug on your behalf.

Sincerely,

Shubha

 

Shubha_R_Intel
Employee

Dear pdewan,

Over PM I have also requested your original model as well as your Model Optimizer command.

Hope to hear from you,

Thanks,

Shubha

Shubha_R_Intel
Employee

Dear pdewan,

I have filed a bug on your behalf.

Thanks for your patience!

Shubha

Shubha_R_Intel
Employee

Dear pdewan,

3D INT8 models don't work because they're not supported. This should be explicitly stated in the documentation, and it's not, so it's still a bug, but for now a documentation bug.

Thanks for your patience!

Shubha
