Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

'GENERAL_ERROR' when compiling quantized OpenVINO model

ewanmc123
Beginner

I am trying to compile and run a quantized OpenVINO model, but I get:

```
line 543, in compile_model
    super().compile_model(model, device_name, {} if config is None else config),
RuntimeError: Exception from src/inference/src/core.cpp:114:
[ GENERAL_ERROR ] could not append an elementwise post-op
```

My quantization script runs successfully, but I get the error when trying to compile the model. Furthermore, I can confirm that I am able to successfully compile and run inference with the FP32 OpenVINO model that my quantized model is derived from. Here is my code for converting, quantizing, and compiling the model:

```
import os

import nncf
import numpy as np
import openvino as ov
import torch
from PIL import Image
from torch.utils.data import DataLoader, Dataset
from torchvision import transforms


class CustomDataset(Dataset):
    def __init__(self, root_dir, transform=None):
        self.root_dir = root_dir
        self.transform = transform
        self.image_files = [f for f in os.listdir(root_dir)
                            if os.path.isfile(os.path.join(root_dir, f))]

    def __len__(self):
        return len(self.image_files)

    def __getitem__(self, idx):
        img_path = os.path.join(self.root_dir, self.image_files[idx])
        image = Image.open(img_path).convert('RGB')

        if self.transform:
            image = self.transform(image)

        return image


def normalize_image(img, mean, std):
    """
    Normalize an HWC image using the given per-channel mean and std values.

    Args:
        img (numpy.ndarray): HWC image.
        mean (list): Per-channel mean values.
        std (list): Per-channel standard deviation values.

    Returns:
        numpy.ndarray: Normalized image.
    """
    # Convert image to float32 for the normalization process
    img = img.astype(np.float32)

    # Normalize each channel
    for i in range(3):
        img[:, :, i] = (img[:, :, i] - mean[i]) / std[i]

    return img


def transform_fn(data_item):
    images = data_item
    # Convert each CHW tensor to HWC and normalize it.
    # Note: ToTensor() in main() scales pixels to [0, 1], while the
    # mean/std values below assume a 0-255 pixel range.
    normalized_images = [
        normalize_image(img.permute(1, 2, 0).numpy(),
                        [103.53, 116.28, 123.675],
                        [57.375, 57.12, 58.395])
        for img in images
    ]
    # Back to NCHW layout for the model
    return np.array(normalized_images).transpose(0, 3, 1, 2)


def main():
    transform = transforms.Compose([
        transforms.ToTensor(),
    ])

    calibration_loader = DataLoader(
        CustomDataset(root_dir='/home/calibration_imgs', transform=transform),
        batch_size=1, shuffle=False
    )

    calibration_dataset = nncf.Dataset(calibration_loader, transform_fn)
    model = ov.Core().read_model('../models/end2end_m_ov.xml')
    quantized_model = nncf.quantize(model, calibration_dataset,
                                    target_device=nncf.TargetDevice.CPU)

    ov.serialize(quantized_model, "../models/end2end_m_ov_quant.xml")

    model_int8 = ov.Core().compile_model(quantized_model, device_name='CPU')


if __name__ == '__main__':
    main()
```
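For reference, the FP32 check mentioned above can be as simple as the sketch below (a minimal example, assuming the model has a static input shape; the path matches the script above):

```
import numpy as np
import openvino as ov

core = ov.Core()

# Compiling the FP32 IR succeeds, unlike the quantized model
compiled = core.compile_model('../models/end2end_m_ov.xml', device_name='CPU')

# Run inference on a dummy input shaped like the model's first input
# (assumes a static input shape)
input_shape = list(compiled.input(0).shape)
dummy = np.zeros(input_shape, dtype=np.float32)
result = compiled(dummy)
```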

 

6 Replies
Iffa_Intel
Moderator

Hi,

Could you share:

  1. Relevant model files
  2. Relevant inferencing code (custom code files, source of reference, etc.)
  3. OpenVINO version

Cordially,

Iffa
ewanmc123
Beginner

Hi Iffa,

Thanks for getting back to me. Here is a link to a zip file with all the necessary files:

https://drive.google.com/file/d/1hddUQEtTyAhmxifjnmvkGAWPwGBTHsld/view?usp=drive_link

The models folder contains the original ONNX model, the model converted straight to OpenVINO from ONNX (FP32), and the quantized OpenVINO model. In utils, I have also included a script for converting the ONNX model, a script for running the quantization, and a basic script for inference. I set up an 'nncf_ptq_env' venv precisely as described in the OpenVINO docs, my OpenVINO version is '2023.1.0-12185-9e6b00e51cd-releases/2023/1', and I am running this on Ubuntu 20.04. I hope this helps, and I look forward to hearing back.
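For anyone reproducing this, the version string above can be printed from Python; a minimal check, assuming the same environment as the scripts:

```
from openvino.runtime import get_version

# Prints the full build string, e.g. '2023.1.0-12185-9e6b00e51cd-releases/2023/1'
print(get_version())
```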

 

Kind Regards
Iffa_Intel
Moderator

Hi,

I can reproduce the same error when inferencing your quantized model on both Windows 11 and Ubuntu 20.04.

Windows 11 + ONNX (successful inferencing):
(attachment: onnxbenchmark.png)

Windows 11 + IR (successful inferencing):
(attachment: IRbenchmark.png)

Windows 11 + Quantized (failed inferencing): [ GENERAL_ERROR ] could not append an elementwise post-op
(attachment: quantizedBenchmark.png)

Ubuntu 20.04 + ONNX (successful inferencing):
(attachment: onnxBenchmarkub20.PNG)

Ubuntu 20.04 + IR (successful inferencing):
(attachment: IRbenchmark.PNG)

Ubuntu 20.04 + Quantized (failed inferencing): [ GENERAL_ERROR ] could not append an elementwise post-op
(attachment: quantizedBenchmark.PNG)
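The screenshots above appear to come from OpenVINO's benchmark_app; for anyone following along, a typical invocation looks like this (a sketch; the model paths are placeholders for the files in the shared zip):

```
# Benchmark the FP32 IR on CPU (runs successfully)
benchmark_app -m models/end2end_m_ov.xml -d CPU

# Benchmark the quantized IR on CPU; with this bug it fails at compile
# time with: [ GENERAL_ERROR ] could not append an elementwise post-op
benchmark_app -m models/end2end_m_ov_quant.xml -d CPU
```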

 

We will further investigate this and get back to you.

 

Cordially,

Iffa

ewanmc123
Beginner

Hi Iffa,

Thanks for the investigation thus far. I look forward to hearing what further investigation shows.

Kind Regards

Iffa_Intel
Moderator

Hi,

As a hotfix, you can add the nodes that produce FQ (FakeQuantize) ops with inf values to the ignored_scope parameter - https://github.com/openvinotoolkit/nncf/blob/develop/nncf/scopes.py#L24.

Adding these two layers to the ignored scope seems to work:

ignored_scope=IgnoredScope(names=['MatMul_606', "Add_608"])
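Wired into the quantization script from the first post, the hotfix would look roughly like this (a sketch; the layer names are specific to this model):

```
import nncf
from nncf import IgnoredScope

# Skip quantization of the nodes whose FakeQuantize ops end up with
# inf ranges; the names below are specific to this model
quantized_model = nncf.quantize(
    model,
    calibration_dataset,
    target_device=nncf.TargetDevice.CPU,
    ignored_scope=IgnoredScope(names=['MatMul_606', 'Add_608']),
)
```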

 

Here is a similar issue reported on GitHub:

https://github.com/openvinotoolkit/openvino/issues/20716



Cordially,

Iffa


Iffa_Intel
Moderator

Hi,


Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question. 



Cordially,

Iffa

