Hi, I am trying to convert my model to INT8 using Accuracy-Aware optimization. However, when I try to optimize it I get this error:
RuntimeError: Inconsistent number of per-sample metric values
I am not able to find out what this means.
I have attached my configuration file below, renamed to .txt because I am not allowed to upload .json. I have also attached the annotation.txt file of my dataset.
The model converts successfully when I use Default optimization.
Greetings,
Are you trying to implement INT8 quantization?
If so, you need to take the dataset type into consideration.
Only these five dataset types are supported:
- ImageNet
- Pascal Visual Object Classes (Pascal VOC)
- Common Objects in Context (COCO)
- Common Semantic Segmentation
- Unannotated dataset
You can refer to these links for further info:
- https://docs.openvinotoolkit.org/latest/_docs_Workbench_DG_Dataset_Types.html
- https://docs.openvinotoolkit.org/latest/_docs_Workbench_DG_Int_8_Quantization.html
Sincerely,
Iffa
I have the same problem when trying to convert to 8-bit ("Inconsistent number of per-sample metric values").
I am trying to use ResNet-50 on ImageNet with a slightly different directory structure (shown below).
Any suggestions on how to solve this problem?
My ImageNet structure is:
- Imagenet
  - train (NOT USED)
    - ....
  - val
    - annotation.txt
    - n03187595
      - ILSVRC2012_val_00034672.JPEG
      - ILSVRC2012_val_00037631.JPEG
      - .....
    - n04356056
      - ILSVRC2012_val_00039734.JPEG
      - .....
    - ....
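One plausible cause of the "Inconsistent number of per-sample metric values" error is a mismatch between the entries in annotation.txt and the images actually present under val/. A small sanity-check script for that, assuming a plain "<relative path> <class id>" annotation format (adjust the parsing to whatever format your converter actually expects):

```python
import os

def check_annotation(val_dir, annotation_file="annotation.txt"):
    """Compare the images listed in annotation.txt against the image
    files actually present under the validation directory."""
    ann_path = os.path.join(val_dir, annotation_file)
    with open(ann_path) as f:
        # each non-empty line is assumed to be "<relative/image/path> <class_id>"
        listed = {line.split()[0] for line in f if line.strip()}

    on_disk = set()
    for root, _dirs, files in os.walk(val_dir):
        for name in files:
            if name.lower().endswith((".jpeg", ".jpg", ".png")):
                rel = os.path.relpath(os.path.join(root, name), val_dir)
                on_disk.add(rel.replace(os.sep, "/"))

    missing = listed - on_disk    # annotated but not on disk
    unlisted = on_disk - listed   # on disk but not annotated
    return missing, unlisted
```

If either set comes back non-empty, the annotation and the dataset disagree, which is worth ruling out before digging into the config files.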
The dataset wasn't the issue in my case. What worked for me was a combination of .json and .yaml files. Have a look at the samples in the C:\Program Files (x86)\IntelSWTools\openvino_2020.4.287\deployment_tools\tools\post_training_optimization_toolkit\configs folder.
I have attached a few files. Maybe try splitting your .json into two such files. Let me know if this works.
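Roughly, the split described above pairs a POT .json (model paths plus the compression algorithm) with an Accuracy Checker .yaml (dataset, preprocessing, metric). A sketch of what such a pair might look like for an ImageNet classification model; all paths, preprocessing values, and the maximal_drop figure are placeholders, and the exact field names should be checked against the samples in the configs folder:

```json
{
    "model": {
        "model_name": "resnet50",
        "model": "<path>/resnet50.xml",
        "weights": "<path>/resnet50.bin"
    },
    "engine": {
        "config": "./resnet50.yaml"
    },
    "compression": {
        "algorithms": [
            {
                "name": "AccuracyAwareQuantization",
                "params": {
                    "preset": "performance",
                    "stat_subset_size": 300,
                    "maximal_drop": 0.01
                }
            }
        ]
    }
}
```

```yaml
models:
  - name: resnet50
    launchers:
      - framework: dlsdk
        adapter: classification
    datasets:
      - name: imagenet
        data_source: /path/to/Imagenet/val
        annotation_conversion:
          converter: imagenet
          annotation_file: /path/to/Imagenet/val/annotation.txt
        preprocessing:
          - type: resize
            size: 256
          - type: crop
            size: 224
        metrics:
          - type: accuracy
            top_k: 1
```

Swapping "AccuracyAwareQuantization" for "DefaultQuantization" (and dropping maximal_drop) should give the plain Default optimization path.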
@dopeuser - Thanks a lot!
Your .yaml and .json files solved the issue!
Where did you find the documentation explaining how to use them?
You are welcome. I didn't find it in any documentation; I spent a lot of time going through the directories and the source code. Their documentation is pretty inconsistent. I suggest going through all the examples to see if you find something similar to your use case.
@dopeuser If you have the experience, maybe you can help?
I am trying to quantize several ImageNet models (e.g. ResNet-50, MobileNetV2, V3, EfficientNets, etc.) to 8-bit.
Do you have any insight into how much accuracy drop to expect, and how many images/steps are needed for that?
Any information would help.
Thanks a lot!
I didn't explore this further. Accuracy-aware quantization always removed quantization from all the layers for me, so I really can't help you with this.