Intel® Distribution of OpenVINO™ Toolkit
Community support and discussions about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision-related on Intel® platforms.

Accuracy Configuration Failed

KCZ
Beginner

Hi,

I converted my custom YOLOv4 model into an OpenVINO IR model following this tutorial: https://github.com/TNTWEN/OpenVINO-YOLOV4.git

But when I try to create a new accuracy report, it shows "Accuracy Configuration Failed".

 

I am running with:

Ubuntu 18.04

OpenVINO 2021.4.582

 

The error log is as follows:

Traceback (most recent call last):
  File "/opt/intel/openvino_2021/deployment_tools/tools/workbench/wb/main/console_tool_wrapper/accuracy_tools/accuracy/check_accuracy.py", line 120, in <module>
    main()
  File "/opt/intel/openvino_2021/deployment_tools/tools/workbench/wb/main/console_tool_wrapper/accuracy_tools/accuracy/check_accuracy.py", line 106, in main
    profile=args.profile,
  File "/opt/intel/openvino/deployment_tools/open_model_zoo/tools/accuracy_checker/accuracy_checker/evaluators/model_evaluator.py", line 348, in process_dataset_sync
    batch_input_ids, batch_meta, enable_profiling, output_callback)
  File "/opt/intel/openvino/deployment_tools/open_model_zoo/tools/accuracy_checker/accuracy_checker/evaluators/model_evaluator.py", line 367, in _process_batch_results
    batch_predictions = self.adapter.process(batch_predictions, batch_identifiers, batch_meta)
  File "/opt/intel/openvino/deployment_tools/open_model_zoo/tools/accuracy_checker/accuracy_checker/adapters/yolo.py", line 444, in process
    self.processor, self.threshold)
  File "/opt/intel/openvino/deployment_tools/open_model_zoo/tools/accuracy_checker/accuracy_checker/adapters/yolo.py", line 135, in parse_output
    raw_bbox = DetectionBox(bbox[0], bbox[1], bbox[2], bbox[3], bbox[4], bbox[5:])
IndexError: index 0 is out of bounds for axis 0 with size 0
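From the traceback, the failure happens inside the YOLO adapter while it parses the raw model outputs, which suggests the adapter settings in my configuration may not match the converted model. For reference, this is roughly the shape of a yolo_v4 adapter section in an accuracy-check.yml; the output layer names, anchors, and anchor masks below are illustrative placeholders (yolo-v4-tiny defaults), not values taken from my model:

```yaml
adapter:
  type: yolo_v4
  classes: 2                                          # number of classes the model was trained on
  coords: 4
  anchors: 10,14,23,27,37,58,81,82,135,169,344,319    # yolo-v4-tiny defaults; replace with your anchors
  anchor_masks: [[3, 4, 5], [1, 2, 3]]
  threshold: 0.001
  outputs:                                            # placeholder layer names; use your IR's output names
    - conv2d_20/BiasAdd/Add
    - conv2d_17/BiasAdd/Add
```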

 

 

Zulkifli_Intel
Moderator

Hi KCZ,


Thank you for reaching out to us.

 

Please share your model, configuration file (.yml), and annotated dataset with us.

 

You can refer to How to Use Predefined Configuration Files and the Dataset Preparation Guide for reference.

 

Sincerely,

Zulkifli 


KCZ
Beginner

Hi, thanks for replying.

 

The YOLOv4-tiny model was trained on 2 classes with a custom dataset.

 

The validation dataset exceeds the upload size limit, so I am using the COCO val2017 dataset instead. The file layout of the archive is as follows:

|-annotations

        |-instances_val2017.json

|-val2017

        |-0000X.jpg

        |...

 

The configuration file cannot be attached here, so I am copying the part related to the MS COCO dataset:

datasets:
  - name: ms_coco_mask_rcnn
    annotation_conversion:
      converter: mscoco_mask_rcnn
      annotation_file: instances_val2017.json
      has_background: True
      sort_annotations: True
    annotation: mscoco_mask_rcnn.pickle
    dataset_meta: mscoco_mask_rcnn.json
    data_source: val2017

  - name: ms_coco_mask_rcnn_short_80_classes
    annotation_conversion:
      converter: mscoco_mask_rcnn
      annotation_file: instances_val2017_short.json
      has_background: True
      sort_annotations: True
    annotation: mscoco_mask_rcnn_short_80.pickle
    dataset_meta: mscoco_mask_rcnn_short_80.json
    data_source: val2017

  - name: ms_coco_mask_rcnn_short_80_classes_without_background
    annotation_conversion:
      converter: mscoco_mask_rcnn
      annotation_file: instances_val2017.json
      has_background: False
      sort_annotations: True
    annotation: mscoco_mask_rcnn_short_80_without_bkgr.pickle
    dataset_meta: mscoco_mask_rcnn_short_80_without_bkgr.json
    data_source: val2017

  - name: ms_coco_mask_rcnn_short_91_classes
    annotation_conversion:
      converter: mscoco_mask_rcnn
      annotation_file: instances_val2017_short.json
      has_background: True
      sort_annotations: True
      use_full_label_map: True
    annotation: mscoco_mask_rcnn_short_91.pickle
    dataset_meta: mscoco_mask_rcnn_short_91.json
    data_source: val2017
    preprocessing:
      - type: resize
        aspect_ratio_scale: fit_to_window
        dst_height: 800
        dst_width: 1365
      - type: padding
        dst_height: 800
        dst_width: 1365
        pad_type: right_bottom
    postprocessing:
      - type: faster_rcnn_postprocessing_resize
        dst_height: 800
        dst_width: 1365

  - name: ms_coco_detection_91_classes
    annotation_conversion:
      converter: mscoco_detection
      annotation_file: instances_val2017.json
      has_background: True
      sort_annotations: True
      use_full_label_map: True
    annotation: mscoco_det_91.pickle
    dataset_meta: mscoco_det_91.json
    data_source: val2017
    preprocessing:
      - type: resize
        aspect_ratio_scale: fit_to_window
        dst_height: 600
        dst_width: 1024
      - type: padding
        dst_height: 600
        dst_width: 1024
        pad_type: right_bottom
    postprocessing:
      - type: faster_rcnn_postprocessing_resize
        dst_height: 600
        dst_width: 1024

  - name: ms_coco_detection_80_class_without_background
    data_source: val2017
    annotation_conversion:
      converter: mscoco_detection
      annotation_file: instances_val2017.json
      has_background: False
      sort_annotations: True
      use_full_label_map: False
    annotation: mscoco_det_80.pickle
    dataset_meta: mscoco_det_80.json

  - name: ms_coco_detection_80_class_with_background
    data_source: val2017
    annotation_conversion:
      converter: mscoco_detection
      annotation_file: instances_val2017.json
      has_background: True
      sort_annotations: True
      use_full_label_map: False
    annotation: mscoco_det_80_bkgr.pickle
    dataset_meta: mscoco_det_80_bkgr.json

  - name: ms_coco_detection_90_class_without_background
    data_source: MSCOCO/val2017
    annotation_conversion:
      converter: mscoco_detection
      annotation_file: MSCOCO/annotations/instances_val2017.json
      has_background: False
      sort_annotations: True
      use_full_label_map: True
    annotation: mscoco_det_90.pickle
    dataset_meta: mscoco_det_90.json

  - name: ms_coco_keypoints
    data_source: val2017
    annotation_conversion:
      converter: mscoco_keypoints
      annotation_file: person_keypoints_val2017.json
      sort_key: image_size
    annotation: mscoco_keypoints.pickle
    dataset_meta: mscoco_keypoints.json
    metrics:
      - name: AP
        type: coco_precision
        max_detections: 20

  - name: ms_coco_val2017_keypoints
    data_source: val2017
    annotation_conversion:
      converter: mscoco_keypoints
      annotation_file: person_keypoints_val2017.json
      remove_empty_images: True
      sort_annotations: True
      sort_key: image_size
      images_dir: val2017
    annotation: mscoco_val2017_keypoints.pickle
    dataset_meta: mscoco_val2017_keypoints.json
    metrics:
      - name: AP
        type: coco_orig_keypoints_precision

  - name: ms_coco_val2017_keypoints_5k_images
    data_source: val2017
    annotation_conversion:
      converter: mscoco_keypoints
      annotation_file: person_keypoints_val2017.json
      remove_empty_images: False
      sort_annotations: True
      sort_key: image_size
      images_dir: val2017
    annotation: mscoco_val2017_keypoints_5k_images.pickle
    dataset_meta: mscoco_val2017_keypoints_5k_images.json
    metrics:
      - name: AP
        type: coco_orig_keypoints_precision

Zulkifli_Intel
Moderator

Hello KCZ,

 

The configuration file that you shared might not be in the correct format. Please share with us the full configuration file that you used to run with the accuracy checker tool.

 

You can refer to the sample accuracy-check.yml file for the YOLO-v4-tiny model in Open Model Zoo.

https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/public/yolo-v4-tiny-tf/accuracy...
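As a rough sketch only, a full accuracy-check.yml has a top-level models section that ties together a launcher, an adapter, and one of the dataset definitions. The model name, framework, adapter details, and dataset choice below are illustrative assumptions and must be replaced with the values matching your converted model:

```yaml
models:
  - name: custom-yolov4-tiny            # illustrative name
    launchers:
      - framework: dlsdk
        adapter:
          type: yolo_v4                 # adapter type must match the model's output format
          classes: 2                    # must match the number of trained classes
    datasets:
      - name: ms_coco_detection_80_class_without_background
        preprocessing:
          - type: resize
            size: 416                   # the model's input resolution
        postprocessing:
          - type: resize_prediction_boxes
        metrics:
          - type: map
            integral: 11point
```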

 

Sincerely,

Zulkifli 


Zulkifli_Intel
Moderator

Hello KCZ,


Thank you for your question. This thread will no longer be monitored since we have provided a solution. If you need any additional information from Intel, please submit a new question.


Sincerely,

Zulkifli

