Goal: Evaluate efficientdet-d3 model on COCO validation dataset using accuracy_checker
- Download efficientdet-d3 and freeze the graph using the steps mentioned in this repository.
- Convert the frozen graph from Step 1 to OpenVINO IR with FP32 precision.
- Use the following config.yaml to run the accuracy checker. I took the efficientdet-d0-tf.yml file found under accuracy_checker/configs/ and modified it. I used COCO's validation dataset, which has 5000 images.
models:
  - name: ms_coco_detection
    datasets:
      - name: ms_coco_detection
        data_source: val2017
        annotation_conversion:
          converter: mscoco_detection
          images_dir: /home/siddhant.sahu/pot_data/dataset/val2017/
          annotation_file: /home/siddhant.sahu/pot_data/dataset/instances_val2017.json
        preprocessing:
          - type: resize
            aspect_ratio_scale: fit_to_window
            size: 896
          - type: padding
            size: 896
            pad_type: right_bottom
        postprocessing:
          - type: faster_rcnn_postprocessing_resize
            size: 896
          - type: shift_labels
            offset: 1
        metrics:
          - type: coco_precision
    launchers:
      - framework: dlsdk
        device: CPU
        batch: 1
        adapter: ssd
        model: /home/siddhant.sahu/pot_data/models/efficientdet/fp32/frozen_inference_graph.xml
        weights: /home/siddhant.sahu/pot_data/models/efficientdet/fp32/frozen_inference_graph.bin
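For reference, steps 1 and 2 can be sketched roughly as follows. This assumes the google/automl EfficientDet repository for the freeze step and an older Model Optimizer CLI for the conversion; the model names, paths, and the `automl_efficientdet.json` transformations config are assumptions based on the public docs, so adjust them to your setup:

```shell
# Step 1: download the efficientdet-d3 checkpoint and export a frozen graph.
# model_inspect.py comes from the google/automl efficientdet repository.
python model_inspect.py \
    --runmode=saved_model \
    --model_name=efficientdet-d3 \
    --ckpt_path=efficientdet-d3 \
    --saved_model_dir=savedmodeldir

# Step 2: convert the frozen graph to OpenVINO IR with FP32 precision.
# 896 is the efficientdet-d3 input resolution; the transformations config
# for AutoML EfficientDet ships with Model Optimizer.
python mo.py \
    --input_model savedmodeldir/efficientdet-d3_frozen.pb \
    --transformations_config automl_efficientdet.json \
    --input_shape [1,896,896,3] \
    --data_type FP32 \
    --reverse_input_channels
```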
When I execute the above steps, I get a coco_precision value of 0.01%, which is clearly incorrect.
To verify that the model conversion with Model Optimizer was executed correctly, I checked the AP value using custom code I wrote and got 0.435, the same as what's mentioned here.
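One thing worth checking when a detection metric collapses to near zero while the model itself is fine is label alignment between predictions and annotations. This hypothetical, minimal sketch (not the accuracy_checker implementation) shows why an option like `shift_labels` with `offset: 1` can make the difference between 0% and correct matching:

```python
# Suppose the network emits 0-based class indices while the COCO ground-truth
# annotations use 1-based category ids. Every detection then "misses" its
# class, and precision collapses to ~0 even for perfect boxes.
gt_labels = [1, 3, 18]          # e.g. person, car, dog (1-based COCO ids)
raw_predictions = [0, 2, 17]    # the same objects, as 0-based network indices

def shift_labels(labels, offset):
    """Add a fixed offset to prediction labels so they line up with the
    annotation label space (the idea behind the shift_labels postprocessor)."""
    return [label + offset for label in labels]

matches_without_shift = sum(p == g for p, g in zip(raw_predictions, gt_labels))
matches_with_shift = sum(
    p == g for p, g in zip(shift_labels(raw_predictions, 1), gt_labels)
)
print(matches_without_shift, matches_with_shift)  # 0 3
```

The same class of mismatch is controlled by `has_background` and `use_full_label_map` in the converter options, which is why toggling them can turn a 0.01% score into a plausible one.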
What am I missing here?
Thank you for your patience. We have confirmed that efficientdet-d3 is not yet supported within the OpenVINO toolkit. You may still be able to use our accuracy checker tool for the above-mentioned model, but we cannot confirm or provide any exact result, as it has yet to be validated by our OpenVINO development team.
EfficientDet works with OpenVINO - I have tested it already. Anyway, I spent some more time trying to figure out the issue and was able to get non-zero results for the map (mean average precision) and coco_precision metrics using the accuracy_checker tool. I'm not sure whether the results are correct.
I used DL Workbench to capture the command that runs in the background whenever accuracy is evaluated. Using that, I modified the config to the following:
models:
  - name: ms_coco_detection
    datasets:
      - name: ms_coco_detection
        data_source: val2017
        annotation_conversion:
          converter: mscoco_detection
          has_background: true
          use_full_label_map: false
          images_dir: /home/pot_data/dataset/val2017/
          annotation_file: /home/pot_data/dataset/instances_val2017.json
        preprocessing:
          - type: auto_resize
        postprocessing:
          - type: resize_prediction_boxes
        metrics:
          - type: map
            integral: max
            overlap_threshold: 0.5
    launchers:
      - framework: dlsdk
        device: CPU
        batch: 1
        adapter: ssd
        model: /home/pot_data/models/efficientdet/fp32/frozen_inference_graph.xml
        weights: /home/pot_data/models/efficientdet/fp32/frozen_inference_graph.bin
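For completeness, a typical invocation of the tool against a config like the one above looks roughly like this; the config filename and dataset path are placeholders for your own:

```shell
# -c: path to the accuracy checker config
# -s: root directory that data_source paths are resolved against
# -td: target device(s) to run on
accuracy_check -c efficientdet-d3.yml -s /home/pot_data/dataset -td CPU
```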
The accuracy_checker documentation could be improved with working configs for the models. The efficientdet config present in the repository (under accuracy_checker/configs/) seems to be incorrect, because it returns 0 for both map and coco_precision.
Hi Siddhant Sahu,
That particular config file you are using belongs to EfficientDet-d0, not EfficientDet-d3. As mentioned before, you can use the file, but the result might not be accurate, as we cannot validate any correct value for the accuracy checker with EfficientDet-d3. Even though you can get a value by editing or adding some of the metrics, we don't have an exact mAP reference against which to judge the correctness of the value that you get.
This thread will no longer be monitored since we have provided a solution. If you need any additional information from Intel, please submit a new question.