
Example of Labels From CSV

I want to run the accuracy_checker on my data.  I have my images organized in folders by label.  For example, data/label1/image1.npy, data/label1/image2.npy, data/label2/image3.npy, data/label2/image4.npy.  What's the best way to convert this into something that accuracy_checker can use?
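One straightforward approach is to flatten the folder-per-label layout into a simple path/label CSV that an annotation converter can consume. This is a sketch, not the official procedure: the column names (`image`, `label`) and the exact format a stock converter expects are assumptions you should check against the converter documentation.

```python
import csv
from pathlib import Path

def folder_layout_to_csv(data_root, out_csv):
    """Walk data_root/<label>/<image> and write (relative_path, label) rows.

    Assumes one subdirectory per class label, e.g.:
        data/label1/image1.npy
        data/label2/image3.npy
    """
    root = Path(data_root)
    rows = []
    for label_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        for image in sorted(label_dir.iterdir()):
            # Store paths relative to the dataset root so the checker's
            # data-source directory can later be pointed at data_root.
            rows.append((str(image.relative_to(root)), label_dir.name))
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["image", "label"])  # header names are illustrative
        writer.writerows(rows)
    return rows
```

If a built-in CSV-based converter expects different column names, renaming the header row is usually the only change needed.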

JAIVIN_J_Intel
Employee

Hi Joshua,

Thanks for reaching out.

Please follow the steps for testing a new model with the Accuracy Checker.

Feel free to ask if you have any other questions.

Regards,

Jaivin


I found that page.  It would seem that I need to use the right Annotation Converter or write my own.  The problem is that I can't find anything that explicitly specifies the annotation format for the existing converters, points me to examples, or clearly explains the interface I need to implement to write my own.
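For what it's worth, the custom-converter route generally boils down to: subclass the checker's converter base class, register a provider name, and return one annotation object per image. Since the exact base-class API varies between accuracy_checker versions, the sketch below keeps the parsing logic in a plain, testable function; the names `BaseFormatConverter`, `__provider__`, and `ClassificationAnnotation` mentioned in the comments are assumptions to verify against the accuracy_checker source.

```python
from pathlib import Path

def parse_folder_annotations(data_root):
    """Build classification annotations from a folder-per-label layout.

    In a real converter, this logic would live in the convert() method of a
    BaseFormatConverter subclass (registered under a __provider__ name you
    then reference from the dataset section of your *.yml config), and each
    dict below would be a ClassificationAnnotation object. Those class and
    attribute names are assumptions here, not a guaranteed API.
    """
    root = Path(data_root)
    # Map folder names to integer class ids in a stable (sorted) order.
    labels = sorted(p.name for p in root.iterdir() if p.is_dir())
    label_to_id = {name: i for i, name in enumerate(labels)}
    annotations = []
    for name in labels:
        for image in sorted((root / name).iterdir()):
            annotations.append({
                "identifier": str(image.relative_to(root)),
                "label": label_to_id[name],
            })
    return annotations, label_to_id
```

Keeping the label-to-id mapping deterministic (sorted folder names) matters: the ids baked into the annotation file must match the ordering your model's output classes use.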


Hi Joshua,

The following steps may help you explore this further.

Link: https://software.intel.com/en-us/forums/intel-distribution-of-openvino-toolkit/topic/810212

Shubha R. (Intel) wrote:

Dear Tomasz S

We definitely support INT8 precision. But in order to use it you must convert your model to INT8 using the calibration tool. Before using the calibration tool, however, you must generate a *.json and a *.pickle using convert_annotation.py, which requires knowing in advance which dataset you want to use. Please read here for additional information on annotation converters:
https://github.com/opencv/dldt/blob/2019/tools/accuracy_checker/accuracy_checker/annotation_converte...

The steps below don't exactly pertain to ssd_mobilenet_v1_coco, but hopefully you can extrapolate the steps for ssd_mobilenet_v1_coco from the hints below.

So the steps to do that are:
1)https://github.com/opencv/dldt/blob/2019/inference-engine/tools/accuracy_checker_tool/convert_annotation.py

2)https://github.com/opencv/dldt/blob/2019/inference-engine/tools/calibration_tool/calibrate.py

3)https://github.com/opencv/dldt/blob/2019/inference-engine/tools/accuracy_checker_tool/accuracy_check.py

So in summary, step 1) creates a *.json and a *.pickle file, which are consumed by step 2) (referenced via a "definitions.yml" file); if everything works, step 2) produces an INT8 IR. Then in step 3) you check the accuracy of the INT8 IR created by step 2). Many model flavors are definitely supported.

You can get some sample configuration (*.yml) files here:
https://software.intel.com/en-us/forums/computer-vision/topic/807243

I know it's a lot of information, but the basic idea is that until you use our tools to generate a proper INT8 IR, INT8 precision cannot be supported in OpenVINO.

Here are some sample commands which you can use as guidance:

python convert_annotation.py imagenet --annotation_file /media/user/icv_externalN/omz-validation-datasets/ImageNet/val.txt --labels_file /media/user/icv_externalN/omz-validation-datasets/ImageNet/synset_words.txt -ss 2000 -o ~/annotations -a imagenet_calibration.pickle -m imagenet_calibration.json
(to create *.json and *.pickle for step 2)

python calibrate.py --config ~/inception_v4.yml --definition ~/definitions.yml -M /home/bob/intel/openvino/deployment_tools/model_optimizer --tf_custom_op_config_dir ~/tf_custom_op_configs --models ~/models --source /media/user/icv_externalN/omz-validation-datasets --annotations ~/annotations --cfc

python convert_annotation.py imagenet --annotation_file /media/user/icv_externalN/omz-validation-datasets/ImageNet/val.txt --labels_file /media/user/icv_externalN/omz-validation-datasets/ImageNet/synset_words.txt -o ~/annotations -a imagenet.pickle -m imagenet.json
(do it again before you do accuracy check, step 3) - create new *.json and *.pickle).

python accuracy_check.py --config ~/inception_v4.yml -d ~/definitions.yml -M /home/bob/intel/openvino/deployment_tools/model_optimizer --tf_custom_op_config_dir ~/tf_custom_op_configs --models ~/models --source /media/user/icv_externalN/omz-validation-datasets --annotations ~/annotations -tf dlsdk -td CPU

Hope it helps. Post here if you are confused or need more help.

Thanks,

Shubha



I was able to get something working after lots of trial and error.  The accuracy checker now reports the same accuracy I have achieved using other tools.  I used this to build a quantized model; however, the inference engine won't run it.  I'll save that for another post.
