Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Calculating accuracy of model after generating IR files

Kannan_K_Intel
Employee
1,046 Views

Hi,

I am trying to find the accuracy of my SSD Lite MobileNet model for object detection (TensorFlow).

I tried using the validation_app tool, and I am getting 0 for two classes; the total accuracy is below 1%. But I am getting decent detections while inferring with the object_detection_ssd sample. I tried with 2019 R1, R2 and 2018 R4, R5, but I am still getting very low or zero accuracy.

I also tried the DL Workbench tool in 2019 R2, but it seems to accept only the Pascal VOC dataset for object detection.

Can you suggest a tool or documentation page for calculating the accuracy of my model?


0 Kudos
12 Replies
Shubha_R_Intel
Employee
1,046 Views

Dear K, Kannan,

Yes, you are correct - the DL Workbench tool is limited in the number of datasets it supports, as it is in preview mode for R2. Future releases should support more.

You're better off doing it manually using the Python tools. In dldt GitHub issue 171, I give the poster detailed steps on how to accomplish it.
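In outline, the flow from that issue looks something like the sketch below. This is only a rough sketch: the converter name and every path are illustrative placeholders, not values from this thread.

:: run from the accuracy checker tool directory shipped with OpenVINO
cd "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\tools\accuracy_checker_tool"
:: 1) convert the raw dataset annotations into the checker's internal format
python convert_annotation.py <converter_name> <converter-specific options> -o <annotation_dir>
:: 2) run the checker against a config.yml that describes the launcher and dataset
python accuracy_check.py -c <config.yml> -m <model_dir> -s <data_dir> -a <annotation_dir>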

Please continue to use OpenVINO 2019 R2, as it contains many bug fixes and improvements.

Hope it helps!

Thanks,

Shubha

0 Kudos
Kannan_K_Intel
Employee
1,046 Views

Thanks, Shubha, for the quick reply.

I followed the steps you specified on the GitHub page for getting accuracy using the accuracy_checker tool.

I am able to successfully convert the dataset, but while running the accuracy_checker Python code I am facing the error below. The model I am using is SSD Lite MobileNet V2, tried with 2019 R2:

C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\tools\accuracy_checker_tool>python accuracy_check.py -c ~\openvino\convert_annotations\config.yml -s ~\openvino\convert_annotations -m ~\openvino\convert_annotations\model -a ~\openvino\convert_annotations --cpu_extensions_mode avx2


Processing info:
model: ssd
launcher: dlsdk
device: CPU
dataset: baggage
OpenCV version: 4.1.1-openvino
IE version: 2.0.27579
Loaded CPU plugin version: 2.0.27579
Traceback (most recent call last):
  File "accuracy_check.py", line 19, in <module>
    main()
  File "C:\Program Files (x86)\IntelSWTools\openvino\python\python3.7\openvino\tools\accuracy_checker\accuracy_checker\main.py", line 201, in main
    model_evaluation_mode(config, progress_reporter, args)
  File "C:\Program Files (x86)\IntelSWTools\openvino\python\python3.7\openvino\tools\accuracy_checker\accuracy_checker\main.py", line 217, in model_evaluation_mode
    model_evaluator = ModelEvaluator.from_configs(launcher_config, dataset_config)
  File "C:\Program Files (x86)\IntelSWTools\openvino\python\python3.7\openvino\tools\accuracy_checker\accuracy_checker\evaluators\model_evaluator.py", line 66, in from_configs
    launcher = create_launcher(launcher_config)
  File "C:\Program Files (x86)\IntelSWTools\openvino\python\python3.7\openvino\tools\accuracy_checker\accuracy_checker\launcher\launcher.py", line 179, in create_launcher
    return Launcher.provide(config_framework, launcher_config)
  File "C:\Program Files (x86)\IntelSWTools\openvino\python\python3.7\openvino\tools\accuracy_checker\accuracy_checker\dependency.py", line 67, in provide
    return root_provider(*args, **kwargs)
  File "C:\Program Files (x86)\IntelSWTools\openvino\python\python3.7\openvino\tools\accuracy_checker\accuracy_checker\launcher\dlsdk_launcher.py", line 205, in __init__
    self.exec_network = self.plugin.load(network=self.network, num_requests=requests_num)
  File "ie_api.pyx", line 551, in openvino.inference_engine.ie_api.IEPlugin.load
  File "ie_api.pyx", line 561, in openvino.inference_engine.ie_api.IEPlugin.load
RuntimeError: Unsupported primitive of type: PriorBoxClustered name: PriorBoxClustered_5

0 Kudos
Shubha_R_Intel
Employee
1,046 Views

Dear K, Kannan,

Hmmm. This seems to be coming from Model Optimizer "under the hood". Are you able to generate IR for this model with Model Optimizer?

Unsupported primitive of type: PriorBoxClustered name: PriorBoxClustered_5  

It means exactly what it says - a layer is used which is not supported by the Inference Engine. But according to the Model Optimizer supported TensorFlow models list, SSD Lite MobileNet V2 COCO is in fact supported. Did you use the download link provided there? And are you sure you are trying with OpenVINO 2019 R2, since SSD Lite is new to 2019 R2?

Please report back on this forum.

Thanks,

Shubha

0 Kudos
Kannan_K_Intel
Employee
1,046 Views

Hi Shubha,

Thanks again for the reply.

I am running the accuracy checker tool with the IR files generated on R2, and I did transfer learning on the model.

The same model infers fine with the object_detection_demo_ssd_async sample.


0 Kudos
Shubha_R_Intel
Employee
1,046 Views

Dear K, Kannan,

Which *.yml file did you use for the --config argument to accuracy_check.py? I suspect this is what is wrong - you may be using the wrong one, or one that is incorrectly written.

Please let me know.

Thanks,

Shubha


0 Kudos
Shubha_R_Intel
Employee
1,046 Views

Dear K, Kannan,

I confirmed that we don't have an SSD Lite MobileNet V2 config.yml. I'm not sure which yml you are using, but if you're not using a proper one designed for SSD Lite MobileNet V2, you will get errors. You can certainly devise one - and if you do, accuracy_check.py should work; see the sketch after this paragraph. Please look at the existing yml examples in the C:\Program Files (x86)\IntelSWTools\openvino_2019.2.242\python\python3.7\openvino\tools\accuracy_checker\configs directory.
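As a starting point, a minimal config for an SSD-style IR could look roughly like the sketch below. This is only a sketch: the model, dataset, and file names are illustrative placeholders and must match your own IR files and converted annotation.

models:
  - name: ssdlite_mobilenet_v2
    launchers:
      - framework: dlsdk
        device: CPU
        model: frozen_inference_graph.xml      # your IR files
        weights: frozen_inference_graph.bin
        adapter: ssd                           # parses SSD DetectionOutput results
    datasets:
      - name: baggage                          # must match the converted annotation
        data_source: images                    # directory with the validation images
        annotation: annotation.pickle          # produced by the annotation converter
        preprocessing:
          - type: resize
            size: 300                          # SSD Lite MobileNet V2 expects 300x300 input
        postprocessing:
          - type: resize_prediction_boxes
        metrics:
          - type: map                          # mean average precision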

Hope it helps,

thanks,

Shubha

0 Kudos
Kannan_K_Intel
Employee
1,046 Views

Hi Shubha,

Thanks for the follow up.

I wrote a custom yml file and I am attaching the file here.


Thanks,

Kannan K

0 Kudos
Shubha_R_Intel
Employee
1,046 Views

Dear K, Kannan,

Thanks! I will take a look.

Shubha

0 Kudos
Shubha_R_Intel
Employee
1,046 Views

Dear K, Kannan,

Before we complicate matters by involving accuracy_check.py, can you even use regular old Model Optimizer to generate IR for your model? Please try that first on OpenVINO 2019 R2.

Thanks,

Shubha

0 Kudos
Kannan_K_Intel
Employee
1,046 Views

Hi Shubha,

To be clear, I am trying to get the accuracy of my model using the IR files that I converted with OpenVINO R2, as I mentioned earlier.

I converted my SSD Lite MobileNet V2 model to IR using OpenVINO R2:

C:\Program Files (x86)\IntelSWTools\openvino_2019.2.242\deployment_tools\model_optimizer>python mo_tf.py --input_model C:\Users\kkanna2x\Desktop\baggage\openvino\lite_225k\frozen_inference_graph.pb --output_dir C:\Users\kkanna2x\Desktop\baggage\openvino\forum --tensorflow_use_custom_operations_config extensions\front\tf\ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config C:\Users\kkanna2x\Desktop\baggage\openvino\lite_225k\pipeline.config --reverse_input_channels
Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      C:\Users\kkanna2x\Desktop\baggage\openvino\lite_225k\frozen_inference_graph.pb
        - Path for generated IR:        C:\Users\kkanna2x\Desktop\baggage\openvino\forum
        - IR output name:       frozen_inference_graph
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         Not specified, inherited from the model
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       True
TensorFlow specific parameters:
        - Input model in text protobuf format:  False
        - Path to model dump for TensorBoard:   None
        - List of shared libraries with TensorFlow custom layers implementation:        None
        - Update the configuration file with input/output node names:   None
        - Use configuration file used to generate the model with Object Detection API:  C:\Users\kkanna2x\Desktop\baggage\openvino\lite_225k\pipeline.config
        - Operations to offload:        None
        - Patterns to offload:  None
        - Use the config file:  C:\Program Files (x86)\IntelSWTools\openvino_2019.2.242\deployment_tools\model_optimizer\extensions\front\tf\ssd_v2_support.json
Model Optimizer version:        2019.2.0-436-gf5827d4
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.

[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: C:\Users\kkanna2x\Desktop\baggage\openvino\forum\frozen_inference_graph.xml
[ SUCCESS ] BIN file: C:\Users\kkanna2x\Desktop\baggage\openvino\forum\frozen_inference_graph.bin
[ SUCCESS ] Total execution time: 155.46 seconds.


I am able to convert my .pb file to IR, and I tried the Inference Engine sample object_detection_sample_ssd.

Here is one example:

C:\Users\kkanna2x\Documents\Intel\OpenVINO\inference_engine_samples_build\intel64\Release>object_detection_sample_ssd.exe -m C:\Users\kkanna2x\Desktop\baggage\openvino\forum\frozen_inference_graph.xml -i C:\Users\kkanna2x\Desktop\baggage\test.jpg -d CPU -l cpu_extension.dll
[ INFO ] InferenceEngine:
        API version ............ 2.0
        Build .................. 27579
        Description ....... API
Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     C:\Users\kkanna2x\Desktop\baggage\test.jpg
[ INFO ] Loading Inference Engine
[ INFO ] Device info:
        CPU
        MKLDNNPlugin version ......... 2.0
        Build ........... 27579
[ INFO ] CPU Extension loaded: cpu_extension.dll
[ INFO ] Loading network files:
        C:\Users\kkanna2x\Desktop\baggage\openvino\forum\frozen_inference_graph.xml
        C:\Users\kkanna2x\Desktop\baggage\openvino\forum\frozen_inference_graph.bin
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the device
[ INFO ] Create infer request
[ WARNING ] Image is resized from (275, 249) to (300, 300)
[ INFO ] Batch size is 1
[ INFO ] Start inference
[ INFO ] Processing output blobs
[0,1] element, prob = 0.0244563    (4,33)-(260,211) batch id : 0
[1,1] element, prob = 0.00697048    (-31,0)-(195,101) batch id : 0
...
[94,4] element, prob = 0.00293151    (142,123)-(243,298) batch id : 0
[95,4] element, prob = 0.00286348    (186,80)-(238,163) batch id : 0
[96,4] element, prob = 0.0028563    (161,73)-(314,169) batch id : 0
[97,4] element, prob = 0.00284527    (171,66)-(215,136) batch id : 0
[98,4] element, prob = 0.00281829    (96,218)-(126,240) batch id : 0
[99,4] element, prob = 0.00279953    (198,98)-(222,127) batch id : 0
[ INFO ] Image out_0.bmp created!
[ INFO ] Execution successful

[ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool

The objects are detected correctly. I am not getting any layer issues here, only while running the accuracy checker tool.


0 Kudos
Shubha_R_Intel
Employee
1,046 Views

Dear K, Kannan ,

I understand the distinction. I am just trying to narrow down the error. You are getting the below error at runtime (Inference Engine):

    self.exec_network = self.plugin.load(network=self.network, num_requests=requests_num)
  File "ie_api.pyx", line 551, in openvino.inference_engine.ie_api.IEPlugin.load
  File "ie_api.pyx", line 561, in openvino.inference_engine.ie_api.IEPlugin.load
RuntimeError: Unsupported primitive of type: PriorBoxClustered name: PriorBoxClustered_5

This happens upon running accuracy_check.py. Are you sure you're passing the correct Model Optimizer directory to -M? What about --tf_custom_op_config_dir?

Please post your entire command for accuracy_check.py here; for comparison, a sketch of such a command follows.
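For reference, if you want the checker to invoke Model Optimizer itself on the frozen .pb, the command might look roughly like the sketch below. Every path in angle brackets is an illustrative placeholder; the flags are the ones listed in the help output further down in this thread.

python accuracy_check.py -c <config.yml> ^
    -m <model_dir> -s <data_dir> -a <annotation_dir> ^
    -M "C:\Program Files (x86)\IntelSWTools\openvino_2019.2.242\deployment_tools\model_optimizer" ^
    --tf_custom_op_config_dir "C:\Program Files (x86)\IntelSWTools\openvino_2019.2.242\deployment_tools\model_optimizer\extensions\front\tf" ^
    --tf_obj_detection_api_pipeline_config_path <directory containing pipeline.config>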

Also, looking at your config.yml, it doesn't look complete. Please study c:\Program Files (x86)\IntelSWTools\openvino_2019.2.242\deployment_tools\tools\calibration_tool\configs\ssd_mobilenet_v1_coco.yml for an example. For instance, your *.yml is missing:

mo_params:
  data_type: FP32
  tensorflow_use_custom_operations_config: ssd_v2_support.json
  tensorflow_object_detection_api_pipeline_config: ssd_mobilenet_v1_coco.config

If you are using certain MO parameters in your Model Optimizer command, then you need to specify those in your config.yml as well - see the sketch below.
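Mirroring the Model Optimizer command you posted above, the launcher section could carry something like the sketch below, assuming the dlsdk launcher accepts mo_params/mo_flags the same way the ssd_mobilenet_v1_coco.yml example does (file names are placeholders):

launchers:
  - framework: dlsdk
    tf_model: frozen_inference_graph.pb        # let the checker run MO on the .pb
    adapter: ssd
    mo_params:
      data_type: FP32
      tensorflow_use_custom_operations_config: ssd_v2_support.json
      tensorflow_object_detection_api_pipeline_config: pipeline.config
    mo_flags:
      - reverse_input_channels                 # you passed --reverse_input_channels to MO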

Also please look at c:\Program Files (x86)\IntelSWTools\openvino_2019.2.242\python\python3.7\openvino\tools\accuracy_checker\README.md

Start with a simple version of the command, as the document describes:

accuracy_check -c path/to/configuration_file -m /path/to/models -s /path/to/source/data -a /path/to/annotation
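One more thing worth checking (an observation, not a confirmed fix): your working object_detection_sample_ssd run loaded cpu_extension.dll with -l, while your accuracy_check.py command only set --cpu_extensions_mode. The checker's analogue is -e, the prefix path to the extensions folder (see the help below); for example, with an illustrative path:

accuracy_check -c <config.yml> -m <model_dir> -s <data_dir> -a <annotation_dir> -e "C:\Users\<you>\Documents\Intel\OpenVINO\inference_engine_samples_build\intel64\Release"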

Sadly, --h doesn't work; it throws an exception (I'm going to write a bug on that). And the latest R2 documentation doesn't even mention some of the options covered in GitHub issue 171, which I referred you to earlier.

Thanks,

shubha

0 Kudos
Shubha_R_Intel
Employee
1,046 Views

Dear K, Kannan ,

Sorry, but the help is available, so I misspoke earlier. You have to be sure to install all the prerequisites first (requirements.txt). Here is the help with the full list of options:

usage: accuracy_check.py [-h] [-d DEFINITIONS] -c CONFIG [-m MODELS]
                         [-s SOURCE] [-a ANNOTATIONS] [-e EXTENSIONS]
                         [--cpu_extensions_mode {avx512,avx2,sse4}]
                         [-b BITSTREAMS]
                         [--stored_predictions STORED_PREDICTIONS]
                         [-C CONVERTED_MODELS] [-M MODEL_OPTIMIZER]
                         [--tf_custom_op_config_dir TF_CUSTOM_OP_CONFIG_DIR]
                         [--tf_obj_detection_api_pipeline_config_path TF_OBJ_DETECTION_API_PIPELINE_CONFIG_PATH]
                         [--progress PROGRESS]
                         [--progress_interval PROGRESS_INTERVAL]
                         [-tf TARGET_FRAMEWORK]
                         [-td TARGET_DEVICES [TARGET_DEVICES ...]]
                         [-tt TARGET_TAGS [TARGET_TAGS ...]] [-l LOG_FILE]
                         [--ignore_result_formatting IGNORE_RESULT_FORMATTING]
                         [-am AFFINITY_MAP] [--aocl AOCL]
                         [--vpu_log_level {LOG_NONE,LOG_WARNING,LOG_INFO,LOG_DEBUG}]

Deep Learning accuracy validation framework

optional arguments:
  -h, --help            show this help message and exit
  -d DEFINITIONS, --definitions DEFINITIONS
                        path to the yml file with definitions
  -c CONFIG, --config CONFIG
                        path to the yml file with local configuration
  -m MODELS, --models MODELS
                        prefix path to the models and weights
  -s SOURCE, --source SOURCE
                        prefix path to the data source
  -a ANNOTATIONS, --annotations ANNOTATIONS
                        prefix path to the converted annotations and datasets
                        meta data
  -e EXTENSIONS, --extensions EXTENSIONS
                        prefix path to extensions folder
  --cpu_extensions_mode {avx512,avx2,sse4}
                        specified preferable set of processor instruction for
                        automatic searching cpu extension lib
  -b BITSTREAMS, --bitstreams BITSTREAMS
                        prefix path to bitstreams folder
  --stored_predictions STORED_PREDICTIONS
                        path to file with saved predictions. Used for
                        development
  -C CONVERTED_MODELS, --converted_models CONVERTED_MODELS
                        directory to store Model Optimizer converted models.
                        Used for DLSDK launcher only
  -M MODEL_OPTIMIZER, --model_optimizer MODEL_OPTIMIZER
                        path to model optimizer directory
  --tf_custom_op_config_dir TF_CUSTOM_OP_CONFIG_DIR
                        path to directory with tensorflow custom operation
                        configuration files for model optimizer
  --tf_obj_detection_api_pipeline_config_path TF_OBJ_DETECTION_API_PIPELINE_CONFIG_PATH
                        path to directory with tensorflow object detection
                        api pipeline configuration files for model optimizer
  --progress PROGRESS   progress reporter. You can select bar or print
  --progress_interval PROGRESS_INTERVAL
                        interval for update progress if selected print
                        progress.
  -tf TARGET_FRAMEWORK, --target_framework TARGET_FRAMEWORK
                        framework for infer
  -td TARGET_DEVICES [TARGET_DEVICES ...], --target_devices TARGET_DEVICES [TARGET_DEVICES ...]
                        Space separated list of devices for infer
  -tt TARGET_TAGS [TARGET_TAGS ...], --target_tags TARGET_TAGS [TARGET_TAGS ...]
                        Space separated list of launcher tags for infer
  -l LOG_FILE, --log_file LOG_FILE
                        file for additional logging results
  --ignore_result_formatting IGNORE_RESULT_FORMATTING
                        allow to get raw metrics results without data
                        formatting
  -am AFFINITY_MAP, --affinity_map AFFINITY_MAP
                        prefix path to the affinity maps
  --aocl AOCL           path to aocl executable for FPGA bitstream programming
  --vpu_log_level {LOG_NONE,LOG_WARNING,LOG_INFO,LOG_DEBUG}
                        log level for VPU devices

0 Kudos