I am trying to find out whether it is possible to use the Accuracy Validation Framework (Accuracy Checker) with my own model, my own dataset, and my own dataset definition.
In other words, could I possibly extend the 'dataset_definitions.yml' somehow to make it work with my own dataset and definition?
Thank you very much for your help.
@kalocsai Sure, that's how the Accuracy Checker is extended with support for new tasks, models, and datasets. Just read through the Accuracy Checker documentation, either in the OpenVINO online documentation or directly on the Open Model Zoo GitHub. You can also contribute your work to Open Model Zoo if you would like to.
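For orientation, a custom entry appended under the dataset list in 'dataset_definitions.yml' might look like the sketch below. The field names ('name', 'data_source', 'annotation_conversion', 'converter') follow the usual Accuracy Checker schema, but the dataset name, paths, and the reuse of the 'imagenet' converter are assumptions to adapt to your own data layout:

```python
# Sketch of a custom dataset entry for dataset_definitions.yml.
# The name, paths, and converter choice below are illustrative only.
CUSTOM_DATASET_ENTRY = """\
  - name: my_custom_dataset
    data_source: path/to/test_data
    annotation_conversion:
      converter: imagenet
      annotation_file: path/to/labels.txt
"""

def referenced_converter(entry: str) -> str:
    """Pull the converter name out of the entry (simple string scan,
    so this sketch has no PyYAML dependency)."""
    for line in entry.splitlines():
        line = line.strip()
        if line.startswith("converter:"):
            return line.split(":", 1)[1].strip()
    return ""

print(referenced_converter(CUSTOM_DATASET_ENTRY))  # imagenet
```

If your data already follows an existing layout (e.g. ImageNet-style folders plus a labels file), reusing the matching built-in converter saves writing a custom one.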
Thank you, my tf2 model now works with the Accuracy Checker. However, it still doesn't work with the DL Workbench tool. With pretty much identical config files, the only difference I could think of is that while the Accuracy Checker takes the 'launchers: framework: tf2' argument, the Workbench, of course, only takes 'launchers: framework: dlsdk'. I can run my model inside the Workbench; it's just that I am unable to calculate accuracy on it. This is the end of the error message for accuracy:
File "/opt/intel/openvino/deployment_tools/open_model_zoo/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py", line 79, in __init__
    self.metadata['image_size'] = data.shape if not isinstance(data, list) else np.shape(data)
AttributeError: 'NoneType' object has no attribute 'shape'
I wonder if this has to do with my input data being 6-channel .npy arrays and not images per se. For some reason the tf2 module in the Accuracy Checker handles numpy arrays just fine, while the dlsdk module in the Workbench might have issues with them, at least for accuracy. Or what else could be the issue here? Of course, for both versions I specify 'type: numpy_reader' as the 'reader:' in the 'dataset:' section of the config files.
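For what it's worth, the traceback implies the reader handed 'data = None' to that line, since anything without a '.shape' attribute fails exactly this way. A minimal reconstruction of the failing expression (a sketch mirroring the quoted line, not the actual Workbench code):

```python
import numpy as np

def image_size_metadata(data):
    """Mirror of the expression from data_reader.py line 79:
    .shape for array-like inputs, np.shape() for plain lists."""
    return data.shape if not isinstance(data, list) else np.shape(data)

# A 6-channel sample like the ones described works fine:
arr = np.zeros((150, 150, 6), dtype=np.float32)
print(image_size_metadata(arr))  # (150, 150, 6)

# But if the reader fails to load the file and returns None,
# the same expression raises exactly the reported error:
try:
    image_size_metadata(None)
except AttributeError as e:
    print(e)  # 'NoneType' object has no attribute 'shape'
```

So the question is less about the array contents and more about why the Workbench's reader returns None for these files in the first place.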
Any help with this would be greatly appreciated.
Thanks for trying to help. I already provided all that info in this thread:
Error for accuracy calculation on own model of Imagenet style data
Please let me copy that info here as well:
Sure thing, here is the data you requested:
This zip file contains the following:
1. SavedModel folder containing the tf2 model
2. A zip file containing the test data and labels.txt file in Imagenet format
3. ConfigForWorkbench.yml - the config file I used in DL Workbench
4. ConfigForAccuracyChecker.yml - the almost identical config file I used with the command line Accuracy Checker tool
5. WorkbenchError.txt - the error message produced by DL Workbench when running accuracy
So, the command line Accuracy Checker tool could run this model/data combination without any problem. The Workbench could also run this combination, just not for accuracy; accuracy calculation produces the above error. I have also tried a .h5 Keras version of the model and the IR-converted version as well, all producing this same error.
It is a simple classification task with 8 classes and a fairly small CNN: some convolutional, pooling, batch normalization, and dense layers, with ReLU and softmax activations. The data doesn't need preprocessing and is ready to go as is: 150x150, 6-channel numpy arrays saved in .npy format. Please let me know if you need anything else or if you have any questions about this setup.
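One quick way to rule out the files themselves is to round-trip a sample through .npy and confirm every file loads with the expected shape; if np.load succeeds for all of them, the None must be coming from the tool's reader rather than from the data. A small self-contained check (the file name and dtype are illustrative):

```python
import os
import tempfile

import numpy as np

# Create a sample like the ones described: 150x150, 6 channels, saved as .npy.
sample = np.random.rand(150, 150, 6).astype(np.float32)

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "sample_0001.npy")  # illustrative file name
    np.save(path, sample)

    # Sanity-check every .npy file the way a reader would consume it.
    for name in sorted(os.listdir(d)):
        loaded = np.load(os.path.join(d, name))
        assert loaded is not None and loaded.shape == (150, 150, 6), name
        print(name, loaded.shape, loaded.dtype)
```

Running this against the real test-data folder (instead of the temporary directory) would confirm whether any individual file is unreadable.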
Thank you very much for your help.
@kalocsai Thank you for the data.
Having reproduced the issue locally, I can say that there is indeed a problem with the way DL Workbench processes accuracy configurations, in particular the data reader field. There is unfortunately little that can be done in the 2021.4 release to alleviate this; however, you may be able to work around it by generating and using a Jupyter Notebook for your project through the DL Workbench interface.
This thread will no longer be monitored since we have provided a suggestion. If you need any additional information from Intel, please submit a new question.