I am trying to find out whether it is possible to use the Accuracy Validation Framework (Accuracy Checker) with my own model, my own dataset, and my own dataset definition.
In other words, could I possibly extend the 'dataset_definitions.yml' somehow to make it work with my own dataset and definition?
Thank you very much for your help.
@kalocsai Sure, that is exactly how the Accuracy Checker is extended with support for new tasks, models, and datasets. Just read through the Accuracy Checker documentation, either in the OpenVINO online documentation or directly on the Open Model Zoo GitHub. You can also contribute your work to Open Model Zoo if you would like to.
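As a very rough illustration (a minimal sketch only — the dataset name, paths, and the 'imagenet' converter below are placeholders, not taken from your setup), a custom entry in 'dataset_definitions.yml' could look something like this:

datasets:
  - name: my_custom_dataset                      # placeholder name, referenced from the model config
    data_source: /data/my_dataset/inputs         # placeholder folder with the input files
    annotation_conversion:
      converter: imagenet                        # choose the converter matching your annotation format
      annotation_file: /data/my_dataset/val.txt  # placeholder annotation file
    annotation: my_custom_dataset.pickle         # optional cached converted annotation
    dataset_meta: my_custom_dataset.json         # optional cached label map

The same fields can also be placed directly in the 'datasets:' section of a model's accuracy config instead of extending the global definitions file.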
Thank you, my TF2 model now works with the Accuracy Checker. However, it still doesn't work with the DL Workbench tool. The config files are pretty much identical; the only difference I could think of is that the Accuracy Checker takes the 'launchers: framework: tf2' argument, while the Workbench, of course, only takes the 'launchers: framework: dlsdk' argument. I can run my model inside the Workbench; it's just that I am unable to calculate accuracy on it. This is the end of the error message for accuracy:
File "/opt/intel/openvino/deployment_tools/open_model_zoo/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py", line 79, in __init__self.metadata['image_size'] = data.shape if not isinstance(data, list) else np.shape(data[0])
AttributeError: 'NoneType' object has no attribute 'shape'
I wonder if this has to do with my input data being 6-channel .npy arrays rather than images per se. For some reason the tf2 launcher in the Accuracy Checker handles the numpy arrays just fine, while the dlsdk launcher in the Workbench seems to have trouble with them, at least for accuracy. Or what else could be the issue here? In both config files I set the 'reader:' in the 'dataset:' section to 'type: numpy_reader'.
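For reference, here is roughly what the relevant parts of my two configs look like (a simplified sketch; the model paths, the classification adapter, and the imagenet converter shown here are placeholders standing in for my actual settings):

models:
  - name: my_tf2_classifier              # placeholder name
    launchers:
      # variant used with the command-line Accuracy Checker
      - framework: tf2
        model: SavedModel                # path to the TF2 SavedModel directory
        adapter: classification
      # variant used inside DL Workbench (IR model)
      - framework: dlsdk
        model: model.xml
        weights: model.bin
        adapter: classification
    datasets:
      - name: my_npy_dataset             # placeholder name
        data_source: test_data           # folder with the 150x150, 6-channel .npy files
        reader:
          type: numpy_reader
        annotation_conversion:
          converter: imagenet
          annotation_file: labels.txt
        metrics:
          - type: accuracy
            top_k: 1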
Any help with this would be greatly appreciated.
@kalocsai Could I ask that you provide the config file used within DL Workbench, as well as some general info about the model (task, type of architecture, dataset format)? I'll see if I can figure out the issue.
Hi VladimirG,
Thanks for trying to help. I already provided all that info in this thread:
Error for accuracy calculation on own model of Imagenet style data
Please let me copy that info here as well:
---------------------------------------------------------------------------------------------------------
Sure thing, here is the data you requested:
TestPackage.zip
This zip file contains the following:
1. SavedModel folder containing the tf2 model
2. A zip file containing the test data and labels.txt file in Imagenet format
3. ConfigForWorkbench.yml - the config file I used in DL Workbench
4. ConfigForAccuracyChecker.yml - the almost identical config file I used with the command-line Accuracy Checker tool
5. WorkbenchError.txt - the error message produced by DL Workbench when running accuracy
So, the command-line Accuracy Checker tool can run this model and data combination without any problem. The Workbench can also run this combination, just not for accuracy; the accuracy measurement produces the above error. I have also tried an .h5 Keras version of the model and the IR-converted version as well, all producing the same error.
It is a simple classification task with 8 classes, using a fairly small CNN with convolutional, pooling, batch normalization, and dense layers along with ReLU and softmax activations. The data doesn't need preprocessing and is ready to go as is: 150x150, 6-channel numpy arrays saved in .npy format. Please let me know if you need anything else or if you have any questions about this setup.
Thank you very much for your help.
peter
@kalocsai Thank you for the data.
Having reproduced the issue locally, I can confirm that there is indeed an issue with the way DL Workbench processes accuracy configurations, in particular the data reader field. Unfortunately, there is little that can be done in the 2021.4 release to alleviate this; however, you may be able to bypass the problem by generating and using a Jupyter Notebook for your project through the DL Workbench interface.
Regards,
Vladimir Golubenko
Thank you for your effort and suggestion, Vladimir!
Regards,
peter
@kalocsai I'm not that familiar with DL Workbench, but I'll let the people involved know about your question.
Hi kalocsai,
This thread will no longer be monitored since we have provided a suggestion. If you need any additional information from Intel, please submit a new question.
Regards,
Peh