I am trying to use the Deep Learning Workbench to prepare models for Int8 inference on CPU, but I am running into a couple of problems.
1. Any time I try to optimize a model using the Workbench, it fails with an error that gives no detail about what actually went wrong during conversion. I have to convert the model the "old-fashioned" way by running the Model Optimizer manually.
2. When I go to optimize the model, all selections in the optimization window are greyed out. The Workbench Developer Guide states that this happens if you try to use a non-FP32 model, but mine is an FP32 SqueezeNet, which should be fine. As a result I can't do Int8 inference through the Workbench at all.
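For reference, the manual conversion mentioned in point 1 looks roughly like this on my machine. The paths and file names are illustrative for my setup (a Caffe SqueezeNet under the default 2019R3 install location); adjust them to yours.

```shell
# Illustrative command fragment; not the Workbench's internal invocation.
python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo.py \
    --input_model squeezenet1.1.caffemodel \
    --input_proto deploy.prototxt \
    --data_type FP32 \
    --output_dir ./ir
```

This produces the FP32 IR (.xml/.bin) without any error, which is why the Workbench failure is surprising.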
This is using OpenVINO 2019R3 on Ubuntu 18.04.
Any and all suggestions are appreciated.
Thanks for trying out the tool. Regarding the first issue: could you point us to the model you experimented with (if it's public), or share some details such as the original framework, its version, and the options you used, along with the Model Optimizer command line that worked for you? That would let us reproduce and troubleshoot the failure here. In general, if you can convert a model through the Model Optimizer CLI, you should be able to convert it through the DL Workbench as well (unless you have custom layers, which can make the story a bit different).
Regarding the second issue: I suspect I understand the reason, but I need you to confirm my guess. Are you using the auto-generated dataset that the tool provides, combined with your own model? If so, this is unfortunately expected behavior: we deliberately disable the Int8 calibration option for such configurations (your model + a dataset auto-generated by DL Workbench).

The auto-generated dataset is a set of images filled with Gaussian noise. That is quite OK for measuring the model's performance, but not useful for accuracy measurements. The current Int8 calibration method is accuracy-aware, meaning it uses accuracy as the metric for selecting which layers to quantize to Int8 while staying within the user-defined maximum accepted accuracy drop. So you need a real dataset to measure accuracy, and that is why we disable Int8 calibration for configurations with an auto-generated dataset.

We need to add this to the hints section; we have started seeing people stumble on it too often. Alternatively, we could consider allowing such configurations with no guarantee about the accuracy of the final calibrated model, but I suspect that would not be an acceptable option.
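To make the two ideas above concrete, here is a minimal, self-contained sketch in plain Python/NumPy. It is not the Workbench's actual implementation: `make_autogenerated_dataset` just mimics "images filled with Gaussian noise" (they carry no labels, hence no accuracy signal), and `accuracy_aware_int8_selection` is a simplified greedy illustration of accuracy-aware calibration, where a layer stays in Int8 only if the measured accuracy drop remains within the user-defined budget. All names and the toy accuracy model are hypothetical.

```python
import numpy as np

def make_autogenerated_dataset(num_images=8, shape=(3, 227, 227), seed=0):
    """Mimic an auto-generated dataset: images of pure Gaussian noise.
    Fine for timing inference, useless for accuracy (there are no labels)."""
    rng = np.random.default_rng(seed)
    return [rng.normal(loc=127.0, scale=50.0, size=shape).astype(np.float32)
            for _ in range(num_images)]

def accuracy_aware_int8_selection(layers, evaluate, max_accuracy_drop):
    """Greedy sketch of accuracy-aware calibration: try quantizing layers
    one at a time, keeping a layer in Int8 only if the total accuracy drop
    stays within the user-defined budget."""
    baseline = evaluate(set())            # accuracy with everything in FP32
    quantized = set()
    for layer in layers:
        candidate = quantized | {layer}
        if baseline - evaluate(candidate) <= max_accuracy_drop:
            quantized = candidate         # keep this layer in Int8
    return quantized

# Toy usage: each quantized layer costs some (made-up) accuracy.
costs = {"conv1": 0.002, "conv2": 0.015, "fc": 0.001}

def toy_evaluate(quantized_layers):
    return 0.90 - sum(costs[l] for l in quantized_layers)

selected = accuracy_aware_int8_selection(["conv1", "conv2", "fc"],
                                         toy_evaluate,
                                         max_accuracy_drop=0.005)
# conv2 alone would exceed the 0.005 budget, so only conv1 and fc are kept.
```

The point of the sketch is that `evaluate` must return a meaningful accuracy number, which is exactly what a noise-only dataset cannot provide.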
Thanks again for giving feedback on the tool. It helps us make it better.