Hi! I am developing a vision-related DNN using the OpenVINO toolkit and Caffe.
I am posting because of one question. I am training the same DNN model on several different sets of training data.
Although the DNN model, the Model Optimizer parameters, and the test platform are always the same, the inference speed on real data varies by up to 2x depending on the training data.
In my opinion, it should always be similar in the same environment. What is the reason?