I have two tiny YOLOv3 models, and before performing inference I want to choose which of them to use. Is this possible?
1) I first initialized the CNNNetwork netreader and the ExecutableNetwork network with one of the models.
2) After that, I called ReadNetwork and LoadNetwork again to switch to the other network.
3) I ran this continuously, so ReadNetwork and LoadNetwork are executed on every iteration (see the rough sketch below).
But at some point the program was forcefully stopped. There were no error messages, so I cannot determine the cause.
Is my method not feasible, or are there other methods for switching the model?
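For reference, here is roughly what my continuous run looks like (simplified; the model file names and the device name are placeholders, not my actual paths):

    #include <inference_engine.hpp>
    #include <string>

    using namespace InferenceEngine;

    int main() {
        Core core;
        bool useModelA = true;  // decided by the application before each inference

        // Continuous run: the selected model is re-read and re-loaded on every iteration.
        while (true) {
            std::string modelPath = useModelA ? "tiny_yolov3_a.xml" : "tiny_yolov3_b.xml";
            CNNNetwork netreader = core.ReadNetwork(modelPath);
            ExecutableNetwork network = core.LoadNetwork(netreader, "CPU");
            InferRequest request = network.CreateInferRequest();
            // ... fill input blobs here ...
            request.Infer();
            // ... read output blobs and post-process the detections ...
        }
        return 0;
    }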
Thanks for reaching out.
Firstly, you can refer to the network topology of yolo-v3-tiny-tf from the OpenVINO public pre-trained models (Open Model Zoo). You can download the model using the Model Downloader.
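For example, assuming the Open Model Zoo tools are installed, the download command looks roughly like this (the exact script name, options, and output directory depend on your OpenVINO version and setup):

    downloader.py --name yolo-v3-tiny-tf -o <output_dir>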
Meanwhile, as noted in the Inference Engine documentation, the ReadNetwork + LoadNetwork(CNNNetwork) flow is inefficient when model caching is enabled and a cached model is already available, because the IR is still read and parsed on every call even though the compiled blob could be loaded directly from the cache; in that case the LoadNetwork(modelPath) overload is preferred. Check out https://docs.openvinotoolkit.org/latest/classInferenceEngine_1_1Core.html for more information about the InferenceEngine::Core API.
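As a minimal sketch of what that means in code (the device name, cache directory, and model paths below are assumptions, and whether caching is actually used depends on the device and OpenVINO version):

    #include <inference_engine.hpp>
    #include <string>

    using namespace InferenceEngine;

    int main() {
        Core core;

        // Enable model caching: supporting devices store a compiled blob under
        // this directory on the first load and reuse it on subsequent loads.
        core.SetConfig({{CONFIG_KEY(CACHE_DIR), "model_cache"}});

        // Preferred when caching is enabled: pass the model path directly, so a
        // cached blob can be used without re-reading and re-parsing the IR.
        ExecutableNetwork execA = core.LoadNetwork("tiny_yolov3_a.xml", "CPU");

        // Less efficient with caching: ReadNetwork always reads and parses the
        // IR first, even when a cached blob already exists for this model.
        CNNNetwork netB = core.ReadNetwork("tiny_yolov3_b.xml");
        ExecutableNetwork execB = core.LoadNetwork(netB, "CPU");

        InferRequest request = execA.CreateInferRequest();
        // ... fill input blobs, call request.Infer(), read output blobs ...
        return 0;
    }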
This thread will no longer be monitored since we have provided a solution. If you need any additional information from Intel, please submit a new question.