I have two tiny YOLOv3 models, and before performing inference I want to choose which model to use. Is this possible? Here is what I did:
1) I initialized the CNNNetwork reader and the ExecutableNetwork with one of the models.
2) After that, I called ReadNetwork and LoadNetwork to switch to the other model.
3) I ran a continuous loop in which ReadNetwork and LoadNetwork are called on every iteration.
At some point the program stopped unexpectedly. No error was reported, so I cannot determine the cause.
Is my approach not workable, or is there another way to switch between models?
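One way to avoid the per-iteration ReadNetwork/LoadNetwork cost described in step 3 is to read and compile both networks once, up front, and only select which ExecutableNetwork to use inside the loop. Below is a minimal sketch against the Inference Engine C++ API; the model paths, device name, and selection flag are placeholders, not values from this thread:

```cpp
#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core core;

    // Read and compile BOTH models once, outside the run loop.
    // "model_a.xml", "model_b.xml", and "CPU" are placeholder names.
    InferenceEngine::CNNNetwork netA = core.ReadNetwork("model_a.xml");
    InferenceEngine::CNNNetwork netB = core.ReadNetwork("model_b.xml");
    InferenceEngine::ExecutableNetwork execA = core.LoadNetwork(netA, "CPU");
    InferenceEngine::ExecutableNetwork execB = core.LoadNetwork(netB, "CPU");

    // Inside the continuous run, only choose which compiled network to use;
    // no ReadNetwork/LoadNetwork calls happen per iteration.
    bool useModelA = true;  // your selection logic goes here
    InferenceEngine::InferRequest request =
        (useModelA ? execA : execB).CreateInferRequest();

    // ... fill input blobs for the chosen model, then:
    request.Infer();
    return 0;
}
```

This keeps both compiled networks resident on the device, so switching costs only the choice of infer request rather than a full read-and-compile cycle.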
Hi Catastrope,
Thanks for reaching out.
Firstly, you can refer to the network topology of yolo-v3-tiny-tf among the OpenVINO public pre-trained models; you can download it with the Model Downloader.
Meanwhile, according to the Inference Engine documentation, the ReadNetwork + LoadNetwork (CNNNetwork) flow is inefficient when model caching is enabled and a cached model is available, because the model file is still read and parsed on every call. See https://docs.openvinotoolkit.org/latest/classInferenceEngine_1_1Core.html for more information on the Core class.
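To make use of the caching behavior the documentation describes, you can enable a cache directory and pass the model path directly to LoadNetwork, so a previously compiled model can be loaded without the separate ReadNetwork step. A sketch, assuming the target device supports model caching; the cache directory and model path are placeholders:

```cpp
#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core core;

    // Enable model caching; "model_cache" is a placeholder directory.
    core.SetConfig({{CONFIG_KEY(CACHE_DIR), "model_cache"}});

    // Passing the model path (rather than a CNNNetwork) lets a cached
    // compiled model be reused directly, skipping ReadNetwork entirely.
    InferenceEngine::ExecutableNetwork exec =
        core.LoadNetwork("model_a.xml", "CPU");

    auto request = exec.CreateInferRequest();
    // ... fill input blobs, then:
    request.Infer();
    return 0;
}
```

The first run still compiles and stores the model; subsequent runs load the cached blob, which is where the path-based LoadNetwork overload pays off.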
Regards,
Aznie
Hi Catastrope,
This thread will no longer be monitored since we have provided a solution. If you need any additional information from Intel, please submit a new question.
Regards,
Aznie