I have optimized a custom ML model and am trying to load it for inference.
In Debug mode I am able to do so successfully, but in Release mode I get an exception when trying to load the network.
The status returned is UNEXPECTED (-7), which is due to an exception that occurred in inference_engine_ir_reader.dll. I am getting the exception below:
InferenceEngine::details::InferenceEngineException at memory location
Since the same code works fine in Debug mode, I suspect a DLL-related issue in Release mode. I have confirmed that all DLLs used in Release mode are the Release versions and that the code is loading those same DLLs.
Can anyone guide me on this?
I have attached my code snippet, as well as the console log window, for reference.
Thanks for reaching out.
Exceptions like this can have many causes. I would recommend that you try installing the OpenVINO toolkit from the open-source repo: https://github.com/openvinotoolkit/openvino and see whether the same issue arises.
Meanwhile, you can check the suggested method on how to implement the Inference Engine C++ API.
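As a starting point, a minimal sketch of loading an IR model through the Inference Engine C++ API might look like the following. The model file names are placeholders for your own IR files, and the `catch` block prints the exception message, which often narrows down the actual cause behind the UNEXPECTED (-7) status:

```cpp
#include <inference_engine.hpp>
#include <iostream>

int main() {
    try {
        InferenceEngine::Core ie;
        // Hypothetical paths -- replace with your own .xml/.bin IR files.
        auto network = ie.ReadNetwork("model.xml", "model.bin");
        auto executable = ie.LoadNetwork(network, "CPU");
        std::cout << "Network loaded successfully" << std::endl;
    } catch (const InferenceEngine::details::InferenceEngineException& e) {
        // e.what() usually carries a more specific reason than the status code.
        std::cerr << "Inference Engine exception: " << e.what() << std::endl;
        return 1;
    }
    return 0;
}
```

If this succeeds in Release mode while your application still fails, the difference is likely in the build settings (for example, a Debug/Release C runtime mismatch) rather than in the model itself.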
This thread will no longer be monitored since we have provided a solution. If you need any additional information from Intel, please submit a new question.