We are getting a runtime error when executing the following C++ code:
m_model = core.read_model(strModel,
ov::Tensor(ov::element::u8, { vecModelWeights.size() }, m_modelWeights.get()));
The exception is
Exception thrown at 0x00007FFD6E37CF19 in SysApp.exe: Microsoft C++ exception: std::runtime_error at memory location 0x000000BFB0EFD0F8.
The strModel looks correct in that its contents resemble the .xml file
strModel =
"<?xml version=\"1.0\"?>\r\n<net name=\"1rmnq505_frozen_tf2.0\" version=\"11\">\r\n\t<layers>\r\n\t\t<layer id=\"0\" name=\"input_1\" type=\"Parameter\" version=\"opset1\">\r\n\t\t\t<data shape=\"1,256,256,1\" element_type=\"f32\" />...std::str
OpenVINO version: 2022.3.0-9052-9752fafe8eb-releases/2022/3
Is there a way to delve deeper into what the issue might be with the .xml and .vsi files?
Hi Eddie_patton,
Thank you for reaching out to us.
The error might be due to incorrect core.read_model() arguments, which would prevent it from reading the model you specified.
For your information, I received a similar error when I executed core.read_model() without specifying my path correctly, as shown below:
Exception thrown at 0x00007FFE3D4B4B2C in demo223.exe: Microsoft C++ exception: std::runtime_error at memory location 0x000000BF101CEC48.
Exception thrown at 0x00007FFE3D4B4B2C in demo223.exe: Microsoft C++ exception: InferenceEngine::GeneralError at memory location 0x000000BF101CE700.
Exception thrown at 0x00007FFE3D4B4B2C in demo223.exe: Microsoft C++ exception: ov::Exception at memory location 0x000000BF101CFA18.
Unhandled exception at 0x00007FFE3D4B4B2C in demo223.exe: Microsoft C++ exception: ov::Exception at memory location 0x000000BF101CFA18.
I was able to build and run my code without any issues once I configured the correct model path for core.read_model().
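For reference, the call I used is of this general form (the file names below are placeholders, not my actual model):
ov::Core core;
// Read the IR from disk; the .bin path can be given explicitly, or omitted
// if it sits next to the .xml with the same base name.
std::shared_ptr<ov::Model> model = core.read_model("model.xml", "model.bin");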
Please ensure that the read_model() arguments are correct. If the error still persists, please provide us with the complete error messages and more details (i.e., the model file and source code) if possible, so that we can replicate and validate the issue. Thank you.
Regards,
Megat
Thanks for getting back to me, Megat.
Apparently I am using a different overload of read_model() than you are (I inherited this code from another dev):
m_modelWeights = VsiAlign::make_aligned<BYTE[]>(vecModelWeights.size());
std::memcpy(m_modelWeights.get(), vecModelWeights.data(), vecModelWeights.size());
m_model = core.read_model(strModel,
ov::Tensor(ov::element::u8, { vecModelWeights.size() }, m_modelWeights.get()));
I looked up the overloads for read_model() and am using this one:
std::shared_ptr<ov::Model> read_model( const std::string& model, const Tensor& weights ) const;
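For completeness, the overall pattern around that overload is roughly this (simplified sketch with illustrative file handling, not our exact code):
#include <openvino/openvino.hpp>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>
#include <cstdint>

// Read the IR .xml into a string and the .bin into a byte buffer, then wrap
// the buffer in a u8 tensor whose shape is the byte count.
std::ifstream xml("model.xml");
std::string strModel((std::istreambuf_iterator<char>(xml)), std::istreambuf_iterator<char>());

std::ifstream bin("model.bin", std::ios::binary);
std::vector<uint8_t> weights((std::istreambuf_iterator<char>(bin)), std::istreambuf_iterator<char>());

ov::Core core;
auto model = core.read_model(strModel, ov::Tensor(ov::element::u8, { weights.size() }, weights.data()));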
I also checked the OpenVINO version again.
You could be right that it's the .bin. We are using encryption, so perhaps decrypting it is causing an issue. I'll regenerate the .bin and see what happens.
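One quick sanity check I can do in the meantime is to compare the decrypted buffer size against the .bin on disk, since a size mismatch could make the weight offsets recorded in the .xml invalid (sketch only; the file name is illustrative):
#include <filesystem>

// If decryption changed the byte count, the offsets the .xml expects would
// no longer line up with the buffer we hand to ov::Tensor.
auto binSize = std::filesystem::file_size("1rmnq505.bin");
if (binSize != vecModelWeights.size()) {
    // the decrypted weights buffer is not what the .xml expects
}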
I changed the API call to be the same as yours:
core.read_model(strNetworkXml.c_str(), pszModelPath );
And now I get this exception message:
repos\openvino\src\frontends\common\src\frontend.cpp:54:Converting input model
repos\openvino\src\frontends\ir\src\ir_deserializer.cpp:347 Incorrect weights in bin file!
We also have another model, for a different application, that loads correctly using the above read_model(). It also worked with the other read_model() overload:
std::shared_ptr<ov::Model> read_model( const std::string& model, const Tensor& weights ) const;
Another data point: the model runs on OpenVINO 2021.1.110.
There must be something about converting the model to the OpenVINO 2022.3 format that the 2022.3 engine doesn't like.
Hi Eddie_patton,
You mentioned that the model you are using can be read in OpenVINO™ 2021.1.
Does that mean that you are able to use both the read_model() and the overloaded read_model() function on OpenVINO™ 2021.1?
If you can read the model in OpenVINO™ 2021 but not OpenVINO™ 2022, then the error you get might be due to the incompatibility between the model version and the inference engine version. Starting from the 2022.1 releases, OpenVINO™ introduced the new API 2.0 as well as the new OpenVINO IR model format: IR v11.
Could you please try to convert your model again using OpenVINO™ 2022.3 Model Optimizer and run the inference using the same OpenVINO™ version and see if this solves your issue reading the model?
Regards,
Megat
Hi Megat
This is the syntax we are using for loading the model with 2021.1. It's not using the read_model API. Perhaps read_model() was not available for 2021?
m_pNetwork = std::make_unique<InferenceEngine::CNNNetwork>(ie.ReadNetwork(strModel, make_shared_blob<uint8_t>({ Precision::U8, {weights.size()}, C }, weights.data())));
The Model Optimizer (mo) version we are using is the same as the OpenVINO inference engine (see my first post):
tf-docker ~ > mo --version
Version of Model Optimizer is: 2022.3.0-9052-9752fafe8eb-releases/2022/3
We are able to load the same model using Python without issue:
# OpenVINO inference
ie = Core()
model = ie.read_model(model=args.model_file)
interpreter = ie.compile_model(model=model, device_name="CPU")
input_details = interpreter.inputs
output_details = interpreter.outputs
t0 = time()
output_data = interpreter([input_data])
t1 = time()
infer_ms = (t1-t0)*1000
print('Inference ms:', infer_ms)
classes, boxes = output_data.values()
print('Classes.shape:', classes.shape)
print('Boxes.shape: ', boxes.shape)
which outputs
Python>python test0.py -m 1rmnq505.xml -i Frame_00000173.png
Inference ms: 11.965751647949219
Classes.shape: (1, 3024, 2)
Boxes.shape: (1, 3024, 4)
Python version info:
$ pip list | grep openvino
openvino 2022.3.0
openvino-dev 2022.3.0
openvino-telemetry 2022.3.0
This tells us that Python has no issues with the OpenVINO model .bin and .xml files.
Also, about the model: we convert it from TensorFlow/Keras to OpenVINO (if that helps).
Hi Eddie_patton,
In OpenVINO 2021, the read_model API was not available and Core::ReadNetwork was used instead.
You've mentioned that there was no issue when loading the same model in Python, but an exception occurred when using C++ ("ir_deserializer.cpp:347 Incorrect weights in bin file!").
Comparing the two, it seems the inference engine was not able to read the weights in the .bin file when running the C++ code. I'd suggest leaving out the path to the .bin file, as you did in the Python code, and seeing whether the issue is resolved. Remove the bin path from core.read_model():
core.read_model(xml_path);
On another note, please do share with us the following additional information for further investigation:
- Source repository of the base model used for training.
- Framework & Topology of the model.
- Model Optimizer command used for converting into IR format.
- The C++ source code.
Regards,
Hairul
Using core.read_model(xml_path), I get the same exception:
Exception thrown at 0x00007FFACEB5CF19 in SysApp.exe: Microsoft C++ exception: std::runtime_error at memory location 0x00000091605FCF58.
Regarding the sharing of code and model info, I have to get approval from higher ups, but it's unlikely we can do this without an NDA + private channel.
But in the meantime, I'm creating a sample project to isolate the code and the issue.
Model Optimizer command used for converting to IR format:
mo --input_shape [1,256,256,3] --use_new_frontend --saved_model_dir model
Framework & Topology of the model: an object detection model based on RetinaNet.
As for the code segment where we call the model, I can share this:
// FYI: LPCWSTR pszModelPath32
// wstring strNetworkXml

// Check files exist
{
    auto bExistsXml = GetFileExists(strNetworkXml);
    auto bExistsBin = GetFileExists(pszModelPath);
    if (!bExistsXml || !bExistsBin)
    {
        TheLogger().Log(SysApp::SYSTEM, "%s Inference model files not found\n", __FUNCTION__);
        return E_FAIL;
    }
}

ov::Core core;

// Convert wstring to string by narrowing each wchar_t to char
// (assumes the paths contain only ASCII characters).
std::string strModel(strNetworkXml.length(), 0);
std::transform(strNetworkXml.begin(), strNetworkXml.end(), strModel.begin(), [](wchar_t c) {
    return (char)c;
});
std::wstring w(pszModelPath32);
std::string strWeight(w.begin(), w.end());

//m_model = core.read_model(strNetworkXml.c_str(), pszModelPath); // caused exception
//m_model = core.read_model(strModel, strWeight);                 // caused exception
m_model = core.read_model(strModel);                              // latest attempt - caused exception
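For what it's worth, if the byte-by-byte narrowing above is ever a concern (it assumes ASCII-only paths), a safer wide-to-UTF-8 conversion would look roughly like this (sketch only, Win32, not what we currently ship):
#include <windows.h>
#include <string>

static std::string ToUtf8(const std::wstring& w)
{
    if (w.empty()) return {};
    // Ask for the required buffer size, then do the actual conversion.
    int len = ::WideCharToMultiByte(CP_UTF8, 0, w.c_str(), (int)w.size(), nullptr, 0, nullptr, nullptr);
    std::string s(len, '\0');
    ::WideCharToMultiByte(CP_UTF8, 0, w.c_str(), (int)w.size(), &s[0], len, nullptr, nullptr);
    return s;
}

// e.g. m_model = core.read_model(ToUtf8(strNetworkXml), ToUtf8(std::wstring(pszModelPath32)));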
Hi Eddie_patton,
Thank you for sharing the information.
From our end, we were able to successfully read a retinanet-tf model, which is available from the Open Model Zoo, without any issues. Here is the code snippet I used to read the model from the .xml file:
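In essence, the call is of this form (simplified, with a placeholder path):
ov::Core core;
std::shared_ptr<ov::Model> model = core.read_model("retinanet-tf.xml");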
Upon further investigation of your C++ code snippet, we noticed that you're reading the model from string data instead of from the .xml file itself. In the Python script you've shared, we noticed that you're reading from the model file directly.
The difference we see is that reading the model from a file succeeds, whereas reading it from string data does not. Please try passing the path to your .xml file to the core.read_model() function in your C++ code and see if the issue is resolved.
Regards,
Hairul
Hi Hairul
Perhaps the code naming is misleading, but the debugger shows that strModel = C:\FFSS\debug\Bladder\x64\Host\SysApp\Models\1rmnq505.xml
Question: if one only specifies the .xml path, does the read_model() API assume the .bin is in the corresponding location, i.e.
C:\FFSS\debug\Bladder\x64\Host\SysApp\Models\1rmnq505.bin ?
If so, the .bin is indeed in that location, but I just thought I'd check.
Hi Eddie_patton,
Thank you for the clarification.
From my end, here is the text visualizer result from a successful retinanet-tf model read, obtained by providing the .xml file path:
Regarding your question: if the .bin file path is not provided to the core.read_model() function, it will try to read a .bin file with the same name as the .xml. The .bin file should be in the same directory as the .xml file for this to work.
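To illustrate (placeholder paths), these two calls behave the same when model.bin sits next to model.xml:
auto m1 = core.read_model("C:\\models\\model.xml");
auto m2 = core.read_model("C:\\models\\model.xml", "C:\\models\\model.bin");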
Regards,
Hairul
Hi Hairul
Based on what you've shown, the string I'm using is correct, so it's still a mystery why the exception happens.
I downloaded OpenVINO 2023.1 on a different PC and went through the instructions to run hello_classification:
https://docs.openvino.ai/2023.0/openvino_inference_engine_samples_hello_classification_README.html
My goal was to get the read_model working in the hello_classification example with the googlenet-v1 model and then try our model that's failing.
Unfortunately, my quest ran into a snag: read_model() can't find the googlenet-v1 model and throws the exception below.
Which is really odd, since I added code to check that the file exists, and it does:
std::ifstream f(model_path.c_str());
if (!f.good()) {
    std::cerr << "Could not open file " << model_path << std::endl;
    return EXIT_FAILURE;
}
// -------- Step 1. Initialize OpenVINO Runtime Core --------
ov::Core core;
// -------- Step 2. Read a model --------
slog::info << "Loading model files: " << model_path << slog::endl;
std::shared_ptr<ov::Model> model = core.read_model(model_path);
printInputAndOutputsInfo(*model);
The example arguments:
"C:\Users\XXXXXX\Documents\Intel\OpenVINO\openvino_cpp_samples_build\hello_classification\public\googlenet-v1\FP16\googlenet-v1.xml" "C:\Users\XXXXXX\Documents\Intel\OpenVINO\openvino_cpp_samples_build\hello_classification\Data\car.bmp"
GPU
This is the model_path variable. It's correct.
The File Explorer shows both the bin and xml in the same folder.
I found it odd that when I initially ran the example, I got an error saying that the openvinod.dll, tbb_debug.dll and plugin.xml files could not be found. I was able to run after copying them to the hello_classification exe folder from the folders below:
C:\Program Files (x86)\Intel\openvino_2023.1\runtime\bin\intel64\Debug
C:\Program Files (x86)\Intel\openvino_2023.1\runtime\3rdparty\tbb\bin
Any thoughts on why the ie_network_reader.cpp can't find the googlenet-v1 model?
Could it be that Windows 10 is somehow blocking access? You'd think the ifstream() would also fail if that's the case.
Hi Eddie_patton,
From my end, I was able to run the hello_classification sample with googlenet-v1 model in OpenVINO 2023.0 without issue. Here is the result:
However, I did face the "openvinod.dll was not found" error for the hello_classification sample when launching Visual Studio without running the "setupvars.bat" script beforehand:
Here are the steps I took to successfully run the hello_classification sample:
- Run the "setupvars.bat" script from the OpenVINO 2023.0 directory in a terminal
- Run the "samples\build_samples_msvc.bat" script to build the samples
- Launch Visual Studio from the same terminal where "setupvars.bat" was run
- Run the hello_classification sample
Do let us know if the issue is resolved after following the steps provided.
Regards,
Hairul
Hi Hairul
I'm currently on PTO till 08/28, but will try this as soon as I'm back in the office.
However, I do recall running setupvars.bat, but I didn't run Visual Studio from the same terminal. I'll give that a try.
Hopefully this relates to the NETWORK_NOT_READ error.
Cheers
Eddie
I was able to get an OpenVINO example from the 2023.0 OpenVINO toolkit to run successfully on my laptop. The issue mentioned in the previous task (redux #2) was caused by a mismatch between the model conversion tool (2023.0) and the 2023.1 OpenVINO toolkit.
Currently, our models are OpenVINO 2022.3 (.xml and .bin). Tomorrow I'll ask our AI dudes to give me 2023.0 versions.
But until they do, I'm going to follow the same steps using an OpenVINO 2022.3 example and then run read_model() with our two models: the one that works with 2022.3 on our system and the one that doesn't.
... the saga continues. Thx for your support.
Hi Eddie,
Glad to know that you are able to run the OpenVINO™ Toolkit sample on your end.
This thread will no longer be monitored since we have provided suggestions. If you need any additional information from Intel later on, please submit a new question. Thank you.
Regards,
Megat
Hold the phone, Megat. Getting the example working doesn't solve the original problem; it's a step towards solving it.
Cheers
Eddie
Hi Eddie,
Sure, we'll keep the thread open and wait for your response. Do let us know if you are able to solve your original problem. Thank you.
Regards,
Megat
Hi Megat
I've run the hello_classification example using both the 2022.3.0 and 2022.3.1 toolkits with the googlenet-v1 models and get exceptions, yet the code outputs what it should.
Consequently, I think it's best to close this thread and focus on the issue I see with the examples. That way I don't have to share our proprietary model with you (that said, we are working on an NDA with Intel for support).
This is the thread where I posted info on the hello_classification example:
hello_classification example runtime_error at memory location 0x000000381E37D8E8. - Intel Community
Thanks for your help.
Hi Eddie,
This thread will no longer be monitored since we have provided suggestions. If you need any additional information from Intel, please submit a new question.
Regards,
Megat
