Intel® Distribution of OpenVINO™ Toolkit
Community support and discussions about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision-related on Intel® platforms.

Problems reading network from memory (C API)

magnusg
Novice

I am having trouble reading IR networks (XML and bin) from memory using ie_core_read_network_from_memory() from ie_c_api.h, and I can't find any code samples that use it.

 

I suspect that I am creating the network weight blob incorrectly, but I cannot find any information on how weight blobs should be created for networks.


// void* dmem->data        - network memory buffer (float32)
// size_t dmem->size       - size of network memory buffer (bytes)

ie_core_t* ov_core = NULL;
IEStatusCode status = ie_core_create("", &ov_core);
if (status != OK)
{
    // error
}

const dimensions_t weights_tensor_dims = 
                    { 4, { 1, 1, 1, dmem->size/sizeof(float) } };
tensor_desc_t weights_tensor_desc = { OIHW, weights_tensor_dims, FP32 };

ie_blob_t* ov_model_weight_blob = NULL;
status = ie_blob_make_memory_from_preallocated(
    &weights_tensor_desc, dmem->data, dmem->size, &ov_model_weight_blob);
if (status != OK)
{
    // error
}

// char* model_xml_desc        - the model's XML string

uint8_t* ov_model_xml_content = (uint8_t*)model_xml_desc;

ie_network_t* ov_network = NULL;
size_t xml_sz = strlen((const char*)ov_model_xml_content);
status = ie_core_read_network_from_memory(
    ov_core, ov_model_xml_content, xml_sz, ov_model_weight_blob, &ov_network);
if (status != OK)
{
    // Always get "GENERAL_ERROR (-1)"
}

 

The code works fine down to the ie_core_read_network_from_memory() call which results in "GENERAL_ERROR".

 

I have tried two different models (attached) that were converted from TensorFlow. One is a simple [X] -> [Y] regression model (single input value, single output value). The other is also a regression model, [X_1, X_2, ..., X_9] -> [Y] (nine input values, single output value).

 

I would appreciate any help, either by pointing out what I am getting wrong or directing me to some code samples that use ie_core_read_network_from_memory().

 

System information:

  • Windows 10
  • OpenVINO version w_openvino_toolkit_p_2021.4.689.exe
  • Microsoft Visual Studio 2019

 

Thanks

Iffa_Intel
Moderator

Hi,

 

Generally, this is how ie_core_read_network_from_memory is declared in the header (.h):

 

IEStatusCode ie_core_read_network_from_memory(
    ie_core_t* core,
    const uint8_t* xml_content,
    size_t xml_content_size,
    const ie_blob_t* weight_blob,
    ie_network_t** network);


The ie_core_read_network_from_memory() function reads the model from an XML string plus a blob containing the bin part of the IR.

It returns the status code of the operation: OK (0) on success.

You may refer to the Inference Engine C API documentation.

 

This is how it is implemented (.cpp):

 

IEStatusCode ie_core_read_network_from_memory(ie_core_t *core, const uint8_t *xml_content, size_t xml_content_size,
    const ie_blob_t *weight_blob, ie_network_t **network) {
  if (core == nullptr || xml_content == nullptr || network == nullptr || weight_blob == nullptr) {
    return IEStatusCode::GENERAL_ERROR;
  }

  IEStatusCode status = IEStatusCode::OK;

  try {
    std::unique_ptr<ie_network_t> network_result(new ie_network_t);
    network_result->object = core->object.ReadNetwork(std::string(reinterpret_cast<const char *>(xml_content),
      reinterpret_cast<const char *>(xml_content + xml_content_size)), weight_blob->object);
    *network = network_result.release();
  } CATCH_IE_EXCEPTIONS

  return status;
}


The try {} block contains the error handling that should help narrow down the General Error issue.

You can actually refer to the ie_c_api.cpp.
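A practical note for readers following along: every call in the C API returns an IEStatusCode, and logging a readable name instead of a bare number makes failures like this one easier to track down. Below is a minimal sketch; demo_status_t and demo_status_str are local stand-in names covering only the two codes that appear in this thread (OK == 0, GENERAL_ERROR == -1), while the full enum lives in ie_c_api.h:

```c
#include <assert.h>   /* for the quick self-checks below */
#include <string.h>

/* Local stand-in for the two IEStatusCode values seen in this thread;
 * the complete enum is defined in ie_c_api.h. */
typedef enum { DEMO_OK = 0, DEMO_GENERAL_ERROR = -1 } demo_status_t;

/* Map a status code to a printable name for log messages. */
const char *demo_status_str(demo_status_t s) {
    switch (s) {
        case DEMO_OK:            return "OK";
        case DEMO_GENERAL_ERROR: return "GENERAL_ERROR";
        default:                 return "UNKNOWN";
    }
}
```

In real code you would log such a name after each API call, e.g. whenever ie_core_read_network_from_memory() returns something other than OK.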


Meanwhile, for blob creation, you may refer to this blob creation and memory utility documentation.

If you want to create a blob file from your existing model, you can use the OpenVINO Compile Tool with this specific command:

./compile_tool -m <path_to_model>/model_name.xml -d MYRIAD -o model_xml_file.blob

 

To import a blob with the network from a generated file into your application, use the InferenceEngine::Core::ImportNetwork method:

InferenceEngine::Core ie;
std::ifstream file("model_name.blob", std::ios::binary);
InferenceEngine::ExecutableNetwork executable_network = ie.ImportNetwork(file, "MYRIAD", {});
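If, as in the original question, the model's bin part has to live in a memory buffer rather than be read from disk by the engine, it can first be loaded into memory with plain C. A minimal sketch under that assumption; the read_whole_file name is mine, and error handling is abbreviated:

```c
#include <assert.h>   /* for the quick self-checks below */
#include <stdio.h>
#include <stdlib.h>

/* Read an entire binary file into a malloc'd buffer; the caller frees it.
 * Returns NULL on failure; on success, *size_out receives the byte count. */
unsigned char *read_whole_file(const char *path, size_t *size_out) {
    FILE *f = fopen(path, "rb");
    if (f == NULL) return NULL;
    if (fseek(f, 0, SEEK_END) != 0) { fclose(f); return NULL; }
    long sz = ftell(f);
    if (sz < 0) { fclose(f); return NULL; }
    rewind(f);
    unsigned char *buf = (unsigned char *)malloc((size_t)sz);
    if (buf != NULL && fread(buf, 1, (size_t)sz, f) != (size_t)sz) {
        free(buf);   /* short read: discard the partial buffer */
        buf = NULL;
    }
    fclose(f);
    if (buf != NULL) *size_out = (size_t)sz;
    return buf;
}
```

The returned buffer and size could then be handed to ie_blob_make_memory_from_preallocated() when building the weights blob.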


Hope this helps! 

 

Sincerely,

Iffa


magnusg
Novice

Hello,

 

Thank you for your reply. Unfortunately, it did not provide me with any further insight; it might be that I am missing something fundamental in your explanation.

 

You gave an explanation of the ie_core_read_network_from_memory() function signature, which I was already familiar with. You also refer to the OpenVINO Core C API docs, which I have read; as far as I can see, they contain no information on how to create a proper binary blob to pass to ie_core_read_network_from_memory().

 

You also show how the function is implemented, which is simply the method body of ie_core_read_network_from_memory(). It does show that core->object.ReadNetwork() is used for reading the model, but the docs for ReadNetwork() only say that the weights blob is a "shared pointer to constant blob with weights", which I already knew.

 

When it comes to blob creation, the "Blob creation and memory utilities" docs were new to me (the methods described do not exist in ie_c_api.h), but they provided no information on how to make a blob from preallocated memory for use with ie_core_read_network_from_memory().

 

You also mention how to import a network blob from a file, but that is not possible in my use case: I cannot read anything from file, as the model is only provided as a binary memory buffer and an XML string. There also seem to be no methods for importing network blob files in ie_c_api.h. So, although it is good to know that the compile tool exists and can generate blob files, this information does not address my question.

 

So my initial question remains unanswered.

 

Regards,

Magnus

IntelSupport
Community Manager

Hi Magnus,

From the error message, it seems that one of the parameters is nullptr. Checking each of the parameters, I suspect that ov_model_weight_blob or weights_tensor_desc might be the issue.

 

Regarding the descriptor tensor_desc_t weights_tensor_desc = { OIHW, weights_tensor_dims, FP32 };

Based on the documentation, the precision and the dims must match. Also, refer to the Layouts documentation for details of what the Inference Engine supports.

 

I found some sample code that uses tensor_desc_t; you can refer to the hello_classification sample.

 

Hope this information helps.


Regards,

Aznie

 

magnusg
Novice

Hello Aznie,

 

Thank you for your reply. I have received information from another Intel employee that there is a unit test for ie_core_read_network_from_memory() that shows how to set up the tensor descriptor properly when creating weight blobs for networks.


tensor_desc_t weights_tensor_desc = { ANY, { 1, { dmem->size } }, U8 };
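To make the difference from the original post concrete: the first descriptor viewed the buffer as a 4-D FP32/OIHW tensor, while the unit test describes the same bytes as a single U8 dimension holding the bin size. The sketch below uses demo_dims_t, a simplified stand-in for the dims part of dimensions_t in ie_c_api.h, just to show that both shapes cover the same bytes but only the second matches what the reader expects:

```c
#include <assert.h>   /* for the quick self-checks below */
#include <stddef.h>

/* Simplified stand-in for the dims part of dimensions_t (illustration only). */
typedef struct { size_t ranks; size_t dims[8]; } demo_dims_t;

/* What the original post used: a 4-D shape of FP32 elements (with OIHW). */
demo_dims_t fp32_oihw_dims(size_t bin_size_bytes) {
    demo_dims_t d = { 4, { 1, 1, 1, bin_size_bytes / sizeof(float) } };
    return d;
}

/* What the unit test uses: one dimension holding the raw byte count,
 * paired with layout ANY and precision U8. */
demo_dims_t u8_byte_dims(size_t bin_size_bytes) {
    demo_dims_t d = { 1, { bin_size_bytes } };
    return d;
}
```

Both descriptors account for the same number of bytes; the weights blob is simply raw bytes, so the U8 byte-count form is the one to use.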


I have updated my code to use the proper way of creating the weight blob but unfortunately the initial problem remains.

I have verified that none of the parameters is NULL, that they are all of the expected size, and that the XML content looks OK.

I have previously read the documentation you refer to but it fails to provide any insight into why the method call fails.

The hello_classification sample you mention was the code I used as a blueprint for my example (and largely the reason why I initially chose the wrong memory layout in the tensor description), so unfortunately it is of no further help with this issue.

 

So my initial question unfortunately remains unanswered.

 

However, with the unit test mentioned above I will hopefully be able to derive where the error lies.

Also, there is a small but not insignificant chance that the problem might be fixed by upgrading the OpenVINO library to the latest LTS release (2021.4.2 LTS), as I am currently on 2021.4.1 LTS.

 

Sincerely yours,

Magnus

magnusg
Novice

<retracted as duplicate post>

IntelSupport
Community Manager

Hi Magnus,

 

We have drafted C code with the model weight blob and the read_network_from_memory function. It is basically the unit test translated to C.

 

You may refer to the attachment below and hope this sample helps.


Regards,

Aznie

 

magnusg
Novice

Hello Aznie,

 

Thank you for providing the C code sample. Unfortunately it did not work, but at least it now throws an exception instead of just returning a general error. I tried the code you provided *without any changes*, compiled it, and ran it on the test model test_model_fp32.xml used in the C++ unit test for ie_core_read_network_from_memory(). When I execute the call to ie_core_read_network_from_memory(), I get the following exception (whole debug output included for context):

'openvino-read-network-from-memory-test.exe' (Win32): Loaded 'C:\Users\...\sandbox\vs19\openvino-read-network-from-memory-test\x64\Debug\openvino-read-network-from-memory-test.exe'. Symbols loaded.
'openvino-read-network-from-memory-test.exe' (Win32): Loaded 'C:\Windows\System32\ntdll.dll'. 
'openvino-read-network-from-memory-test.exe' (Win32): Loaded 'C:\Windows\System32\kernel32.dll'. 
'openvino-read-network-from-memory-test.exe' (Win32): Loaded 'C:\Windows\System32\KernelBase.dll'. 
'openvino-read-network-from-memory-test.exe' (Win32): Loaded 'C:\Program Files (x86)\Intel\openvino_2021.4.752\deployment_tools\inference_engine\bin\intel64\Debug\inference_engine_c_apid.dll'. 
'openvino-read-network-from-memory-test.exe' (Win32): Loaded 'C:\Windows\System32\vcruntime140d.dll'. 
'openvino-read-network-from-memory-test.exe' (Win32): Loaded 'C:\Windows\System32\ucrtbased.dll'. 
'openvino-read-network-from-memory-test.exe' (Win32): Loaded 'C:\Program Files (x86)\Intel\openvino_2021.4.752\deployment_tools\inference_engine\bin\intel64\Debug\inference_engined.dll'. 
'openvino-read-network-from-memory-test.exe' (Win32): Loaded 'C:\Windows\System32\msvcp140d.dll'. 
'openvino-read-network-from-memory-test.exe' (Win32): Loaded 'C:\Windows\System32\vcruntime140_1d.dll'. 
'openvino-read-network-from-memory-test.exe' (Win32): Loaded 'C:\Program Files (x86)\Intel\openvino_2021.4.752\deployment_tools\inference_engine\external\tbb\bin\tbb_debug.dll'. Symbols loaded.
'openvino-read-network-from-memory-test.exe' (Win32): Loaded 'C:\Program Files (x86)\Intel\openvino_2021.4.752\deployment_tools\inference_engine\bin\intel64\Debug\inference_engine_transformationsd.dll'. 
'openvino-read-network-from-memory-test.exe' (Win32): Loaded 'C:\Program Files (x86)\Intel\openvino_2021.4.752\deployment_tools\ngraph\lib\ngraphd.dll'. 
The thread 0x3210 has exited with code 0 (0x0).
'openvino-read-network-from-memory-test.exe' (Win32): Loaded 'C:\Program Files (x86)\Intel\openvino_2021.4.752\deployment_tools\inference_engine\bin\intel64\Debug\inference_engine_onnx_readerd.dll'. 
'openvino-read-network-from-memory-test.exe' (Win32): Loaded 'C:\Program Files (x86)\Intel\openvino_2021.4.752\deployment_tools\ngraph\lib\onnx_importerd.dll'. 
'openvino-read-network-from-memory-test.exe' (Win32): Loaded 'C:\Program Files (x86)\Intel\openvino_2021.4.752\deployment_tools\ngraph\lib\onnx_protod.dll'. 
'openvino-read-network-from-memory-test.exe' (Win32): Loaded 'C:\Program Files (x86)\Intel\openvino_2021.4.752\deployment_tools\ngraph\lib\libprotobufd.dll'. 
'openvino-read-network-from-memory-test.exe' (Win32): Loaded 'C:\Program Files (x86)\Intel\openvino_2021.4.752\deployment_tools\ngraph\lib\libprotobufd.dll'. 
'openvino-read-network-from-memory-test.exe' (Win32): Unloaded 'C:\Program Files (x86)\Intel\openvino_2021.4.752\deployment_tools\ngraph\lib\libprotobufd.dll'
Exception thrown at 0x00007FFB8D454F69 in openvino-read-network-from-memory-test.exe: Microsoft C++ exception: std::runtime_error at memory location 0x000000285EB1EF38.
Exception thrown at 0x00007FFB8D454F69 in openvino-read-network-from-memory-test.exe: Microsoft C++ exception: std::runtime_error at memory location 0x000000285EB1EF38.
'openvino-read-network-from-memory-test.exe' (Win32): Loaded 'C:\Program Files (x86)\Intel\openvino_2021.4.752\deployment_tools\inference_engine\bin\intel64\Debug\inference_engine_ir_readerd.dll'. 
Exception thrown at 0x00007FFAE9386B8D (ngraphd.dll) in openvino-read-network-from-memory-test.exe: 0xC0000005: Access violation reading location 0x0000000000000000.

I tried it with both OpenVINO 2021.4.1 LTS (2021.4.689) and OpenVINO 2021.4.2 LTS (2021.4.752) and I get the same exception with both versions.

 

Best regards,

Magnus

IntelSupport
Community Manager

Hi Magnus,

 

Basically, the issue is at tensor_desc_t weights_tensor_desc = { ANY, { 1, { model_bin_size } }, U8 };

 

We have drafted an updated main.c, attached below; please give it a try. For your information, weights_tensor_desc should take the model bin size.


Regards,

Aznie

 

magnusg
Novice

Thank you for the code update!

 

I tried your bug fix and managed to read the test model test_model_fp32.xml (used in the C++ unit test for ie_core_read_network_from_memory()) from memory. This is good news.

 

Now I will transfer the changes to my original setting and see if I can figure out what I originally did wrong. But having verified that reading networks from memory works on my current OpenVINO installation is a big and promising step forward.

 

Thank you,

Magnus

 

IntelSupport
Community Manager

Hi Magnus,

 

Glad to help you.

 

Thank you for your question. If you need any additional information from Intel, please submit a new question, as this thread is no longer being monitored.



Regards,

Aznie

