Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

CNNNetwork ReadNetwork(const std::string& model, const Blob::CPtr& weights) is throwing error: unknown file: Failure

Subtle
Employee

Hi,

I am using the CNNNetwork ReadNetwork(const std::string& model, const Blob::CPtr& weights) function to load models from memory.
My code looks like:

// vector<uint8_t> model holds the .xml content
// vector<uint8_t> weights holds the .bin content

// create the TensorDesc
InferenceEngine::TensorDesc tensor(InferenceEngine::Precision::U8, {weights.size()}, InferenceEngine::Layout::ANY);

// create the Blob (vec_s is of type size_t)
InferenceEngine::TBlob<uint8_t>::Ptr wei_blob = InferenceEngine::make_shared_blob<uint8_t>(tensor, &weights[0], vec_s);

// call ReadNetwork
std::string strModel(model.begin(), model.end());
InferenceEngine::Core ie;
InferenceEngine::CNNNetwork network = ie.ReadNetwork(strModel, wei_blob);
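For context, the two vectors are filled from the IR files on disk roughly like this (a minimal sketch; readFile is just an illustrative helper and the paths are placeholders):

#include <fstream>
#include <iterator>
#include <string>
#include <vector>

// illustrative helper: read an entire file into a byte vector
std::vector<uint8_t> readFile(const std::string& path) {
    std::ifstream file(path, std::ios::binary);
    return std::vector<uint8_t>((std::istreambuf_iterator<char>(file)),
                                std::istreambuf_iterator<char>());
}

std::vector<uint8_t> model   = readFile("model.xml");  // placeholder path
std::vector<uint8_t> weights = readFile("model.bin");  // placeholder path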

 

My code compiles, but when I run it, it throws this error:

unknown file: Failure
C++ exception with description "Failed to construct OpenVINOImageInference" thrown in the test body.

I am not sure what the issue is. How can I resolve it?

 

Thanks a lot

 

10 Replies
Zulkifli_Intel
Moderator

Hi Subtle,

Thank you for getting in touch.

 

Which OpenVINO version did you use? We would really appreciate it if you could share the complete file (including the model) for us to replicate and investigate further.

 

Sincerely,

Zulkifli 


Subtle
Employee

Hi Zulkifli_intel,

 

Thanks for replying.

 

I am using OpenVINO 2021.4 and my OS is Ubuntu 20.04 LTS. I am sorry, I can't share the complete file, but I will try to cover the needed information in my code snippet here. My use case is not tied to any particular model; currently I am using face-detection-retail-0004.xml, age-gender-recognition-retail-0013.xml, and emotions-recognition-retail-0003.xml, all with FP32 precision.

My updated code looks like this. I have the .bin and .xml files stored on disk, and I read each of them into a vector of type uint32_t.

// xmlFile is the .xml file path
// binFile is the .bin file path
string xmlFile;
string binFile;

vector<uint32_t> model;   // holds the .xml content
vector<uint32_t> weights; // holds the .bin content

// store the xmlFile contents in model
// store the binFile contents in weights

// create the TensorDesc
InferenceEngine::TensorDesc tensor(InferenceEngine::Precision::U32, {weights.size()}, InferenceEngine::Layout::ANY);

// create the Blob (vec_s is of type size_t)
InferenceEngine::TBlob<uint32_t>::Ptr wei_blob = InferenceEngine::make_shared_blob<uint32_t>(tensor, &weights[0], vec_s);

// call ReadNetwork
std::string strModel(model.begin(), model.end());
InferenceEngine::Core ie;
InferenceEngine::CNNNetwork network = ie.ReadNetwork(strModel, wei_blob);

 

Thank you 

 

 

Zulkifli_Intel
Moderator

Hi Subtle,

 

Sorry for the delay in replying. The "Failed to construct OpenVINOImageInference" error on its own is incomplete. Is there any error message before or after this line? That information may help point to the actual problem you are having.

 

Sincerely,

Zulkifli 


Subtle
Employee

Hi Zulkifli_intel,

 

Thanks for replying.

 

There is no error message before or after this line:

unknown file: Failure
C++ exception with description "Failed to construct OpenVINOImageInference" thrown in the test body.

I have attached a screenshot for your reference.

 

Thank you

 

Zulkifli_Intel
Moderator

Hi Subtle,

 

Thank you for sharing the screenshot of the error. Unfortunately, it is difficult to provide support with this limited information. Would it be possible to share your code and the steps to reproduce the issue via email?

 

Sincerely,

Zulkifli 

 

Subtle
Employee

Hi Zulkifli_intel,

 

Thanks for replying.

 

Yeah, sure. I will share the artefacts via email.

 

Thanks a lot

Zulkifli_Intel
Moderator

Hi Subtle,


You can send it to my email: zulkiflix.bin.abdul.halim@intel.com


Sincerely,

Zulkifli


Zulkifli_Intel
Moderator

Hi Subtle,

 

Referring to Executable Network, the recommended flow is to read the IR (XML) content first, then read the blob content with proper formatting and allocation, and only then call ReadNetwork on both.

 

// read XML content
std::string xmlString;
std::uint64_t dataSize = 0;
model.read(reinterpret_cast<char*>(&dataSize), sizeof(dataSize));
xmlString.resize(dataSize);
model.read(const_cast<char*>(xmlString.c_str()), dataSize);

// read blob content
InferenceEngine::Blob::Ptr dataBlob;
model.read(reinterpret_cast<char*>(&dataSize), sizeof(dataSize));
if (0 != dataSize) {
    dataBlob = InferenceEngine::make_shared_blob<std::uint8_t>(
        InferenceEngine::TensorDesc(InferenceEngine::Precision::U8,
                                    {static_cast<std::size_t>(dataSize)},
                                    InferenceEngine::Layout::C));
    dataBlob->allocate();
    model.read(dataBlob->buffer(), dataSize);
}

auto cnnnetwork = _plugin->GetCore()->ReadNetwork(xmlString, std::move(dataBlob));
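Note that the snippet above assumes a single serialized stream (model) in which the XML size, XML text, blob size, and blob bytes were written back to back. For a plain .xml/.bin pair on disk, an equivalent flow might look like this (a sketch only; the file paths are placeholders):

#include <fstream>
#include <iterator>
#include <string>
#include <inference_engine.hpp>

// read the XML text from disk
std::ifstream xmlFile("model.xml");  // placeholder path
std::string xmlString((std::istreambuf_iterator<char>(xmlFile)),
                      std::istreambuf_iterator<char>());

// read the weights into a U8 blob sized in bytes
std::ifstream binFile("model.bin", std::ios::binary | std::ios::ate);  // placeholder path
std::size_t binSize = static_cast<std::size_t>(binFile.tellg());
binFile.seekg(0);

InferenceEngine::Blob::Ptr dataBlob = InferenceEngine::make_shared_blob<std::uint8_t>(
    InferenceEngine::TensorDesc(InferenceEngine::Precision::U8,
                                {binSize},
                                InferenceEngine::Layout::C));
dataBlob->allocate();
binFile.read(dataBlob->buffer().as<char*>(), binSize);

InferenceEngine::Core ie;
InferenceEngine::CNNNetwork network = ie.ReadNetwork(xmlString, dataBlob);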

 

 

In your code, after the blob is created it is fed directly into ReadNetwork, which might be what caused the error.

 

// create blob
InferenceEngine::TensorDesc O_tensor(InferenceEngine::Precision::U32, {weights.size()}, InferenceEngine::Layout::ANY);
std::cout << "created tensordesc" << std::endl;
InferenceEngine::TBlob<uint32_t>::Ptr wei_blob = InferenceEngine::make_shared_blob<uint32_t>(O_tensor, &weights[0]);
std::cout << "Created blob" << std::endl;

// Read Network
InferenceEngine::CNNNetwork network = IeCoreSingleton::Instance().ReadNetwork(strModel, wei_blob);


Sincerely,

Zulkifli



Hari_B_Intel
Moderator

Hi Subtle,

 

Thank you for your patience. After investigating, we found that the model and weight vectors are defined as <uint32_t> and InferenceEngine::Precision is set to U32.

In the code you provided, execution succeeded after changing the vectors to <uint8_t> and InferenceEngine::Precision to U8. You can give it a try and see if it works in your program.

 

std::vector<uint8_t> n_model;
std::vector<uint8_t> weights;

InferenceEngine::TensorDesc O_tensor(InferenceEngine::Precision::U8, {weights.size()}, InferenceEngine::Layout::ANY);
InferenceEngine::TBlob<uint8_t>::Ptr wei_blob = InferenceEngine::make_shared_blob<uint8_t>(O_tensor, &weights[0]);
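Putting the corrected flow together end to end, it might look like this (a sketch only; readFile stands in for whatever file-loading code you already have, and the model paths are placeholders):

// read both IR files as raw bytes (readFile is an illustrative helper)
std::vector<uint8_t> n_model = readFile("face-detection-retail-0004.xml");
std::vector<uint8_t> weights = readFile("face-detection-retail-0004.bin");

// pass the XML content as a string and the weights as a U8 blob
std::string strModel(n_model.begin(), n_model.end());

InferenceEngine::TensorDesc O_tensor(InferenceEngine::Precision::U8,
                                     {weights.size()},
                                     InferenceEngine::Layout::ANY);
InferenceEngine::TBlob<uint8_t>::Ptr wei_blob =
    InferenceEngine::make_shared_blob<uint8_t>(O_tensor, &weights[0]);

InferenceEngine::Core ie;
InferenceEngine::CNNNetwork network = ie.ReadNetwork(strModel, wei_blob);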

 

Hope this helps.

 

Thank you

 

Zulkifli_Intel
Moderator

This thread will no longer be monitored since we have provided a solution. If you need any additional information from Intel, please submit a new question.


Sincerely,

Zulkifli

