Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.
6,571 Discussions

CNNNetwork ReadNetwork(const std::string& model, const Blob::CPtr& weights) is throwing the error "unknown file: Failure"

Subtle
Employee
3,783 Views

Hi,

I am using the CNNNetwork ReadNetwork(const std::string& model, const Blob::CPtr& weights) function to load models from memory.
My code looks like this:

// model holds the .xml contents, weights holds the .bin contents
std::vector<uint8_t> model;   // .xml
std::vector<uint8_t> weights; // .bin

// Create the TensorDesc describing the weights blob
InferenceEngine::TensorDesc tensor(InferenceEngine::Precision::U8, {weights.size()}, InferenceEngine::Layout::ANY);

// Create the blob (vec_s is of type size_t)
InferenceEngine::TBlob<uint8_t>::Ptr wei_blob = InferenceEngine::make_shared_blob<uint8_t>(tensor, &weights[0], vec_s);

// Call ReadNetwork
std::string strModel(model.begin(), model.end());
InferenceEngine::Core ie;
InferenceEngine::CNNNetwork network = ie.ReadNetwork(strModel, wei_blob);

 

My code compiles, but when I run it, it throws this error:

 unknown file: Failure
C++ exception with description "Failed to construct OpenVINOImageInference" thrown in the test body.

I do not understand what the issue is.

How can I resolve it?

 

Thanks a lot

 

10 Replies
Zulkifli_Intel
Moderator
3,761 Views

Hi Subtle,

Thank you for getting in touch.

 

Which OpenVINO version did you use? We would really appreciate it if you could share the complete files (including the model) so that we can replicate and investigate further.

 

Sincerely,

Zulkifli 


Subtle
Employee
3,742 Views

Hi Zulkifli_Intel,

 

Thanks for replying.

 

I am using OpenVINO 2021.4 and the OS is Ubuntu 20.04 LTS. I am sorry, I can't share the complete file; I will try to cover the needed information in my code snippet here. My use case does not target any particular model; for the current scenario I am using face-detection-retail-0004.xml, age-gender-recognition-retail-0013.xml, and emotions-recognition-retail-0003.xml, all with precision FP32.

My code now looks like this (updated):

I have the .bin and .xml files stored on disk, and I read them into vectors of type uint32_t.

// xmlFile is the .xml file path, binFile is the .bin file path
std::string xmlFile;
std::string binFile;

std::vector<uint32_t> model;   // .xml contents
std::vector<uint32_t> weights; // .bin contents

// ... xmlFile is read into model ...
// ... binFile is read into weights ...

// Create the TensorDesc describing the weights blob
InferenceEngine::TensorDesc tensor(InferenceEngine::Precision::U32, {weights.size()}, InferenceEngine::Layout::ANY);

// Create the blob (vec_s is of type size_t)
InferenceEngine::TBlob<uint32_t>::Ptr wei_blob = InferenceEngine::make_shared_blob<uint32_t>(tensor, &weights[0], vec_s);

// Call ReadNetwork
std::string strModel(model.begin(), model.end());
InferenceEngine::Core ie;
InferenceEngine::CNNNetwork network = ie.ReadNetwork(strModel, wei_blob);
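
For reference, the file-reading step elided by the comments above could look like this minimal sketch (my own illustration, not from the original post; readFile is a hypothetical helper that reads raw bytes using standard <fstream>):

#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

// Hypothetical helper: read an entire binary file into a byte vector.
std::vector<uint8_t> readFile(const std::string& path) {
    std::ifstream file(path, std::ios::binary | std::ios::ate);
    std::vector<uint8_t> data(static_cast<size_t>(file.tellg()));
    file.seekg(0, std::ios::beg);
    file.read(reinterpret_cast<char*>(data.data()), static_cast<std::streamsize>(data.size()));
    return data;
}

Note that this sketch reads raw bytes into uint8_t elements; reading the same bytes into a vector<uint32_t> changes how the element count relates to the file size, which turns out to matter later in this thread.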

 

Thank you 

 

 

Zulkifli_Intel
Moderator
3,684 Views

Hi Subtle,

 

Sorry for the delay in replying. The "Failed to construct OpenVINOImageInference" message on its own is incomplete. Is there any error message before or after this line? That information may help point to the actual problem you are having.

 

Sincerely,

Zulkifli 


Subtle
Employee
3,670 Views

Hi Zulkifli_Intel,

 

Thanks for replying.

 

There is no error message before or after it.


unknown file: Failure
C++ exception with description "Failed to construct OpenVINOImageInference" thrown in the test body.

 

I have attached a screenshot for your reference.

 

Thank you

 

Zulkifli_Intel
Moderator
3,661 Views (accepted solution)

Hi Subtle,

 

Thank you for sharing the screenshot of the error. Unfortunately, it is hard for us to provide support with such limited information. Is it possible to share your code and the steps to reproduce the issue via email?

 

Sincerely,

Zulkifli 

 

Subtle
Employee
3,656 Views

Hi Zulkifli_Intel,

 

Thanks for replying.

 

Yes, sure. I will share the artefacts via email.

 

Thanks a lot

Zulkifli_Intel
Moderator
3,635 Views

Hi Subtle,


You can send it to my email: zulkiflix.bin.abdul.halim@intel.com


Sincerely,

Zulkifli


Zulkifli_Intel
Moderator
3,612 Views

Hi Subtle,

 

Referring to the Executable Network documentation, the recommended flow is to read the IR (XML) content first, then read the weights blob content (with the proper TensorDesc formatting and allocation), and only then pass both to ReadNetwork:

 

// read XML content
std::string xmlString;
std::uint64_t dataSize = 0;
model.read(reinterpret_cast<char*>(&dataSize), sizeof(dataSize));
xmlString.resize(dataSize);
model.read(const_cast<char*>(xmlString.c_str()), dataSize);

// read blob content
InferenceEngine::Blob::Ptr dataBlob;
model.read(reinterpret_cast<char*>(&dataSize), sizeof(dataSize));
if (0 != dataSize) {
    dataBlob = InferenceEngine::make_shared_blob<std::uint8_t>(
        InferenceEngine::TensorDesc(InferenceEngine::Precision::U8,
                                    {static_cast<std::size_t>(dataSize)},
                                    InferenceEngine::Layout::C));
    dataBlob->allocate();
    model.read(dataBlob->buffer(), dataSize);
}

auto cnnnetwork = _plugin->GetCore()->ReadNetwork(xmlString, std::move(dataBlob));

 

 

In your code, after the blob is created it is fed directly into ReadNetwork, which might potentially be causing the error:

 

// create blob
InferenceEngine::TensorDesc O_tensor(InferenceEngine::Precision::U32, {weights.size()}, InferenceEngine::Layout::ANY);
std::cout << "created tensordesc" << std::endl;
InferenceEngine::TBlob<uint32_t>::Ptr wei_blob = InferenceEngine::make_shared_blob<uint32_t>(O_tensor, &weights[0]);
std::cout << "Created blob" << std::endl;

// Read Network
InferenceEngine::CNNNetwork network = IeCoreSingleton::Instance().ReadNetwork(strModel, wei_blob);


Sincerely,

Zulkifli



Hari_B_Intel
Moderator
3,519 Views

Hi Subtle,

 

Thank you for your patience. After investigating, we found that the model and weights vectors are defined as <uint32_t> and InferenceEngine::Precision is set to U32.

 

With the code you provided, execution succeeded after changing the vectors to <uint8_t> and InferenceEngine::Precision to U8. You can give it a try and see if it works in your program.

 

std::vector<uint8_t> n_model;
std::vector<uint8_t> weights;

InferenceEngine::TensorDesc O_tensor(InferenceEngine::Precision::U8, {weights.size()}, InferenceEngine::Layout::ANY);
InferenceEngine::TBlob<uint8_t>::Ptr wei_blob = InferenceEngine::make_shared_blob<uint8_t>(O_tensor, &weights[0]);
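
Putting the fix together, an end-to-end sketch might look like the following (my own illustration, not Intel's verified code; it assumes <inference_engine.hpp> is included, reuses the hypothetical readFile helper shown earlier in the thread, and uses the face-detection-retail-0004 IR as a placeholder):

// Read the IR and weights from disk as raw bytes (U8).
std::vector<uint8_t> n_model = readFile("face-detection-retail-0004.xml");
std::vector<uint8_t> weights = readFile("face-detection-retail-0004.bin");

// The XML content is passed to ReadNetwork as a plain string.
std::string strModel(n_model.begin(), n_model.end());

// Describe the weights as a U8 blob whose length is the byte count.
InferenceEngine::TensorDesc O_tensor(InferenceEngine::Precision::U8, {weights.size()}, InferenceEngine::Layout::ANY);
InferenceEngine::TBlob<uint8_t>::Ptr wei_blob = InferenceEngine::make_shared_blob<uint8_t>(O_tensor, &weights[0], weights.size());

InferenceEngine::Core ie;
InferenceEngine::CNNNetwork network = ie.ReadNetwork(strModel, wei_blob);

With U8 and uint8_t, weights.size() is the weight file's size in bytes, so the blob's declared length matches the data that ReadNetwork expects.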

 

Hope this helps.

 

Thank you

 

Zulkifli_Intel
Moderator
3,437 Views

This thread will no longer be monitored since we have provided a solution. If you need any additional information from Intel, please submit a new question.


Sincerely,

Zulkifli

