Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Using Inference Engine ReadNetwork method to load models from memory

Subtle
Employee

Hi,

 

I want OpenVINO’s Inference Engine to load models from memory (not from a file path).

For this I am using the CNNNetwork ReadNetwork(const std::string& model, const Blob::CPtr& weights) function.

I need help with how to create the Blob::CPtr weights from a weights.bin file. I am using OpenVINO 2021.4 on Ubuntu 20.04 LTS.

 

Thank you
1 Solution
Peh_Intel
Moderator

Hi Subtle,

 

You can create Blob objects using constructors with InferenceEngine::TensorDesc. You can refer to Inference Memory Primitives (C++) for more information.


Besides, I also found a previous post where someone shared their way of loading a model from memory, which may be helpful for you.



Regards,

Peh



5 Replies

Subtle
Employee

Hi Peh,

 

Thanks for the reply.

I am creating Blob objects using the constructor TensorDesc(const Precision& precision, const SizeVector& dims, Layout layout). For my use case I need to keep Precision::UNSPECIFIED and Layout::ANY. I want to understand more about the const SizeVector& dims parameter: what should I pass for it if I am keeping Layout::ANY?

 

 

Thank you
Peh_Intel
Moderator

Hi Subtle,


There is nothing special about setting the layout to ANY. The dimensions still need to be set based on the model, in one of the supported formats (InferenceEngine::Layout::NCDHW, InferenceEngine::Layout::NCHW, InferenceEngine::Layout::NC, InferenceEngine::Layout::C).



Regards,

Peh


Subtle
Employee

Hi Peh, 

 

Thanks a lot for the reply.

I got the point. 

 

 

Thanks & Regards 

Subtle
Peh_Intel
Moderator

Hi Subtle,


This thread will no longer be monitored since we have provided answers. If you need any additional information from Intel, please submit a new question. 



Regards,

Peh

