SRado1
Beginner
116 Views

Deploy parts of OpenVINO required for executing Inference on pre-trained Models as AWS Lambda Layer

Hello!

 

I have code written in the Go programming language that uses models in the OpenVINO IR format (.xml and .bin files).

 

I would like to know which parts of OpenVINO I need to package into an AWS Lambda layer so that I can perform inference with already-trained models.

 

The problem is that OpenVINO takes up ~900 MB, while the maximum size of an AWS Lambda function (including all layers, unpacked) is 250 MB. I need to take ONLY the parts of OpenVINO that allow me to perform inference on pre-trained DNN models (.xml and .bin files) and package them as an AWS Lambda layer.

 

Thanks in advance!

2 Replies
Artem_A_Intel
Employee

Hello!

Starting with the OpenVINO 2019 R3 release, you can use the Deployment Manager tool for this purpose.

This tool generates a deployment package containing only the components needed for your inference target (CPU, GPU, etc.) plus your user data; in your case, that is the Go application and the model files in OpenVINO IR format (.xml and .bin).


Please find more details in the documentation:
https://docs.openvinotoolkit.org/2019_R3/_docs_install_guides_deployment_manager_tool.html
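A sketch of invoking the Deployment Manager non-interactively; the script path and flags follow the 2019 R3 documentation linked above, but verify them against your installed version (the `--user_data` path is a placeholder, and the invocation is guarded so this is a no-op on machines without OpenVINO installed):

```shell
# Assumed default install location; adjust to your setup.
INSTALL_DIR=/opt/intel/openvino
DM="$INSTALL_DIR/deployment_tools/tools/deployment_manager/deployment_manager.py"
mkdir -p deployment_package

if [ -f "$DM" ]; then
    python3 "$DM" \
        --targets cpu \
        --user_data /path/to/go_app_and_models \
        --output_dir deployment_package \
        --archive_name openvino_cpu_deploy
fi
```

`--targets` selects which inference plugins to keep (CPU only gives the smallest package), which is what makes it possible to get well under the full ~900 MB install.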

Sundar__Sharan
Beginner

Deployment packages do not support the Python bindings, right? I have tried, and it always reports "openvino module not found". But AWS Lambda supports only Python and several other runtimes, not C++, so a deployment package does not work for me here. Is there any other alternative for deploying with Python where the dependencies stay under 250 MB?
