Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Deploy parts of OpenVINO required for executing Inference on pre-trained Models as AWS Lambda Layer

SRado1
Beginner

Hello!

 

I have code written in the Go programming language which uses models in the OpenVINO format (.xml and .bin files).

 

I would like to know which parts of OpenVINO I need to package into an AWS Lambda Layer so that I can perform inference with already-trained models.

 

The thing is, OpenVINO takes ~900 MB, while the AWS Lambda function size limit is 250 MB. I need to take ONLY the parts of OpenVINO that allow me to perform inference with pre-trained DNN models (.xml and .bin files) and package them as an AWS Lambda Layer.

 

Thanks in advance!

Artem_A_Intel
Employee

Hello!

Starting with the OpenVINO 2019 R3 release, you can use the Deployment Manager tool for this purpose.

This tool generates deployment packages containing only the components needed for your inference target (CPU/GPU/etc.) plus your user data: in your case, the Go application and the OpenVINO model files (.xml and .bin).


Please find more details in the documentation:
https://docs.openvinotoolkit.org/2019_R3/_docs_install_guides_deployment_manager_tool.html
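The workflow above can be sketched as a short script. The install path, output directory, and archive name below are assumptions for illustration; adjust them to your environment. The size check against Lambda's 250 MB unzipped-layer limit is generic shell and runs anywhere.

```shell
#!/bin/sh
# 1) Generate a minimal CPU-only deployment package with the Deployment
#    Manager (install path is hypothetical -- adjust to your setup):
#
#    python3 /opt/intel/openvino/deployment_tools/tools/deployment_manager/deployment_manager.py \
#        --targets cpu \
#        --user_data ./go_app_and_models \
#        --output_dir ./lambda_layer \
#        --archive_name ov_cpu_deploy

# 2) Verify the unpacked package fits Lambda's 250 MB layer limit
#    before zipping it up as a layer:
LAYER_DIR=${LAYER_DIR:-./lambda_layer}   # hypothetical output directory
mkdir -p "$LAYER_DIR"                    # ensure it exists for the check
LIMIT_KB=$((250 * 1024))                 # 250 MB expressed in KB
SIZE_KB=$(du -sk "$LAYER_DIR" | cut -f1) # total size of the package in KB
if [ "$SIZE_KB" -le "$LIMIT_KB" ]; then
    echo "fits: ${SIZE_KB} KB <= 250 MB"
else
    echo "too big: ${SIZE_KB} KB > 250 MB"
fi
```

If the CPU-only package is still over the limit, trimming unneeded plugins and sample/demo directories from the output before zipping is the usual next step.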

Sundar__Sharan
Beginner

Deployment packages do not support Python bindings, right? I have tried, and it always reports "openvino module not found". But AWS Lambda supports only Python and several other runtimes, not C++, so the deployment package does not work for me here. Is there any other alternative for deploying with Python where the dependencies total less than 250 MB?
