Hi Team,
I'm trying to set up OpenVINO Model Server in an AWS EKS cluster. Since I couldn't find any relevant blog post, I'm trying to set things up as best I can on my own.
To start with, I have a few questions, as I'm working in AWS EKS:
1. Which AWS EC2 instance type should I choose?
2. I noticed there is an AMI in the AWS Marketplace, "Intel® Distribution of OpenVINO™ Toolkit", but that seems to be aimed at running a single standalone instance.
3. I also tried creating the EKS node group with a different AMI, just to check whether AWS EKS accepts other AMIs, but it throws an error. Please find the error output below for your reference.
```
│ Error: expected ami_type to be one of [AL2_x86_64 AL2_x86_64_GPU AL2_ARM_64 CUSTOM BOTTLEROCKET_ARM_64 BOTTLEROCKET_x86_64 BOTTLEROCKET_ARM_64_NVIDIA BOTTLEROCKET_x86_64_NVIDIA WINDOWS_CORE_2019_x86_64 WINDOWS_FULL_2019_x86_64 WINDOWS_CORE_2022_x86_64 WINDOWS_FULL_2022_x86_64], got ami-06a566ca43e14780d
│
│ with aws_eks_node_group.demo,
│ on eks-worker-nodes.tf line 46, in resource "aws_eks_node_group" "demo":
│ 46: ami_type = "ami-06a566ca43e14780d"
```
Hi Iamexperimentingnow,
Thanks for reaching out to us.
- OpenVINO™ Model Server has been tested on Red Hat and Ubuntu. Please choose an AWS EC2 instance that fulfills the Intel® Distribution of OpenVINO™ Toolkit System Requirements.
- The Intel® Distribution of OpenVINO™ toolkit on Amazon Machine Image (AMI) enables developers to optimize pre-trained models and accelerate the deployment of deep learning solutions with a write-once-deploy-anywhere approach across Intel-powered CPUs, integrated GPUs, Intel® Movidius™ VPUs, and FPGAs. It comes pre-equipped with the toolkit's development and deployment components, such as the Model Optimizer and the Inference Engine, and also includes the OpenVINO™ Deep Learning Workbench. For more information, please refer to Getting Started with OpenVINO™ toolkit on Amazon Machine Image (AMI).
- For your information, the error "expected ami_type to be one of [AL2_x86_64 AL2_x86_64_GPU AL2_ARM_64 CUSTOM BOTTLEROCKET_ARM_64 BOTTLEROCKET_x86_64 BOTTLEROCKET_ARM_64_NVIDIA BOTTLEROCKET_x86_64_NVIDIA WINDOWS_CORE_2019_x86_64 WINDOWS_FULL_2019_x86_64 WINDOWS_CORE_2022_x86_64 WINDOWS_FULL_2022_x86_64], got ami-06a566ca43e14780d" is not related to the Intel® Distribution of OpenVINO™ Toolkit; it comes from the Terraform AWS provider itself (see the sketch after this list). Please submit a support request via Contact AWS.
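For reference, the ami_type argument of aws_eks_node_group only accepts the enumerated values from that message, not an AMI ID. A minimal Terraform sketch of the two usual options is below; this is an illustration only, assuming a managed node group, and the resource names, instance type, and referenced IAM role/subnets are placeholders rather than a verified configuration.
```
# Sketch only: ami_type must be one of the enum values from the error message.
# To run a specific AMI (e.g. a Marketplace AMI) on a managed node group,
# the AMI ID is normally passed through a launch template instead.

resource "aws_launch_template" "ovms_nodes" {
  name_prefix   = "ovms-eks-"
  image_id      = "ami-06a566ca43e14780d" # custom AMI ID from the question
  instance_type = "c5.2xlarge"            # assumption: any instance meeting the OpenVINO requirements
  # NOTE: a custom AMI generally also needs EKS bootstrap user data so the
  # node can join the cluster; that part is omitted here.
}

resource "aws_eks_node_group" "demo" {
  cluster_name    = aws_eks_cluster.demo.name
  node_group_name = "demo"
  node_role_arn   = aws_iam_role.node.arn # assumed IAM role resource
  subnet_ids      = aws_subnet.demo[*].id # assumed subnet resources

  # Option 1: let EKS supply its own Amazon Linux 2 AMI.
  # ami_type = "AL2_x86_64"

  # Option 2: use the custom AMI via the launch template above.
  launch_template {
    id      = aws_launch_template.ovms_nodes.id
    version = aws_launch_template.ovms_nodes.latest_version
  }

  scaling_config {
    desired_size = 1
    max_size     = 2
    min_size     = 1
  }
}
```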
Regards,
Wan
Hi Wan,
Thanks for your reply. So, can I choose any instance from AWS?
If I choose any instance from AWS and pick the OpenVINO Model Server AMI from the Marketplace, are you saying that is enough to do model optimization and inference?
Is that correct?
Thanks
Hi Iamexperimentingnow,
Thanks for your information.
An AWS EC2 instance that fulfills the Intel® Distribution of OpenVINO™ Toolkit System Requirements should be able to do model optimization and inference.
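As a side note on the EKS part of the question: OpenVINO™ Model Server is also published as the openvino/model_server container image, which can be run on the cluster directly. A rough, unverified sketch using the Terraform kubernetes provider is below; the model name and S3 model path are placeholders, and S3 credentials are omitted.
```
# Rough sketch: run OpenVINO Model Server as a Deployment on the EKS cluster.
# Assumes the kubernetes provider is already configured against the cluster.

resource "kubernetes_deployment" "ovms" {
  metadata {
    name   = "ovms"
    labels = { app = "ovms" }
  }

  spec {
    replicas = 1

    selector {
      match_labels = { app = "ovms" }
    }

    template {
      metadata {
        labels = { app = "ovms" }
      }

      spec {
        container {
          name  = "ovms"
          image = "openvino/model_server:latest"
          args = [
            "--model_name=my_model",                # placeholder
            "--model_path=s3://my-bucket/my_model", # placeholder; needs S3 credentials
            "--port=9000",
          ]
          port {
            container_port = 9000 # gRPC inference port
          }
        }
      }
    }
  }
}
```
Exposing the gRPC port via a Service would still be needed; the OpenVINO™ Model Server documentation covers Kubernetes and Helm deployment in more detail.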
Regards,
Wan
Hi Customer,
Thanks for your question.
Please submit a new question if additional information is needed, as this thread will no longer be monitored.
Regards,
Wan
