
Intel Labs Research on Intent-Driven Orchestration Simplifies Cloud and Edge Deployments


Thijs Metsch is a senior research engineer with Intel Labs working on distributed systems for high-performance computing, cloud, and edge scenarios. His research focuses on making orchestration systems more efficient and autonomous.

 

Highlights

  • Using research from Intel Labs, Intel has introduced the open-source Intent-Driven Orchestration (IDO) model, a novel approach to resource allocation using intent-driven requests based on application key performance indicators (KPIs).
  • Intent-Driven Orchestration is available under an open-source Apache 2.0 license on GitHub.

 

Using research from Intel Labs, Intel has introduced the open-source Intent-Driven Orchestration model, a novel approach to resource allocation that enables cloud-native applications to be managed through service level objectives (SLOs) expressed as KPIs, minimizing service owner and administrator overhead. IDO simplifies deployment, enabling users to run cloud and edge applications in a semantically portable and efficient manner without knowledge of optimal resource allocation, such as power, CPU, memory, and storage. As an open-source model, IDO gives Intel Labs an opportunity to engage with the cloud and edge ecosystem.

As cloud computing application design shifts to scalable distributed systems using cloud-native deployments, resource allocation will move from declarative requests for specific resources to intent-driven requests based on KPIs, such as required latency, throughput, or reliability targets, leaving it to the autonomous orchestration stack to determine what infrastructure resources will fulfill the objectives. By adding a planning component to a Kubernetes (K8s)-based orchestration stack, IDO can translate service objectives into actionable decisions on the minimal resources needed to fulfill the KPIs.
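
To make the contrast concrete, here is a minimal sketch in Go (all type names, fields, and KPI keys are hypothetical illustrations, not the actual IDO API): a declarative request pins specific resource amounts up front, while an intent states only the objectives and leaves resource allocation to the orchestration stack.

package main

import "fmt"

// ResourceRequest is the traditional declarative model: the user must
// know how much CPU and memory the workload needs before deploying it.
type ResourceRequest struct {
    CPUMillicores int
    MemoryMiB     int
}

// Intent is the intent-driven model: the user states the target KPIs
// and leaves resource allocation to the orchestration stack.
type Intent struct {
    TargetWorkload string
    Objectives     map[string]float64 // KPI name -> target value
}

func main() {
    declarative := ResourceRequest{CPUMillicores: 500, MemoryMiB: 256}
    intent := Intent{
        TargetWorkload: "checkout-service",
        Objectives: map[string]float64{
            "p99-latency-ms":       20,   // latency target
            "throughput-rps":       500,  // throughput target
            "availability-percent": 99.9, // reliability target
        },
    }
    fmt.Printf("declarative: %+v\nintent: %+v\n", declarative, intent)
}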

 

Watch a demonstration of Intent-Driven Orchestration.

 

Many orchestration and management systems are highly resource-oriented, and allocations are commonly based on resource requests. However, declarative requests for specific resources have drawbacks. Users need detailed knowledge of the available resources, yet quantifying the requirements is rarely straightforward: the exact performance characteristics of the targeted platform must be known to make optimal resource requests. Incorrectly specified resource requirements often lead to suboptimal performance, faults, or under- and overallocation of resources.

Users of serverless and low-code platforms already need less knowledge about resource allocation and, thanks to cloud-native application design paradigms, do not have to worry about what hardware their code will run on. These factors are driving a gradual transition from a resource-oriented requirement model to an intent-driven one.

 

Intent-Driven Orchestration Uses Planning Component

Here’s an overview of how the IDO system works: a novel planning component is embedded in the Kubernetes control plane to enable intent-driven orchestration in a closed-loop system. Intents are defined as objects in the K8s cluster and are assessed by the planning component. The component continuously watches the current state of the user’s KPIs and the system’s telemetry information, using a planning algorithm to generate a set of actions that keeps the current state close to the desired state. Various orchestration actions can be performed through plugin actuators that interact with specific resources, as the sketch below illustrates.
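
The following Go sketch illustrates this closed-loop idea; the Actuator interface, Action type, and threshold logic are assumptions made for illustration and do not reflect the actual IDO plugin API. The planner compares desired and observed KPI values, collects proposed actions from pluggable actuators, and executes them.

package main

import "fmt"

// Action is one orchestration step an actuator can carry out,
// e.g. "scale out" or "adjust CPU quota".
type Action struct {
    Name   string
    Params map[string]string
}

// Actuator is the plugin interface: given the gap between the desired
// and observed value of an objective, propose and execute actions.
type Actuator interface {
    Propose(objective string, desired, observed float64) []Action
    Execute(a Action) error
}

// scaleActuator proposes scaling out when an objective is violated.
type scaleActuator struct{}

func (s scaleActuator) Propose(objective string, desired, observed float64) []Action {
    if observed > desired { // e.g. latency above its target
        return []Action{{Name: "scale-out", Params: map[string]string{"replicas": "+1"}}}
    }
    return nil
}

func (s scaleActuator) Execute(a Action) error {
    fmt.Println("executing", a.Name, a.Params)
    return nil
}

// planStep is one loop iteration: compare states, collect proposals
// from all actuators, and carry out the resulting plan.
func planStep(desired, observed map[string]float64, actuators []Actuator) {
    for objective, want := range desired {
        got := observed[objective]
        for _, act := range actuators {
            for _, a := range act.Propose(objective, want, got) {
                _ = act.Execute(a)
            }
        }
    }
}

func main() {
    desired := map[string]float64{"p99-latency-ms": 20}
    observed := map[string]float64{"p99-latency-ms": 35} // objective violated
    planStep(desired, observed, []Actuator{scaleActuator{}})
}

In the real system a planning algorithm would rank and select among the proposals; here every proposal is executed directly to keep the sketch short.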

 


Figure 1. The IDO planning component uses a pluggable architecture.

 

The planner continuously compares the current state of a workload instance to the desired one and dynamically adjusts to meet the target objective. Different workloads, such as microservices and functions, require different resource management techniques: while compute-intensive and cache-sensitive workloads benefit from memory-bandwidth and last-level-cache tuning, others might benefit from scaling out or up. A toy version of that dispatch follows.
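
This Go fragment is illustrative only; the profile categories and technique names are assumptions simplified from the paragraph above.

package main

import "fmt"

// Profile captures a coarse characterization of a workload.
type Profile struct {
    CacheSensitive   bool
    ComputeIntensive bool
}

// chooseTechnique maps a workload profile to a tuning strategy.
func chooseTechnique(p Profile) string {
    switch {
    case p.CacheSensitive:
        return "last-level-cache partitioning"
    case p.ComputeIntensive:
        return "memory-bandwidth tuning"
    default:
        return "horizontal or vertical scaling"
    }
}

func main() {
    fmt.Println(chooseTechnique(Profile{CacheSensitive: true}))
}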

 


Figure 2. Autonomous intent management is enabled by foreground and background flows.

 

In the machine learning (ML)-based autonomous intent management system, data from observability stacks is processed using AI/ML techniques, forming a continuously running background flow that gives insight into the effect of orchestration actions on target objectives. These insights are stored in a knowledge base, for example as lookup tables, regression models, or neural networks. They are then used in a foreground flow, in which the service owner requests the intents and the orchestration stack tries to adhere to the given targets within the bounds of what resource assignment can achieve. This flow enables closed-loop automation for managing the intents. IDO uses this kind of reactive planning, in which the planner reacts to an event and uses current insights to determine a possible set of actions. The platform can also use opportunistic planning, in which the planner tries to bring the current state closer to the goal state even when the goal state cannot be fully reached, and proactive planning, in which the planning component and its plugins explore the effect of an orchestration action in advance.
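
Here is a minimal Go sketch of the lookup-table form of such a knowledge base (the structure, action names, and effect values are invented for illustration): learned effects predict how far an action would move an objective, letting the foreground planner select an action expected to meet the target.

package main

import "fmt"

// Effect records the learned impact of an action on an objective,
// e.g. "scale-out reduces p99 latency by roughly 30 percent".
type Effect struct {
    Action        string
    DeltaFraction float64 // relative change to the observed KPI value
}

// knowledge is the lookup-table variant of the knowledge base,
// populated by the background flow.
var knowledge = map[string][]Effect{
    "p99-latency-ms": {
        {Action: "scale-out", DeltaFraction: -0.30},
        {Action: "cpu-quota-increase", DeltaFraction: -0.15},
    },
}

// pickAction returns the first known action predicted to bring the
// observed value within the desired target.
func pickAction(objective string, desired, observed float64) (string, bool) {
    for _, e := range knowledge[objective] {
        if observed*(1+e.DeltaFraction) <= desired {
            return e.Action, true
        }
    }
    return "", false
}

func main() {
    // 35 ms observed, 25 ms desired: scale-out predicts 24.5 ms, so it is chosen.
    if a, ok := pickAction("p99-latency-ms", 25, 35); ok {
        fmt.Println("planned action:", a)
    }
}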

Instead of relying on domain knowledge, IDO enables users in serverless environments to define what they really care about — their application objectives.

Intent-Driven Orchestration is available under an open-source Apache 2.0 license on GitHub.

About the Author
Thijs is a researcher building cool stuff at Intel Labs. His key interests include system performance and distributed systems. In past career moves, he worked on HPC, grids, and cloud/edge for companies such as IBM, Sun Microsystems, and the German Aerospace Center. He helped make shipbuilding easier, ran massively parallel workloads, managed tons of compute in hybrid environments, and created one of the first standards for the cloud more than a decade ago. He is now focused on making orchestration easier with tools such as Kubernetes, using techniques such as AI/ML.