
Intel® Liftoff Member Expanso Takes ML/AI to the Edge with Intel® Hardware

Eugenie_Wirz
Employee

Intel® Liftoff member Expanso is an AI/ML enabler that facilitates local processing at the edge or on-prem, in all clouds and regions. Their solution reduces the amount of data that needs to be sent to centralized locations for processing and storage. This in turn makes ML cheaper and faster than traditional methods, as well as more compliant with privacy regulations.

 

Bridging the Gap: Bringing ML Insights to the Data's Edge with Bacalhau


In the current landscape of rapid digitization, there’s an unprecedented increase in data generation outside of centralized data centers. This data spans a wide range: application logs for troubleshooting multi-zone deployments, on-premises systems deployed close to customers, and 4K CCTV footage for security. Machine Learning (ML) has the potential to unlock this data’s insights, yet there’s a challenge: the separation between where data is generated and where ML models run, especially in terms of data transit.

While data is generated everywhere, ML training and inference are centralized, often behind a hosted API. Bacalhau, the open-source software backed by Expanso, changes this dynamic. It allows you to keep using familiar tools and models while bringing ML inference to the data’s location. This decentralization drastically cuts costs, improves efficiency, and provides real-time insights, enhancing system reliability and response times. Running ML at the edge also boosts data security, simplifies system design, and eases management.

 

Revolutionizing Video Analysis: Localized ML Inference on a Massive Scale with Bacalhau


Consider a scenario with 100 cameras recording in 4K resolution at 24 FPS, generating 1 TB of data hourly, or roughly 8.8 petabytes annually. For cost analysis, we will use 50 virtual machines (GCP e2-standard-8, each comparable to a solid Intel NUC), each processing 50 frames per second with the YOLOv5 model. Traditionally, this vast amount of data was barely tapped into: at best, data transfer was sluggish; at worst, videos were briefly reviewed by humans and then deleted to avoid high storage costs. Leveraging Bacalhau to distribute ML inference models directly on local hardware changes the game. It enables you to extract valuable insights and improve security while drastically cutting down on the costs associated with centralized computing.
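The sizing above can be sanity-checked with some back-of-the-envelope arithmetic (this is our own illustration, not part of Expanso's published methodology; the 50 FPS per-machine YOLOv5 throughput is the figure assumed in the scenario):

```python
# Back-of-the-envelope capacity check for the 100-camera scenario.

CAMERAS = 100
FPS_PER_CAMERA = 24      # 4K footage at 24 FPS
VMS = 50                 # e2-standard-8 instances (or comparable Intel NUCs)
FPS_PER_VM = 50          # assumed YOLOv5 inference throughput per machine

required_fps = CAMERAS * FPS_PER_CAMERA    # frames arriving per second
available_fps = VMS * FPS_PER_VM           # total inference throughput

print(f"Required:  {required_fps} frames/s")    # 2400 frames/s
print(f"Available: {available_fps} frames/s")   # 2500 frames/s
print(f"Headroom:  {available_fps - required_fps} frames/s")
```

With 2,400 frames/s arriving and 2,500 frames/s of inference capacity, the 50-machine fleet keeps up with the full camera feed with a small margin of headroom.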

 

Achievements:

 

  • Decentralization of Machine Learning (ML) Inference: Bacalhau shifts the paradigm by bringing ML inference directly to the data’s location, significantly reducing the need for moving data or video to centralized locations for processing before initial analysis. This approach facilitates real-time insights, enhances system reliability, and improves response times.
  • Cost Reduction and Efficiency: By distributing ML inference models on local hardware, Expanso demonstrates over 95% savings on compute costs compared to traditional centralized processing methods used by AWS, Google Cloud, and Azure.
  • Enhanced Data Security and Simplified System Design: Running ML at the edge improves data security and simplifies the overall system design and management, making it easier for organizations to adopt and maintain.

 

Provider        Storage     Access      AI (run & train)   Total         Savings
AWS             $202,604    $378,432    $15,140,530        $15,721,566   19.35%
Google Cloud    $192,480    $378,432    $18,922,450        $19,493,362   0.00%
Azure           $154,484    $151,373    $15,214,580        $15,520,437   20.38%
Bacalhau        $192,480    $0          $738,037           $930,517      95.23%

(Savings are shown relative to the most expensive provider, Google Cloud.)
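The savings figures can be reproduced directly from the totals: each is the reduction relative to the most expensive provider, Google Cloud (which therefore shows 0.00%). A quick check:

```python
# Reproduce the "Savings" column from the annual totals.
# Baseline: the most expensive provider (Google Cloud, hence its 0.00%).

totals = {
    "AWS": 15_721_566,
    "Google Cloud": 19_493_362,
    "Azure": 15_520_437,
    "Bacalhau": 930_517,
}

baseline = max(totals.values())
savings = {name: 1 - total / baseline for name, total in totals.items()}

for name, pct in savings.items():
    print(f"{name:>12}: {pct:.2%}")
# AWS: 19.35%, Google Cloud: 0.00%, Azure: 20.38%, Bacalhau: 95.23%
```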

 

Intel® Technology Used

 


 

  1. Edge Hardware: The first requirement is edge hardware capable of executing and retraining medium-sized inference models, like YOLOv5. Compact yet powerful machines like Intel NUCs or hardware accelerators such as the Intel Neural Compute Stick are recommended.
  2. Hypervisor Layer: The second component is a hypervisor layer for abstracting and setting up the local hardware. We recommend using VMware’s vSphere or ESXi for this purpose.
  3. Bacalhau Installation: Finally, install Bacalhau. Bacalhau's main advantage is that it allows you to keep using the same binaries and architecture you're accustomed to while bringing your AI applications to the edge. It seamlessly orchestrates across different formats like Docker, WASM, Python, or other binaries, providing versatility and ease of use.
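As an illustration of step 3, a Bacalhau job can reuse the Docker images you already run elsewhere. The snippet below assembles a hypothetical `bacalhau docker run` invocation for YOLOv5 inference; the image name, input mount, and script arguments are illustrative assumptions, and flag names vary across Bacalhau releases, so consult the Bacalhau documentation for your version:

```python
# Assemble a hypothetical Bacalhau CLI invocation for YOLOv5 inference.
# The image, input source, and flags are illustrative only; check the
# Bacalhau docs for the exact flags your installed version supports.

import shlex

image = "ultralytics/yolov5:latest"       # assumed public Docker image
input_source = "file:///mnt/cctv/cam01"   # hypothetical local footage mount

cmd = [
    "bacalhau", "docker", "run",
    "--input", f"{input_source}:/data",   # expose footage inside the job
    image,
    "--", "python", "detect.py", "--source", "/data",
]

print(shlex.join(cmd))  # the command you would paste into a shell
```

The point of the example is the workflow, not the exact flags: the same container and entrypoint you would run centrally is instead scheduled by Bacalhau onto the edge nodes where the footage already lives.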

CEO David Aronchick commented on the role that Intel hardware and VMware virtualization technology have played in realizing Expanso’s vision:

“We're thrilled to work with Intel and VMware, pushing the boundaries of edge computing. Intel's NUCs bring the muscle we need for heavy-duty machine learning tasks, while VMware's virtualization tech gives us the flexibility to scale and adapt. Together, we're not just crunching numbers faster; we're unlocking real-time insights from distributed workloads including high-res video data, making smarter decisions easier and more cost-effective. This is a big step forward in our journey to transform how businesses process and leverage their data.” - David Aronchick, CEO at Expanso

Expanso has been recognized as the winner in the Efficiency Excellence category at the TECHCOnnect Innovation Showcase, a collaboration between VMware by Broadcom and Intel®.

 

About Expanso: Fast, Global & Secure Operations


Expanso, leveraging its open-source software Bacalhau, redefines the computing landscape. Their distributed compute platform empowers customers to process data exactly where it’s generated, be it on-premise, across various clouds, zones, or regions, enabling a truly global reach. This approach results in operations that are not only faster and more cost-effective but also inherently more secure. Expanso transforms how data is processed, making it more efficient and safer for anyone working with data.

Want to push the boundaries of AI in your startup? Become a part of the Intel® Liftoff program now. Get access to the tools, technology, and support you need to make a big impact. Join us and start making smarter, faster, and more secure decisions with your data. Click here to apply.

About the Author
I'm a proud team member of the Intel® Liftoff for Startups, an innovative, free virtual program dedicated to accelerating the growth of early-stage AI startups.