My first experience with distributed computing takes me back to my early career, in 1995. I was working to develop a system that would generate real-time insights by capturing the lifecycle of gas tanks and chemical drums as they were connected to machines within our Intel manufacturing fabrication facilities, called “fabs.” I had envisioned this amazing system, but I was struggling to find the technology and methods to fuse all the different types of data together, including the dreaded batch data that had to be transferred into a real-time system.
This batch data lived in a brittle, ancient system that could only be pulled on Tuesday nights, and it quickly became the bane of my existence. Our “mere-kids-out-of-school” team stumbled often as we architected and deployed on this tough journey, but somehow we powered through to design a world-class system. We built something that was deployed to 18 fabs across 10 worldwide sites and that would last more than 25 years as designed. Last I checked, it had logged close to 10 million barcode-reader scans. In the 1990s, no one truly appreciated the framework we had built, which networked across systems and supported a family of personas. No one realized that to get it to work, “us kids” had to successfully persuade all of our suppliers to adopt a single barcode standard. But the lack of recognition didn’t matter. I was hooked on a technical journey to become an expert in the emerging field of complex systems and distributed computing.
Today, we find that distributed computation is critical in the design of complex systems and a requirement in solutions that must meet demanding operational metrics. Enterprises constantly seek ways to extract information from their data to make informed decisions, optimize processes, and gain a competitive edge. However, the sheer volume and complexity of that data often prove to be a significant challenge. Traditional centralized computing models struggle to meet the demand for instant analytics and responsiveness.
AI and the need to ingest and analyze data outside the data center require intelligent systems that seamlessly integrate silicon, software, and system components and operate cohesively across edge-to-cloud architectures. This is where distributed computing comes in.
Distributed computing at a glance
Distributed computing spreads computational tasks across multiple nodes or devices, allowing for parallel processing and efficient resource utilization. It is an architectural approach that supports multiple computing practices, including parallel computing, cluster computing, grid computing, edge computing, and cloud computing. It also encompasses the curation and fusion of different data types into a system so that computation can yield the desired insights. Distributed computing is critical given the dispersed, interconnected, and complex nature of the systems required to support many business objectives today.
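To make the core idea concrete, here is a minimal Python sketch, purely illustrative and not drawn from any Intel toolkit, of the divide-and-aggregate pattern described above: a dataset is sharded, each worker process (a stand-in for a remote node) computes over its shard in parallel, and a coordinator combines the partial results. The function and chunking scheme are hypothetical.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker (a stand-in for a remote node) computes over its shard of the data.
    return sum(x * x for x in chunk)

def distributed_sum_of_squares(data, workers=4):
    # Shard the dataset so each worker receives a contiguous slice.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Run the shards in parallel across worker processes.
    with Pool(workers) as pool:
        partials = pool.map(partial_sum, chunks)
    # Aggregate the partial results, as a coordinator node would.
    return sum(partials)

if __name__ == "__main__":
    print(distributed_sum_of_squares(list(range(1000))))
```

The same shard, compute, aggregate shape underlies real distributed frameworks; the difference is that the "workers" live on networked machines rather than local processes.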
Distributed computing architecture supports the seamless movement of workloads across different locations throughout the entire compute spectrum, giving organizations the freedom to choose where and when they process data. Resiliency is designed into distributed architectures, minimizing the impact of hardware failures or system disruptions.
A solid distributed computing foundation begins with five basic tenets:
- Connectivity that supports signal, control, data, and application flexibility on open-standards-based networks.
- Manageability that enables discovery, provisioning, and management of applications and workloads from edge to cloud.
- Security that ensures a chain of trust that is rooted in silicon and linked throughout software layers.
- Interoperability that enables applications and services to scale across diverse platforms and environments.
- Performance that is optimized for cost and each workload.
The flexibility to run AI at the optimal location allows us to expand into new environments. For example, an architecture might implement inference models at the edge while also supporting large datasets in the cloud for AI training to develop and optimize those models. Distributed systems can also easily scale horizontally by adding more nodes, ensuring seamless performance as new data sources emerge and data volumes grow.
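A toy Python sketch of the placement decision described above, with hypothetical node names, tiers, and policy: latency-sensitive inference requests are routed to edge nodes, heavy training jobs land in the cloud, and capacity grows horizontally simply by registering more nodes in a tier.

```python
from dataclasses import dataclass, field

@dataclass
class Cluster:
    # Hypothetical registry of compute nodes, keyed by tier ("edge" or "cloud").
    nodes: dict = field(default_factory=lambda: {"edge": [], "cloud": []})
    _rr: dict = field(default_factory=dict)  # round-robin counters per tier

    def add_node(self, tier, name):
        # Horizontal scaling: registering a node immediately adds capacity to its tier.
        self.nodes[tier].append(name)

    def place(self, workload):
        # Placement policy: latency-sensitive inference runs at the edge;
        # data-heavy training runs in the cloud.
        tier = "edge" if workload == "inference" else "cloud"
        pool = self.nodes[tier]
        i = self._rr.get(tier, 0)
        self._rr[tier] = i + 1
        return pool[i % len(pool)]  # simple round-robin across the tier

cluster = Cluster()
cluster.add_node("edge", "camera-gw-1")
cluster.add_node("edge", "camera-gw-2")
cluster.add_node("cloud", "dc-train-1")
print(cluster.place("inference"))  # routed to an edge gateway
print(cluster.place("training"))   # routed to the cloud data center
```

A production scheduler would weigh latency, cost, data locality, and node health rather than a fixed two-tier rule, but the shape of the decision — choose where to compute, then balance across available nodes — is the same.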
I had a unique opportunity to work on a distributed intelligent system back in 2016, when we were deploying AI on edge-device cameras to recognize people and objects. Most of the AI infrastructure was housed on a local server placed near each edge camera and within a network of inter-company data centers.
We faced many latency, security, and compute speed challenges, resulting in excessive hardware (by today’s standards) to meet the data-refresh requirements of the cameras. With the speed of today’s compute, those challenges could have been addressed far more cheaply, with fewer devices and systems, by placing an inference model on each camera and a training environment in a single data center.
Intel can deliver an end-to-end distributed computing experience
As a global leader in technologies spanning the edge to the cloud, Intel has participated in distributed computing for decades, starting with embedded systems, and has long recognized the importance of interconnected systems and devices. Intel® technologies—from processors to networking to software to developer tools—are key enablers of distributed computing. Intel supports an open computing environment that allows the different parts of the computing ecosystem to interoperate freely. Through collaboration with industry partners on open standards, reference architectures, and software optimizations, Intel has established a strong foundation for interoperability. This is a cornerstone for a distributed computing ecosystem that spans many sectors, including energy, telecommunications, healthcare, and manufacturing.
Why I am excited about Intel’s future in distributed computing
Distributed computing is now more important than ever in helping organizations seize opportunities for innovation through data analysis and AI. With a diverse set of technologies that connect, manage, help secure, and interoperate with optimal performance, Intel supports the movement of workloads across different locations throughout the compute spectrum from edge to cloud. Intel’s established ecosystem and platform consistency empower users to unlock the power of data and forge ahead into the intelligence era. I’m excited to be part of this important journey because Intel’s diverse portfolio can help my customers achieve their desired results. Distributed computing is the future, and thoughtful system design that supports choosing where and when to compute will become increasingly important to business objectives.
Read the solution brief for more information on distributed computing and Intel solutions available today.
Notices and Disclaimers
Performance varies by use, configuration, and other factors. Learn more on the Performance Index site.
Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure.
Your costs and results may vary.
Intel technologies may require enabled hardware, software, or service activation.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.