Green HPC Mitigates Supply Chain Constraints

Shesha Krishnapura

In today’s blog, I explain how green high-performance computing (HPC), made possible by innovative disaggregated servers, benefits the environment. It also helps Intel engineers accelerate chip and software development even when extreme supply chain constraints cause significant server delivery delays and price increases.

HPC—the Life Force for Intel’s Innovation

HPC is a mission-critical, transformational capability for Intel. Without hyper-scale HPC, Intel innovation couldn’t keep up with today’s blistering pace, with more complex products in parallel design than ever before. Nanometer- and angstrom-level silicon designs with more than 100 billion transistors and new features are the key to accelerating modern workloads like artificial intelligence (AI), machine learning, and analytics. Not only does HPC enable these complex designs, but we also use it to speed up the design process so we can get Intel’s products into customers’ hands sooner rather than later.

Holistic Data Center Strategy Sets the Stage for Green HPC Success

Optimization of our HPC environment is an integral part of our overall data center strategy. This strategy has generated USD 4.8 billion in savings over the last decade. We’ve achieved these savings through the following tactics:

  • Running Intel data center services like a factory
  • Effecting change in a disciplined manner
  • Transforming business processes
  • Innovating disruptive technologies
  • Streamlining day-to-day operations

This strategy has enabled us to keep up with Intel’s typical year-over-year growth of more than 30 percent in compute demand and more than 40 percent in storage capacity, while keeping server spending at about the same level. At the same time, we’ve increased HPC capacity 293x over the last ten years.
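
A quick back-of-the-envelope check (an illustration, not a figure from the blog): a 293x capacity increase over ten years corresponds to roughly a 76 percent compound annual growth rate, as this short Python sketch shows.

    # Illustrative only: the compound annual growth rate (CAGR) implied by
    # a 293x HPC capacity increase over ten years.
    growth_factor = 293
    years = 10
    cagr = growth_factor ** (1 / years) - 1
    print(f"Implied compound annual growth: {cagr:.1%}")  # ~76.5%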

One crucial tactic of our data center strategy is adopting disruptive technology that can transform Intel’s business. One example is the disaggregated server, invented by Intel IT and first deployed in our data centers in 2016. A disaggregated server makes it possible to upgrade just the CPU module, with or without new DRAM, while avoiding the replacement of other components such as input/output modules and shared infrastructure like the chassis, fans, integrated network switches, management modules, and power supplies. In our experience, disaggregated servers benefit both the business and the environment: we spend 44 to 67 percent less than on a full-acquisition refresh while refreshing more often, we reduce the weight of shipping materials by 82 percent, and we significantly reduce e-waste.
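
To make the refresh economics concrete, here is a minimal sketch of the cost comparison. The server and module prices below are hypothetical assumptions; only the 44 to 67 percent savings range comes from our experience described above.

    # Hypothetical prices for illustration; only the 44-67 percent savings
    # range is reported in this blog.
    full_server_cost = 10_000      # assumed cost of a complete replacement server
    cpu_dram_module_cost = 4_000   # assumed cost of upgrading only the CPU + DRAM module

    savings = 1 - cpu_dram_module_cost / full_server_cost
    print(f"Savings vs. full-acquisition refresh: {savings:.0%}")  # 60% with these assumed prices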

We have purchased and deployed more than 220,000 new disaggregated servers in our data centers over the last five years, upgrading them along the way. The business-transforming power of this green technology was underscored even further this year: because of continued innovation acceleration at Intel, we needed to add one million cores to our HPC environment in 2021, growing from 2.3 million to 3.3 million. This 43 percent growth is considerably larger than our typical yearly growth in compute demand. Disaggregated servers are enabling us to replace our legacy 4-core Intel® Xeon® processors with newer, faster 8-core processors that deliver better performance and throughput within the same data center space and a similar power envelope.
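
The arithmetic behind these figures is straightforward; the sketch below checks the 43 percent growth and, under the simplifying assumption that everything else in the server stays the same, shows how doubling cores per processor (4 to 8) doubles compute density in the same footprint.

    # Core counts from the blog: 2.3 million growing to 3.3 million in 2021.
    cores_before, cores_after = 2_300_000, 3_300_000
    growth = (cores_after - cores_before) / cores_before
    print(f"Core-count growth: {growth:.0%}")  # ~43%

    # Simplified density comparison: 4-core vs. 8-core processors in the same
    # server slot (assumes all other factors unchanged).
    density_gain = 8 / 4
    print(f"Compute density per slot: {density_gain:.0f}x")  # 2x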

In a Constrained Environment, Green HPC Is the Answer

In 2022, disaggregated servers will help us manage a massive expansion in compute capacity. This matters especially in today’s constrained supply chain environment, where server costs are rising nearly 30 percent and infrastructure such as RAID controllers is in short supply, often delaying delivery by weeks or months. Disaggregated servers enable us to win on three fronts: total cost of ownership (TCO), time to deploy, and total cost to the environment.

Our data center strategy, including the adoption of disaggregated servers, is enabling us to surmount supply chain constraints while helping to decrease our total cost to the environment. And as Intel’s foundry business grows, our investment in millions of CPU cores, petabytes (PB) of storage, more than a half-billion network ports, and other business, technological, and operational transformations put us in the right place to support faster time to market and product innovation for both Intel and its foundry customers.

For more information, read the IT@Intel white paper, “Data Center Strategy Leading Intel’s Business Transformation.”

 

About the Author
Shesha Krishnapura is an Intel Fellow and chief technology officer in the Information Technology organization at Intel Corporation. He is responsible for advancing Intel data centers for energy and rack space efficiency, disaggregated server innovation and hardware designs, high-performance computing (HPC) for electronic design automation (EDA), and optimized platforms for enterprise computing. He is also responsible for fostering unified technical governance across IT, leading consolidated IT strategic research and pathfinding efforts, and advancing the talent pool within the IT technical community to help shape the future of Intel.

Shesha has led the introduction and optimization of Intel® architecture compute platforms in the EDA industry since 2001. He and his team have delivered five generations of HPC clusters and four supercomputers for Intel silicon design and device physics computation. Earlier in his Intel career, as director of software in the Intel Communications Group, he delivered the driver and protocol software stack for Intel’s Ethernet switch products. As an engineering manager in the Intel® Itanium® processor validation group, he led the development of commercial validation content that produced standardized workload and e-commerce scenarios for successful product launches. He joined Intel in 1991 and spent the early years of his Intel career with the Design Technology group.

A three-time recipient of the Intel Achievement Award, Shesha was appointed an Intel Fellow in 2016. His external honors include an InformationWeek Elite 100 award, an InfoWorld Green 15 award and recognition by the U.S. Department of Energy for industry leadership in energy efficiency. He has been granted several patents and has published more than 75 technical articles. Shesha holds a bachelor’s degree in electronics and communications engineering from University Visvesvaraya College of Engineering in Bangalore, India, and a master’s degree in computer science from Oregon State University. He is the founding chair of the EDA computing board of advisers that influences computer platform standards among EDA application vendors. He has also represented Intel as a voting member of the Open Compute Project incubation committee since its inception.