The server industry traditionally associates environmental stewardship with energy efficiency and water conservation. However, no conversation about green computing is complete without addressing the other elephant in the room: e-waste. Intel is taking on the challenge of e-waste with disaggregated server design, an innovation that is rooted in the merger of two concepts: total cost of ownership (TCO) and total cost to the environment (TCE).
Most people in data center management incorporate TCO into their growth strategy. But few understand or even consider TCE, particularly when it comes to updating data center infrastructure.
Historically, the server industry has focused on power usage effectiveness (PUE) and reducing water usage since both directly impact a company’s bottom line while also reducing environmental impact. At Intel, we’ve also done those things. Our innovative cooling design and high rack densities have helped us to reach unprecedentedly low PUEs and reduced our data center space footprint by 26 percent. But our commitment to TCE, as well as TCO, has forced us to look at e-waste.
Intel IT’s Lightbulb Moment
Intel IT refreshes data center servers every four years to take advantage of improvements in the Intel® Xeon® processor, every generation of which yields better performance per core and more memory per socket. While this aggressive schedule enables us to meet the current double-digit growth in demand, it also creates significant e-waste, since selective replacement of processors alone has not historically been an option. As a result, perfectly functioning chassis, cables, power supplies, network switches, fans, SSDs, and SAS drives are ripped out, even though many years of useful life may remain for these components.
To mitigate the costs and environmental impact of server refresh, we had to rethink our approach to server design, leading to Intel IT’s own “lightbulb moment.”
Several years back, the lighting industry experienced a number of closely timed efficiency innovations – taking consumers from traditional incandescent light bulbs to more efficient compact fluorescent lamps (CFLs), and then to the highly efficient and long-lasting light-emitting diode (LED) bulbs. The transition to more efficient light bulbs was not always straightforward. Manufacturers had to adjust lighting designs to make this transition simple and less costly for consumers. Ultimately, switching to more efficient lighting became as simple as screwing in a new lightbulb.
We decided to be equally innovative. We created the world’s first disaggregated server architecture, making it possible to independently refresh the CPU/DRAM module without touching adjacent components. After all, why discard perfectly good server components when they themselves do not change from one processor generation to the next? We simply decoupled the CPU/DRAM module from the NIC/drives module on the motherboard. Now, instead of spending hours on a refresh, we just remove a few screws, slide out the old CPU/DRAM module, and install the new one.
Positive Outcomes From Disaggregating Servers
Since introducing the first disaggregated server design in 2016, Intel IT has deployed more than 220,000 disaggregated servers, using 13 different motherboard designs. The benefits include:
- No need to replace perfectly good components.
- No need to reinstall the OS.
- A minimum of 44 percent reduction in refresh costs.
- A 77 percent reduction in technician time spent on refresh.
- Decrease in refresh materials’ shipping weight by 82 percent.
- A greater than 50 percent estimated reduction in e-waste.
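As a rough illustration, the percentage reductions above can be plugged into a simple back-of-the-envelope model. The baseline dollar figure below is a hypothetical placeholder, not an Intel number; only the percentages and the fleet size come from this article:

```python
# Back-of-the-envelope comparison: full server refresh vs. a
# disaggregated (CPU/DRAM-module-only) refresh.
# FULL_REFRESH_COST_PER_SERVER is a hypothetical placeholder value;
# the percentage reductions and fleet size are cited in the article.

FULL_REFRESH_COST_PER_SERVER = 1000.0   # hypothetical baseline, USD
COST_REDUCTION = 0.44                   # minimum 44% lower refresh cost
TECH_TIME_REDUCTION = 0.77              # 77% less technician time
SHIPPING_WEIGHT_REDUCTION = 0.82        # 82% lighter refresh shipments

def disaggregated_refresh_cost(full_cost: float,
                               reduction: float = COST_REDUCTION) -> float:
    """Cost of a module-only refresh, given a full-refresh baseline."""
    return full_cost * (1.0 - reduction)

# Fleet size cited in the article: 220,000+ disaggregated servers.
servers = 220_000
per_server_saving = (FULL_REFRESH_COST_PER_SERVER
                     - disaggregated_refresh_cost(FULL_REFRESH_COST_PER_SERVER))
print(f"Estimated fleet-wide savings: ${servers * per_server_saving:,.0f}")
```

With the placeholder baseline of $1,000 per server, a 44 percent reduction saves $440 per refresh, which compounds quickly across a fleet of this size.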
The implications for rapidly growing data centers are undeniable. Disaggregated servers offer enormous value to expanding data centers by allowing them to quickly and efficiently upgrade performance, cores, and memory without leaving literally tons of e-waste in their wake.
Selective replacement of components is an environmentally conscious decision that can lower TCE without harming Intel’s TCO, proving that environmental and profit initiatives can not only coexist within the server industry, but can even be mutually beneficial.
Read the IT@Intel White Paper, “Green Computing at Scale,” to learn more about how disaggregated servers, along with other data center design innovations, can transform your data center.