Connected World

Sustainability and Infrastructure

Jen Huffstetler

Jen Huffstetler, chief product sustainability officer and VP & GM of future platform strategy at Intel, takes time out of her schedule to talk with Peggy Smedley, editorial director and president of Specialty Publishing Media, about some of the biggest trends on the horizon for infrastructure and sustainability, identifying what comes next.

Smedley: What is next for infrastructure and sustainability?

Huffstetler: First, demand for compute is accelerating due to new AI (artificial intelligence) workloads. This need for performance requires processors and accelerators that generate more heat, which in turn requires advanced cooling solutions. Today’s cooling solutions are typically based on traditional air conditioning, which can account for up to 40% of a data center’s power consumption. In addition, water is often used to help with cooling, making data centers large consumers of water (e.g., Google used 5.2 billion gallons of water in 2022 for its data centers, a 20% increase over the amount it reported the year prior).

Liquid cooling, or the use of fluids to remove heat generated by the system, can reduce data center power consumption by up to 30% and, depending upon implementation, drastically reduce water consumption. Liquid cooling comes in two forms: DTC (direct-to-chip)/cold plate and immersion. Immersion cooling (where the entire server board is immersed in an inert fluid) is generally seen as an option for new data center builds (due to the building architecture required), whereas DTC/cold plate can be more easily retrofitted into existing data centers. Analysts have forecast a large, growing demand (more than 50% revenue growth) for liquid cooling technology over the next several years.

Second, modularity of server systems will come to market in the next several years and has the potential to reduce e-waste. Modular server design can amortize the embodied carbon footprint of components across more years of service. Industry specifications for both server components and accelerators are available now through the Open Compute Project. There is considerable momentum behind these new design standards, with the first server products available from Jabil and a host of servers being built to come to market with Intel’s next platform. Modular systems for the edge are also coming, built for short depth and optimized for environmentally constrained locations. Intel has modeled that implementing modularity on our platforms can reduce the carbon footprint by 27%; this was estimated using the PAIA (Product Attribute to Impact Algorithm) methodology, with industry partners providing estimates on carbon and energy.
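The amortization argument above can be illustrated with a minimal back-of-the-envelope sketch. All numbers below are hypothetical, chosen only to show the mechanics; real figures come from methodologies such as PAIA, and this calculation does not reproduce Intel’s 27% estimate.

```python
# Hypothetical illustration of amortizing embodied carbon across service life.
# Every number here is an assumption for the example, not a measured value.

def annual_embodied_footprint(embodied_kg_co2e: float, years_of_service: float) -> float:
    """Embodied carbon amortized per year of service."""
    return embodied_kg_co2e / years_of_service

# A monolithic server replaced wholesale after 4 years (assumed 1,500 kg CO2e embodied):
monolithic = annual_embodied_footprint(1500.0, 4)  # 375.0 kg CO2e/year

# A modular server: chassis, power delivery, and fans (assumed 60% of embodied
# carbon) serve 8 years, while the compute module (assumed 40%) is swapped at year 4.
modular = (annual_embodied_footprint(1500.0 * 0.6, 8)
           + annual_embodied_footprint(1500.0 * 0.4, 4))  # 112.5 + 150.0 = 262.5

print(f"monolithic: {monolithic:.1f} kg CO2e/yr, modular: {modular:.1f} kg CO2e/yr")
```

Under these assumed splits and lifetimes, the modular design’s annualized embodied footprint is roughly 30% lower, simply because the long-lived components spread their footprint over twice the service years.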

Third, focusing on IT-side efficiency will enable greater power savings. By examining how much IT power is useful power, optimizations can be made. For example, understanding the power lost transferring electricity from grid to rack, and the power drawn by fans for cooling, can reveal opportunities for improvement.

Fourth, we need reporting of attestable embodied footprints, so that no matter where you are in the product lifecycle or supply chain you have trusted, transparent access to this data. Currently, many estimators are available for determining the embodied footprint of the IT supply chain (such as PAIA), but the goal should be an actual number for the embodied footprint. This will require greater visibility, measurement, and reporting of information such as power intensity, run-time power consumed, and product embodied carbon.

Smedley: How can the data center community increase resiliency and reliability to contribute to meeting corporate sustainability goals?

Huffstetler: Typically, continuity of service is implemented in software, but it is enabled by hardware over-provisioning and redundancy. As resiliency and reliability increase, a reduction in over-provisioning should be possible. This will enable the total carbon footprint (as measured per unit of service capacity) to be reduced. Additionally, a long hardware life enables circularity and “second life” deployments, which amortize the embodied carbon footprint across more years of service. Coupled with modularity, overall resource consumption and the scope 3 impact associated with attaining those resources will be reduced.

Smedley: What types of financial benefits will we see as we increase energy efficiency and workloads?

Huffstetler: Increased energy efficiency will enable growth in service capacity or reduced costs. With the ever-increasing demand for compute, especially in the era of AI computing, more workloads will be able to be executed for a given unit of energy. Alternatively, if workload volume remains the same but runs more efficiently, the lower costs could be passed on to customers.

Smedley: What other benefits will we see?

Huffstetler: With visibility into and focus on IT-side efficiency, improvements in software optimization should also follow. Because there is wide variability in the compute resources used to execute similar workload functions across software stack implementations, making this variability visible will encourage “best in class” resource allocation. This will reduce the associated carbon footprint and add clarity and precision to planning for net-zero energy.

Smedley: What challenges will we face as we continue to make this transition?

Huffstetler: There are many challenges as we make this transition.

