Data centers are energy-intensive: the U.S. Department of Energy estimates that they collectively account for about 2% of total U.S. electricity use. But new technologies are helping to reduce that consumption. On the heels of CES, Intel held a launch event for its 4th Gen Intel Xeon Scalable processors, the Intel Xeon CPU Max Series, and the Intel Data Center GPU Max Series for HPC (high-performance computing) and AI (artificial intelligence), marking what the company calls one of its “most important product launches in company history.”
Intel says its new products will help customers reach new heights in data center performance, efficiency, and security, and will open the door to new capabilities for AI, the cloud, the network and edge, and supercomputers. Sandra Rivera, Intel’s executive vice president and general manager of the Data Center and AI Group, says the launch also marks an important moment for Intel as a company. “The launch of 4th Gen Xeon Scalable processors and the Max Series product family is a pivotal moment in fueling Intel’s turnaround, reigniting our path to leadership in the data center and growing our footprint in new arenas,” she says.
The 4th Gen Intel Xeon Scalable processors are more sustainable than their predecessors thanks to built-in accelerators that deliver platform-level power savings. Customers can expect a 2.9x average performance-per-watt efficiency improvement for targeted workloads when utilizing built-in accelerators, the company says. Plus, a new optimized power mode delivers up to 20% socket power savings, and Intel says this mode has less than a 5% performance impact for selected workloads. Per CPU, the new generation of Intel Xeon processors in optimized power mode can save 70 watts, lowering the TCO (total cost of ownership) by up to 66%.
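To put that per-CPU figure in perspective, the back-of-the-envelope sketch below converts a 70-watt-per-CPU saving into annual energy and cost savings. The fleet size, CPUs per server, electricity price, and PUE are illustrative assumptions, not figures from Intel’s announcement.

```python
# Rough estimate of fleet-level savings from the optimized power mode's
# ~70 W per-CPU reduction. Fleet size, electricity price, and PUE are
# hypothetical assumptions for illustration only.

WATTS_SAVED_PER_CPU = 70      # per-CPU saving cited for optimized power mode
SERVERS = 1_000               # hypothetical fleet size (assumption)
CPUS_PER_SERVER = 2           # typical dual-socket configuration (assumption)
HOURS_PER_YEAR = 8_760
PRICE_PER_KWH = 0.10          # assumed $/kWh; varies by region and contract
PUE = 1.5                     # assumed power usage effectiveness (cooling overhead)

it_kwh_saved = WATTS_SAVED_PER_CPU * SERVERS * CPUS_PER_SERVER * HOURS_PER_YEAR / 1_000
facility_kwh_saved = it_kwh_saved * PUE   # IT savings also reduce cooling load
annual_cost_saved = facility_kwh_saved * PRICE_PER_KWH

print(f"Facility-level energy saved: {facility_kwh_saved:,.0f} kWh/year")
print(f"Estimated cost savings: ${annual_cost_saved:,.0f}/year")
```

Under those assumptions, the savings come to roughly 1.8 million kWh and about $180,000 per year for a 1,000-server fleet; actual results depend heavily on workload mix and local energy prices.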
The new generation of Intel Xeon processors also unlocks new possibilities for customers leveraging AI. Compared to the previous generation, 4th Gen Intel Xeon processors achieve up to 10x higher PyTorch real-time inference and training performance, thanks to built-in Intel AMX (Advanced Matrix Extensions) accelerators. Intel says its second new product, the Intel Xeon CPU Max Series, expands these capabilities even further, specifically in the realm of NLP (natural language processing). In fact, the company cites a 20x increase in speed on large language models.
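In practice, AMX is typically exercised through bfloat16 (or int8) execution paths in the framework rather than called directly. The sketch below is one common pattern for bfloat16 inference on CPU; it assumes the separately installed intel_extension_for_pytorch package and a torchvision ResNet-50 as a placeholder model, neither of which comes from the article.

```python
# Minimal sketch of bfloat16 inference on CPU so PyTorch can dispatch to
# AMX-accelerated kernels where the hardware and oneDNN backend support them.
# Assumes torch, torchvision, and intel_extension_for_pytorch are installed;
# the model choice is a placeholder, not something named in the article.
import torch
import torchvision.models as models
import intel_extension_for_pytorch as ipex

model = models.resnet50(weights=None).eval()   # illustrative model only
example = torch.rand(1, 3, 224, 224)

# Let IPEX apply operator fusion and bfloat16-friendly optimizations.
model = ipex.optimize(model, dtype=torch.bfloat16)

with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    output = model(example)

print(output.shape)
```

The same pattern applies to real-time serving workloads; the magnitude of any speedup will, of course, depend on the model, batch size, and precision used.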
According to Verified Market Research, the HPC market will reach $65.12 billion by 2030, up from $34.85 billion in 2021, and Intel’s new products will be a factor in fueling this growth. The company says the 4th Gen Intel Xeon and Intel Max Series product family help solve challenging problems by offering a scalable, balanced architecture that “integrates CPU and GPU with oneAPI’s open software ecosystem for demanding computing workloads in HPC and AI.” Specifically, the Intel Xeon CPU Max Series is an x86-based processor with high-bandwidth memory that can accelerate HPC workloads without code changes. The Intel Data Center GPU Max Series is the company’s highest-density processor, packing 100 billion transistors into a 47-tile package, which Intel says will open the door to new levels of throughput on challenging workloads.
All of this represents a notable platform transformation from Intel and shows the company is looking to lead in this space. Security wasn’t overlooked, either. Intel says that with its 4th Gen Intel Xeon processors, it’s delivering “the most comprehensive confidential computing portfolio of any data center silicon provider in the industry.” For instance, the processors offer application isolation for data center computing with Intel SGX (Software Guard Extensions), as well as a new virtual-machine isolation technology.