Intel's high-core-count Xeon Platinum 9200 processors are getting new support from high-performance computing system builder Penguin Computing, whose new Open Compute Project server pushes compute density to as many as 7,616 CPU cores in a rack.
Penguin Computing said its new supercomputing platform, Tundra AP, "outperforms" standard servers using Intel's Xeon Platinum 9200 processors thanks to the platform's use of the Open Compute Project (OCP) form factor and its higher power efficiency, which allows for as many as 15 percent more nodes per rack.
Previously known under the code name Cascade Lake AP, Intel's Xeon Platinum 9200 processors provide the highest core counts of any Intel processor, with up to 56 cores. But their market is smaller than that of the rest of Intel's second-generation Xeon Scalable lineup, in part because the processors are sold as part of a compute board designed by Intel rather than as a stand-alone component.
William Wu, vice president of hardware products at Penguin Computing, told CRN that Tundra AP and the first server in the lineup, the Relion XO1122eAP, are designed for organizations that want to overcome the hurdles in standard Xeon Platinum 9200 servers caused by power limitations and maximize the number of nodes they can fit in a rack for high-performance computing (HPC) workloads.
"The current customers that are taking [Cascade Lake AP] in HPC, AI types of data centers, they're underutilized. They're currently underutilized in terms of space," he said. "This allows them to have full utilization of the resource and not have things go to waste."
Tundra AP is a 21-inch 1U server chassis that can fit two of Intel's S9200WK nodes, which each house two Xeon Platinum 9200 processors, for a total of up to 224 cores per server. Wu said Tundra AP has the same density as Intel's 19-inch 2U servers for Xeon Platinum 9200; what's different is that Tundra AP "has smaller Lego pieces," making it "easier to manage a rack for maximum configurability."
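The density figures above follow from straightforward multiplication; a minimal sketch of the arithmetic (the 34-servers-per-rack count is an inference from the 7,616-core figure, not something Penguin Computing stated directly):

```python
# Rack-density arithmetic for Tundra AP, per the figures in the article.
cores_per_cpu = 56        # top-end Xeon Platinum 9200 SKU
cpus_per_node = 2         # each Intel S9200WK node houses two CPUs
nodes_per_server = 2      # each 1U Tundra AP chassis fits two nodes

cores_per_server = cores_per_cpu * cpus_per_node * nodes_per_server
print(cores_per_server)   # 224 cores per 1U server

# The quoted 7,616 cores per rack implies 34 such servers.
servers_per_rack = 7616 // cores_per_server
print(servers_per_rack)   # 34
```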
Key to Tundra AP is the OCP's power disaggregation design, which removes the power supply unit from the rack server and moves the power sources to power shelves and centralized DC busbars, the latter of which are connected to the motherboard through a simplified power distribution board. This means Tundra AP servers save space and have fewer moving parts and easier thermal design compared with standard servers, according to Penguin Computing.
"The value is really once you start scaling out, you're not adjusting the power on the individual nodes," Wu said. "You're just addressing the power on the rack itself, so it simplifies the equation."
The power disaggregation made it easy for Penguin Computing to expand upon the direct-to-chip liquid cooling in Intel's S9200WK nodes with a rack-level manifold that "feeds into the hot components" to cover the maximum number of nodes, according to Wu.
"It's a win-win situation for Intel and for Penguin because we're serving a space that their existing product line may not be able to do because we're looking at the scale out. We're not looking at the individual node issue," he said.
The first Tundra AP systems are set to be delivered in September.