Intel: Sapphire Rapids With 64 GB of HBM2e, Ponte Vecchio with 408 MB L2 Cache
by Dr. Ian Cutress on November 15, 2021 9:00 AM EST
This week is the annual Supercomputing event, where all the major High Performance Computing players put their cards on the table when it comes to hardware, installations, and design wins. As part of the event, Intel is presenting on its hardware offerings, disclosing additional details about the next-generation hardware going into the Aurora Exascale supercomputer.
Aurora is a contract that Intel has had for some time. The original scope called for a 10nm Xeon Phi based system, an idea that was mothballed when Xeon Phi was scrapped, and the design has been an ever-changing landscape as Intel's hardware offerings have evolved. It was finalized a couple of years ago that the system would use Intel's Sapphire Rapids processors (the ones that come with High Bandwidth Memory) combined with new Ponte Vecchio Xe-HPC based GPU accelerators, boosting the target from several hundred PetaFLOPs to an ExaFLOP of compute. Most recently, Intel CEO Pat Gelsinger disclosed that the Ponte Vecchio accelerator is achieving double the performance of the original projections, and that Aurora will be a 2+EF supercomputer when built. Intel expects to deliver the first batch of hardware to Argonne National Laboratory by the end of the year, though this will come with a $300m write-off on Intel's Q4 financials. Intel expects to deliver the rest of the machine through 2022, as well as ramp production of the hardware for mainstream use through Q1 for a wider launch in the first half of the year.
Today we have additional details about the hardware.
On the processor side, we know that each unit of Aurora will feature two of Intel’s newest Sapphire Rapids CPUs (SPR), each with four compute tiles, DDR5, PCIe 5.0, and CXL 1.1 (not CXL.mem), liberally using EMIB connectivity between the tiles. Aurora will also be using SPR with built-in High Bandwidth Memory (SPR+HBM), and the main disclosure is that SPR+HBM will offer up to 64 GB of HBM2e using 8-Hi stacks.
Based on the representations, Intel intends to use four stacks of 16 GB HBM2e for a total of 64 GB. Intel has a relationship with Micron, and the Micron HBM2e physical dimensions are in line with the representations given in Intel’s materials (compared to, say, Samsung or SK Hynix). Micron currently offers two versions of 16 GB HBM2E with ECC hardware: one at 2.8 Gbps per pin (358 GB/s per stack) and one at 3.2 Gbps per pin (410 GB/s per stack). Overall we’re looking at a peak bandwidth of between 1.432 TB/s and 1.640 TB/s, depending on which version Intel is using. Versions with HBM will use an additional four tiles, to connect each HBM stack to one of SPR’s chiplets.
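For reference, the arithmetic is simple enough to sketch in a few lines of Python. The per-stack figures are Micron's quoted numbers from above; the four-stack total is inferred from Intel's representations rather than officially confirmed:

```python
# Peak HBM2e bandwidth for SPR+HBM, assuming four stacks of Micron 16 GB HBM2E
# (stack count inferred from Intel's diagrams, not officially confirmed).
stacks = 4
micron_grades = {2.8: 358, 3.2: 410}  # Gbps per pin -> GB/s per stack (Micron's figures)

for gbps_per_pin, per_stack_gbs in micron_grades.items():
    total_tbs = stacks * per_stack_gbs / 1000
    print(f"{gbps_per_pin} Gbps/pin: {per_stack_gbs} GB/s/stack x {stacks} stacks = {total_tbs:.3f} TB/s")
# 2.8 Gbps/pin -> 1.432 TB/s total; 3.2 Gbps/pin -> 1.640 TB/s total
```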
Based on this diagram from Intel, and despite the company stating that SPR+HBM will share a socket with traditional SPR, it’s clear that there will be versions that are not compatible. This may be an instance where the Aurora versions of SPR+HBM are tuned specifically for that machine.
On the Ponte Vecchio (PVC) side of the equation, Intel has already disclosed that a single server inside Aurora will have six PVC accelerators per two SPR processors. The accelerators will be connected to each other in an all-to-all topology using the new Xe-Link protocol built into each PVC. Xe-Link supports eight GPUs in fully connected mode, so with Aurora needing only six, there are links to spare, saving power. It has not been disclosed how the GPUs connect to the SPR processors, though Intel has stated that there will be a unified memory architecture between CPU and GPU.
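As a quick sanity check on that link budget, here is a minimal counting sketch (link counts only; per-link Xe-Link bandwidth has not been disclosed):

```python
# All-to-all topology link budget for one Aurora node (counts only;
# per-link Xe-Link bandwidth has not been disclosed by Intel).
gpus = 6
xe_links_per_gpu = 8                          # Xe-Link supports 8 fully connected peers

links_used_per_gpu = gpus - 1                 # one direct link to every other GPU
total_links = gpus * links_used_per_gpu // 2  # each physical link joins two GPUs
spare_per_gpu = xe_links_per_gpu - links_used_per_gpu

print(f"{links_used_per_gpu} links/GPU, {total_links} links total, {spare_per_gpu} spare/GPU")
# -> 5 links/GPU, 15 links total, 3 spare/GPU
```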
The insight added today by Intel is that each Ponte Vecchio dual-stack implementation (the diagram Intel has shown repeatedly is two stacks side by side) will feature a total of 64 MB of L1 cache and 408 MB of L2 cache, backed by HBM2e.
408 MB of L2 cache across two stacks means 204 MB per stack. If we compare that to other hardware:
- NVIDIA A100 has 40 MB of L2 cache
- AMD’s Navi 21 has 128 MB of Infinity Cache (an effective L3)
- AMD’s CDNA2 MI250X in Frontier has 8 MB of L2 per ‘stack’, or 16 MB total
Whichever way you slice it, Intel is betting hard on having the right cache hierarchy for PVC. Diagrams of PVC also show four HBM2e stacks per half, eight in total, which at 16 GB per stack suggests that each PVC dual-stack design might have 128 GB of HBM2e. It is likely that none of them are ‘spare’ for yield purposes, as a chiplet-based design allows Intel to build PVC using known good die from the beginning.
On top of this, we also get an official number for how many Ponte Vecchio GPUs and Sapphire Rapids (+HBM) processors Aurora requires. Back in November 2019, when Aurora was only listed as a 1EF supercomputer, I crunched some rough numbers based on Intel saying Aurora was 200 racks, and with educated guesses on the layout I got to 5,000 CPUs and 15,000 GPUs, with each PVC needing around 66.6 TF of performance. At the time, Intel was already showing off 40 TF of performance per card on early silicon. Intel’s official numbers for the Aurora 2EF machine are:
18,000+ CPUs and 54,000+ GPUs is a lot of hardware. But dividing 2 ExaFLOPs by 54,000 PVC accelerators comes to only ~37 TeraFLOPs per PVC as an upper bound, and that number assumes zero performance coming from the CPUs.
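For clarity, here is that division spelled out, generously crediting all of the 2 EF to the GPUs:

```python
# Upper bound on per-GPU FP64 throughput, assuming zero contribution
# from Aurora's CPUs (a deliberately generous credit to the GPUs).
system_fp64 = 2e18      # 2 ExaFLOPs target
pvc_count = 54_000      # Intel's official "54,000+" figure

per_gpu_tf = system_fp64 / pvc_count / 1e12
print(f"~{per_gpu_tf:.0f} TF per PVC upper bound")   # -> ~37 TF
```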
To add into the mix: Intel CEO Pat Gelsinger said only a couple of weeks ago that PVC was coming in at double the performance originally expected, allowing Aurora to be a 2EF machine. Does that mean the original performance target for PVC was ~20 TF of FP64? Apropos of nothing, AMD’s MI250X announcement last week showcased a dual-GPU chip with 47.9 TF of FP64 vector performance, rising to 95.7 TF of FP64 matrix performance. The end result here might be that AMD’s MI250X actually offers higher raw performance than PVC; however, AMD requires 560 W for that card, whereas Intel’s power numbers have not been disclosed. We can do some napkin math here as well (sketched in code after the list below).
- Frontier uses 560 W MI250X cards and is rated for 1.5 ExaFLOPs of FP64 vector at 30 MW of power. This means Frontier needs ~31,300 cards (1.5 EF / 47.9 TF) to meet its performance target, and for each 560 W MI250X card, Frontier has allocated 958 W of power (30 MW / 31,300 cards). That is a 71% overhead per card (covering cooling, storage systems, other compute/management, etc.).
- Aurora uses PVC at an unknown power and is rated for 2 ExaFLOPs of FP64 vector at 60 MW of power. We know that Aurora needs 54,000+ cards to meet its performance target, which means the system has allocated roughly 1,111 W (that’s 60 MW / 54,000) per card to cover the PVC accelerator and all other overheads. If we were to assume (a big assumption, I know) that Frontier and Aurora have similar overheads, then we’re looking at around 650 W per PVC.
- This would put PVC at ~650 W for 37 TF, against MI250X at 560 W for 47.9 TF.
- This raw comparison also ignores the specific features each card offers for its use case.
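Here is that napkin math as a short Python sketch; the one big assumption, flagged in the comments, is that Aurora carries the same per-card overhead ratio as Frontier:

```python
# Napkin math from the bullets above. Frontier's figures are public;
# Aurora's per-card power is the unknown we back out.
frontier_fp64  = 1.5e18   # FP64 vector FLOPs
mi250x_fp64    = 47.9e12  # FP64 vector FLOPs per MI250X card
frontier_power = 30e6     # watts
mi250x_power   = 560      # watts per card

cards = frontier_fp64 / mi250x_fp64         # ~31,300 cards
watts_per_card = frontier_power / cards     # ~958 W allocated per card
overhead = watts_per_card / mi250x_power    # ~1.71x, i.e. ~71% overhead

aurora_power = 60e6
pvc_cards = 54_000
aurora_watts_per_card = aurora_power / pvc_cards   # ~1,111 W allocated per card

# Big assumption: Aurora's overhead ratio matches Frontier's.
est_pvc_power = aurora_watts_per_card / overhead   # ~650 W per PVC

print(f"Frontier: {cards:,.0f} cards, {overhead:.2f}x overhead")
print(f"Aurora: {aurora_watts_per_card:,.0f} W/card allocated, ~{est_pvc_power:.0f} W per PVC")
```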
Compute GPU Accelerator Comparison

| Product | Ponte Vecchio | MI250X | A100 80GB |
|---|---|---|---|
| Transistors | 100 B | 58.2 B | 54.2 B |
| Tiles (inc. HBM) | 47 | 10 | 6 + 1 spare |
| Compute Units | 128 | 2 x 110 | 108 |
| Matrix Cores | 128 | 2 x 440 | 432 |
| INT8 Tensor | ? | 383 TOPS | 624 TOPS |
| FP16 Matrix | ? | 383 TFLOPS | 312 TFLOPS |
| FP64 Vector | ? | 47.9 TFLOPS | 9.5 TFLOPS |
| FP64 Matrix | ? | 95.7 TFLOPS | 19.5 TFLOPS |
| L2 / L3 | 2 x 204 MB | 2 x 8 MB | 40 MB |
| VRAM Capacity | 128 GB (?) | 128 GB | 80 GB |
| VRAM Type | 8 x HBM2e | 8 x HBM2e | 5 x HBM2e |
| VRAM Bandwidth | ? | 3.2 TB/s | 2.0 TB/s |
| Chip-to-Chip Total BW | 8 x ? | 8 x 100 GB/s | 12 x 50 GB/s |
| CPU Coherency | Yes | With IF | With NVLink 3 |
| Manufacturing | Intel 7 + TSMC N7 + TSMC N5 | TSMC N6 | TSMC N7 |
| Form Factors | OAM | OAM (560 W) | SXM4 (400 W*) |

*Some custom deployments go up to 600 W.
Intel also disclosed that it will be partnering with SiPearl to deploy PVC hardware in European HPC efforts. SiPearl is currently building an Arm-based CPU called Rhea, built on TSMC N7.
Moving forward, Intel also released a mini-roadmap. Nothing too surprising here: Intel has plans for designs beyond Ponte Vecchio, and future Xeon Scalable processors will also have HBM-enabled options.
Related Reading:

- Intel's Aurora Supercomputer Now Expected to Exceed 2 ExaFLOPS Performance
- Intel Teases Ponte Vecchio Xe-HPC Power On, Posts Photo of Server Chip
- Analyzing Intel’s Discrete Xe-HPC Graphics Disclosure: Ponte Vecchio, Rambo Cache, and Gelato
- Intel’s 2021 Exascale Vision in Aurora: Two Sapphire Rapids CPUs with Six Ponte Vecchio GPUs
- Intel’s Xe for HPC: Ponte Vecchio with Chiplets, EMIB, and Foveros on 7nm, Coming 2021
- Bringing Geek Back: Q&A with Intel CEO Pat Gelsinger
- Intel Architecture Day 2021: A Sneak Peek At The Xe-HPG GPU Architecture
- Intel to Launch Next-Gen Sapphire Rapids Xeon with High Bandwidth Memory
- Intel’s Xeon & Xe Compute Accelerators to Power Aurora Exascale Supercomputer
- SiPearl Lets Rhea Design Leak: 72x Zeus Cores, 4x HBM2E, 4-6 DDR5
- AMD Announces Instinct MI200 Accelerator Family: Taking Servers to Exascale and Beyond
Comments
nandnandnand - Monday, November 15, 2021: It's fair to consider the 64 GB HBM2e an L4 cache, right?
p1esk - Monday, November 15, 2021: No.
onewingedangel - Monday, November 15, 2021: It can operate either as an L4 cache in front of the DDR5 memory pool or as the main memory pool without any DDR5.
saratoga4 - Monday, November 15, 2021: Since the access times of the DDR5 and the HBM are similar, it can't function like a conventional cache that aims to hide memory latency (or at least that would be pointless, like using one DIMM to cache for another). Instead the HBM gives you a lot of extra memory channels that have relatively less RAM per channel. Ideally the programmer chooses where data goes. If not, the system can try to guess, putting frequently accessed data in the smaller (HBM) channels and infrequently used data in the larger capacity channels.
JMaca - Monday, November 15, 2021: Better tell that to Intel, seeing as they say it can be used as a cache for DDR5.
saratoga4 - Monday, November 15, 2021: Yes, see the last sentence of the post you replied to.
JayNor - Monday, November 15, 2021: Not really pointless, since it adds 8 memory channels per stack... so a lot more bandwidth, as if you added 32 memory channels that can be used in parallel until your program needs more than 64 GB.
JayNor - Monday, November 15, 2021: Smaller memory channels? I believe they are 128 bits wide, and there are 8 per stack.
blanarahul - Monday, November 15, 2021: The channels definitely aren't smaller.
Dual channel RAM is 128 bit (each channel being 64 bit wide). Eight channel RAM will be 512 bit wide. 4096 bit wide HBM = 64 channels of RAM in terms of bandwidth.
0ldman79 - Tuesday, December 14, 2021: Latency vs bandwidth, though; having 128 GB of L4 at ~1.5 TB/s bandwidth is better than not having it, even if the latency only matches DDR5.