Just shy of a year ago, SK Hynix threw their hat into the ring, as it were, by becoming the second company to announce memory based on the HBM2E standard. Now the company has announced that their improved high-speed, high-density memory has gone into mass production, offering transfer rates of up to 3.6 Gbps/pin and capacities of up to 16GB per stack.

As a quick refresher, HBM2E is a small update to the HBM2 standard to improve its performance, serving as a mid-generational kicker of sorts that allows for higher clockspeeds, higher densities (up to 24GB with 12 layers), and the underlying changes required to make those happen. Samsung was the first memory vendor to ship HBM2E with their 16GB/stack Flashbolt memory, which runs at up to 3.2 Gbps in-spec (or 4.2 Gbps out-of-spec). This in turn has led to Samsung becoming the principal memory partner for NVIDIA’s recently launched A100 accelerator, which uses Flashbolt memory.

Today’s announcement by SK Hynix means that the rest of the HBM2E ecosystem is taking shape, and that chipmakers will soon have access to a second supplier for the speedy memory. As per SK Hynix’s initial announcement last year, their new HBM2E memory comes in 8-Hi, 16GB stacks, which is twice the capacity of their earlier HBM2 memory. Meanwhile, the memory is able to clock at up to 3.6 Gbps/pin, which is actually faster than the “just” 3.2 Gbps/pin that the official HBM2E spec tops out at. So like Samsung’s Flashbolt memory, it would seem that the 3.6 Gbps data rate is essentially an optional out-of-spec mode for chipmakers who have HBM2E memory controllers that can keep up with the memory.

At those top speeds, this gives a single 1024-pin stack a total of 460GB/sec of memory bandwidth, which rivals (or exceeds) the total bandwidth of most video cards today. And for more advanced devices that employ multiple stacks (e.g. server GPUs), this means a 6-stack configuration could reach as high as 2.76TB/sec of memory bandwidth, a massive amount by any measure.
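
For anyone who wants to sanity-check those figures, the arithmetic is simple: peak bandwidth is just the pin count multiplied by the per-pin data rate, divided by eight to convert bits to bytes. Below is a minimal illustrative sketch in Python (not any vendor tool, just back-of-the-envelope math) that reproduces the per-stack and 6-stack numbers quoted above.

```python
# Back-of-the-envelope HBM2E bandwidth math (illustrative only).

def stack_bandwidth_gb_s(pins: int = 1024, data_rate_gbps: float = 3.6) -> float:
    """Peak bandwidth of one HBM2E stack in GB/sec (1 GB = 10^9 bytes)."""
    return pins * data_rate_gbps / 8  # divide by 8 bits per byte

per_stack = stack_bandwidth_gb_s()     # 460.8 GB/sec per stack ("460GB/sec")
six_stacks = 6 * per_stack / 1000      # 2.7648, i.e. ~2.76 TB/sec for 6 stacks

print(f"Per stack:  {per_stack:.1f} GB/sec")
print(f"Six stacks: {six_stacks:.2f} TB/sec")
```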

Finally, for the moment SK Hynix isn’t announcing any customers, but the company expects the new memory to be used in “next-generation AI (Artificial Intelligence) systems including Deep Learning Accelerator and High-Performance Computing.” An eventual second source for NVIDIA’s A100 would be among the most immediate use cases for the new memory, though NVIDIA is far from the only vendor to use HBM2. If anything, SK Hynix is typically very close to AMD, which is due to launch some new server GPUs over the next year for use in supercomputers and other HPC systems. So one way or another, the era of HBM2E is quickly ramping up, as more and more high-end processors are set to be introduced using the faster memory.

Source: SK Hynix

Comments

  • TheReason8286 - Thursday, July 2, 2020 - link

    I'm sorry, but that doesn't even sound right. You really think Intel is about to push the industry forward like that? I'd wager AMD does it first. Intel is all about the status quo more than the others.
  • jeremyshaw - Friday, July 3, 2020 - link

    It's AMD. HBM + APU has been a blindingly obvious thing for a long, long time now. Intel even coaxed AMD to make a special HBM GPU for them on Kaby Lake G (quad-core Intel SoC + AMD "Vega-not-Vega" GPU with HBM2 memory, all three on a single package substrate). Since then, Apple has made a habit of collecting special HBM2 GPUs from AMD that never see the market in non-Apple laptops.

    Yet AMD refuses to release any HBM APU. We are on the third generation of Zen + Vega APUs. They were even willing to make a Zen + Vega APU with faster GDDR5 memory controllers for some small Chinese company, so that company could make a custom PC exclusively for the Chinese domestic market. That special APU had a 256-bit GDDR5 memory bus and more than twice as many CUs as any other Zen + Vega APU (24 in this one), plus more than 3 times the memory bandwidth of any Zen + Vega APU on the market (256GB/s vs Renoir's 68.3GB/s).

    Renoir finally provides an uplift in the memory department, only to see AMD regress in CU count. "Oh, but the individual CUs are faster," yet all we get are minimal GPU gains (sometimes minimal regressions, too) over the previous gen. Why? Fewer CUs, that's why. Having faster CUs doesn't mean much if their total number is cut down to the point where Nvidia's old MX150, which predates Zen/Ryzen altogether, is still a legitimate competitor against it. 14nm tech from 2017 should never be competitive against the 3rd/4th iteration of AMD 7nm products from 2020 (AMD had 7nm Vega in 2018, 7nm Zen2 in 2019, 7nm RDNA in 2019, and now another 7nm Vega [APU] in 2020).

    Never have I seen a company with such a collection of talent, leadership, and knowledge fight so hard to eke out critical advantages in key areas, just to piss them away, time and time again.
  • brantron - Friday, July 3, 2020 - link

    You are pretty much just complaining that people have uses for integrated graphics and graphics cards. Good luck with that one.

    If the combined efforts of Intel, AMD, Dell, HP, et al. couldn't turn Kaby Lake G into a de facto standard, that should tell you all you need to know about the odds of AMD pulling that off all by their lonesome.
  • jeremyshaw - Friday, July 3, 2020 - link

    @brantron
    "Combined efforts" = no driver support whatsoever.

    I highly doubt your POV, when it's clear as day it's not a combined effort. I stand by my view of Intel coaxing AMD to make a custom GPU for them. AMD didn't care that they just proved the idea of HBM in mobile installations. Intel wasn't about to share the love on their platform. Dell and HP both went out of their way to design new laptops altogether for that package.

    HP proved the laptop they designed could handle a small-footprint dGPU. So what did AMD do? Wait until Apple wanted one, to make sure it would only stay exclusive to Apple. Another wasted opportunity to take point and lead, instead of following Apple and staying a subservient dog. Problem is, Apple is eyeing their own newborn, and they will put down their old dog sooner or later.

    Again, AMD has all of the necessary technology and has been part of taking the lead in many critical partnerships. They are just wasting away their tech lead, letting others take the credit, and sapping away their advantages. It wouldn't surprise me if Intel made it to market first with an HBM APU.
  • Deicidium369 - Sunday, July 5, 2020 - link

    by coaxing, you mean asking them to produce a custom part that would be sold in quantities that AMD has never sold before.
  • quorm - Friday, July 3, 2020 - link

    I think part of the reason AMD reduced the CUs in Renoir is limited memory bandwidth.

    As far as laptops go, I wonder how big of an advantage physically separating the two major heat sources via discrete graphics is. It seems that even if the APU bandwidth issue were addressed via HBM, this would be a limiting factor. So perhaps this is yet another reason such APUs are still not being made.
  • Fulljack - Friday, July 3, 2020 - link

    It's not that simple in the mobile space. Currently, AMD's APU strategy is to focus on mobile first and then bin the same silicon for desktop. That leaves the design constrained by die size, PPA, and efficiency, which are the design rules for laptops with limited thermal headroom but not really a problem on desktop.

    Also, increasing the CU count doesn't always show a consistent performance gain, especially when Vega is bandwidth-starved on DDR4 memory.

    If you look at the Renoir die, the NGCUs are smaller than the CCXs, unlike Picasso where it's the other way around. I think it shows that AMD is focusing on their 25x20 goal here, making it far more efficient than its predecessor rather than brute-forcing performance gains through clock speed like Intel's 14nm+++.
  • Deicidium369 - Sunday, July 5, 2020 - link

    Mid-range cards should not get HBM. AMD produces mid-range cards.

    "Never had I seen a company with such a collection of talent, leadership, and knowledge, fight so hard to eek out critical advantages in key areas, just to piss them away, time and time again." I guess you are too young to remember Atari and Commodore.
  • Deicidium369 - Sunday, July 5, 2020 - link

    When you are the goat at the top of the hill, the status quo is fine - it's called Winning.
  • bananaforscale - Thursday, July 2, 2020 - link

    "This in turn has led to Samsung becoming the principle memory partner"
    Principal. As in main. Principle is something you follow in actions.
