CPU Tests: Microbenchmarks

Core-to-Core Latency

As the core counts of modern CPUs grow, we are reaching a point where the latency to access one core from another is no longer constant. Even before the advent of heterogeneous SoC designs, processors built on large rings or meshes could have different latencies for accessing the nearest core compared to the furthest core. This rings especially true in multi-socket server environments.

But modern CPUs, even desktop and consumer CPUs, can have variable access latency to get to another core. For example, the first-generation Threadripper CPUs had four chips on the package, each with eight cores, and a different core-to-core latency depending on whether the access was on-die or off-die. This gets more complex with products like Lakefield, which has two different communication buses depending on which core is talking to which.

If you are a regular reader of AnandTech’s CPU reviews, you will recognize our Core-to-Core latency test. It’s a great way to show exactly how groups of cores are laid out on the silicon. This is a custom in-house test built by Andrei, and we know there are competing tests out there, but we feel ours is the most accurate representation of how quickly an access between two cores can happen.

In terms of the core-to-core tests on the Tiger Lake-H 11980HK, it’s best to compare results 1:1 against a 4-core Tiger Lake design such as the Core i7-1185G7:

What’s very interesting in these results is that although the new 8-core design features double the cores (meaning a larger ring bus with more ring stops and cache slices), the core-to-core latencies are actually lower, in both best-case and worst-case results, compared to the 4-core Tiger Lake chip.

This is a bit perplexing: generally, the explanation for such a difference would be either higher CPU frequencies, or a faster clock or lower cycle latency for the L3 and the ring bus. Given that TGL-H comes eight months after TGL-U, it is plausible that the newer chip has a more mature implementation, and that Intel was able to optimise access latencies.

Due to AMD’s recent shift to an 8-core core complex, Intel no longer has an advantage in core-to-core latencies this generation, and AMD’s more hierarchical cache structure and interconnect fabric is able to showcase better performance.

Cache & DRAM Latency

This is another in-house test built by Andrei, which showcases the access latency at all the points in the cache hierarchy for a single core. We start at 2 KiB, and probe the latency all the way through to 256 MB, which for most CPUs sits inside the DRAM (before you start saying 64-core TR has 256 MB of L3, it’s only 16 MB per core, so at 20 MB you are in DRAM).

Part of this test helps us understand the range of latencies for accessing a given level of cache, but also the transition between the cache levels gives insight into how different parts of the cache microarchitecture work, such as TLBs. As CPU microarchitects look at interesting and novel ways to design caches upon caches inside caches, this basic test proves to be very valuable.

What’s of particular note for TGL-H is the fact that the new higher-end chip does not support LPDDR4, instead relying exclusively on DDR4-3200, as in this reference laptop configuration. This does favour the chip in terms of memory latency, which now falls in at a measured 101ns versus 108ns on the reference TGL-U platform we tested last year, but it comes at a cost in memory bandwidth, which now reaches a theoretical peak of only 51.2GB/s instead of 68.2GB/s – even with double the core count.

What’s in favour of the TGL-H system is the increased L3 cache, from 12MB to 24MB – this is still 3MB per core slice as on TGL-U, so it does come with the newer L3 design which was introduced in TGL-U. Nevertheless, we do see some differences in the L3 behaviour: the TGL-H system has slightly higher access latencies at the same test depth than the TGL-U system, even accounting for the fact that the TGL-H CPUs are clocked slightly higher and have better L1 and L2 latencies. This is an interesting contradiction in the context of the improved core-to-core latency results we just saw, which suggests that for the latter Intel did make some changes to the fabric. Furthermore, we see flatter access latencies across the L3 depth, which isn’t how the TGL-U system behaved, meaning Intel has definitely made some changes to how the L3 is accessed.



Comments

  • vyor - Monday, May 17, 2021 - link

    Worst case for Zen3 is matching Zen2. That's the *worst* case. Name a single actual workload it's slower in.
  • Otritus - Monday, May 17, 2021 - link

    Vermeer was consistently faster than Matisse, but Milan was not consistently faster than Rome. Cezanne is faster than Renoir in all but 1 subtest. All 3 comparisons are Zen 3 vs Zen 2. Also, SPEC isn't an actual workload by the standard of being something people run for work or entertainment. It's just a series of industry-standard benchmarks to evaluate the performance of processors. In all of the real workloads Cezanne wins.
  • mode_13h - Monday, May 17, 2021 - link

    > Milan was not consistently faster than Rome.

    Because the IO die is consuming too much power @ the higher frequency it uses in Milan. Not due to the cores, themselves.
  • Bagheera - Tuesday, May 18, 2021 - link

    Rocket Lake was well loved.... by whom? It was universally panned by reviewers. The lower-end sub-$300 i5 may be good value, but that's about it. The high-end parts not only lose to Zen 3 but lose even to CML in some cases.
  • Makste - Monday, May 31, 2021 - link

    Take a sarcasm 😉
  • Hifihedgehog - Monday, May 17, 2021 - link

    Exactly. Generally, I find the results here very accurate, but that needs serious attention.
  • vyor - Monday, May 17, 2021 - link

    I find that his SPEC testing has gotten worse and worse over the years. Andrei honestly just doesn't know how to use the suite, and it almost always makes some parts look better than others when they really shouldn't be. Just look at the M1 tests for that, where the single-thread perf in SPEC vastly exceeds that seen in any other test.
  • Andrei Frumusanu - Monday, May 17, 2021 - link

    You're welcome to demonstrate what is flawed with actual technical arguments.

    The M1 exceeds because it's that good, we're missing it in many other benchmarks simply because they aren't ported to macOS or currently don't have data on them.
  • vyor - Monday, May 17, 2021 - link

    "it's just that good" except in every single case it isn't.

    Name a single workload where the spec results line up with application performance.
  • Ppietra - Monday, May 17, 2021 - link

    Single thread performance seems to align quite well with other tasks!
    Look at Cinebench single thread performance. Look at compiling performance. Look at javascript performance, etc, etc!
