Cortex-A55 Microarchitecture

The Cortex-A55 is the next CPU microarchitecture in ARM’s Cambridge family of low-power, small-footprint cores, which also includes the A5, A7, and A53. This new little core is an evolution of the A53, so the two cores share much in common. Using the A53 as the starting point makes sense—it has proven to be a successful design—but as the big cores evolved over time, the performance gap between the big cores and the A53 grew, altering the balance in ARM’s big.LITTLE scheme. And as ARM continues its push into new markets beyond mobile, it needs new features that the A53 lacks. The A55 addresses the performance issue with improvements to the memory system and other microarchitectural changes and adds key features by moving from the ARMv8.0 architecture to ARMv8.2.

At a high level, the A55 is still a dual-issue, in-order CPU with an 8-stage pipeline. According to ARM, 8 stages is still the sweet spot, because it’s not seeing significant frequency improvements when moving from 16/14nm to 10nm to 7nm (most of the process gains are with area scaling and reduced dynamic/leakage power). With 8 stages, the A55 should reach a similar peak frequency as A53. Moving to a shorter pipeline would reduce the max frequency without a significant improvement to power or area, while a longer pipeline would increase area and power consumption for only a small frequency gain.

The similarities between the two cores are evident when examining the diagrams above. There’s still a 2-wide decode stage that decodes most instructions in a single cycle. Another feature that carries over from A53 is symmetric dual-issue for most instructions, meaning both issue slots can feed instructions to any pipeline. We’ll cover the execution pipelines in more detail below, but the big change here is that the A55 moves to independent load and store AGUs that can perform loads and stores in parallel instead of a single, combined AGU like the A53.

The A53’s core already provides good throughput, but not having instructions or data ready to process, perhaps because of a mispredicted branch or a cache miss, nullifies this advantage. Without the ability to execute instructions out of order while the CPU waits to fetch an instruction or data from elsewhere in the memory hierarchy, the entire core can stall (all other instructions need to wait for the current instruction to finish), so keeping an in-order core fed with instructions and data is critical. This is why ARM focused heavily on improving the A55’s memory system performance.

Starting on the instruction side, the L1 I-cache is now 4-way set associative versus 2-way for the A53. It’s still VIPT (Virtually Indexed, Physically Tagged), which is commonly used for L1 caches as it reduces latency, but there’s a larger 15-entry L1 TLB that supports multiple page sizes. This simple-sounding change actually required a significant amount of work to alleviate issues with timing pressure. The size of the L1 I-cache is configurable with options for 16KB, 32KB, or 64KB, which is similar to the A53 that offers a range from 8KB to 64KB.

It seems that every new CPU microarchitecture comes with a new branch predictor, and the A55 is no exception. The new conditional predictor increases prediction accuracy by using neural network based algorithms. It also adds loop termination prediction, which avoids a mispredict occurring at the end of a loop. There are also new 0-cycle micro-predictors ahead of the main conditional predictor. These are not as accurate as the main predictor, but their ability to perform predictions back to back to back provides a significant performance uplift by reducing pipeline bubbles even in tight loops. There’s also an indirect predictor that’s only used when necessary, reducing the power penalty because indirect branches do not occur as frequently. This includes a 256-entry BTAC (Branch Target Address Cache).
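To make the loop termination idea concrete, here is a minimal sketch of how such a predictor can work. This is a hypothetical model for illustration, not ARM's actual design: the predictor remembers the trip count of a loop branch from its previous execution and predicts "exit" once the running iteration count reaches that value, so a trained loop no longer mispredicts on its final iteration.

```python
class LoopTerminationPredictor:
    """Toy loop termination predictor (illustrative model, not ARM's design)."""

    def __init__(self):
        self.trained_count = {}   # branch PC -> learned trip count
        self.current_count = {}   # branch PC -> iterations seen this run

    def predict(self, pc):
        seen = self.current_count.get(pc, 0)
        trained = self.trained_count.get(pc)
        # Predict taken (loop again) unless we've reached the learned trip count.
        return not (trained is not None and seen + 1 >= trained)

    def update(self, pc, taken):
        self.current_count[pc] = self.current_count.get(pc, 0) + 1
        if not taken:  # loop exited: remember the trip count, reset for next run
            self.trained_count[pc] = self.current_count[pc]
            self.current_count[pc] = 0

def run_loop(pred, pc, trips):
    """Run one execution of a loop and count branch mispredicts."""
    mispredicts = 0
    for i in range(trips):
        taken = (i + 1) < trips   # backwards branch is taken until the last trip
        if pred.predict(pc) != taken:
            mispredicts += 1
        pred.update(pc, taken)
    return mispredicts

pred = LoopTerminationPredictor()
print(run_loop(pred, pc=0x1000, trips=8))  # untrained: mispredicts the exit -> 1
print(run_loop(pred, pc=0x1000, trips=8))  # trained: exit predicted -> 0
```

A plain "predict taken" scheme eats one mispredict per loop exit; once trained, this model eats none, which is the bubble the A55's predictor is meant to eliminate.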

Switching over to the data path, the A55 includes an improved data prefetcher that delivers higher bandwidth. It’s capable of detecting more complex cache miss patterns and can prefetch to the L1 or L3 caches. ARM expects these improvements to make a noticeable impact on mobile device UI performance.

The L1 D-cache is still 4-way set associative like the A53, but is now fully-exclusive instead of pseudo-exclusive, which saves area because data is not duplicated in the L2 cache. The size of the L1 D-cache is also configurable with 16KB, 32KB, or 64KB options, which is also similar to the A53’s range from 8KB to 64KB.

Another big change for the L1 D-cache is moving from a PIPT (Physically Indexed, Physically Tagged) to a VIPT access scheme. This reduces cache latency because the cache index lookup happens in parallel with the TLB translation, but also creates aliasing issues where several virtual addresses might reference the same physical address. The A55 deals with aliasing in hardware, however, making the VIPT cache appear like a PIPT cache to the programmer.
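The aliasing hazard falls out of simple arithmetic: any set-index bits that sit above the page offset come from the virtual address, so one physical line can map to multiple cache indices. A quick sketch using the A55's configurable L1 sizes from the text (64-byte lines and 4KB pages are assumptions for illustration):

```python
def vipt_alias_bits(cache_bytes, ways, line_bytes=64, page_bytes=4096):
    """Number of set-index bits above the page offset in a VIPT cache."""
    sets = cache_bytes // (ways * line_bytes)
    index_plus_offset_bits = (sets * line_bytes).bit_length() - 1
    page_offset_bits = page_bytes.bit_length() - 1
    # Each physical line can live under 2**alias_bits different indices.
    return max(0, index_plus_offset_bits - page_offset_bits)

for kb in (16, 32, 64):
    print(f"{kb}KB 4-way -> {vipt_alias_bits(kb * 1024, 4)} alias bit(s)")
```

Under these assumptions a 16KB 4-way cache has zero alias bits (index fits inside the page offset), while the 32KB and 64KB options have one and two respectively, which is why the larger configurations need the hardware alias handling the text describes.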

Further enhancements include a larger 16-entry micro-TLB for the L1 D-cache, a significant increase over the A53’s 10-entry micro-TLB, and a larger store buffer with higher bandwidth into the L1 to better cope with workloads that generate a lot of stores. ARM also reduced the L1 pointer chasing load-to-use latency from 3 cycles in A53 to 2 cycles in A55, providing a small performance bump when working with certain types of data structures.
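Pointer chasing is worth a quick model to show why that one-cycle reduction matters: in a linked-list walk, each load's address comes from the previous load, so the loads serialize and the load-to-use latency multiplies directly by the number of hops. The cycle counts below use the latencies quoted above and ignore everything else, purely for illustration:

```python
def traversal_cycles(hops, load_to_use):
    """Idealized cycles for a chain of dependent loads (nothing overlaps)."""
    return hops * load_to_use

hops = 1000  # e.g., walking a 1000-node linked list
print("A53 (3-cycle load-to-use):", traversal_cycles(hops, 3), "cycles")
print("A55 (2-cycle load-to-use):", traversal_cycles(hops, 2), "cycles")
```

In this idealized case the A55 finishes the walk in two-thirds of the cycles, which is the "small performance bump" for pointer-heavy data structures the text refers to.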

Being compatible with DynamIQ means the A55 gets an integrated L2 cache that operates at core speed. This reduces L2 latency by 50% (12 cycles to 6 cycles) compared to the A53 and its shared L2. The L2 cache size options include 0KB, 64KB, 128KB, and 256KB. ARM expects 128KB to be the most common for mobile applications, but there should be a reasonable number of 256KB configurations too. The smaller sizes will appeal to the networking and embedded markets that desire lower area/cost or need more deterministic memory behavior. With the L2 now part of the core, the A55’s L2 TLB grows to 1024 entries up from 512 entries for A53.

The A55’s integrated L2 uses PIPT, which is simpler to implement and uses less power than VIPT. Unlike the VIPT L1, using PIPT for the L2 does not incur a performance penalty because its naturally higher latency means there’s sufficient time to get the physical address from the TLB before performing the tag compare.

The L2 is 4-way set associative like the L1, a design choice meant to minimize cache latency, which is important for reducing the impact of stalls on in-order cores. The downside to using a less associative cache is a greater chance to miss, but ARM feels comfortable with this compromise with an L3 cache sitting reasonably close to the core.

Having access to a (potentially) large L3 cache, something the A53 did not have, will also help improve performance. As previously stated, in-order cores are sensitive to cache misses that cause the core to stall, so they respond particularly well to increases in cache size and reductions in memory access latency. ARM says the A53/A55 sees roughly a 5% performance improvement for each doubling of cache size depending on workload.
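Compounding that rule of thumb gives a feel for what a larger L3 buys. A rough model of ARM's "~5% per doubling" figure (workload-dependent, as the text notes; the 512KB and 2MB sizes below are hypothetical examples, not quoted configurations):

```python
import math

def perf_uplift(base_kb, new_kb, per_doubling=0.05):
    """Compound a per-doubling performance gain across a cache size change."""
    doublings = math.log2(new_kb / base_kb)
    return (1 + per_doubling) ** doublings - 1

# Two doublings, e.g. a hypothetical 512KB -> 2MB L3
print(f"{perf_uplift(512, 2048):.1%}")  # ~10.3%
```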

We’ll end our tour through the data path and our discussion about the A55’s memory system improvements with the AGUs (Address Generation Units). According to ARM’s internal benchmarking, the CPU can spend a lot of time doing loads and stores, so improving performance here was crucial. Instead of sticking with the A53’s single, combined AGU, the A55 uses independent load and store AGUs that can perform loads and stores in parallel, enabling a higher issue rate into the memory system. Note that the A55’s AGUs are not capable of performing both loads and stores (just one or the other) like the AGUs in the A73/A75, a trade off between performance and complexity.
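A toy issue model makes the AGU trade-off visible. Under the simplifying assumption that address generation is the only bottleneck: the A53's combined AGU issues one memory op per cycle, while the A55's split AGUs can pair one load with one store in the same cycle, but never two loads or two stores (unlike the A73/A75):

```python
def cycles_to_issue(ops, split_agus):
    """Cycles to issue a stream of 'L' (load) / 'S' (store) ops.

    split_agus=False models a single combined AGU (one op per cycle);
    split_agus=True models separate load and store AGUs that can
    dual-issue one load alongside one store.
    """
    cycles = 0
    i = 0
    while i < len(ops):
        if split_agus and i + 1 < len(ops) and {ops[i], ops[i + 1]} == {"L", "S"}:
            i += 2  # a load and a store issue together
        else:
            i += 1  # two loads, two stores, or a lone op: one per cycle
        cycles += 1
    return cycles

mix = ["L", "S"] * 4  # alternating load/store stream
print("combined AGU:", cycles_to_issue(mix, False), "cycles")  # 8
print("split AGUs:  ", cycles_to_issue(mix, True), "cycles")   # 4
```

An alternating load/store stream halves its issue time, while an all-load stream gains nothing—which is exactly the performance/complexity compromise the text describes.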

Shifting our focus to the execution pipes, we find 2 ALU/INT units that are functionally the same as the A53’s. Both ALUs can perform basic operations such as additions and shifts, but only one ALU handles integer multiplication and multiply-accumulate operations, while the other focuses on integer division with a Radix-16 divider. So while the A53/A55 cannot perform 2 integer multiplies or divides in parallel, it can dual issue a MUL/MAC alongside a divide/add/shift. The ALUs are still 2 stages, but nearly all instructions complete in 1 cycle. There are also several forwarding paths built in, including paths within the same pipe or cross pipe, to reduce execution latency.

The 2 64-bit NEON/Floating Point pipes are still optional (some markets do not require them) and are served from a dedicated 128-bit register file like the A53. Each SIMD NEON pipe in the A53/A55 can perform 8 8-bit integer, 4 16-bit integer, 2 32-bit integer or single-precision floating-point (FP), or 1 64-bit integer or double-precision FP operations per cycle, giving programmers the flexibility to choose the right balance between precision and performance. As part of its support for the ARMv8.2 architecture, the A55 gains native support for half-precision FP16 operations too. The A53 can fetch FP16 values, but they need to be converted to FP32 before execution. This is no longer the case with A55, so the core performs less work when dealing with FP16 values that are frequently used in some image processing and machine learning applications that require 16-bit precision.
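The lane counts above are just the 64-bit datapath divided by the element width, which this one-liner makes explicit:

```python
def lanes_per_pipe(element_bits, datapath_bits=64):
    """Parallel operations per cycle in one 64-bit NEON pipe."""
    return datapath_bits // element_bits

for bits in (8, 16, 32, 64):
    print(f"{bits}-bit elements: {lanes_per_pipe(bits)} ops/cycle per pipe")
```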

The A55 also includes a new INT8 dot product instruction, another useful addition for machine learning, that offers a 4x performance improvement over the A53 (which increases to about 5x when taking the memory system improvements into account). Instead of executing multiple instructions back to back to back like the A53, the A55 only needs to push a single instruction through the pipe.
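For readers unfamiliar with the instruction, here is a sketch of its per-lane semantics, modeled on ARMv8.2's SDOT (which matches the text's description): each 32-bit accumulator lane gains the sum of four signed 8-bit products, collapsing the A53's multiply/widen/accumulate sequence into one operation.

```python
def sdot_lane(acc, a4, b4):
    """One accumulator lane of an INT8 dot-product: acc += a4 . b4.

    a4 and b4 are four signed 8-bit values; the result accumulates at
    32-bit width, so individual products never overflow.
    """
    assert len(a4) == len(b4) == 4
    assert all(-128 <= x <= 127 for x in a4 + b4)
    return acc + sum(x * y for x, y in zip(a4, b4))

print(sdot_lane(0, [1, 2, 3, 4], [5, 6, 7, 8]))  # 1*5 + 2*6 + 3*7 + 4*8 = 70
```

The full instruction does this for every 32-bit lane of the vector at once, which is where the 4x uplift over the A53's multi-instruction sequence comes from.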

One final noteworthy improvement is the ability to do a fused multiply–add (FMA) in a single pass instead of two for the A53, reducing latency from 8 cycles to 4.

The A55 has much in common with the A53, which is not surprising because the A53 already delivers good core throughput. While the A55 gets some improvements to the NEON/FP pipes, mostly from additional instructions courtesy of ARMv8.2, most of its performance gains come from changes to the memory system meant to reduce both the number of core stalls and their latency impact when they do occur.



Comments

  • melgross - Wednesday, May 31, 2017 - link

    No, we don't know if they're fake. TSMC stated, months ago, that they were delivering 10nm parts to their largest customers, which one would presume, is Apple.

    And my statement stands. If the best the u35 can do is just over 2,000, then these parts are slightly over twice as fast. And if the claim for the multiprocessing score is right, then that's well over the score for any 4 core ARM chip from anyone else.

    "Traditionally", these scores that leak out, whether real or not, are remarkably close to what's tested after Apple's product does come out, often being somewhat lower than the "real" scores.
  • jjj - Monday, May 29, 2017 - link

    Apple's ST perf is marketing for folks like you, nothing more.
    ST perf is too high for mobile even with A75 and ST perf is not what matters. We would all be better off with less ST and higher efficiency.
    Sadly people like you are pushing the industry into pushing ST for no reason.
    Apple's core is huge compared to ARM's core, and for what? ST perf you don't need, lower MT and efficiency.
  • aryonoco - Monday, May 29, 2017 - link

    I'm curious to know what you are basing this on.

    From where I stand, the great majority of times on mobiles are spent either on the web, or in games. Javascript is still very much single-threaded, so higher ST performance directly results in better web experience.

    Why do you think that "I don't need ST perf"?

    Note: I don't have a single iOS device, though I'd have loved to have an Android device with A10 inside.
  • melgross - Wednesday, May 31, 2017 - link

    Nonsense! If this were a Qualcomm or Samsung chip, you wouldn't be saying that, and we both know that. While I don't know what other chips use, Apple's is about 3 watts, which is likely about what the others are. But Apple manages to get far better performance. That's never a bad thing.

    I don't think you understand what smartphones are being used for.
  • melgross - Monday, May 29, 2017 - link

    While we don't know if the benchmarks that have been listed for the new A11 from Apple are real, though they seem to be what we would expect, individual cores are hitting over 4,500 and slightly under 9,000 multicore, with both cores.

    With everything I've read here, I'm still not sure what we would expect from these parts. The highest performing ARM used on Android seems to be well below 2,000 per core, with almost 7,000 for 4 core multicore.

    So, what's to expect here? And how much of this advantage is coming from the process shrink, rather than from core improvements?
  • tipoo - Monday, May 29, 2017 - link

    Yeah that's what I'm wondering, how much is IPC improvement and how much is just clocking it higher on a new node.
  • Wardrive86 - Monday, May 29, 2017 - link

    Shouldn't that be 2× 128-bit NEON/FPU pipelines for the A75? If not, that's a max of 4 FLOPS per clock, lower than the cores it's replacing.
  • serendip - Monday, May 29, 2017 - link

    I hope chip vendors don't push 8x A55 designs for the midrange because they're only good for the low end. Having so many similar cores is pointless because Android rarely uses all 8 cores.

    I'd rather see more 2+4 or 4+4 designs with the A55 and A75, especially something like the old Snapdragon 650/652 with the latest cores and processes. I'm looking to upgrade my Mi Max a year from now and the relevant chips should appear by then. On the other hand, with constant driver updates, this phone could last for a few years still.
  • Wilco1 - Tuesday, May 30, 2017 - link

    A Cortex-A55 at 2.5GHz (same as Helio P25) would get close to ST performance of Galaxy S6 (and match MT perf). That was top-end 2 years ago... So while I agree 1+7 or 2+6 would be much better than 8x A55, I don't think you could call an S6 a low-end phone even in 2018!
  • serendip - Tuesday, May 30, 2017 - link

    The Helios with their decacore design couldn't beat the real world speed and battery life of a Snapdragon 65x. It's foolish to run an A55 at 2.5 GHz when an A75 at lower speed uses similar amounts of power while being much faster. At one point, you move the load from the donkey and put it on a race horse :)