Apple's M1 Pro, M1 Max SoCs Investigated: New Performance and Efficiency Heights
by Andrei Frumusanu on October 25, 2021 9:00 AM EST
Posted in: Laptops, Apple, MacBook, Apple M1 Pro, Apple M1 Max
Last week, Apple unveiled their new generation MacBook Pro laptop series, a new range of flagship devices that bring significant updates to the company’s professional and power-user oriented user base. The new devices particularly differentiate themselves in that they’re now powered by two new entries in Apple’s own silicon line-up, the M1 Pro and the M1 Max. We covered the initial reveal of the two new chips in last week’s overview article, and today we’re getting the first glimpses of the performance we can expect from the new silicon.
The M1 Pro: 10-core CPU, 16-core GPU, 33.7bn Transistors
Starting off with the M1 Pro, the smaller sibling of the two, the design appears to be a new implementation of the first-generation M1 chip, but this time designed from the ground up to scale to a larger size and higher performance. The M1 Pro is in our view the more interesting of the two designs, as it offers most of what power users will deem generationally important in terms of upgrades.
At the heart of the SoC we find a new 10-core CPU setup in an 8+2 configuration, consisting of 8 performance Firestorm cores and 2 efficiency Icestorm cores. We had indicated in our initial coverage that Apple’s new M1 Pro and Max chips appear to use a similar, if not the same, generation of CPU IP as the M1, rather than updating to the newer cores used in the A15. We can seemingly confirm this, as we’re seeing no apparent changes in the cores compared to what we found on the M1 chips.
The CPU cores clock up to a peak of 3228MHz, but vary in frequency depending on how many cores are active within a cluster, clocking down to 3132MHz with 2 cores active, and 3036MHz with 3 or 4 cores active. I say “per cluster” because the 8 performance cores in the M1 Pro and M1 Max indeed consist of two 4-core clusters, each with its own 12MB L2 cache, and each able to clock its CPUs independently of the other, so it’s actually possible to have four active cores in one cluster at 3036MHz and one active core in the other cluster running at 3228MHz.
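To make that behaviour concrete, here is a minimal sketch of the per-cluster scaling as we measured it. The frequencies are the observed values above; the function itself and its structure are purely our own illustration, not Apple's actual DVFS implementation.

```swift
// Illustrative model of the measured per-cluster frequency behaviour.
// The numbers are observed values; the mapping is our simplification.
func pClusterFrequencyMHz(activeCores: Int) -> Int {
    switch activeCores {
    case 0:  return 0        // cluster idle
    case 1:  return 3228     // single-core peak
    case 2:  return 3132
    default: return 3036     // 3 or 4 cores active
    }
}

// Each of the two P-clusters scales independently, so a mixed load of
// 4 threads on one cluster and 1 on the other yields different clocks:
let clusterLoads = [4, 1]
let clocks = clusterLoads.map { pClusterFrequencyMHz(activeCores: $0) }
print(clocks)  // [3036, 3228]
```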
The two E-cores in the system clock up to 2064MHz. As opposed to the M1, there are only two of them this time around; however, Apple still gives them their full 4MB of L2 cache, the same as on the M1 and the A-series-derived chips.
One major feature of both chips is their much-increased memory bandwidth and interfaces: the M1 Pro features a 256-bit LPDDR5 memory interface at 6400MT/s, corresponding to 204GB/s of bandwidth. This is significantly higher than the M1’s 68GB/s, and also generally higher than competing laptop platforms, which still rely on 128-bit interfaces.
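For a quick back-of-the-envelope check of where these headline figures come from, here is a minimal sketch. The helper function is our own, and these are raw peak numbers; real-world achievable bandwidth is lower, and is something we measure later on.

```swift
// Peak theoretical DRAM bandwidth = bus width (bytes) × transfer rate.
func peakBandwidthGBps(busWidthBits: Double, transferRateMTps: Double) -> Double {
    (busWidthBits / 8.0) * transferRateMTps / 1000.0
}

print(peakBandwidthGBps(busWidthBits: 256, transferRateMTps: 6400)) // ≈ 204.8 GB/s (M1 Pro, LPDDR5-6400)
print(peakBandwidthGBps(busWidthBits: 128, transferRateMTps: 4266)) // ≈ 68.3 GB/s  (M1, LPDDR4X-4266)
print(peakBandwidthGBps(busWidthBits: 512, transferRateMTps: 6400)) // ≈ 409.6 GB/s (M1 Max, discussed below)
```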
We’ve been able to identify the “SLC”, or system level cache as we call it, as coming in at 24MB on the M1 Pro and 48MB on the M1 Max. That’s a bit smaller than we initially speculated, but it makes sense given the SRAM die area, and represents a 50% increase over the per-block SLC on the M1.
The M1 Max: A 32-Core GPU Monstrosity at 57bn Transistors
Above the M1 Pro we have Apple’s second new M1 chip, the M1 Max. The M1 Max is essentially identical to the M1 Pro in terms of architecture and in many of its functional blocks – but what sets the Max apart is that Apple has equipped it with much larger GPU and media encode/decode complexes. Overall, Apple has doubled the number of GPU cores and media blocks, giving the M1 Max virtually twice the GPU and media performance.
The GPU and memory interfaces are by far the most differentiated aspects of the chip: instead of a 16-core GPU, Apple doubles things up to a 32-core unit. On the M1 Max we tested for today, the GPU runs at up to 1296MHz, which is quite fast for what we consider mobile IP, but still significantly slower than what we’ve seen in the conventional PC and console space, where GPUs can now run at up to around 2.5GHz.
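To put the core count and clock into perspective, here is a rough sketch of theoretical FP32 throughput. The figure of 128 FP32 ALUs per GPU core is an assumption carried over from what is known of the M1-generation GPU architecture, and the helper function is purely illustrative; the results land in the same ballpark as Apple's advertised numbers for the two chips.

```swift
// Rough theoretical FP32 throughput, assuming 128 FP32 ALUs per GPU core
// (carried over from the M1 generation; not disclosed per-core here) and
// one fused multiply-add (2 FLOPs) per ALU per clock.
func peakTFLOPS(gpuCores: Int, clockMHz: Double, alusPerCore: Int = 128) -> Double {
    Double(gpuCores * alusPerCore) * 2.0 * clockMHz * 1e6 / 1e12
}

print(peakTFLOPS(gpuCores: 32, clockMHz: 1296)) // ≈ 10.6 TFLOPS (M1 Max)
print(peakTFLOPS(gpuCores: 16, clockMHz: 1296)) // ≈ 5.3 TFLOPS  (M1 Pro)
```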
Apple also doubles up on the memory interfaces, using a whopping 512-bit-wide LPDDR5 memory subsystem, unheard of in an SoC and rare even amongst historical discrete GPU designs. This gives the chip a massive 408GB/s of bandwidth; how this bandwidth is accessible to the various IP blocks on the chip is one of the things we’ll be investigating today.
The memory controller caches come in at 48MB on this chip, theoretically allowing for amplified memory bandwidth for the various SoC blocks as well as reduced off-chip DRAM traffic, which in turn reduces the chip’s power and energy usage.
Apple’s die shot of the M1 Max was initially a bit puzzling, as we weren’t sure whether it actually represents physical reality. In particular, on the bottom part of the chip we noted what appears to be a doubled-up NPU, something Apple doesn’t officially disclose. A doubled-up media engine makes sense, as that’s part of the chip’s feature set; however, until we can get a third-party die shot to confirm that this is indeed how the chip looks, we’ll refrain from speculating further.
493 Comments
michael2k - Thursday, October 28, 2021 - link
Power consumption scales linearly with clock speed. Clock speed, however, is constrained by voltage. That said, we already know that the M1M itself has a 3.2GHz clock while the GPU is only running at 1.296GHz. It is unknown if there is any reason other than power for the GPU to run so slowly. If they could double the GPU clock (and therefore double its performance) without increasing its voltage, it would only draw about 112W. If they let it run at 3.2GHz it would draw 138W.
Paired with the CPU drawing 40W, the M1M would still be several times under the Mac Pro's current 902W. So that leaves open the possibility of a multi-chip solution (4 M1P would still only draw 712W if the GPU is clocked to 3.2GHz), as well as clocking up slightly to 3.5GHz, assuming no need to increase voltage. Bumping up to 3.5GHz would still only consume 778W while giving us almost 11x the GPU power of the current M1P, which would be 11x the performance of the 3080 found in the GE76 Raider.
Also, you bring up AMD/Intel/NVIDIA at 5nm, without also considering that when Apple stops locking up 5nm it's because they will be at 4nm and 3nm.
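The arithmetic in michael2k's comment above corresponds to a simple linear-in-frequency power model. Here is a minimal sketch of it; the ~56W GPU baseline is back-solved from the comment's own 112W/138W figures and is an assumption for illustration, not a measured or official number.

```swift
// Linear-in-frequency power model at fixed voltage (P ∝ f), as used in the
// comment above. The 56W baseline at 1296MHz is back-solved from the quoted
// 112W / 138W figures, not measured.
let baseClockGHz = 1.296
let baseGPUPowerW = 56.0  // assumed baseline

func scaledPowerW(targetClockGHz: Double) -> Double {
    baseGPUPowerW * (targetClockGHz / baseClockGHz)
}

print(scaledPowerW(targetClockGHz: 2.592)) // ≈ 112W at double the GPU clock
print(scaledPowerW(targetClockGHz: 3.2))   // ≈ 138W at the CPU's 3.2GHz
```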
uningenieromas - Thursday, October 28, 2021 - link
You would think that if Apple's silicon engineers are so freakin' good, they could basically work wherever they want... and, yep, they chose Apple. There might be a reason for that?
varase - Wednesday, November 3, 2021 - link
We're glad you shared your religious epiphany with the rest of us 😳.
Romulo Pulcinelli Benedetti - Sunday, May 22, 2022 - link
Sure, Intel and AMD would take all the hard work to advance humanity toward Apple-level chips if Apple was not there, believe in this...
Alej - Tuesday, October 26, 2021 - link
The native ARM Mac scarcity I don’t fully get; a lot of games get ported to the Switch, which is already ARM. And if they are using Vulkan as the graphics API, then there’s already MoltenVK to translate it to Metal, which, even if not perfect and unable to use 100% of the available tricks and optimizations, would run well enough.
Wrs - Tuesday, October 26, 2021 - link
@Alej It's a numbers and IDE game. 90 million Switches sold, all purely for gaming, supported by a company that exclusively does games. 20 million Macs sold yearly, most not for gaming in the least, built by a company not focused on gaming for that platform. iPhones are partially used for gaming, however, and sell many times the volume of the Switch, so as expected there's a strong gaming ecosystem.
Kangal - Friday, October 29, 2021 - link
Apple is happy where they are. However, if Apple were a little faster/wiser, they would've made the switch from Intel Macs to M1 Macs back in 2018 using the TSMC 7nm node, their Tempest/Vortex CPUs and their A12-GPU. They wouldn't be too far removed from the performance of the M1, M1P, M1X if scaled similarly.
And even more interesting, what if Apple released a great Home Console?
Something that is more compact than the Xbox Series S, yet more powerful than the Xbox Series X. That would leave both Microsoft and Sony scrambling. They could've designed a very ergonomic controller with much less latency, and they could've enticed all these AAA developers to their platform (Metal v2 / Swift v4). It would be gaming-centric, with out-of-box support for iOS games/apps, and even limited-time support (Rosetta v2) for legacy OS X applications. They wouldn't be able to subsidise the pricing like Sony, but could basically front the costs from their own pocket to bring it to a palatable RRP. After 2 years, they would then be able to turn a profit from its hardware sales and software sales.
I'm sure it could have been a hit. And it would then have helped pivot the MacBook Pro to be more friendly for media consumption and better supported by developers, strengthening their entire ecosystem and leveraging their unique position in software and hardware to remain competitive.
kwohlt - Tuesday, October 26, 2021 - link
I think it is just you. Imagine a hypothetical ultra-thin, fanless laptop that offered 20 hours of battery under load and could play games at desktop 3080 levels... Would you wish this laptop was louder, hotter, and had worse battery? No, of course not. Consuming less power and generating less heat, while offering similar or better performance, has always been the goal of computing. It's this trend that allows us to daily carry computing power that was once the size of a refrigerator in our pockets and on our wrists.
Wrs - Wednesday, October 27, 2021 - link
No, but I might wish it could scale upward to a desktop/console for way more performance than a 3080. :) That would also be an indictment of how poorly the 3080 is designed or fabricated, or how old it is. Now, if in the future silicon gets usurped by a technology that does not scale up in power density, then I could be forced to say yes.
turbine101 - Monday, October 25, 2021 - link
Why would developers waste their time on a device which will have barely any sales? The M1 Mac Max costs $6k NZD. That's just crazy; even the most devout Apple enthusiasts cannot justify this. And Mac is far less usable than iOS.