It has been over a year since Intel launched its Skylake-X processors and the Basin Falls platform, with a handful of processors ranging from six cores up to eighteen. In that time, Intel’s competition has gone through the roof in core count, PCIe lanes, and power consumption. In order to compete, Intel has gone down a different route, with its refresh product stack focusing on frequency, cache updates, and an updated thermal interface. Today we are testing the top processor on that list, the Core i9-9980XE.

Intel’s Newest Processors

In October, at their Fall Desktop Event, Intel lifted the lid on its new generation of high-end desktop processors. There will be seven new processors in all, with the Core i9-9980XE eighteen-core part at the top, down to the eight-core Core i7-9800X as the cheapest model.

Intel Basin Falls Skylake-X Refresh
AnandTech  | Price  | Cores    | TDP    | Freq (GHz, Base / Turbo) | L3 (MB) | L3 Per Core (MB) | DRAM DDR4 | PCIe
i9-9980XE  | $1979  | 18 / 36  | 165 W  | 3.0 / 4.5                | 24.75   | 1.375            | 2666      | 44
i9-9960X   | $1684  | 16 / 32  | 165 W  | 3.1 / 4.5                | 22.00   | 1.375            | 2666      | 44
i9-9940X   | $1387  | 14 / 28  | 165 W  | 3.3 / 4.5                | 19.25   | 1.375            | 2666      | 44
i9-9920X   | $1189  | 12 / 24  | 165 W  | 3.5 / 4.5                | 19.25   | 1.604            | 2666      | 44
i9-9900X   | $989   | 10 / 20  | 165 W  | 3.5 / 4.5                | 19.25   | 1.925            | 2666      | 44
i9-9820X   | $889   | 10 / 20  | 165 W  | 3.3 / 4.2                | 16.50   | 1.650            | 2666      | 44
i7-9800X   | $589   | 8 / 16   | 165 W  | 3.8 / 4.5                | 16.50   | 2.063            | 2666      | 44

The key highlights of these new processors are the increased frequency on most of the parts compared to the models they replace, the increased L3 cache on most of the parts, and the fact that all of Intel’s high-end desktop processors now have 44 PCIe lanes from the CPU out of the box (not including chipset lanes).

When doing a direct comparison to the previous generation of Intel’s high-end desktop processors, the key differences are in frequency. There is no six-core processor any more, given that the mainstream processor line now goes up to eight cores. Anything with twelve cores or fewer gets extra L3 cache and 44 PCIe lanes, but also rises to a sustained TDP of 165 W.

All of the new processors will be manufactured on Intel’s 14++ node, which allows for the higher frequency, and will use a soldered thermal material between the processor and the heatspreader to help manage temperatures.

New HCC Wrapping

Intel’s high-end desktop cadence since 2010 has been relatively straightforward: firstly a new microarchitecture with a new socket on a new platform, followed by an update using the same microarchitecture and socket but on a new process node. The Skylake-X Refresh (or Basin Falls Refresh, named after the chipset family) series of processors breaks that mold.

The new processors instead take advantage of Intel’s middle-sized silicon configuration to exploit additional L3 cache. I’ll break this down into what it actually means:

Intel historically creates three different sizes of enterprise CPU: a low core count (LCC) design, a high core count (HCC) design, and an extreme core count (XCC) design. For Skylake-SP, the latest enterprise family, the LCC design offers up to 10 cores and 13.75 MB of L3 cache, the HCC design goes up to 18 cores and 24.75 MB of L3 cache, and the XCC design goes up to 28 cores with 38.5 MB of L3 cache. Intel then disables cores to fit the configurations its customers need. However, an 8-core processor in this model could be a 10-core LCC die cut down by two cores, or an 18-core HCC die cut down by ten cores. Some of those cut cores can have their L3 cache left enabled, creating more differentiation.

For the high-end desktop, Intel usually uses the LCC design exclusively. For Skylake-X, this changed, and Intel started to offer HCC designs up to 18 cores in its desktop portfolio. The split was very obvious – anything with 10 cores or fewer was LCC, and 12 to 18 cores was HCC. The options were also very strict – the 14-core part had 14 cores’ worth of L3 cache, the 12-core part had 12 cores’ worth, and so on. Intel also split the processors into some with 28 PCIe lanes and some with 44 PCIe lanes.

For the new refresh parts, Intel has decided that there are no LCC variants any more. Every new processor is a HCC variant, cut down from the 18-core HCC die. If Intel didn’t tell us this outright, it would be easily spotted by the L3 cache counts: the lowest new chip is the Core i7-9800X, an eight-core processor with 16.5 MB of L3 cache, which would be more than the LCC silicon could offer.
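
As a quick sanity check on that point, the cache figures in the specification table above can be converted back into enabled L3 slices. A minimal sketch, assuming the 1.375 MB-per-core slice size quoted above:

```python
# Each Skylake-SP core carries a 1.375 MB slice of L3, so working backwards
# from the advertised cache shows how many slices are enabled, and whether
# the LCC die (10 slices, 13.75 MB maximum) could have supplied the part.
L3_SLICE_MB = 1.375
LCC_MAX_L3_MB = 13.75  # ceiling of the 10-core LCC die

parts = {
    "i9-9980XE": (18, 24.75),
    "i9-9920X": (12, 19.25),
    "i7-9800X": (8, 16.50),
}

for name, (cores, l3_mb) in parts.items():
    slices = l3_mb / L3_SLICE_MB
    verdict = "LCC possible" if l3_mb <= LCC_MAX_L3_MB else "needs HCC silicon"
    print(f"{name}: {cores} cores, {l3_mb} MB L3 -> {slices:.0f} enabled slices ({verdict})")
```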

Using HCC silicon across all of the new processors is a double-edged sword. On the plus side, some CPUs get more cache, and every part has 44 PCIe lanes. On the downside, the TDP has risen to 165 W for some of those parts, and a lot of silicon is perhaps being ‘wasted’. The HCC silicon is significantly larger than the LCC silicon, so Intel gets fewer working processors per wafer it manufactures. This ultimately means that if these processors were in high demand, Intel’s ability to manufacture more of them could be lower. On the flip side, having one silicon design for the whole processor range, with bits disabled, might make stock management easier.
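
To put a rough number on that ‘wasted silicon’ argument, a standard dies-per-wafer approximation can be used. The die areas below are assumptions for illustration only (roughly the often-cited sizes for Skylake-SP LCC and HCC silicon), not Intel-published figures:

```python
import math

# Rough candidate-dies-per-wafer estimate on a 300 mm wafer, ignoring defects.
# Die areas are illustrative assumptions, not official Intel figures.
def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    radius = wafer_diameter_mm / 2
    # Classic approximation: gross area divided by die area, minus an edge-loss term.
    return math.floor(
        math.pi * radius ** 2 / die_area_mm2
        - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    )

lcc_area_mm2 = 325  # assumed ~10-core LCC die size
hcc_area_mm2 = 485  # assumed ~18-core HCC die size

lcc, hcc = dies_per_wafer(lcc_area_mm2), dies_per_wafer(hcc_area_mm2)
print(f"LCC candidates per wafer: {lcc}")
print(f"HCC candidates per wafer: {hcc}")
print(f"HCC gives {1 - hcc / lcc:.0%} fewer candidates before yield is considered")
```

On those assumptions the HCC die gives roughly a third fewer candidates per wafer before defect yield is even considered, which is the trade-off the paragraph above describes.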

Because the new parts use Skylake-X cores, they come with AVX-512 support as well as Intel's mesh interconnect. As with the original Skylake-X parts, Intel is not defining the base frequency of the mesh, but suggesting a recommended range for the mesh frequency. This means that we are likely to see motherboard vendors vary in their mesh frequency implementation: some will run at the peak turbo mesh frequency at all times, some will follow the mesh frequency with the core frequency, and others will run the mesh at the all-core turbo frequency.
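
For readers who want to confirm what a given chip exposes, the AVX-512 feature flags can be read straight from /proc/cpuinfo on Linux. A minimal sketch (the exact flag names depend on what the kernel reports):

```python
# Minimal sketch: list the AVX-512 feature flags the Linux kernel reports in
# /proc/cpuinfo. A Skylake-X part such as the i9-9980XE should show flags like
# avx512f, avx512dq, avx512cd, avx512bw and avx512vl.
def avx512_flags(path="/proc/cpuinfo"):
    with open(path) as cpuinfo:
        for line in cpuinfo:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                return sorted(f for f in flags if f.startswith("avx512"))
    return []

if __name__ == "__main__":
    found = avx512_flags()
    print("AVX-512 flags:", ", ".join(found) if found else "none reported")
```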

Soldered Thermal Interface Material (sTIM)

One of the key messages out of Intel’s Fall Desktop Launch event is a return to higher performance thermal interface materials. As we’ve covered several times in the past, Intel has for years been using a basic thermal grease in its processor designs. This thermal grease typically has better longevity through thermal cycling (although we’re comparing years of use to years of use) and is cheaper, but performs worse for thermal management. The consumer line from Intel has used thermal grease since Ivy Bridge, while the high-end desktop processors were moved to grease with Skylake-X.

Thermal Interface
Intel
Generation   | Socket  | Celeron / Pentium | Core i3 | Core i5 | Core i7    | Core i9    | HEDT
Sandy Bridge | LGA1155 | Paste             | Paste   | Paste   | Bonded     | Bonded     | Bonded
Ivy Bridge   | LGA1155 | Paste             | Paste   | Paste   | Paste      | Paste      | Bonded
Haswell / DK | LGA1150 | Paste             | Paste   | Paste   | Paste      | Paste      | Bonded
Broadwell    | LGA1150 | Paste             | Paste   | Paste   | Paste      | Paste      | Bonded
Skylake      | LGA1151 | Paste             | Paste   | Paste   | Paste      | Paste      | Paste
Kaby Lake    | LGA1151 | Paste             | Paste   | Paste   | Paste      | Paste      | -
Coffee Lake  | 1151 v2 | Paste             | Paste   | Paste   | Paste      | Paste      | -
CFL-R        | 1151 v2 | ?                 | ?       | ?       | K = Bonded | K = Bonded | -
AMD
Zambezi  | AM3+ | Bonded          | Carrizo   | AM4 | Bonded
Vishera  | AM3+ | Bonded          | Bristol R | AM4 | Bonded
Llano    | FM1  | Paste           | Summit R  | AM4 | Bonded
Trinity  | FM2  | Paste           | Raven R   | AM4 | Paste
Richland | FM2  | Paste           | Pinnacle  | AM4 | Bonded
Kaveri   | FM2+ | Paste / Bonded* | TR        | TR4 | Bonded
Carrizo  | FM2+ | Paste           | TR2       | TR4 | Bonded
Kabini   | AM1  | Paste           |           |     |
*Some Kaveri Refresh were bonded

With the update to the 9th Generation parts, both the consumer overclockable processors and all of the high-end desktop processors, Intel is moving back to a soldered interface. The use of a liquid-metal bonding agent between the processor and the heatspreader should help improve thermal efficiency and the ability to extract thermal energy away from the processor more quickly when sufficient coolers are applied. It should also remove the need for some extreme enthusiasts to ‘delid’ the processor and put their own liquid metal interface between the two.

The key thing here from the Intel event is that in the company’s own words, it recognizes that a soldered interface provides better thermal performance and ‘can provide benefits for high frequency segments of the business’. This is where enthusiasts rejoice. For professional or commercial users who are looking for stability, this upgrade will help the processors run cooler for a given thermal solution.

How Did Intel Gain 15% Efficiency?

Looking at the base frequency ratings, the Core i9-9980XE is set to run at 3.0 GHz for a sustained TDP of 165 W. Comparing that to the previous generation, which ran at only 2.6 GHz for the same TDP, this equates to roughly a 15% increase in efficiency in real terms. These new processors have no microarchitectural changes over the previous generation, so the answers lie in two main areas: binning and process optimization.
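
As a back-of-the-envelope check on that 15% figure, assuming both flagships are held to the same 165 W sustained TDP:

```python
# Back-of-the-envelope check: base frequency per watt, old flagship vs new.
# Both parts carry the same 165 W sustained TDP, so any efficiency gain at
# base clocks tracks the base frequency increase directly.
old_base_ghz, new_base_ghz, tdp_w = 2.6, 3.0, 165

old_ghz_per_watt = old_base_ghz / tdp_w  # i9-7980XE
new_ghz_per_watt = new_base_ghz / tdp_w  # i9-9980XE

gain = new_ghz_per_watt / old_ghz_per_watt - 1
print(f"Base-frequency efficiency gain: {gain:.1%}")  # ~15.4%
```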

The binning argument is easy – if Intel tightened the screws on its best bin, then we would see a really nice product. A company as large as Intel has to balance how often a processor falls into a given bin against demand – there’s no point advertising a magical 28-core 5 GHz CPU at a low TDP if only one chip in a million hits that value.

Process optimization is the more likely cause here. Intel is now manufacturing these parts on its 14++ node, which is part of Intel’s ‘14nm class’ family and is a slightly relaxed version of 14+, with a larger transistor gate pitch to allow for a higher frequency.

As with all adjustments to semiconductor processes, improving one parameter changes several others as well. Higher frequency typically means higher power consumption and heat output, which the improved thermal interface can help manage. To use 14++ over 14+, Intel might also have had to create new masks, which could allow for some minor adjustments to further improve power consumption and thermal efficiency. One of the key features in the CPU world right now is the ability for a chip to track voltage per core more efficiently to decrease overall power consumption; however, Intel doesn’t usually spill the beans on features like this unless it has a point to prove.

Exactly how much performance Intel has gained with its new processor stack will come through in our testing.

More PCIe 3.0 Please: You Get 44 Lanes, Everyone Gets 44 Lanes

The high-end desktop space has started becoming a PCIe lane competition. The more lanes that are available direct from the processor, the more accelerators, high-performance networking, and high-performance storage options can be applied on the same motherboard. Rather than offering slightly cheaper, lower core count models with only 28 lanes (and making motherboard layout a complete pain), Intel has decided that each of its new CPUs will offer 44 PCIe 3.0 lanes. This makes motherboard layouts much easier to understand, and allows plenty of high-speed storage through the CPU even for the cheaper parts.

On top of this, Intel likes to promote that its high-end desktop chipset also has 24 PCIe 3.0 lanes available. This is a bit of a fudge, given that these lanes are bottlenecked by a PCIe 3.0 x4 link back to the CPU, and that some of these lanes will be taken up by USB ports or networking, but much like any connectivity hub, the idea is that the connections through the chipset are not ‘always hammered’ connections. What sticks in my craw though is that Intel likes to add up the 44 + 24 lanes to say that there are ’68 Platform PCIe 3.0 Lanes’, which implies they are all equal. That’s a hard no from me.
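
To put rough numbers on why adding those lane counts together is misleading, assume ~985 MB/s of raw bandwidth per PCIe 3.0 lane (8 GT/s with 128b/130b encoding, before protocol overhead):

```python
# Rough numbers on the chipset bottleneck: 24 PCIe 3.0 lanes hang off the
# chipset, but everything shares a PCIe 3.0 x4-class uplink back to the CPU.
PER_LANE_GB_S = 0.985  # ~985 MB/s raw per PCIe 3.0 lane

chipset_lanes = 24
uplink_lanes = 4

downstream_gb_s = chipset_lanes * PER_LANE_GB_S  # what chipset devices could demand
uplink_gb_s = uplink_lanes * PER_LANE_GB_S       # what can actually reach the CPU

print(f"Chipset devices: up to ~{downstream_gb_s:.1f} GB/s")
print(f"Uplink to CPU:   ~{uplink_gb_s:.1f} GB/s "
      f"({chipset_lanes // uplink_lanes}:1 oversubscription)")
```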

Intel faces competition here on PCIe lanes, as AMD’s high-end desktop Threadripper 2 platform offers 60 PCIe lanes on all of its parts. AMD also recently announced that its next generation 7nm enterprise processors will be using PCIe 4.0, so we can expect AMD’s HEDT platform to also get PCIe 4.0 at some point in the future. Intel will need to up its game here to remain competitive for sure.

Motherboard Options

The new high-end desktop processors are built to fit the LGA2066 socket and use X299 chipsets, and so any X299 motherboard on the market with a BIOS update should be able to accept these new parts, including the Core i9-9980XE. When we asked ASRock for a new BIOS, given that the ones listed did not state if the new processors were supported, we were told ‘actually the latest BIOS already does support them’. It sounds like the MB vendors have been ready with the microcode for a couple of months at least, so any user with an updated motherboard could be OK straight away (although we do suggest that you double check and update to the latest anyway).

For users looking at new high-end desktop systems, we have a wealth of motherboard reviews for you to look through.

We are likely to see some new models come to market as part of the refresh, as we’ve already seen with GIGABYTE’s new X299-UA8 system with dual PLX chips, although most vendors have substantial X299 motherboard lines already.

The Competition

In this review of the Core i9-9980XE, Intel has two classes of competition.

Firstly, itself. The Core i9-7980XE was the previous generation flagship, and will undoubtedly be offered at a discount when the i9-9980XE hits the shelves. There’s also the question of whether more cores or higher frequency is best, which will be answered on a per-benchmark basis. We have all of the Skylake-X 7000-series HEDT processors tested for just such an answer, and we will test the rest of the 9000-series when samples are made available.

AnandTech  | Price  | Cores    | TDP    | Freq (GHz, Base / Turbo) | L3 (MB) | L3 Per Core (MB) | DRAM DDR4 | PCIe
Intel
i9-9980XE  | $1979  | 18 / 36  | 165 W  | 3.0 / 4.5                | 24.75   | 1.375            | 2666      | 44
i9-7980XE  | $1999  | 18 / 36  | 165 W  | 2.5 / 4.4                | 24.75   | 1.375            | 2666      | 44
AMD
TR 2990WX  | $1799  | 32 / 64  | 250 W  | 3.0 / 4.2                | 64.00   | 2.000            | 2933      | 60
TR 2970WX  | $1299  | 24 / 48  | 250 W  | 3.0 / 4.2                | 64.00   | 2.667            | 2933      | 60
TR 2950X   | $899   | 16 / 32  | 180 W  | 3.5 / 4.4                | 32.00   | 2.000            | 2933      | 60

Secondly, AMD. The recent release of the Threadripper 2 processors is likely to have been noticed by Intel, offering 32 cores at just under the price of Intel’s 18-core Core i9-9980XE. What we found in our review of the Threadripper 2990WX is that for benchmarks that can take advantage of its bi-modal configuration, Intel cannot compete. However, Intel’s processors cover a wider range of workloads more effectively. It is also a tough sell when we compare parts such as the 12-core AMD against the 12-core Intel, where benchmarks come out equal but AMD is half the price with more PCIe lanes. It is going to be tricky for Intel to be competitive on all fronts (and vice versa).

Availability and Pricing

Interestingly, Intel only offered us our review sample of the i9-9980XE last week. I suspect that this means the processors, or at least the i9-9980XE, should be available from today. If not, then very soon: Intel has promised availability by the end of the year, along with the 28-core Xeon W-3175X (still no word on that one yet).

Pricing is as follows:

Pricing
Intel*                |         AMD**
i9-9980XE   $1979     |
                      | $1799   TR 2990WX
i9-9960X    $1684     |
i9-9940X    $1387     |
                      | $1299   TR 2970WX
i9-9920X    $1189     |
i9-9900X    $989      |
i9-9820X    $889      | $899    TR 2950X
                      | $649    TR 2920X
i7-9800X    $589      |
i9-9900K    $488      |
                      | $329    Ryzen 7 2700X
* Intel pricing is per 1k units
** AMD pricing is suggested retail pricing

 

Pages In This Review

  1. Analysis and Competition
  2. Test Bed and Setup
  3. 2018 and 2019 Benchmark Suite: Spectre and Meltdown Hardened
  4. HEDT Performance: Encoding Tests
  5. HEDT Performance: Rendering Tests
  6. HEDT Performance: System Tests
  7. HEDT Performance: Office Tests
  8. HEDT Performance: Web and Legacy Tests
  9. HEDT Performance: SYSMark 2018
  10. Gaming: World of Tanks enCore
  11. Gaming: Final Fantasy XV
  12. Gaming: Shadow of War
  13. Gaming: Civilization 6
  14. Gaming: Ashes Classic
  15. Gaming: Strange Brigade
  16. Gaming: Grand Theft Auto V
  17. Gaming: Far Cry 5
  18. Gaming: Shadow of the Tomb Raider
  19. Gaming: F1 2018
  20. Power Consumption
  21. Conclusions and Final Words

143 Comments

  • nadim.kahwaji - Tuesday, November 13, 2018 - link

    Niceeee , keep up the great work Ian ‘:)
  • AshlayW - Tuesday, November 13, 2018 - link

    In my opinion the entire Intel HEDT lineup is a joke. And the 9980XE: $180 more for literally just a bit over *half* the cores and threads. Sure it has better lightly threaded performance but surely that's not the intention of this processor, and surely it is not worth charging this insane 'Intel Tax' premium for it.
  • TEAMSWITCHER - Tuesday, November 13, 2018 - link

    Intel is free to charge whatever they want for a device that I have zero intention of purchasing. Most professionals I know have stopped using desktop computers for their daily drivers. The Dell XPS 15 and Apple's 15" MacBook Pro seem to be the weapons of choice these days. These products surely have their uses, but in the real world, most users are happy to sacrifice absolute performance for mobility.
  • imaheadcase - Tuesday, November 13, 2018 - link

    Must be a strange world you live in. Mobile won't ever be anything close to a desktop for daily tasks. I don't know any professional who has done that. They use mobile devices mainly to view items they did on the desktop, not for working.
  • TEAMSWITCHER - Tuesday, November 13, 2018 - link

    Really? I work in software development (WEB, C++, OpenGL, and yes our own ray tracing engine) We have one guy with a desktop, the rest of the developers use either an XPS 15, a MacBook Pro, and one guy with a Surface Book. All were given a choice...this was the result.

    Interesting story about how we got here... Windows used to be a requirement for developing browser plugins. But with the move to Web Assembly, we can now compile and test our plugin on the Mac just as easily as we do on Windows. While many fanboys will lament this change .. I personally love it!

  • Endda - Tuesday, November 13, 2018 - link

    Yea, for code development only. Mobility has been the choice for that for years.

    Not everyone is a coder though. Some need these desktops for rendering big animations, videos, etc. You're simply not going to do that in any meaningful way on a laptop
  • PeachNCream - Tuesday, November 13, 2018 - link

    Rendering and production work can indeed happen on laptop hardware. I don't argue that desktop hardware with fewer limits on TDP and storage aren't a faster way to accomplish the same tasks, but as Team noted, given a choice, a lot of people opt for mobility over raw compute power.
  • nerd1 - Tuesday, November 13, 2018 - link

    It's a big joke to use XPS or Macbook GPU to do anything intensive. It's good for remote code editing though (except macbooks with absolute terrible keyboard)
  • TEAMSWITCHER - Tuesday, November 13, 2018 - link

    Define "intensive." Our software does real-time (WebGL) and photo-realistic (ray-tracing) rendering. I suppose that a Path Tracing engine would be MORE intensive. But the goal of our software is to be as ubiquitous as possible. We support the iPad and some Android tablets.
  • linuxgeex - Wednesday, November 14, 2018 - link

    There's your answer: anything that runs on iPad and Android Tablets is not "intensive". I'll grant you that it's "intensive" compared to what we were doing on workstations a decade ago, and mobile is closing the gap... but a workstation today has 24-56 cores (not threads) at 5Ghz and dual NVidia 2080 GPUs. You can get a 12-core CPU and dual 1080 in the pinnacle gaming laptops but they don't have ECC or the certifications of a workstation. At best they have half to 2/3 the performance. If you're paying your engineers by the hour you don't want them sitting on their hands twice as long. But I can see how they might make that choice for themselves. You make an excellent point there, lol.
