Battlefield 4

Kicking off our benchmark suite is Battlefield 4, DICE’s 2013 multiplayer military shooter. After a rocky start, Battlefield 4 has since become a challenging game in its own right and a showcase title for low-level graphics APIs. As these benchmarks are from the single player mode, our rule of thumb, based on past experience, is that multiplayer framerates will dip to roughly half of single player framerates, which means a card needs to be able to average at least 60fps here if it’s to hold up in multiplayer.

Battlefield 4 - 2560x1440 - Ultra Quality

Battlefield 4 - 1920x1080 - Ultra Quality

Though not doing poorly, Battlefield 4 has not been a game that AMD’s products have excelled at lately. Case in point: at 1080p even the reference-clocked R9 380X can’t unseat the GeForce GTX 960; it takes the ASUS factory overclock to do that. Overall the 380X is on average 10% faster than the GTX 960, but as we work through our games we’ll see that it won’t take the top spot in every single one, so this will not be a clean sweep.

Meanwhile Battlefield 4 is a good example of why AMD wishes to focus on 1440p, despite the fact that Tonga comes up a bit short in overall performance there. As we’ve seen time and time again, AMD’s performance hit from resolution increases is smaller than NVIDIA’s, so a loss for the R9 380X at 1080p becomes a win at 1440p. There are a few cases where the R9 380X is fast enough for 1440p, but by and large you’d have to take a quality hit to reach the necessary performance. So unfortunately for AMD, the bulk of the focus on the R9 380X is going to be at 1080p.

As for comparisons with past cards, we’ve gone ahead and thrown in the Radeon HD 7850 and the GeForce GTX 660, 2GB cards that launched at $249 and $229 respectively in 2012. Part of AMD’s marketing focus for the R9 380X will be as an upgrade for early 28nm cards, and here the R9 380X is a significant step up. Between the greater shader/ROP throughput, greater memory bandwidth, and doubled memory capacity, the R9 380X is around 82% faster than the 7850, which is traditionally around the point where many gamers start looking for an upgrade.

Finally, at the other end of the spectrum, it’s worth pointing out just how far ahead of the R9 380X the R9 390 and GTX 970 are. In the introduction we called them spoilers, and this is exactly why. They cost more, but the performance advantage of the next step up is quite significant.


  • CaedenV - Monday, November 23, 2015 - link

    My guess is that these cards are factory OC'd, which means that they would need to be underclocked to run an apples-to-apples comparison at true 'stock' settings.
  • Zeus Hai - Monday, November 23, 2015 - link

    Can anyone confirm that AMD's Frame Limiter still doesn't work on Windows 10?
  • nathanddrews - Monday, November 23, 2015 - link

    That's news to me.

    Just for you, I tested it using my i3-2100/HD7750/W10 test mule. VSync globally disabled in CCC, VSync disabled in Dota 2, Frame Target set to 60fps. Steam overlay shows 60fps and I see no signs of tearing or stuttering. To my knowledge, it never stopped working.
  • Zeus Hai - Monday, November 23, 2015 - link

    Hmm, it should have some tearing because it doesn't really sync with the monitor anyway, mate. Can you set it to 65, 70, 75? Mine doesn't work in LoL; I set it to 60, but it always fires up to over 150fps
  • Dirk_Funk - Monday, November 23, 2015 - link

    LoL does have its own fps limiter, so perhaps that's causing a mix-up in the software. Also, LoL might be running in fake fullscreen mode whereas the catalyst fps limiter specifies it will "Reduce power consumption by running full-screen applications at reduced frame rates." I'm gonna go try a round of LoL now because you have me curious.
  • Asomething - Tuesday, November 24, 2015 - link

    Mine does; I was just benching my new 290x and forgot to turn it off, so my results were skewed by the 75fps frame cap I set.
  • nirolf - Monday, November 23, 2015 - link

    There's "ASUS R9 Fury OC" mentioned in the first table in the Overclocking section.
  • Shadowmaster625 - Monday, November 23, 2015 - link

    Tonga is an epic disaster. It is less than 10% more efficient than Tahiti in terms of performance per watt, and in terms of performance per transistor (fps per mm^2) it appears to be actually worse. Meanwhile, NVIDIA released Maxwell, which outperforms Kepler on both of these metrics not by some paltry 10% or less, but by a very wide margin.
  • CiccioB - Tuesday, November 24, 2015 - link

    The whole GCN architecture is a disaster.
    With the TeraScale architecture AMD could fight with smaller dies and fewer watts for somewhat less performance.
    With GCN AMD has to compete using larger, more power hungry dies, which has driven it into the red in the graphics division as well, whereas with the older TeraScale it could at least stay on par.
    GCN is an architecture that is not up to the competition's.
    FP64 presence is not the problem, as AMD has kept reducing its influence with every GCN step (starting from 1/4 FP rate and ending at 1/24 FP) with no real results in terms of power consumption. They could probably spare a few mm^2 on the die, but they are way too far behind on memory compression (I can't really believe they never thought about that), and their buses are way too big, expensive, and power hungry.
    The whole architecture is a fail. And DX12 is not going to solve anything: even if they ever raise their performance 10% over the competition, they are still way behind in efficiency, both in terms of watts and die size.
