AnandTech Storage Bench - The Destroyer

The Destroyer is an extremely long test replicating the access patterns of very IO-intensive desktop usage. A detailed breakdown can be found in this article. Like real-world usage, the drives do get the occasional break that allows for some background garbage collection and flushing caches, but those idle times are limited to 25ms so that it doesn't take all week to run the test. These AnandTech Storage Bench (ATSB) tests do not involve running the actual applications that generated the workloads, so the scores are relatively insensitive to changes in CPU performance and RAM from our new testbed, but the jump to a newer version of Windows and the newer storage drivers can have an impact.

We quantify performance on this test by reporting the drive's average data throughput, the average latency of the I/O operations, and the total energy used by the drive over the course of the test.
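These summary statistics are straightforward to derive from a recorded I/O trace. The sketch below is purely illustrative (it is not ATSB's actual tooling; the `summarize_trace` function and the sample trace are hypothetical), assuming each trace record carries the bytes transferred and the operation's completion latency:

```python
import statistics

def summarize_trace(ops, elapsed_s):
    """Summarize an I/O trace.

    ops: list of (bytes_transferred, latency_s) tuples, one per operation.
    elapsed_s: total wall-clock duration of the test run.
    """
    latencies = sorted(lat for _, lat in ops)
    total_bytes = sum(size for size, _ in ops)
    # 99th percentile via the nearest-rank method: the latency at
    # rank ceil(0.99 * n), i.e. the value 99% of operations beat or match.
    idx = max(0, int(len(latencies) * 0.99) - 1)
    return {
        "avg_data_rate_MBps": total_bytes / elapsed_s / 1e6,
        "avg_latency_ms": statistics.mean(latencies) * 1e3,
        "p99_latency_ms": latencies[idx] * 1e3,
    }

# Hypothetical trace: 99 fast 1 MB operations plus one slow outlier,
# completed over one second of wall-clock time.
ops = [(1_000_000, 0.0005)] * 99 + [(1_000_000, 0.010)]
print(summarize_trace(ops, elapsed_s=1.0))
```

Note how the single 10 ms outlier barely moves the average latency but is exactly what the 99th percentile figure is designed to expose, which is why both numbers are reported below.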

ATSB - The Destroyer (Data Rate)

The Crucial BX300 is tied for the second-fastest average data rate on The Destroyer among SATA drives. The BX300's performance falls between the Samsung 850 EVO and 850 PRO, and matches the Intel 545s, which uses a newer generation of 3D NAND and a newer SSD controller.

ATSB - The Destroyer (Average Latency)

ATSB - The Destroyer (99th Percentile Latency)

The BX300's latency during The Destroyer is best in class, with both average and 99th percentile latencies at the top of the chart.

ATSB - The Destroyer (Average Read Latency)

ATSB - The Destroyer (Average Write Latency)

Breaking the average latency score down into read and write operations, we find the BX300 in second place for each subscore, but with a different drive in first place each time: the Samsung 850 PRO beats the BX300's average read latency, while the Crucial MX200 beats its average write latency.

ATSB - The Destroyer (99th Percentile Read Latency)

ATSB - The Destroyer (99th Percentile Write Latency)

The Crucial BX300 does a great job keeping read latency low throughout The Destroyer, with the lowest 99th percentile read latency out of this bunch of drives. By contrast, its 99th percentile write latency only ranks third, behind the Intel 545s and Samsung 850 PRO. The MX300's 99th percentile write latency is moderately worse than the BX300's, but its 99th percentile read latency is almost twice as high.

ATSB - The Destroyer (Power)

The BX300 further improves on the power efficiency of the MX300, but not enough to match the Intel 545s that benefits both from a newer Silicon Motion controller and from newer 64L 3D NAND.

Comments

  • MrSpadge - Tuesday, August 29, 2017 - link

    A budget drive with budget price, without any real weakness - well done!
  • nwarawa - Tuesday, August 29, 2017 - link

Does this thing still have partial power loss protection? I don't see much in the way of capacitors in the images, at least compared to the M500 up through the MX300.
  • Ryan Smith - Wednesday, August 30, 2017 - link

    No, it does not. The BX series always omits that feature.
  • nwarawa - Wednesday, August 30, 2017 - link

    "The BX series always omits that feature."

    Incorrect. The BX100 most definitely did. I even confirmed with Crucial themselves.
  • Samus - Sunday, September 3, 2017 - link

    BX100 PCB:

    No power loss protection.

    BX series has never offered it. If Micron/Crucial said otherwise, they lied.
  • Samus - Sunday, September 3, 2017 - link

    Here is a high-res shot from AT:

Kristian seems to believe, in that review, that there are enough caps to drive 8 NAND dies, a 1.35V DDR3 DRAM package, and the SMI controller for 200µs.

As an engineer, even without measuring the capacitance of the tiny inlays on that PCB, it's visually clear this is physically impossible. Compare it to the PCB of the MX100, which has a dedicated PLP circuit and rows of caps: no matter how much more power-efficient the BX100 design is than the MX100, the level of PLP is going to be entirely different. Which leads me to this thread:

    This thread has a good definition of "power loss protection" on the BX100:

    Basically, it's discussed that about 2-4MB of the indirection table cache (which is write-thru to the NAND by design) can be protected by the design. In other words, insignificant and irrelevant. This is why PLP was never marketed for the BX100. It's useless. Most non-enterprise implementations are.
  • nwarawa - Tuesday, September 12, 2017 - link

I wouldn't call partial PLP "useless". Old SSDs wouldn't just lose SOME data; they would often lose ALL data. It would be nice to see an updated version of this test from years ago:

The M4 didn't have partial PLP, so it would be interesting to see how much of an improvement the M500 with its partial PLP made. For that matter, some Phison S10 drives and Samsung's last few years of models mention some form of firmware-based PLP... so how effective are they?

    Anyone want to start a GoFundMe for this guy to run some updated tests?
  • nwarawa - Tuesday, September 12, 2017 - link

    Update: I reached out to lkcl to see if he's interested in continuing the testing, and if GoFundMe would work for him. I said I would chip in $10-$20 to see some updated test results. Anyone else interested in these tests?
  • nwarawa - Tuesday, September 12, 2017 - link

Samus, you didn't read carefully enough. It's not a question of whether it has FULL power loss protection, but PARTIAL power loss protection. You can read AnandTech's review of the BX100 for more information on what that entails. The very link you posted shows the little capacitors that are sufficient for the PARTIAL power loss protection. The reason this was even brought up is that there seem to be fewer of those capacitors on the BX300, which raised doubt as to whether the feature was still included. I was just in a convo with Crucial directly, and they confirmed that the BX300 does indeed still have partial PLP.
  • FunBunny2 - Tuesday, August 29, 2017 - link

    when 3D NAND was first proposed, durability was supposed to improve because such devices could/would be built on larger nm nodes. has that actually happened? what node(s) are being used for 32/64L?
