The PCIe SSD revolution is upon us. So far nearly every controller vendor has shown off its PCIe SSD controller design, and the latest news I've heard is that we'll be seeing a large number of PCIe SSDs from numerous manufacturers in the second half of 2015 (watch out for Computex and Flash Memory Summit). Samsung got a head start in 2013 with the introduction of the XP941, and to date the company is still the only manufacturer shipping a PCIe 2.0 x4 client SSD in volume. There are a couple of Marvell based PCIe 2.0 x2 products on the market from SanDisk and Plextor, but none that can truly challenge the XP941 in performance.

Despite being an OEM-only product, the XP941 has been relatively popular among enthusiasts. The performance upgrade over a SATA 6Gbps drive is significant enough that it has been worth the premium for users with IO-intensive workloads. The truth is that SATA 6Gbps has been saturated for quite some time already, so PCIe, namely the XP941, has been the only way to improve single-drive IO performance affordably (faster PCIe SSDs exist, but due to their enterprise focus the prices make them unreachable for the majority).

Today we have the successor to the XP941 in the house. The SM951 made its first appearance at the Samsung SSD Global Summit in July 2014, where it was touted as the first client SSD with NVMe support. Unfortunately, Samsung changed its initial plans and the SM951 as it's known today does not support NVMe, but it still provides an upgrade from PCIe Gen2 to Gen3, which theoretically doubles the available bandwidth. Samsung is tight-lipped about the reasoning behind the decision to drop NVMe support, but from what I understand the current chipsets don't have proper NVMe support by default. It's likely that Samsung's PC OEM partners wanted to stick with the known AHCI command set for improved compatibility, so Samsung decided to push the introduction of its client NVMe SSDs a bit further back.

Some motherboard manufacturers have gone through the extra steps to update their BIOSes with NVMe support, but I haven't been able to get a detailed answer as to what exactly needs to be changed to enable NVMe on current chipsets. In any case, the SM951 is not NVMe enabled and will not gain NVMe support later either, so for now there isn't a single client-oriented NVMe SSD. Samsung has, however, stated that the company is working on client NVMe SSDs, which means we may see one soon after all.

The SM951 is an OEM-only product just like its predecessor. Currently the drive is not yet available through the retail channel, but Lenovo uses the drive in its ThinkPad X1 Carbon laptop, which is how we got our early review sample. The drive will be available through RamCity within the next few months, and the latest I've heard is that the first batch should be delivered in late May. I was told the pricing will be about 10% higher than what the XP941 currently sells for, which would translate to a bit over a dollar per gigabyte. The pricing will ultimately depend on Samsung's production capacity and demand, so it's too early to quote any exact prices, and I will provide an update once the SM951 is available for order and the final pricing is out. Furthermore, since our sample came through Lenovo and carries a Lenovo specific firmware, I will also be reviewing the 'vanilla' version from RamCity to make sure our results reflect the model that is available for purchase.

Samsung SM951 Specifications

Capacity                   128GB       256GB       512GB
Form Factor                M.2 2280 (double-sided)
Controller                 Samsung S4LN058A01 (PCIe 3.0 x4 AHCI)
NAND                       Samsung 19nm 64Gbit MLC
Sequential Read            2,000MB/s   2,150MB/s   2,150MB/s
Sequential Write           600MB/s     1,200MB/s   1,500MB/s
4KB Random Read            90K IOPS    90K IOPS    90K IOPS
4KB Random Write           70K IOPS    70K IOPS    70K IOPS
L1.2 Power Consumption     2mW         2mW         2mW
Idle Power Consumption     50mW
Active Power Consumption   6.5W
Encryption                 N/A

Similar to the XP941, the SM951 comes in the M.2 2280 form factor and is available in capacities of 128GB, 256GB and 512GB. The lack of a 1TB model is another change from the original product plan, but it's entirely possible that a 1TB SKU will follow later. Performance wise, Samsung claims up to 2.15GB/s sequential read and 1.5GB/s sequential write, which is nearly twice the throughput of the XP941. However, that is nowhere near the maximum bandwidth of the PCIe 3.0 x4 bus, which should be about 3.2GB/s usable (even after the 128b/130b encoding used by PCIe 3.0, the link only delivers roughly 80% of its raw bandwidth once packet and protocol overhead is accounted for).
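As a sanity check on that ~3.2GB/s figure, here's a quick back-of-the-envelope calculation. The 8 GT/s signaling rate and 128b/130b encoding come from the PCIe 3.0 spec; the ~80% protocol-efficiency factor is a rough rule of thumb, not an exact spec value:

```python
# Back-of-the-envelope check of the ~3.2GB/s usable-bandwidth figure
# for a PCIe 3.0 x4 link.
GT_PER_S = 8e9              # PCIe 3.0: 8 gigatransfers/s per lane
LANES = 4
ENCODING = 128 / 130        # 128b/130b line coding (~1.5% overhead)
PROTOCOL_EFFICIENCY = 0.80  # rough allowance for packet/protocol overhead

raw_bps = GT_PER_S * LANES * ENCODING / 8   # bytes per second on the wire
usable_bps = raw_bps * PROTOCOL_EFFICIENCY

print(f"raw: {raw_bps / 1e9:.2f} GB/s")       # ~3.94 GB/s
print(f"usable: {usable_bps / 1e9:.2f} GB/s") # ~3.15 GB/s
```

With those assumptions the usable figure lands right around 3.2GB/s, so the SM951's 2.15GB/s claim leaves a fair amount of the link unused.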

Because the SM951 is an OEM product, the warranty and endurance limitation are specified by the reseller instead of Samsung. We will know more when the drive is available, but I would expect RamCity to offer the same three-year warranty and 72TB endurance as it does for the XP941. 

In addition to PCIe 3.0, the SM951 adds support for the PCIe L1.2 power state. That is essentially a PCIe version of DevSleep (but not limited to just storage devices), and it allows for power consumption as low as 10µW per lane. In the case of the SM951 the L1.2 power consumption is 2mW, which translates to 500µW per lane, so there seems to be room for further improvement, but the important news is that the L1.2 power state brings the slumber power consumption to the same level as DevSleep. L1.2 also has a much lower exit latency (i.e. how long it takes for the drive to be fully powered on again) at 70µs, whereas the DevSleep requirement is 20ms.
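The per-lane figure and the latency gap above work out as follows (the 2mW and 70µs numbers are from the SM951's spec sheet; the 10µW and 20ms figures are the respective spec limits):

```python
# Per-lane L1.2 power for the SM951, versus the ~10 µW/lane the
# PCIe L1.2 substate makes possible.
L12_TOTAL_W = 2e-3   # SM951 rated L1.2 power for the whole device
LANES = 4

per_lane_uw = L12_TOTAL_W / LANES * 1e6
print(f"L1.2 power per lane: {per_lane_uw:.0f} µW")  # 500 µW/lane

# Exit latency: L1.2 resumes in 70 µs vs. DevSleep's 20 ms requirement.
L12_EXIT_S = 70e-6
DEVSLEEP_EXIT_S = 20e-3
print(f"L1.2 exits ~{DEVSLEEP_EXIT_S / L12_EXIT_S:.0f}x faster")
```

In other words, the drive sits 50x above the per-lane floor the substate allows, while waking up well over two orders of magnitude faster than DevSleep requires.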

Surprisingly, the SM951 doesn't make the transition to 3D V-NAND like the rest of Samsung's SSDs we've seen lately. It's still utilizing planar NAND, which I believe is the same 19nm 64Gbit MLC NAND as in the XP941. There have been some reports claiming that the SM951 uses 16nm NAND, based on the change in the generation character of the part number (i.e. the last character, which was C in the XP941), but because the capacity per die is 64Gbit I'm very doubtful that the process node has changed. It wouldn't make sense to build a 64Gbit die on a 16nm process because the peripheral circuitry does not scale as well as the memory array does, which would result in very low array efficiency. In other words, a 128Gbit die at 16nm would be substantially more economical than 64Gbit, hence I believe that the NAND in the SM951 is merely a second iteration of the 19nm 64Gbit die. Besides, Samsung already has 3D NAND technology and is pushing it very aggressively, so investing in a new planar NAND node wouldn't be too logical either.
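To illustrate the array-efficiency argument, here's a toy model with entirely hypothetical area numbers (real die floorplans are not public): the periphery (charge pumps, page buffers, IO) occupies a roughly fixed area per die, so doubling the array capacity at the same node raises the fraction of the die that actually stores bits.

```python
# Toy model of NAND array efficiency. The area constants below are
# made-up illustrative values, NOT real Samsung die measurements.
PERIPHERY_MM2 = 20.0        # assumed fixed peripheral circuitry area
ARRAY_MM2_PER_GBIT = 0.5    # assumed memory array area per Gbit

def array_efficiency(gbits):
    """Fraction of total die area occupied by the memory array."""
    array = ARRAY_MM2_PER_GBIT * gbits
    return array / (array + PERIPHERY_MM2)

print(f"64Gbit die:  {array_efficiency(64):.1%}")
print(f"128Gbit die: {array_efficiency(128):.1%}")
```

Whatever the exact constants, the 128Gbit die always wins on efficiency, which is why a new node would more plausibly ship with a larger die rather than a second 64Gbit design.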

Bootable? Yes

When the XP941 was released, the number one issue with the drive was the lack of boot support. Because the XP941 was never designed for retail, it doesn't carry an option ROM of its own (the legacy driver that loads during POST and would enable boot support on any system), and hence the XP941 required a BIOS update from the motherboard manufacturer in order to be used as a boot drive. To date, most Z97 and X99 based motherboards have a BIOS that supports booting from the XP941 (RamCity has an extensive list on the topic), although unfortunately AMD and older Intel chipsets are not supported.

I can confirm that the SM951 is also bootable on a supported motherboard. I tested this with an ASUS Z97 Deluxe using the latest 2205 BIOS and the SM951 shows up like any other drive in the boot menu. I suspect that any motherboard that can boot from the XP941 will also work with the SM951, but obviously I can't guarantee that at this point.

I also verified that the SM951 is bootable in tower Mac Pros (2012 and earlier).

AnandTech 2015 Client SSD Suite


  • Railgun - Wednesday, February 25, 2015 - link

    Boot times are irrelevant because there, too, several variables are involved. BIOS or UEFI? The hardware involved. Other applications involved. And in the grand scheme of things, it's a one-and-done thing. If someone is so concerned about booting taking 5 seconds over 30, one can assume they'd leave the thing on. It's an irrelevant metric. Installing an OS is again an irrelevant metric due to the hardware involved once again. I've never understood the fascination over boot times.
  • Edgar_in_Indy - Wednesday, February 25, 2015 - link

    Boot times and OS installation times (or game installation times, if it makes you feel better) would be interesting because they would be direct reflections of how a drive's theoretical speed is manifest in real world situations.

    I'm not really sure what your point is, and what you're arguing for. Why would you *not* want a few basic, real world metrics added to the other measurements? *Of course* the test system isn't going to be the same as every user's system. So what? We should still be able to glean some useful information about a drive's relative performance to other drives.

    Besides, I have seen some situations where the synthetic tests didn't look great for a particular drive, but in the real world tests it fared much better. This is what led me to choose the Crucial M4 when I was shopping for a 256GB SSD a couple years ago. It wasn't the darling of the synthetic tests, but in the real world scenarios it was right up there with the best of them. It did particularly well in Anand's "Light Workload" test, which seemed much more typical of real-world use than the stress-test type scenarios.

    And in regards to the "fascination" with boot times, I think that almost everybody prefers a computer that starts more quickly. I've been around since the DOS days, and that was the last time that I had a computer that booted in seconds, until recently. So having SSD's that can do the same is pretty dang cool to me.

    I guess you could also ask a car guy why they have a "fascination" with 0-60 times. I mean, how often will someone really need to get to 60 mph in 5.2 seconds? It's ridiculous to think that somebody would be so worried about 0-60 times, unless they're an Indy Car driver or something.

    And besides, there are too many variables (weather, humidity, altitude, tire conditions, driver skill, etc. etc.) to get a definitive 0-60 times, so we may as well junk the whole idea, and just assume that the $60,000 sports car is faster than the $30,000 sports car just because it puts up better numbers on the dyno or has a larger displacement engine.

    But I really can't believe I'm having to explain this...
  • Railgun - Thursday, February 26, 2015 - link

    I can't believe you had to explain that as well. You said you want real world tests, which already exist and which you referred to during your selection of the M4. How does loading an OS by itself reflect anything? A plain vanilla install is a case that everyone will have only once during the lifecycle of that particular install. You allude to the light workload being more typical, which in itself is more than fine. If you've looked at what's included in the test suites, in particular The Destroyer, you'll see that what can be considered normal usage tasks are there.
    -Download/install games, play games,
    -Copy and watch movies,
    -Browse the web, manage local email, copy files, encrypt/decrypt files, backup system, download content, virus/malware scan.
    How are those not real world tests? They're not synthetic tests. What was your real world scenario that showed the M4 was better than whatever you were comparing it to? Why is that worse, or different than a boot speed test? I too don't hold a lot of value in the synthetic marks. As you mention, it's more for bragging rights than anything else.

    I too remember the DOS days. Compared to that, there is no comparison, as DOS compared to Win7 is like comparing a Yugo to a Ferrari. They're both cars and get you from point A to B, but one is so much more than the other. Yes, they're both operating systems, but one has so much more to it and is more complicated than the other. DOS 1.0 was about 4000 lines of code. Win7 is around 40 million. What about a nice stripped-down Linux build? That will load faster. What about Mac OS? Throw in a RAID controller and boot times get tossed out the window.

    I don't think anyone has been missing any boot time metrics in the history of testing drives, whether SSD or otherwise. I've not seen one single review anywhere that shows boot times. The ONLY time I've ever seen it was initial comparison between an HDD and SSD. It's a moot point. You know it will be quick. Kristian's point is dead on. We're in the realm of possible milliseconds here. There's no point for the metric.

    BTW, I am a car guy, and while 0-60 is all great, I'm more interested in the whole package. Handling, build quality, design, etc. :)
  • Edgar_in_Indy - Thursday, February 26, 2015 - link

    I agree with most everything you said, but I would go back to my original gripe that there was no stopwatch involved. Data rates are great, and let's definitely keep them coming, but I would simply like to see some timed tests too. And even if the timed tests show little or no difference, then that is also a very valuable piece of information.

    My basic gripe with the article is that it does not clearly answer the question "Should I or shouldn't I?" Sure, some of the graphs are dramatic, but how much will they be manifest in the real world? I think the answer to "Should I buy it?" should be the payoff for reading a big review like this.

    And if we can now say that we've reached the point where the real-world difference between SSD's for 99.99% of users is negligible, then I guess it begs the question whether these types of in-depth articles are worth writing, and worth reading, for very much farther into the future.

    Kind of like how in-depth sound card reviews have mostly gone away, since we've reached the point where they just work without drawing attention to themselves. Unless you are in the tiny percentile where your occupation relies on having the best soundcard with very specific features, then you don't have to worry about it. Like I said, I came from the DOS days, and for many years soundcards were one of the hottest topics in PC hardware. Now they're pretty much a non-issue.

    To draw a non-computer parallel, I'm sure an engineer could also write a 7,000-word review of a particular garbage disposal, going into great detail on every aspect of how the unit is built, but it would be total overkill, because people basically just want to know if it works or not. If SSD's are reaching a level of near-parity, then how many people will want to wade through all the background information in minute detail?

    This has been a very informative discussion for me, and in a way it's refreshing to know that I no longer need to sweat about choosing an SSD in the future. That also means that I will be very unlikely to click articles or visit sites that are focused on SSD performance.
  • Railgun - Thursday, February 26, 2015 - link

    I think you will find (and Kristian, correct me if I'm wrong) that native NVMe drives will increase perceived responsiveness, as NVMe allows for fully simultaneous read/write IOPS as opposed to unidirectional operations.

    That should show a nice increase in some scenarios.
  • Kristian Vättö - Wednesday, February 25, 2015 - link

    I find that it's a waste of time to run tests that show the obvious, which in this case is that boot and application launch times are the same for all drives. Like I said, it's starting to become common knowledge that for basic workloads there's no difference between SSDs, and I've never argued against that.

    If I did real world testing, I would like to do it right. This means more than timing the boot time and how many tenths of a second it takes to launch a certain app. Frankly that has no value when you consider a power user's workload with dozens of apps already open, of which some might be rather IO intensive (like running a VM).

    In such scenarios it can be hard to time the absolute difference because we are talking about stuttering and not seconds long wait times, but it's something that many certainly don't want to experience.

    That said, I will probably craft something basic (boot, app and installation times) to show whether PCIe/NVMe has any relevance in basic IO workloads, but it's not something that I'm looking to make a part of our regular test suite since I don't think it gives an accurate picture of actual real world performance under multitasking workloads.
  • Edgar_in_Indy - Wednesday, February 25, 2015 - link

    So would you say we're reaching the point where having the "fastest" SSD is really mostly about bragging rights?

    If that's the case, then it seems like the two most important specifications of an SSD would be size and price (much like it is for platter HDD's now). It would certainly make shopping for an SSD much simpler, if relative speed is no longer a meaningful factor.
  • Kristian Vättö - Wednesday, February 25, 2015 - link

    Yes, and I don't think I've been trying to shovel the high-end SSDs down people's throats.

    I think the SSD market mainly consists of two segments now, which are the mainstream and enthusiast/professional segments. For the mainstream segment, any modern SSD is good enough, which is why $/GB has been the dominating factor when I recommend drives for that market (and that's why the MX100 has been my recommendation for quite some time now if you've seen our "Best SSDs" articles).

    The high-end sector is different in the sense that the users tend to want the best performance they can get. In some cases it's just for the bragging rights, but there are also workloads where SSD performance really matters (multiple VMs, photo/video/audio editing, etc). Some of our tests are more geared towards these users and I think we've been pretty clear about that, but as you said the Light workload test does a good job of illustrating average consumer usage and frankly the difference between drives in that test is rather small.

    My goal has never been to push people to buy "faster" drives than they need and if some of my writings have come across as that then please, give me some examples and I'll try to learn from those.
  • Edgar_in_Indy - Wednesday, February 25, 2015 - link

    No, I'm not trying to say that you've been pushing people to faster or more expensive SSD's. And even if you were, I probably wouldn't know, since I don't read all the SSD articles on here and follow all the developments. I mostly just jump in every year or two when I'm shopping for upgrades, and I try to play catch-up at that point in order to make sure I'm spending my money as wisely as possible.

    So for someone like me, who doesn't follow this stuff religiously, it's good to know I don't need to worry about missing out on big speed gains by not getting the hottest SSD of the moment next time I want to upgrade.

    That being said, I'm still a little bit of a performance enthusiast, so I can't help but be curious when something like this comes along and shows the potential for big improvements over previous designs. That's why I was a little disappointed to not find much in the way of real-world results.

    Anyway, it's obvious you put a lot of time and effort into this review, and the some of the performance results really were dramatic, so this is some good work.
  • Redstorm - Tuesday, February 24, 2015 - link

    Why, when updating the storage bench system, did you pick a motherboard without an M.2 x4 PCIe 3.0 slot? The ASUS Z97 Deluxe only provides two PCIe 2.0 lanes to the onboard M.2 slot. Seems a bit short-sighted with the impending avalanche of x4 PCIe 3.0 SSD controllers coming out. Your new bench system is obsolete before it began. Using PCIe adapters is old school.
