8

I am approaching that time when my main PC has slowed enough, due to Windows rot, that I should probably reinstall. Instead of spending a week arm-wrestling with reinstalling and configuring, only to find that I forgot to back up my Visual Studio settings yet again, I'm going to upgrade. I've already purchased all the parts and am in the process of benchmarking to find the best configuration.

I bought two Crucial M4 120GB SSDs (updated to latest firmware as of April '12: 00F) and have been running some quick benchmarks using CrystalDiskMark. Here are some results:

Single drive:

[CrystalDiskMark screenshot: single-drive performance]

RAID 0 via Intel Z77 chipset controller:

[CrystalDiskMark screenshot: RAID 0 performance]

These benchmarks are obviously not exhaustive, but I think they give me a good idea of what to expect between various configurations.

My understanding is that for my most common usage pattern, development with Visual Studio, 4K reads and writes far outnumber large sequential transfers, even during builds. In the 4K results there's little difference between RAID 0 and a single drive, but the 512K and sequential results differ enough to merit attention.

The thing is, in order to avoid future rot issues - and, quite frankly, because I can - I'm going to be relying a lot more on virtualization. My plan is to segment different parts of my development environment into virtual machines using VMware Workstation: Visual Studio and accompanying tools on one, SQL Server on another, Adobe Design Suite on yet another, etc. By taking advantage of VM snapshots and the ease with which new ones can be created or cloned, I believe I'll see an improvement in long-term reliability (and only ever see Adobe update pop-ups when I want to).

So, my question is: does virtualization justify a RAID 0 SSD configuration over a traditional setup (in my case, OS and bare-metal apps on one SSD, VMs on the other)? Will virtualization take advantage of the 512K and sequential R/W strengths of RAID 0?


An observation:

I've read that some modern SSDs are capable of managing garbage collection on their own, so the lack of TRIM is less of an issue. However, I don't know how to enable this on my SSDs, or even check whether they support it.
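
For what it's worth, here's a minimal sketch (Python, wrapping the standard Windows fsutil command) to check the OS side of the equation. It only reports whether Windows itself is issuing TRIM; it says nothing about the drive's internal garbage collection (that part is firmware-controlled) or about whether TRIM actually survives the RAID layer.

    import subprocess

    def windows_trim_enabled() -> bool:
        # "DisableDeleteNotify = 0" means Windows issues TRIM commands;
        # "= 1" means it does not. May require an elevated prompt.
        output = subprocess.run(
            ["fsutil", "behavior", "query", "DisableDeleteNotify"],
            capture_output=True, text=True, check=True,
        ).stdout
        return "= 0" in output

    if __name__ == "__main__":
        print("OS-level TRIM enabled:", windows_trim_enabled())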


Edit:

Regarding disaster recovery, this system also has large standard platter drives for file storage and a secondary RAID controller that I'll eventually use in a mirrored array. Combined with nightly local backups, constant off-site backup via Carbonite and consistent off-site source control check-ins, I've got a sufficient means of preventing data loss.

3
  • 1
    I'd steer clear of RAID 0 without an exhaustive disaster recovery plan, but virtualisation will definitely take advantage of the improved speed from SSDs in that configuration.
    – user3463
    Commented Apr 25, 2012 at 22:59
  • Good point. I forgot to mention I've also got some standard drives on board and I've got multi-headed on-site and off-site backups. So I'm not completely screwed if a drive dies, but that's a good point because I forgot that if one drive in a RAID 0 array goes, all data is lost.
    – Chad Levy
    Commented Apr 26, 2012 at 0:13
  • 1
    Another possible problem that may merit some attention: Running RAID may require all I/O to be in units of a particular stripe size. VMs are probably not going to be doing I/O aligned to these stripe sizes, and you'll cause massive write amplification that can significantly reduce your drives' lifetime.
    – afrazier
    Commented Jun 12, 2012 at 1:23

3 Answers

6

First, compiling code is known to be largely CPU bound, so don't expect improvements over a single SSD there.

In your benchmark, although the 4K performance at queue depth 1 does not increase, 4K at queue depth 32 (QD32) scales pretty much linearly. IMO, this result should drive your decision.

Even though workstations are not database servers with constantly large queue depths, queue depths of 2-20 are common, at least in bursts, during semi-intensive workstation usage - in which case random IO will indeed improve with RAID 0. Considering that VMs add a further layer of simultaneous OS activity (and you may end up running two or more VMs at once eventually), I would expect this metric to improve things in your scenario.

You can monitor your current queue depth in Windows' Performance Monitor (Add Counters... > PhysicalDisk > Avg. Disk Read/Write Queue Length) to get an idea.
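
If the Performance Monitor GUI is cumbersome, the same counters can be sampled from the command line. The rough Python sketch below just wraps the standard typeperf tool (the counter paths are the stock PhysicalDisk set; swap "_Total" for a specific disk instance if you want one drive). Run it while you build a solution or boot a VM to see what queue depths your workload actually reaches.

    import subprocess

    # Standard PhysicalDisk counters; replace "_Total" with a specific
    # instance (e.g. "0 C:") to watch a single disk.
    counters = [
        r"\PhysicalDisk(_Total)\Avg. Disk Read Queue Length",
        r"\PhysicalDisk(_Total)\Avg. Disk Write Queue Length",
    ]

    # Sample once per second for 30 seconds and print typeperf's CSV output.
    result = subprocess.run(
        ["typeperf", *counters, "-si", "1", "-sc", "30"],
        capture_output=True, text=True,
    )
    print(result.stdout)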

RAID does add a layer of complexity (and backup/recovery issues), but having a single larger 240 GB volume is definitely a plus IMO. Do verify whether TRIM works through RAID with your drives before committing - it could be a show-stopper.

2
  • Great info, thanks. Regarding TRIM support - I think TRIM is working, or at the very least it's enabled according to fsutil. Would Windows enable TRIM if the commands weren't being passed through the RAID interface?
    – Chad Levy
    Commented Apr 26, 2012 at 2:22
  • 1
    I'm not too familiar (I only have 1 SSD here), but since RAID controllers don't pass TRIM through, I believe you won't see a reference to it anywhere. In other words, TRIM would effectively be disabled, so you must have confidence that your SSD's firmware is doing decent garbage collection internally (don't quote me on that, though). Note that Intel RST drivers are expected to support TRIM on RAID sometime in 2012, if that's what you're using. Other controllers will likely follow.
    – mtone
    Commented Apr 26, 2012 at 2:33
3

Hard drive speed is important to overall Visual Studio performance. Scott Guthrie touches on it well in this post:

Multi-core CPUs on machines have gotten fast enough over the past few years that in most common application scenarios you don't usually end up blocking on available processor capacity in your machine.

When you are doing development with Visual Studio you end up reading/writing a lot of files, and spend a large amount of time doing disk I/O activity. Large projects and solutions might have hundreds (or thousands) of source files (including images, css, pages, user controls, etc). When you open a project Visual Studio needs to read and parse all source files in it so as to provide intellisense. When you are enlisted in source control and check out a file you are updating files and timestamps on disk. When you do a compilation of a solution, Visual Studio will check for updated assemblies from multiple disk path locations, write out multiple new assemblies to disk when the compilation is done, as well as persist .pdb debugger symbol files on disk with them (all as separate file save operations). When you attach a debugger to a process (the default behavior when you press F5 to run an application), Visual Studio then needs to search and load the debugger symbols of all assemblies and DLLs for the application so as to setup breakpoints.

In my personal experience, using an SSD has helped a lot, but given that a large amount of the disk I/O is probably small random reads, RAID 0 might not be a huge improvement. The other thing you might find is that virtual disk I/O eats up some of your gains.

3

Intel Z77 supports TRIM with RAID and is one of the first low-budget chipsets to do so, but you need to specify in the storage setup that you are using an SSD.
