
I'm banging my head against the wall trying to work something out with Windows Storage Spaces (WSS). Here's what I'm up against:

I have two 250GB drives in RAID1 (the OS drive), plus two 3TB drives and twelve 1TB drives, also in RAID1 pairs. I've configured a simple WSS pool that combines the 3TB and 1TB pairs into a single virtual disk of around 8.16TB. I set things up this way because I had a big stack of old 1TB drives and plan to slowly replace them with bigger drives as they fail - something I can't easily do with RAID5/6.

So, one of the 1TB drives has finally failed, which means that particular RAID1 unit is degraded, but the WSS pool is still healthy, as expected. Using Set-PhysicalDisk in PowerShell, I set the usage of the appropriate RAID1 pair to "Retired" and then repaired the WSS virtual disk. However, it hasn't moved any data off that RAID pair, and it prompts me to add another physical disk to the WSS virtual disk if I want to remove the degraded RAID unit - despite the fact that roughly 50% of the overall storage pool is free space.
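For reference, this is roughly the sequence I ran (the FriendlyName values below are placeholders - substitute the names reported by Get-PhysicalDisk on your own system):

```powershell
# List the pool's member disks with their current usage and health
Get-PhysicalDisk | Select-Object FriendlyName, Usage, HealthStatus

# Mark the degraded RAID1 pair as retired so no new data lands on it
Set-PhysicalDisk -FriendlyName "PhysicalDisk5" -Usage Retired

# Ask Storage Spaces to rebuild the virtual disk onto the remaining disks
Repair-VirtualDisk -FriendlyName "MySpace"

# Monitor any resulting repair/regeneration job
Get-StorageJob
```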

Is this a limitation of WSS that I was naively unaware of? It seems really simple and straightforward, like a defrag: if it's capable of moving the "retired" data to a new drive, why isn't it capable of moving it to an existing drive with more than enough free space? Particularly since the WSS GUI proudly states that a storage space can be provisioned to a larger size than the available storage.

I'm really hoping there's a nice easy way to do this. I don't have enough external storage to move everything off the virtual disk and rebuild it from scratch. My contingency is a PowerShell script that enumerates every file on the virtual disk and moves each one to the OS drive and back, one by one - the theory being that WSS won't write the "new" data back to the retired disk, so it will eventually be empty and removable.
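A minimal sketch of that contingency (paths are hypothetical, and the staging folder only ever holds one file at a time, so name collisions aren't an issue - but test on non-critical data first):

```powershell
# Bounce every file through the OS drive so the rewrite lands on
# whichever disks WSS currently allocates to - ideally not a retired one.
$source  = "D:\"            # root of the WSS virtual disk
$staging = "C:\wss-staging" # scratch folder on the OS drive
New-Item -ItemType Directory -Path $staging -Force | Out-Null

Get-ChildItem -Path $source -Recurse -File | ForEach-Object {
    $temp = Join-Path $staging $_.Name
    Move-Item -Path $_.FullName -Destination $temp   # off the space...
    Move-Item -Path $temp -Destination $_.FullName   # ...and back again
}
```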

Any suggestions?

  • I'm not experienced with WSS but it should be treating the pair of drives as one physical device. If you retire the pair, not just the failed disk, it may contract back into the 6 healthy pairs, then you could dump the dead one, make up a fresh RAID1 array with new drives and away you go. I guess.
    – Linef4ult
    Commented Oct 6, 2015 at 10:39
  • I've just made a correction to my original question, to clear up my terminology. "Physical disk" in this case is ambiguous - At the Windows layer each "physical disk" is a RAID1 pair, made up of actual physical disks.
    – Vocoder
    Commented Oct 6, 2015 at 12:04
  • Otherwise, what you're describing is exactly how I would expect things to work. However, because at the Windows layer the virtual disk is ostensibly a healthy array of disks, the Repair-VirtualDisk cmdlet does nothing. I'd have expected it to shuffle data off anything marked as "Retired".
    – Vocoder
    Commented Oct 6, 2015 at 12:36

1 Answer


Okay, so I can confirm that shuffling files off the WSS virtual disk and back again was a slow but effective way to move data off a retired drive - to a point. After moving everything off and back, the retired drive somehow still showed 22% in use.

At that point, I split up my RAID1 pairs, running them in a degraded state, and provisioned enough single drives to move the data off entirely. Even then, my newly emptied drive still showed 1.8TB in use on the WSS management console (and 22% on the retired drive). The only thing I left behind was the System Volume Information folder, so I can only assume that WSS doesn't clean up after itself until it overwrites the junk slabs.
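For anyone trying to reproduce this, the per-disk figures I'm quoting came from the pool's allocation counters, which you can query directly (property names as exposed by the Storage module):

```powershell
# Show how much of each physical disk the pool still has allocated,
# independent of what the file system reports as used space
Get-PhysicalDisk | Select-Object FriendlyName, Usage, Size, AllocatedSize
```

AllocatedSize is what stays stubbornly non-zero here: it tracks slabs the pool has handed out, not live file data, which is consistent with the theory that WSS only reclaims them lazily.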

So I'm moving in a different direction, and trying out Drive Bender. I'm just glad I had plenty of (non-resilient) space to mess around with.

