
I have an Adaptec 81605ZQ RAID Controller currently connected to:

  • 2 x 8 TB HGST HDD in RAID-1
  • 2 x 1 TB Samsung 850 Pro SSD in RAID-0 (separate array)
  • 1 x 128 GB Samsung 850 Pro SSD as "MaxCache" (read cache for RAID-1 HDD array)

On Windows 10, with the latest Adaptec firmware and drivers, I get long hangs (30-50 seconds on average) whenever writes are pegged on the HDD array. It's much worse during simultaneous read/write scenarios, where some process is doing significant reads while writes are going on; often there is literally no disk I/O for more than 30 seconds at a stretch.
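To put numbers on the hangs, a minimal probe along these lines (my own sketch; the target path and sizes are arbitrary, just point it at the RAID-1 volume) makes the stalls show up as measurable pauses:

```python
# Minimal probe: time fsync'd writes so a 30-50 s hang shows up as a number.
# TARGET is a hypothetical path on the HDD array; adjust before running.
import os
import time

TARGET = r"D:\stall_probe.bin"      # hypothetical path on the RAID-1 volume
CHUNK = b"\0" * (4 * 1024 * 1024)   # 4 MiB per write

with open(TARGET, "wb") as f:
    for i in range(2048):           # ~8 GiB total
        t0 = time.monotonic()
        f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())        # push the data through the OS cache
        dt = time.monotonic() - t0
        if dt > 1.0:                # anything over a second counts as a stall
            print(f"chunk {i}: write+fsync took {dt:.1f} s")

os.remove(TARGET)
```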

I'd like to drop the RAID controller (to troubleshoot, and because it's probably not necessary at all with a 6700K) and just use software RAID. Does Windows support mounting the Adaptec on-disk RAID format in software (without the RAID controller)? Or would it be better to shrink the RAID array to one HDD (no redundancy) and do a block-level copy of the array into the plain (non-RAID) disk? I want to avoid buying any more disks for this if possible.

Also, what do I do about the RAID-0 array? I'd need another 2 TB of space somewhere if I can't do an in-place mount of the Adaptec disk format in Windows.

I either want to (1) mount the RAID arrays directly without the RAID controller (connected to the SATA chipset of the motherboard), or (2) copy the data "in-place" (without buying more hardware) and reinitialize the array as some other, software RAID format.
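If option (2) is the way to go, I assume the block-level copy part boils down to something like the sketch below (hypothetical drive numbers, run from an elevated prompt, and the target disk's volumes have to be taken offline first; a dedicated imaging tool is obviously the safer choice):

```python
# Sketch of a raw block-level copy from the remaining array member to a
# plain disk. Drive numbers are hypothetical; verify them in Disk Management.
# Any Adaptec metadata on the source disk gets copied verbatim as well.
import sys

SOURCE = r"\\.\PhysicalDrive1"   # remaining RAID member (hypothetical)
DEST = r"\\.\PhysicalDrive2"     # plain target disk (hypothetical)
BLOCK = 8 * 1024 * 1024          # 8 MiB, a multiple of the sector size

with open(SOURCE, "rb", buffering=0) as src, open(DEST, "r+b", buffering=0) as dst:
    copied = 0
    while True:
        buf = src.read(BLOCK)
        if not buf:
            break
        dst.write(buf)
        copied += len(buf)
        print(f"\rcopied {copied / 2**30:.1f} GiB", end="", file=sys.stderr)
print()
```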

  • What is the type of those 8TB HGST HDDs? Is it something SMR based? Commented Oct 11, 2017 at 8:39
  • Yes I believe they are SMR... Commented Oct 18, 2017 at 17:07

2 Answers


Here are the steps I take to move from one array to another using existing disks (generic); a command-level sketch follows the list.

  1. Degrade the array by manually failing/removing a disk from the old array.
  2. Start a new degraded array on newly available disk.
  3. Copy files from the old degraded array to the new degraded array.
  4. After verifying the file transfer, blitz the old array and add the newly available device to the new, currently degraded, array.
  5. When the rebuild completes, you have just migrated arrays.
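To make the flow concrete, here is a command-level sketch of those five steps written against Linux mdadm (purely illustrative, since the generic idea is the same under Windows Storage Spaces or the Adaptec tools; device names and mount points are hypothetical and this is not meant to be run as-is):

```python
# Illustrative only: the five migration steps expressed as mdadm commands.
# Assumes the old array is mounted at /mnt/old and the new one at /mnt/new.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

OLD, NEW = "/dev/md0", "/dev/md1"
DISK_A, DISK_B = "/dev/sda", "/dev/sdb"   # the two array members (hypothetical)

# 1. Degrade the old array by failing and removing one member.
run(["mdadm", OLD, "--fail", DISK_B, "--remove", DISK_B])

# 2. Start a new, intentionally degraded array on the freed disk.
run(["mdadm", "--create", NEW, "--level=1", "--raid-devices=2", DISK_B, "missing"])

# 3. Copy the files from the old degraded array to the new one.
run(["rsync", "-aHAX", "/mnt/old/", "/mnt/new/"])

# 4. After verifying the copy, blitz the old array and give its disk to the new one.
run(["mdadm", "--stop", OLD])
run(["mdadm", "--zero-superblock", DISK_A])
run(["mdadm", NEW, "--add", DISK_A])

# 5. Watch /proc/mdstat; when the rebuild completes, the migration is done.
```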
  • Nice, but doesn't take care of the RAID-0 array. I guess I'd have to make a block-level backup of the SSDs into the HDDs (filesystem in a file). Commented Oct 16, 2017 at 13:45
  • @allquixotic You can use one of your 8TB drives as a temporary storage location for the RAID 0. Theoretically you will have an 8TB drive empty and available twice during the migration: once when you are ready to start a new degraded array, and a second time after you migrate from one degraded array to the other.
    – Damon
    Commented Oct 17, 2017 at 0:35
  • Good info. I ended up just buying 2 x Samsung 850 Pro 256 GB and using them for read/write (instead of read-only) MaxCache. This solved my performance problem by shifting the IOPS load of writes from the HDDs directly onto the SSDs in full "Tiered Storage" mode. Turns out the correct way to use this RAID card is to go all-out and use SSDs for both read and write cache, so that the performance of the HDDs is mostly invisible :P Commented Oct 18, 2017 at 17:08

Migrating to software RAID won't help you.

I'm afraid the behaviour you describe is a "feature" of archive disks based on SMR. The disk keeps part of its capacity in normal (non-shingled) format (in your case probably those 600 GB), and this area serves as a write cache. The problem is that data on SMR disks must be written in fairly large zones at once; usually such a zone is 256 MB, and even if you change a single byte, the complete 256 MB zone must be rewritten.

So once you fill the cache area completely, the disk must first destage that data into the shingled zones, which takes much longer.

SMR disks are really better suited to archival use with occasional reads than to write-heavy workloads.
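A rough calculation shows why the stalls are so long once the cache area fills up (the 256 MB zone size is from above; the throughput figure is an assumption):

```python
# Back-of-the-envelope write amplification for an SMR drive.
# Zone size is taken from the answer above; throughput is assumed.
ZONE = 256 * 2**20              # bytes rewritten per touched zone
WRITE = 4 * 2**10               # one 4 KiB random write
THROUGHPUT = 180 * 2**20        # ~180 MiB/s sequential, assumed

print(f"write amplification: {ZONE / WRITE:.0f}x")         # 65536x
print(f"rewrite of one zone:  {ZONE / THROUGHPUT:.1f} s")   # ~1.4 s (write pass only)

# Destaging even a few dozen dirty zones back-to-back already adds up to
# tens of seconds during which the drive cannot service new I/O promptly.
```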
