
What exactly is the "Parity Consistency Check" that a Synology NAS runs when a new disk is added, and why does it normally take so long to complete?

Why does the process take so long? (It can take up to a week when adding a new drive, and all I have is three 8 TB drives.)

I currently have only two 8 TB disks in my Synology NAS, using SHR. Would it be better to copy the data I have there now somewhere else, insert the new disk into the NAS, initialize all three as a new SHR volume, and then copy my data back? It seems that would be faster than waiting for the Parity Consistency Check that is running.

The check has been running for about 24 hours and is only at around 20%, which seems an awfully long time... which again makes me wonder whether the "copy over, re-initialize, and copy back" method is the better option.

2 Answers


A quick look didn't turn up exactly how Synology's Parity Consistency Check works. My best guess is that as the new drive is built, the data is validated against stored parity/checksum values to ensure integrity, something along those lines.

This is a common woe on Synology forums, Reddit, etc. According to this Reddit thread, "when you start talking volumes measured in TB, the parity check is measured in days".

According to the Synology Knowledge Center, the resync speed can be adjusted in Storage Manager, which might speed up the process. A user who posted this thread to the Synology forums says disabling iSCSI LUNs sped up the process considerably.
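Under the hood, that resync speed setting maps to the Linux md throttle, which lives in two sysctls. As a sketch of what raising it looks like from a root shell (the 50000 value below is purely illustrative, not a Synology recommendation, and the DSM UI is the supported way to change this):

```shell
# Hypothetical tuning on the NAS itself (requires root; values are examples).
# speed_limit_min is the per-disk floor md tries to maintain even under
# competing I/O; speed_limit_max caps the rebuild when the array is idle.
sysctl -w dev.raid.speed_limit_min=50000   # default is typically 10000 KB/s
sysctl -w dev.raid.speed_limit_max=600000  # KB/s
```

Note that a higher floor makes the volume more sluggish for normal use while the rebuild runs; it trades foreground performance for rebuild time.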


The Parity Consistency Check, at least for SHR volumes, corresponds exactly to the recovery shown in /proc/mdstat when the array rebuilds onto the spare drive.

Check the output below from /proc/mdstat while an SHR volume is recovering (the failed disk has been replaced):

billy@DiskStation:/proc$ cat mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active raid1 sdb5[2] sda5[0]
      483555456 blocks super 1.2 [2/1] [U_]
      [===>.................]  recovery = 16.5% (79943680/483555456) finish=672.7min speed=9999K/sec

md1 : active raid1 sdb2[1] sda2[0]
      2097088 blocks [2/2] [UU]

md0 : active raid1 sdb1[1] sda1[0]
      2490176 blocks [2/2] [UU]

unused devices: <none>

In addition:

billy@DiskStation:/proc$ sudo mdadm -D /dev/md2
Password:
/dev/md2:
        Version : 1.2
  Creation Time : Sun Oct 25 23:55:39 2020
     Raid Level : raid1
     Array Size : 483555456 (461.15 GiB 495.16 GB)
  Used Dev Size : 483555456 (461.15 GiB 495.16 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Wed Mar 16 17:12:19 2022
          State : clean, degraded, recovering
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1

 Rebuild Status : 16% complete

           Name : DiskStation:2  (local to host DiskStation)
           UUID : 0f8abaff:6ae83dfd:aac88c7c:50657942
         Events : 2246

    Number   Major   Minor   RaidDevice State
       0       8        5        0      active sync   /dev/sda5
       2       8       21        1      spare rebuilding   /dev/sdb5

Time to finish depends on the md rebuild speed, whose guaranteed minimum (speed_limit_min) is 10,000 KB/sec per disk:

billy@DiskStation:~$ cat /proc/sys/dev/raid/speed_limit_max
600000
billy@DiskStation:~$ cat /proc/sys/dev/raid/speed_limit_min
10000
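That floor is enough to explain rebuild times measured in days. A back-of-envelope estimate in shell, assuming an ~8 TB member resyncing at the 10,000 KB/s speed_limit_min floor (the real rate usually sits somewhere between the floor and the ceiling):

```shell
# Rough rebuild-time estimate at the md floor speed (illustrative numbers).
ARRAY_KB=$((8 * 1000 * 1000 * 1000))   # ~8 TB member partition, in KB
SPEED_KB_S=10000                       # the speed_limit_min floor
SECS=$((ARRAY_KB / SPEED_KB_S))        # 800,000 seconds
echo "$((SECS / 86400)) days"          # prints "9 days"
```

So at the worst-case floor, a single 8 TB disk takes on the order of nine days, which matches the "up to a week" experience in the question.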
