
The QNAP TS-410 had a failed disk the other day and went into degraded mode, so I bought a new disk. The original disks were Seagate, but this time I bought a Western Digital, which is approved by QNAP in its database of supported drives; it's the same size, so it shouldn't matter, right? So now I have 3 Seagate and 1 WD. I hot-swapped the old and new disks, and the system log said

[RAID5 Disk Volume: Drive 1 2 3 4] Start rebuilding

but I can't see any indication in the web interface that the rebuild is happening. There is no progress bar anywhere, but the light on the front of the unit is blinking red/green, indicating it is rebuilding. Is this normal, or is there something strange going on? Is there some way I can check from the command line over SSH that the rebuild is happening?

Also, under Control Panel -> Storage Manager -> Volume Management (in the QNAP web interface, not the Windows Control Panel), the new drive has a "Disk read/write error" under Status, but the SMART information says it's good.

I have been fiddling with this for some time now. I tried running a scan on the new drive, which took about a day to finish; after that the status changed to Ready, but there was still no indication that the RAID rebuild was happening (except for that log entry). I restarted the QNAP, and the new drive got the "Disk read/write error" status again, and the log once more said it was rebuilding the RAID.

The top bar of the web interface has a button showing background processes, but nothing is listed there, so the rebuild is not running as a background process.

If I go to Storage Manager -> RAID Management and select the RAID, the Action button is grayed out, so I can't perform any actions on the RAID. I guess this is because it's in degraded mode and mounted as read-only.

So I am confused: is the RAID being rebuilt or isn't it? And if it's not being rebuilt, is there some way I can force the rebuild? Or is that not a good idea?

This QNAP has firmware 4.1.1 Build 20140927, if that matters.

cat /proc/mdstat gives me the following output:

Personalities : [raid1] [linear] [raid0] [raid10] [raid6] [raid5] [raid4]
md0 : active (read-only) raid5 sda3[0] sdc3[2] sdb3[1]
      5855836800 blocks level 5, 64k chunk, algorithm 2 [4/3] [UUU_]

md4 : active raid1 sdd2[2](F) sdc2[3](S) sdb2[1] sda2[0]
      530048 blocks [2/2] [UU]

md13 : active raid1 sda4[0] sdd4[3] sdc4[2] sdb4[1]
      458880 blocks [4/4] [UUUU]
      bitmap: 0/57 pages [0KB], 4KB chunk

md9 : active raid1 sda1[0] sdd1[3] sdc1[2] sdb1[1]
      530048 blocks [4/4] [UUUU]
      bitmap: 4/65 pages [16KB], 4KB chunk

unused devices: <none>

As can be seen for md0, the last drive is not in the RAID array ([UUU_]; the last underscore would be a U if the drive were part of the RAID, as far as I understand).
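
For comparison, as far as I understand, a rebuild in progress should show up in /proc/mdstat as a recovery progress line under the array, something like this (the numbers are made up, just to illustrate the format I was expecting):

md0 : active raid5 sdd3[4] sda3[0] sdc3[2] sdb3[1]
      5855836800 blocks level 5, 64k chunk, algorithm 2 [4/3] [UUU_]
      [==>..................]  recovery = 12.5% (243993200/1951945600) finish=312.4min speed=91120K/sec

I see nothing like that, so it doesn't look like md is actually doing any work.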

  • This is normal. Expect about 4 hours at least.
    – rastaBob
    Commented Jun 25, 2015 at 12:45
  • Well, it's been 2 days. It's an array of 2 TB disks, but I can't see any progress; there is nothing to indicate the rebuild is being done except for the lights on the device and that one log entry.
    – ojs
    Commented Jun 25, 2015 at 13:11
  • Wow, that's crazy. It does sound like it's working, though. Apparently you can monitor the rebuild status over SSH using cat /proc/mdstat - source
    – rastaBob
    Commented Jun 25, 2015 at 13:17
  • Added info from cat /proc/mdstat; I can't tell that any rebuild is being done. So I guess the web interface is lying to me.
    – ojs
    Commented Jun 25, 2015 at 14:29
  • Same for me with a QNAP 459pro and firmware 4.2.0.
    – sivann
    Commented Jan 25, 2016 at 15:29

2 Answers


mdadm --misc --detail /dev/md0 will show you the status and the progress of the rebuild.

E.g.

# mdadm --misc --detail /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Tue Sep 28 21:28:33 2010
     Raid Level : raid5
     Array Size : 4390708800 (4187.31 GiB 4496.09 GB)
  Used Dev Size : 1463569600 (1395.77 GiB 1498.70 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Sat Jan 21 10:26:49 2017
          State : clean, degraded, recovering
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

 Rebuild Status : 55% complete

           UUID : 454eaf79:0744a748:319e242f:5ff1ef4c
         Events : 0.7528612

    Number   Major   Minor   RaidDevice State
       0       8       35        0      active sync   /dev/sdc3
       4       8        3        1      spare rebuilding   /dev/sda3
       2       8       51        2      active sync   /dev/sdd3
       3       8       19        3      active sync   /dev/sdb3
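
If you want to follow the progress without retyping the command, you can wrap it in watch (assuming watch is available on the QNAP firmware; its busybox build usually provides it):

# re-run the detail query every 60 seconds
watch -n 60 mdadm --misc --detail /dev/md0

cat /proc/mdstat shows the same recovery as a percentage and a progress bar while the rebuild is running.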

Well, cat /proc/mdstat does show whether the RAID is being rebuilt, and in my case it wasn't.

What was wrong was that the RAID had gone into degraded mode and the array was read-only, so there was nothing the software could do to add another disk to the RAID and start rebuilding it.
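
You can see the read-only state both from the (read-only) tag in the /proc/mdstat output above and, assuming the QNAP kernel exposes the usual md sysfs attributes, directly via sysfs:

# prints "readonly" (or "read-auto") when the array will not accept writes
cat /sys/block/md0/md/array_state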

What I did was force the RAID back into read/write mode and then manually add the new disk that replaced the failed one.

The commands used were:

mdadm --readwrite /dev/md0

mdadm --add /dev/md0 /dev/sdd3

The former command put the RAID back into read/write mode, and the latter added the missing drive, which kicked off the RAID rebuild (and the progress could then be seen in /proc/mdstat).
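
For completeness, the whole sequence as run over SSH looked roughly like this (a sketch; double-check against your own /proc/mdstat which member partition is actually missing before adding it):

# confirm md0 is degraded and read-only, and note the missing member
cat /proc/mdstat

# switch the array back to read/write mode
mdadm --readwrite /dev/md0

# add the replacement disk's data partition back into the array
mdadm --add /dev/md0 /dev/sdd3

# follow the rebuild progress
cat /proc/mdstat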

  • Wow, something must have gone wrong for you to have to do this, right? Having to use mdadm, no less, rather than the web interface. It's no wonder people have problems with RAIDs. QNAP documentation says it will start rebuilding automatically once the failed disk is replaced - helpdesk.qnap.com/index.php?/Knowledgebase/Article/View/89/0/… As always, keep a backup just in case. Commented Jul 1, 2015 at 15:26
  • In my case, "mdadm --manage /dev/md0 -a /dev/sdd3" was sufficient.
    – sivann
    Commented Jan 25, 2016 at 16:06
