
Three months ago, after a power cut, our Synology DS415+ NAS started boot-looping: it stays up for about 5 minutes and then reboots. We recently tried to recover the data from it (2 × 3 TB disks in SHR / RAID 1).

After working through the Synology documentation, I mounted the RAID without any problem, but when I run a plain ls on the mount point I get the following error:

ls: cannot access '/mnt/FOLDERA': Input/output error
FOLDERA
FOLDERB

Running ls /mnt/FOLDERA gives the same I/O error. FOLDERB, on the other hand, is accessible without any problem.
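For context, the full mount sequence I used on the live USB was roughly the following (the md array and the vg1000/lv names are the usual Synology defaults, so adjust if yours differ):

```shell
# Assemble the Synology md arrays from the NAS disks.
sudo mdadm --assemble --scan
# Activate the LVM volume group that Synology layers on top (vg1000 by default).
sudo vgchange -ay
# Mount the btrfs volume read-only to avoid further damage.
sudo mount -o ro /dev/vg1000/lv /mnt
```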

I also tried mounting each disk individually, outside the RAID, and the same problem appears on both. Looking at dmesg, every time I try to access one of the I/O-error folders I get the following logs:

[109433.244496] BTRFS error (device dm-0): block=133562368 read time tree block corruption detected
[109433.301567] BTRFS critical (device dm-0): corrupt leaf: root=1 block=133070848 slot=30, invalid root flags, have 0x200000000 expect mask 0x1000000000001
[109433.301575] BTRFS error (device dm-0): block=133070848 read time tree block corruption detected
[109433.301876] BTRFS critical (device dm-0): corrupt leaf: root=1 block=133070848 slot=30, invalid root flags, have 0x200000000 expect mask 0x1000000000001
[109433.301883] BTRFS error (device dm-0): block=133070848 read time tree block corruption detected
[109441.972111] BTRFS critical (device dm-0): corrupt leaf: root=1 block=132923392 slot=17, invalid root flags, have 0x200000000 expect mask 0x1000000000001
[109441.972121] BTRFS error (device dm-0): block=132923392 read time tree block corruption detected
[109441.972356] BTRFS critical (device dm-0): corrupt leaf: root=1 block=132923392 slot=17, invalid root flags, have 0x200000000 expect mask 0x1000000000001
[109441.972362] BTRFS error (device dm-0): block=132923392 read time tree block corruption detected
[109449.284056] BTRFS critical (device dm-0): corrupt leaf: root=1 block=132923392 slot=17, invalid root flags, have 0x200000000 expect mask 0x1000000000001
[109449.284066] BTRFS error (device dm-0): block=132923392 read time tree block corruption detected
[109449.284318] BTRFS critical (device dm-0): corrupt leaf: root=1 block=132923392 slot=17, invalid root flags, have 0x200000000 expect mask 0x1000000000001
[109449.284323] BTRFS error (device dm-0): block=132923392 read time tree block corruption detected

So the problem seems to be with btrfs itself, so I tried a scrub:

UUID:             f9995e41-6f97-41e8-bf0a-31f83c9e8314
Scrub started:    Wed Aug  5 08:53:01 2020
Status:           finished
Duration:         3:10:00
Total to scrub:   1.20TiB
Rate:             110.51MiB/s
Error summary:    no errors found

I also attempted a recovery, but btrfs-find-root doesn't seem to find a usable root block:

btrfs-find-root /dev/vg1000/lv
Superblock thinks the generation is 3391864
Superblock thinks the level is 1
Found tree root at 656900096 gen 3391864 level 1
Well block 642072576(gen: 3391863 level: 1) seems good, but generation/level doesn't match, want gen: 3391864 level: 1
Well block 341000192(gen: 3391845 level: 0) seems good, but generation/level doesn't match, want gen: 3391864 level: 1
Well block 299876352(gen: 3391844 level: 1) seems good, but generation/level doesn't match, want gen: 3391864 level: 1
Well block 190988288(gen: 3391842 level: 0) seems good, but generation/level doesn't match, want gen: 3391864 level: 1
Well block 159334400(gen: 3391841 level: 1) seems good, but generation/level doesn't match, want gen: 3391864 level: 1
Well block 88440832(gen: 3391840 level: 1) seems good, but generation/level doesn't match, want gen: 3391864 level: 1
........................................... ~50 lines ...
Well block 4243456(gen: 3 level: 0) seems good, but generation/level doesn't match, want gen: 3391864 level: 1
Well block 4194304(gen: 2 level: 0) seems good, but generation/level doesn't match, want gen: 3391864 level: 1
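From what I understand, btrfs restore can be pointed at one of these older tree roots with -t, something like the following (the block number is just the newest non-current candidate from the list above, and /mnt/recovery is a hypothetical target on a separate disk):

```shell
# Hypothetical target directory on a disk with enough free space.
sudo mkdir -p /mnt/recovery
# -t picks an alternative tree root; 642072576 is the newest
# non-current candidate reported by btrfs-find-root above.
sudo btrfs restore -t 642072576 -v /dev/vg1000/lv /mnt/recovery
```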

The only thing I haven't tried is btrfs check, as it seems to be a very dangerous command.
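From what I've read, btrfs check only becomes dangerous with --repair; a purely read-only pass should be safe, something like:

```shell
# --readonly forces a check without any writes to the filesystem.
sudo btrfs check --readonly /dev/vg1000/lv
```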

How can I mount the volume so that all my data is accessible again? I am currently running an Ubuntu 20.04 live USB to be able to mount the disks on the machine.
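One thing still on my list is mounting with a backup copy of the tree root, which newer kernels support:

```shell
# Fall back to an older copy of the tree root (kernels >= 4.6;
# older kernels used '-o recovery' instead).
sudo mount -o ro,usebackuproot /dev/vg1000/lv /mnt
```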

  • With RAID 1 you could try running with one disk, to see if one is bad.
    – harrymc
    Commented Aug 11, 2020 at 9:20
  • @harrymc I tried with every disk, and the problem is exactly the same :(
    – d3cima
    Commented Aug 11, 2020 at 9:27
  • Did you try on more than one computer? You could also try btrfs-restore.
    – harrymc
    Commented Aug 11, 2020 at 9:50
  • @harrymc I tried on another computer, without success, and also tried a restore, but only the accessible files were copied.
    – d3cima
    Commented Aug 17, 2020 at 9:09
  • Please include the output of sudo smartctl -A -H /dev/sda (use your two disks) and of btrfs device stats /mnt.
    – harrymc
    Commented Aug 17, 2020 at 9:39

1 Answer


I had the very same problem, caused by a completely full btrfs filesystem.

I couldn't mount the SHR on my Fedora 36 workstation; it threw the same invalid root flags error.

I removed both hard drives and installed them in an external 3.5" USB HDD enclosure. Then I used another hard drive (or a spare HDD, if you have a 2+-bay system) to reinstall DSM. From there I used the terminal to delete the btrfs snapshots:

btrfs sub list .

If you see @sharesnap entries, they are safe to delete:

btrfs sub del \@sharesnap/*/*
btrfs sub del \@sharesnap/*
btrfs sub del \@sharesnap

As soon as some space is freed, btrfs starts to reclaim space on its own.
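To confirm that space is actually being reclaimed, you can compare the allocation before and after deleting the snapshots (the volume path is assumed to be /volume1 on DSM):

```shell
# Show how much of the filesystem is allocated vs. actually used.
btrfs filesystem usage /volume1
```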

After that, reinsert the two original HDDs into your system and remove the spare one. Boot up, wait for the system to come online, then reinsert the spare, format it, and add it back to your SHR.

This is helpful too.
