
I have a Synology DS211j NAS with two 2 TB hard drives mounted in RAID 1 (mirror). (By hard-drive manufacturers' measures, where 1 TB = 1000 GB, the "real" size is closer to 1.8 TB.) I rarely access this NAS and only use it for long-term backup.

I seem to be using only about 1.3 TB, yet the Synology says it is full. Is there a configuration problem, or does RAID 1 consume some space for overhead rather than simply mirroring?

I first thought that RAID 1 was simply mirroring, until I took one of the disks out, put it in my computer and found it was unreadable unless I used the same RAID controller. In light of this, I asked myself whether the mirroring in RAID 1 uses extra space to "secure" the data beyond a plain copy.

As I access it only rarely, it would not be a problem for me to do the mirroring by hand with no RAID configuration at all, as long as I can be sure of getting the full disk size.
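In case it matters, what I have in mind is just a plain periodic copy, roughly along these lines (the paths are placeholders, and I am assuming rsync is available on the box):

    # copy everything from the data volume to the second disk, mirroring deletions
    rsync -a --delete /volume1/ /mnt/second-disk/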

Can anyone help?


1 Answer


What you mean is, of course, that a 2 TB hard drive is also a 1.819 TiB hard drive. ;)
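The arithmetic, for reference:

    2 TB = 2 × 10^12 bytes
    2 × 10^12 bytes / 2^40 bytes per TiB ≈ 1.819 TiB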

A RAID-1 setup requires a bit of disk space for, among other things, identifying the other disks in the RAID and logging what has been written to the disks, so you don't have to compare the whole disk on every boot. This generally sums up to somewhere in the neighbourhood of 10MB, not hundreds of GB.
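If you can ssh in, you can see this for yourself: Synology boxes use Linux software RAID (md), so something like the following should show the arrays and their sizes. (The data array is often /dev/md2, but the device name may differ on your box.)

    cat /proc/mdstat
    mdadm --detail /dev/md2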

According to this, the box uses an ext4 filesystem. One possibility for the "disk full" message is that you've run out of inodes. Every file on the disk uses one inode, and the number of inodes is fixed when the filesystem is created. If you have very many small files (averaging less than 100 kiB or so) and the filesystem was at some point extended from one disk to two, this could be a real issue. If you can ssh into the box, you could run df -i to check whether that is the problem.
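As a sketch, assuming the volume is mounted at /volume1 and using the same device path as further down:

    df -i /volume1
    # if your df does not support -i, the superblock has the same information:
    dumpe2fs -h /dev/mapper/vault-liveroot | grep -i inode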

A similar, though also unlikely, possibility: files on ext4 can be deleted while they are still open, and they keep taking up space until they are closed. That space would be freed if you restarted the device.
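If lsof happens to be installed on the box, such files can be listed directly; a sketch:

    # files that have been unlinked but are still held open by a process
    lsof | grep '(deleted)'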

ext4 also sets aside a percentage of the disk space reserved for root (by default 5%, but someone could have decided more would be better) so the system keeps a bit of maneuverability if it runs out of space. If you can ssh to the box, you can check this by running the following as root: dumpe2fs /dev/mapper/vault-liveroot | grep -i "block count". It should give the total and reserved number of (probably) 4 kiB blocks.
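Note that 5% of roughly 1.8 TiB is only about 90 GiB, so this alone would not explain 500 GB going missing. If the reserve does turn out to be oversized, it can be lowered with tune2fs (as root, same assumed device path as above):

    # reduce the reserved-blocks percentage to 1%
    tune2fs -m 1 /dev/mapper/vault-liveroot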

There is also a whole range of configuration issues that could cause this, among them quota setups and permission problems.

Perhaps you could describe how you determined that you use 1.3 TB, and what tells you the volume is full. Windows systems used to complain that disks were full pretty much every time they ran into something they couldn't write to; I don't know whether that has changed.
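For instance, from an ssh session you could compare what the filesystem thinks is used with what the files actually add up to (assuming the share lives under /volume1):

    df -h /volume1
    du -sh /volume1

If df says the volume is full but du reports around 1.3 TB, that points at reserved space, deleted-but-open files or a quota rather than at the data itself.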

  • Thanks Eroen, I accept your answer. I cannot +1 because I'm not at reputation 15 yet. The point about the inodes is interesting, however the -i option does not exist on my system. After your explanations I tried duplicating a very large folder (400 GB) within the NAS and it worked, so the problem is only Windows-related. But you helped me validate that point. Thank you.
    – HpTerm
    Commented Feb 28, 2012 at 17:46

