What you mean, of course, is that a 2 TB hard drive is also a 1.819 TiB hard drive. ;)
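That's just the decimal-vs-binary unit difference; a quick sanity check with nothing more than awk:

```shell
# 2 TB as sold (2 * 10^12 bytes) expressed in TiB (2^40 bytes)
awk 'BEGIN { printf "%.3f TiB\n", 2e12 / (1024 ^ 4) }'
# prints 1.819 TiB
```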
A RAID-1 setup does require a bit of disk space for, among other things, identifying the other disks in the array and logging what has been written, so the disks don't have to be compared in full on every boot. That generally adds up to somewhere in the neighbourhood of 10 MB, not hundreds of GB.
According to this, the box uses an ext4 filesystem. One possibility for the disk-full message is that you've run out of inodes. Every file on the disk uses one inode, and the number of inodes is fixed when the filesystem is created. If you have very many small files (averaging less than 100 kiB or so) and the filesystem was at some point extended from one disk to two, this could be a real issue. If you can ssh into the box, you could run df -i to check whether that's the problem.
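For instance (I'm using / here as the mount point; adjust it to wherever the box actually keeps its data):

```shell
# Show inode usage for a filesystem. If IUse% is at 100%, no new files
# can be created even though free blocks remain.
df -i / | awk 'NR==2 { print "inode usage: " $5 }'
```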
A similar, though also unlikely, possibility: files on ext4 can be deleted while they are still open, and they keep taking up space until they are closed. Restarting the device would close them and free the space.
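You can see the effect in a self-contained way (a temp file, no assumptions about your box; the /proc lookup is Linux-specific):

```shell
# Demonstrate "deleted but still open": the space isn't freed until the
# last file descriptor on the file is closed.
tmp=$(mktemp)
exec 3>"$tmp"           # keep fd 3 open on the file
echo "some data" >&3
rm "$tmp"               # unlink it: the name is gone...
ls -l "/proc/$$/fd/3"   # ...but the fd (and the space) is still there
exec 3>&-               # closing the descriptor finally releases the space
```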
Ext4 also sets aside a percentage of the disk space (by default 5%, but someone could have decided more would be better) to have a bit of maneuverability when the disk runs full. If you can ssh to the box, you can determine this by running the following as root: dumpe2fs /dev/mapper/vault-liveroot | grep -i "block count" — it should give the total and reserved number of (probably) 4 kiB blocks.
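To get a feel for the numbers: on a 2 TB filesystem, the default 5% reservation is already in the 90 GiB range. A sketch with a hypothetical "Reserved block count" (the count is made up, the arithmetic is not):

```shell
# Hypothetical reserved block count from dumpe2fs, 4 KiB (4096 byte) blocks
reserved_blocks=24418918
echo "$(( reserved_blocks * 4096 / 1073741824 )) GiB reserved"
# prints: 93 GiB reserved
```

If the reservation turns out to be the culprit, tune2fs -m with a smaller percentage can shrink it, but that's a deliberate admin decision, not something to do blindly.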
There are also a whole bunch of configuration issues that could produce these symptoms, among them quota setups and permission problems.
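A couple of quick, hedged checks for those (the quota tools may simply not be installed on the box, and you'd point ls at the share's actual mount point rather than /):

```shell
# Per-user quota report, if quotas are in use at all (repquota is part
# of the Linux "quota" package and may not be present)
repquota -a 2>/dev/null || echo "no quota tools available or no quotas configured"
# Ownership and permissions on the mount point (adjust / to your share's path)
ls -ld /
```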
Perhaps you could describe how you determined that you're using 1.3 TB, and what exactly tells you the disk is full. Windows systems used to complain that a disk was full pretty much any time they ran into something they couldn't write to; I don't know whether that has changed.