
I am facing an issue where ZFS appears to be missing around 4TB of space on a 16TB volume.

The ZFS pool is reporting the correct size (16.3TB):

nmc@thn-nstor1:/$ zpool list
NAME           SIZE  ALLOC   FREE  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
nstore1pool1  16.3T  15.2T  1.11T         -    93%  1.00x  ONLINE  -
syspool         29G  9.20G  19.8G         -    31%  1.00x  ONLINE  -

The zfs list command, however, reports 4TB less:

nmc@thn-nstor1:/$ zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
nstore1pool1               12.4T   770G   116K  /volumes/nstore1pool1
nstore1pool1/.nza-reserve  49.8K   770G  49.8K  none
nstore1pool1/nstore1zvol1  28.4K   770G  28.4K  -
nstore1pool1/veeam         12.4T   770G  12.4T  /volumes/nstore1pool1/veeam

Further to reading this post, I also ran zfs list -o space and zfs list -t snapshot to verify that there are no snapshots using space, which confirms the results above:

nmc@thn-nstor1:/$ zfs list -t snapshot
NAME                             USED  AVAIL  REFER  MOUNTPOINT

nmc@thn-nstor1:/$ zfs list -o space
NAME                       AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
nstore1pool1                770G  12.4T         0    116K              0      12.4T
nstore1pool1/.nza-reserve   770G  49.8K         0   49.8K              0          0
nstore1pool1/nstore1zvol1   770G  28.4K         0   28.4K              0          0
nstore1pool1/veeam          770G  12.4T         0   12.4T              0          0

** EDIT **

Further to requests for more info, here is the output of zpool status -v and zfs list -t all (abridged for brevity):

nmc@thn-nstor1:/$ zpool status -v
  pool: nstore1pool1
 state: ONLINE
  scan: none requested
config:

        NAME                       STATE     READ WRITE CKSUM
        nstore1pool1               ONLINE       0     0     0
          raidz1-0                 ONLINE       0     0     0
            c0t5000C5003F0DE915d0  ONLINE       0     0     0
            c0t5000C5003F17FA16d0  ONLINE       0     0     0
            c0t5000C500506272F7d0  ONLINE       0     0     0
            c0t5000C50063E7E297d0  ONLINE       0     0     0
            c0t5000C500644D6CE0d0  ONLINE       0     0     0
            c0t5000C500644FBA98d0  ONLINE       0     0     0
            c0t5000C500644FFD61d0  ONLINE       0     0     0
            c0t5000C50064509003d0  ONLINE       0     0     0
            c0t5000C50064AE3241d0  ONLINE       0     0     0
          raidz1-1                 ONLINE       0     0     0
            c0t5000C50064BF602Dd0  ONLINE       0     0     0
            c0t50014EE65BA8D06Dd0  ONLINE       0     0     0
            c0t50014EE6B0FE0EA6d0  ONLINE       0     0     0
          raidz1-2                 ONLINE       0     0     0
            c0t50014EE606A6F0E5d0  ONLINE       0     0     0
            c0t50014EE65BFCD389d0  ONLINE       0     0     0
            c0t50014EE65BFD0761d0  ONLINE       0     0     0
            c0t50014EE65BFD11A3d0  ONLINE       0     0     0
            c0t50014EE6B150B5FBd0  ONLINE       0     0     0
            c0t50014EE6B152CB82d0  ONLINE       0     0     0

errors: No known data errors

nmc@thn-nstor1:/$ zfs list -t all
NAME                             USED  AVAIL  REFER  MOUNTPOINT
nstore1pool1                    12.4T   770G   116K  /volumes/nstore1pool1
nstore1pool1/.nza-reserve       49.8K   770G  49.8K  none
nstore1pool1/nstore1zvol1       28.4K   770G  28.4K  -
nstore1pool1/veeam              12.4T   770G  12.4T  /volumes/nstore1pool1/veeam

I would appreciate any help in understanding where this missing space has gone.

  • zpool status -v
    – ewwhite
    Commented Apr 25, 2017 at 9:09
  • Please post the output.
    – ewwhite
    Commented Apr 25, 2017 at 9:42
  • Also zfs list -t all. Commented Apr 25, 2017 at 10:24
  • I have added the output to the question above, thanks
    – btongeorge
    Commented Apr 25, 2017 at 11:24

1 Answer


Oh my god... what did you do?!?

In general, with RAIDZ1/2/3, zpool list shows the full (raw) capacity of the drives, while zfs list shows the usable space with parity subtracted...

But what you've shown above is:

A 9-disk RAIDZ1 striped with a 3-disk RAIDZ1 and a 6-disk RAIDZ1.

That's pretty bad if you weren't intending to do that. Are all of the drives the same size?
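If it helps, here is the rough arithmetic, assuming all 18 drives are 1TB (which is what the pool size suggests); the exact figures will vary slightly with metadata, reservations and RAIDZ allocation overhead:

Raw capacity (what zpool list reports):   18 drives x 1TB = 18TB  (~16.4TiB)
Parity (one drive per RAIDZ1 vdev):        3 drives x 1TB =  3TB
Usable space (what zfs list reports):     (9-1) + (3-1) + (6-1) = 15 drives x 1TB = 15TB  (~13.6TiB)

That 13.6TiB lines up with the zfs list output above (12.4T used plus 770G available is roughly 13.2TiB), so most of the "missing" 4TB is simply parity.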

  • The pool was expanded over time to increase capacity; there was no other server that could house the data so that the pool could be destroyed and recreated. It sounds like the reason for the discrepancy is just that I'm comparing the raw capacity with the real capacity minus parity? Yes, they are all 1TB disks.
    – btongeorge
    Commented Apr 25, 2017 at 12:24
  • Yeah, the capacity and parity difference is the reason you're seeing this.
    – ewwhite
    Commented Apr 25, 2017 at 12:41
  • @ewwhite If you have the time, a full explanation of how that's happening here would be of interest. Commented Apr 25, 2017 at 22:22
  • Same here! Was it another crazy "grow up" scenario proposed by a freshman Nexenta support techie? :( Commented Apr 27, 2017 at 10:42
