I have a ZFS on Linux (ZoL) pool named primaryPool (no mirror or raidz):

# zfs list
NAME                         USED  AVAIL  REFER  MOUNTPOINT
primaryPool                 54.5G  41.9G    96K  /
primaryPool/ROOT            20.0G  41.9G    96K  none
primaryPool/ROOT/main_root  1.98G  59.9G  1.98G  /
primaryPool/application     1.26G  60.6G  1.26G  /opt
primaryPool/boot              96K  41.9G    96K  /boot
primaryPool/storage          275M  41.9G   275M  /opt/movies
primaryPool/swap            4.25G  43.4G  2.71G  -

I have a spare 1TB disk, /dev/sdb. Is it OK to simply add it to the pool, or should I create another pool and assign the whole disk to that? The ZFS best-practice guides say not to use different-sized disks in raidz or mirror setups, but are there any negative or positive impacts of using unequal disk sizes in a pool made of single-disk vdevs?

  • It's not recommended because you'll waste capacity... what are you trying to achieve by adding the 1 TB disk to what looks like a ~100 GB disk? Are you hoping to add it as a mirror?
    – Attie
    Commented Sep 11, 2018 at 14:29
  • @Attie my intention is just to span the disks (no mirroring).
    Commented Sep 11, 2018 at 14:34
  • You can do it... but I would advise against it. Either migrate this pool to the new 1TB disk, or run them as two separate pools. I'm presuming due to the size that primaryPool is on an SSD? If it is an SSD, then you might be better off using some of it as a cache for the 1TB disk anyway.
    – Attie
    Commented Sep 11, 2018 at 15:30
  • Thanks @Attie. I see that it's not advisable to use different-sized disks in a pool here: nex7.blogspot.com/2013/03/readme1st.html, but I can't find the explanation.
    Commented Sep 11, 2018 at 16:54

1 Answer

Based on your question and the comments below it, you're asking whether it's a good idea to have a pool striped across two unequally sized disks. The short answer is that there's nothing inherently problematic about this (the commands for that setup, and for the separate-pool alternative, are sketched just after the list below), provided that:

  • Your workload isn't performance-critical. If it is, use a uniform disk type across the entire pool. Otherwise, the disks could have different performance characteristics, which can create very subtle performance problems that are difficult to track down. (For instance, let's say you have two 10K RPM disks made by the same vendor in the same year, one 1TB and one 2TB. No problem, right? Unfortunately, no -- one of those is going to get ~twice the sequential throughput of the other, even though max IOPS will be the same between the drives.)
  • You're ok without additional redundancy. Note that in any striping situation, you're increasing the likelihood of losing all your data, because you went from the probability of one disk failing, to the probability of either disk A or disk B (or both) failing. Even with ZFS keeping multiple metadata copies, with a random ~half of the data missing, you'll have a tough time recovering many complete / usable files from your pool.
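
For concreteness, here is a minimal sketch of the two layouts in question. The device path, the second pool's name, and its mountpoint are assumptions for illustration (on a real system, prefer the stable /dev/disk/by-id/... names over /dev/sdb):

# Option A: stripe the new disk into the existing pool ("just add it").
# Depending on your ZFS version, a plain data vdev may not be removable
# again later, so treat this as a permanent change.
zpool add primaryPool /dev/sdb
zpool status primaryPool

# Option B: keep the disks independent by putting the new one in its own pool.
zpool create -m /data secondPool /dev/sdb

Option A gives you one big striped pool with the failure characteristics described above; option B keeps each disk's data isolated, at the cost of managing two pools.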

That said, there are still unwise ways to set this up. If one of the disks is an SSD and the other is an HDD, striping will ruin the performance gains you got from using an SSD and probably make you quite sad. In that situation, I'd recommend either:

  1. Use the larger HDD as the "main data disk" and then split the SSD into two partitions: one large partition used as an L2ARC (cache) device to speed up reads of frequently-read data, and one small partition used as a ZIL (log) device to speed up synchronous write latencies. This solution is nice because it'll automatically cache the most beneficial stuff on the SSD, so you don't have to think too hard about balancing it. Also, you'll only lose all your data if you lose the HDD in this case (you could lose up to a few seconds of writes if the SSD dies, but that's much better than losing everything, as in the striped case above). A command sketch of this layout follows the list.
  2. Create a separate pool for each disk, and manually keep the stuff you want to be fast (OS, executables, libraries, swap, etc.) on the SSD, and the stuff that's ok being slow (movies, photo albums, etc.) on the HDD. This is best if the machine will be rebooted frequently, because data cached in the L2ARC does not persist across reboots. (This is a big weakness in the current L2ARC story for personal computers IMO, but it is being actively worked on.) From a redundancy standpoint, you obviously only lose the stuff that was on the disk that failed.
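
To make option 1 concrete, here is a rough sketch. Everything in it is assumed for illustration: a pool named hddPool living on the HDD, and an SSD at /dev/sdc already split into a large partition (sdc1) for the cache and a small one (sdc2) for the log; use /dev/disk/by-id paths and your own partition sizes in practice.

# add the big SSD partition as an L2ARC (read cache) device
zpool add hddPool cache /dev/sdc1
# add the small SSD partition as a log (ZIL / SLOG) device for synchronous writes
zpool add hddPool log /dev/sdc2
# confirm the cache and log sections show up under the pool
zpool status hddPool

Losing the cache partition costs nothing but warm cache; losing the log partition can cost the last few seconds of synchronous writes, as noted above.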

--- Edit, since these disks are virtualized ---

Since this is a VM, unless you've given the two virtual disks different performance parameters, neither of the performance / redundancy criteria above should prevent you from creating the pool with two mismatched disk sizes. However, it'll be much easier to manage if you just use your virtualization platform to resize the original disk to the sum of the proposed disk sizes. To use that additional space inside the guest, you'll have to run zpool online -e <pool> <disk>, and since this is ZoL you may have to fix your partition table first, as in the instructions at madboa.com/blog/2017/05/16/vm-expand-zfs.
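
Roughly, the in-guest steps after growing the virtual disk in VMware look like the sketch below. Treat it as a sketch under assumptions, not a recipe: the device name (sda), the partition number (1), and whether you need a rescan at all depend on your layout, and ZoL whole-disk pools also carry a small ninth partition at the end of the disk that the linked instructions show how to handle.

# ask the SCSI layer to notice the new, larger virtual disk
echo 1 > /sys/class/block/sda/device/rescan
# grow the partition holding the pool (needs a reasonably recent parted)
parted /dev/sda resizepart 1 100%
# expand the pool onto the new space; use the device name as shown by zpool status
zpool online -e primaryPool sda1
# optionally, let future size increases be picked up automatically
zpool set autoexpand=on primaryPool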

You should strongly prefer this approach because of the ease of management, but one very minor downside is that when you resize, ZFS can’t change its metaslab size. Metaslabs are an internal data structure used for disk space allocation, and until very recently ZFS always created 200 of them per disk regardless of the disk size (there is ongoing work to improve this). Therefore, when you increase the disk size from very small to very large, you could end up with a very high number of metaslabs, which uses a bit more RAM and a bit more disk space. This is not noticeable unless the disk size changes very dramatically (like 10G -> 1T), and even then only when you are pushing your machine to the limit on performance. The performance impact can usually be worked around by giving your VM a little more RAM.
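
If you want to see the effect for yourself, zdb can dump the metaslabs of each vdev; this is a read-only inspection, and the exact output format varies between ZFS versions:

# list every metaslab per vdev; the count is roughly vdev size / 2^metaslab_shift
zdb -m primaryPool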

  • The server is in a virtualized (VMware) environment, so I guess the underlying I/O would be the same for both disks.
    Commented Sep 12, 2018 at 3:26
  • Yeah, performance should be the same between disks in that case, assuming the two disks are created with the same parameters in VMware. Another solution: you can probably expand the existing device if you prefer, since it's virtualized anyway. After making the size larger in VMware, you'll have to tell ZFS to expand the disk with zpool online -e <pool> <disk> inside the guest. Since it's ZoL, you may also have to fix your partitioning scheme first, like in these instructions: madboa.com/blog/2017/05/16/vm-expand-zfs
    – Dan
    Commented Sep 12, 2018 at 4:09
  • Good answer, but I'd press harder on the unequal workload factor - especially if SSD, and especially as one disk is ~10x bigger than the other. In that situation, the (fast) SSD will get approx 9% of the I/O, while the (much slower) HDD will get approx 91% of the I/O - performance is ruined.
    – Attie
    Commented Sep 12, 2018 at 10:25
  • In addition to this, as OP's environment is virtualized, if both "disks" actually share a storage device/pool, then it would be infinitely better to expand the original virtual disk, rather than add a second virtual disk.
    – Attie
    Commented Sep 12, 2018 at 10:26
