
I am using Debian Jessie in VMware Workstation with one virtual hard drive.

On /dev/sda1 I have /boot, on /dev/sda2 I have /, and on /dev/sda3 I have swap. /dev/sda5 and /dev/sda6 were in a ZFS mirrored pool (together about 4.7 GB), with /home mounted on zfs_pool/home. Everything runs fine, but I have the following scenario: capacity on my mirrored pool is running out and I want to increase the capacity of the mirrored pool — but first, the capacity of my virtual disk.

ZFS

As shown in the picture above, I expanded the virtual hard drive to 10 GB and created the /dev/sda7 partition with GParted. From zfs_pool I detached /dev/sda5, attached /dev/sda7, and resilvered the data from /dev/sda6 to /dev/sda7; then I detached /dev/sda6 too. Because I now want to increase the capacity of my mirrored pool, I need to create one partition (from /dev/sda5 and /dev/sda6) and attach it to /dev/sda7 as a mirror (is that the recommended way to increase the capacity of a mirrored pool?).
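For reference, the detach/attach/resilver sequence I describe corresponds roughly to these commands (pool and device names taken from my setup):

```shell
# Detach one side of the mirror, attach the new larger partition,
# and let ZFS resilver onto it.
zpool detach zfs_pool /dev/sda5
zpool attach zfs_pool /dev/sda6 /dev/sda7   # resilvers sda6 -> sda7
zpool status zfs_pool                       # wait until the resilver completes
zpool detach zfs_pool /dev/sda6
```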

Now my pool is not in a mirrored state; it consists only of /dev/sda7. But if I delete /dev/sda5 and /dev/sda6, my partition /dev/sda7 is renamed to /dev/sda5, and after a reboot ZFS doesn't recognize my pool. Here is the output of zdb and of fdisk -l:

ZDB and fdisk

As you can see, the path is still /dev/sda7, but what I have is /dev/sda5, which (I think) holds the correct data. Is there any way to fix this path? Or should I simply create a new mirrored pool on /dev/sda5 and /dev/sda6 from the unallocated space?

Thanks for your answer, and have a nice day.

    I sincerely hope you’re just experimenting, because that setup is incredible. And by incredible I mean bad. You gain no benefits whatsoever (well, except double the IO) by putting both vdevs of a mirror on the same disk.
    – Daniel B
    Commented Jul 29, 2016 at 16:34

1 Answer


Best way to increase capacity
The best way to increase the capacity of a pool is to add a vdev to the pool. If this isn't an option, you can replace the drives one at a time in a resilver → replace → resilver fashion.
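A sketch of the replace approach (the device names here are placeholders, not from your setup):

```shell
# Let the pool grow automatically once every device in a vdev is larger.
zpool set autoexpand=on zfs_pool
# Replace each mirror member with a larger device, waiting for the
# resilver to finish in between (check progress with `zpool status`).
zpool replace zfs_pool old_dev1 new_dev1
zpool replace zfs_pool old_dev2 new_dev2
```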

What happened in your config:
Linux assigns partition names based on position in the partition table; in your case you had seven partitions, so you had the names sda1..7. When you merged the two partitions, this decreased your partition count, resulting in the names sda1..5. This is why it is a good idea to add devices by a unique identifier, so that ZFS can still locate them even if the sda name changes.
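For example, instead of attaching by sda name you can use the stable names under /dev/disk/by-id, which survive renumbering (the exact id string below is illustrative, not from your system):

```shell
# List the stable identifiers for your devices and partitions:
ls -l /dev/disk/by-id/
# Then attach by id instead of by /dev/sda7:
zpool attach zfs_pool existing_dev \
    /dev/disk/by-id/ata-VBOX_HARDDISK_VB12345678-part7
```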

How to fix this issue:
If you run zpool import, it should be able to find the previously built pool on what was /dev/sda7 and is now /dev/sda5. You will more than likely get output like the following:

# zpool import

  pool: dozer
  id: 2704475622193776801
  state: ONLINE
  action: The pool can be imported using its name or numeric identifier.
  config:
    dozer       ONLINE
      c1t9d0    ONLINE

If you get this, first attempt importing by pool ID with zpool import 2704475622193776801. If that is unsuccessful, you may have to rename the pool by importing it under a different name with the command zpool import old_name new_name.
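You can also point the import scan at a specific directory so the pool comes back with up-to-date device paths — this is a convenient moment to switch to stable by-id names as well (a sketch; adjust the pool name if you rename it):

```shell
# Export first if the pool is currently (half-)imported:
zpool export zfs_pool
# Re-scan using stable identifiers and import:
zpool import -d /dev/disk/by-id zfs_pool
```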

Note:
In your first picture the zpool status shows:

NAME        STATE     READ  WRITE  CKSUM
zfs_pool    ONLINE       0      0      0
  sda7      ONLINE       0      0      0

In this config there is no mirror vdev. If you add your newly merged partition to this pool with zpool add, it will not mirror the drives but instead form what is called a dynamic stripe, essentially a RAID 0. To get a mirror, attach the new partition to the existing device with zpool attach, or recreate the pool. Good luck, hope this helps!
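A minimal sketch of turning the single-device pool into a two-way mirror with zpool attach (the name of the newly merged partition is an assumption — check fdisk -l first):

```shell
# `attach` (not `add`) pairs the new device with the existing one;
# after the resilver, `zpool status` shows a mirror-0 vdev.
zpool attach zfs_pool sda5 new_merged_partition
zpool status zfs_pool
```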

Source: Importing ZFS Storage Pools
