
If we zfs snapshot -r tank@20220129, all sub-filesystems also get that snapshot.

How can we send that single snapshot of tank, with all its sub-filesystems, to a new pool (with no historical base of snapshots)?

While zfs send -R tank@20220129 will send all sub-fs's, it will also send all snapshots.

(We could later delete all of those snapshots, but that could be a massive amount of extra sending just to delete upon completion.)

There seems to be no zfs send -r functionality.
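For scale, a dry run estimates how much the full replication stream would carry, all snapshots included:

# -n: dry run, -v: verbose size estimate, -R: full replication stream
zfs send -nvR tank@20220129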

  • Short answer - you can't.
    – drookie
    Commented Jan 29, 2022 at 19:43
  • Obviously one can achieve the effect with zfs list -r tank | shellscript.sh, but that's rather clunky :(
    – math
    Commented Jan 30, 2022 at 22:04

1 Answer


Final edit: you are correct, you can't; recursion is not handled well (notably for clones: each dataset in the tree needs to be made separately). I understand the assertion that with inheritance it is the function that is replicated rather than the current value of the inherited property, but I struggle to find use cases where that is the most desired behaviour. I'll be making a feature request: when any modifiable property is changed from its default, all children should automatically change to inherit it. I have an encryptionroot and a key-generation dataset with many non-defaults on the active children. Parents and children should have a property to opt out of receiving or passing on inheritance, for all or a set of properties; then zfs send -R -p would act in the expected way. Your case needs an upstream feature: often you want a recursive send of only a single snapshot, and likewise being able to recursively create clones from a recursively created snapshot is an option I was surprised to find not present.
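A read-only check shows what is at stake here: properties whose source is local travel with send -p/-R, while inherited ones re-inherit from whatever the new parent provides on receive:

# list properties set locally (source 'local') anywhere under tank
zfs get -r -s local all tank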

I was pretty sure you could do a zfs clone, or something like my original suggestion: "if snapshots come across, just zfs destroy them all, then do your zfs send | zfs recv of the clone without any snapshots" - which was inelegant, unresearched and lazy. You don't need to clone anyway; a for loop does it:

# plain per-dataset streams: one snapshot each, no snapshot history
for ds in $(zfs list -Ho name -r rpool); do \
zfs send "${ds}@20220129" | zfs recv -d newpool; \
done
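Note that zfs list -r prints parents before children, so the receives create the hierarchy in order. And because these are plain (non-replication) streams, no other snapshots come across, but no properties do either; add -p to each zfs send if you want each dataset's locally-set properties to travel with it.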

But a clone would also work, with a new zfs snap -r and zfs send -R, since the clone carries only the one snapshot. But you can't clone recursively, so you would need a similar for loop anyway. Or rsync it to a nice clean receiving pool with freshly created datasets and the properties you want, if you don't mind losing all the ZFS history.
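For completeness, the recursive-clone workaround as a loop; rpool/data is the subtree being cloned and rpool/clones is an illustrative destination prefix of my choosing, with zfs clone -p creating any missing parents:

for ds in $(zfs list -Ho name -r rpool/data); do \
zfs clone -p "${ds}@20220129" "rpool/clones/${ds#rpool/}"; \
done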

So I'm expanding my solution, because it ain't that easy or safe, and I'm doing it on a live but not critical system soon. In my case I'll also split my mirror vdevs and make some pool and dataset property changes on zfs recv.

Zpool and zfs recursive operations could do with improving; they have always been a bugbear. Nor is best practice clear with multiple zsys bootfs datasets, zfs-mount-generator, zfs-zed.service (which does not get restarted after a systemctl suspend cycle!), and persistent dataset mounts that do not reflect the state of zfs-list.cache/pool on boot! Canonical seem to have finished their push for ZFS on root and zsys usability. It's not done just because it's an Ubuntu installer option.

for zp in rpool bpool vault; do \
zpool trim -w $zp; zpool scrub -w $zp; \
zfs snap -r ${zp}@b4split; \
done

for zp in rpool bpool vault; do \
zpool attach -w $zp /dev/sda /dev/sde; \
zpool attach -w $zp /dev/sdc /dev/sdf; \
# zpool split has no -w flag; it also leaves the new pool exported,
# so import it (-N: without mounting) before initialize/scrub
zpool split $zp ${zp}-offbakup /dev/sdg /dev/sdh; \
zpool import -N ${zp}-offbakup; \
zpool initialize -w ${zp}-offbakup; \
zpool scrub -w ${zp}-offbakup; \
zpool export ${zp}-offbakup; \
done
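Alternatively, zpool split -R <altroot> splits and imports the new pool in one step, with its mounts kept out of the way of the live system (/mnt/verify here is just a scratch altroot):

zpool split -R /mnt/verify rpool rpool-offbakup /dev/sdg /dev/sdh
zpool scrub -w rpool-offbakup
zpool export rpool-offbakup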

cat << EOF >> /etc/motd
IMPORTANT NOTE TO SELF. Pool on zfs-receive with encryption, zstd, new dataset structure for boot environments.
OUT with ubuntu, snapd!!!, grub, ext4, btrfs, lz4, snapd, systemd, docker, x|X*
IN like RSYNC: void-linux-install, build zfsbootmenu AND s6 from source,
wayland, lxc, libvirt pcie passthrough to stripped win11 for mah Civ-6 and Steam
EOF

for zp in rpool bpool vault; do \
zfs snap -r ${zp}@pre-b4move; \
# stash a pool property's current value in a user property
# (PROPERTY is a placeholder; pool user properties need OpenZFS 2.2+)
zpool set localhost:PROPERTY_orig="$(zpool get -Ho value PROPERTY $zp)" $zp; \
zpool checkpoint $zp; \
zpool upgrade $zp; \
done
# (!) zpool upgrade is one-way: older tools can no longer import the pool
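If the move goes sideways, the checkpoint can be rewound to; either way it should be discarded when done, since attach, detach, split, remove and reguid are all refused while a checkpoint exists:

zpool export rpool
zpool import --rewind-to-checkpoint rpool
# or, once everything checks out:
zpool checkpoint -d rpool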

for ds in $(zfs list -Ho name -r rpool bpool vault); do \
# record some original dataset properties for reuse - inherited props belong
# to the parent dataset, so they revert on recv even with send -R or -p
zfs set localhost:PROPERTY_orig="$(zfs get -Ho value PROPERTY $ds)" $ds; \
done
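Later, on the receiving side, a matching loop can put the stashed values back (PROPERTY again a placeholder; zfs get prints "-" where the user property was never set):

for ds in $(zfs list -Ho name -r newpool); do \
orig=$(zfs get -Ho value localhost:PROPERTY_orig $ds); \
[ "$orig" != "-" ] && zfs set PROPERTY="$orig" $ds; \
done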

Work out how to install Void Linux and zfsbootmenu into this hackery, and all the zsys and systemd ZFS automounts, after a new zfs send/receive recursion. Having consistent inheritance, and -o overrides on zfs receive, is so important.
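On that last point, zfs recv can force or drop properties as the stream lands, which covers a lot of this; the property choices here are illustrative only:

# -o forces a property on received datasets, -x drops one so it inherits
zfs send -R rpool@pre-b4move | zfs recv -d -o compression=zstd -x mountpoint newpool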

└─# zlsz
bpool/BOOT/garuda              com.ubuntu.zsys:last-used           1665060644
bpool/BOOT/kinetic             com.ubuntu.zsys:last-used           1664996078
bpool/BOOT/pve30-cli           com.ubuntu.zsys:last-used           1664973489
bpool/BOOT/pve30-gnm           com.ubuntu.zsys:last-used           1665060644
rpool/ROOT/garuda              com.ubuntu.zsys:last-used           1665060644
rpool/ROOT/garuda              com.ubuntu.zsys:last-booted-kernel  vmlinuz-linux-lts
rpool/ROOT/garuda              com.ubuntu.zsys:bootfs              yes
rpool/ROOT/garuda/root         com.ubuntu.zsys:last-used           1665060644
rpool/ROOT/garuda/root         com.ubuntu.zsys:last-booted-kernel  vmlinuz-linux-lts
rpool/ROOT/garuda/root         com.ubuntu.zsys:bootfs              no
rpool/ROOT/garuda/srv          com.ubuntu.zsys:last-used           1665060644
rpool/ROOT/garuda/srv          com.ubuntu.zsys:last-booted-kernel  vmlinuz-linux-lts
rpool/ROOT/garuda/srv          com.ubuntu.zsys:bootfs              no
rpool/ROOT/garuda/var          com.ubuntu.zsys:last-used           1665060644
rpool/ROOT/garuda/var          com.ubuntu.zsys:last-booted-kernel  vmlinuz-linux-lts
rpool/ROOT/garuda/var          com.ubuntu.zsys:bootfs              no
rpool/ROOT/garuda/var/cache    com.ubuntu.zsys:last-used           1665060644
rpool/ROOT/garuda/var/cache    com.ubuntu.zsys:last-booted-kernel  vmlinuz-linux-lts
rpool/ROOT/garuda/var/cache    com.ubuntu.zsys:bootfs              no
rpool/ROOT/garuda/var/lib      com.ubuntu.zsys:last-used           1665060644
rpool/ROOT/garuda/var/lib      com.ubuntu.zsys:last-booted-kernel  vmlinuz-linux-lts
rpool/ROOT/garuda/var/lib      com.ubuntu.zsys:bootfs              no
rpool/ROOT/garuda/var/log      com.ubuntu.zsys:last-used           1665060644
rpool/ROOT/garuda/var/log      com.ubuntu.zsys:last-booted-kernel  vmlinuz-linux-lts
rpool/ROOT/garuda/var/log      com.ubuntu.zsys:bootfs              no
rpool/ROOT/garuda/var/tmp      com.ubuntu.zsys:last-used           1665060644
rpool/ROOT/garuda/var/tmp      com.ubuntu.zsys:last-booted-kernel  vmlinuz-linux-lts
rpool/ROOT/garuda/var/tmp      com.ubuntu.zsys:bootfs              no
rpool/ROOT/kinetic             com.ubuntu.zsys:last-used           1664996078
rpool/ROOT/kinetic             com.ubuntu.zsys:last-booted-kernel  vmlinuz-5.19.0-18-generic
rpool/ROOT/kinetic             com.ubuntu.zsys:bootfs              yes
rpool/ROOT/pve30-cli           com.ubuntu.zsys:last-used           1664973489
rpool/ROOT/pve30-cli           com.ubuntu.zsys:last-booted-kernel  vmlinuz-5.15.53-1-pve
rpool/ROOT/pve30-cli           com.ubuntu.zsys:bootfs              yes
rpool/ROOT/pve30-gnm           com.ubuntu.zsys:last-used           1665060644
rpool/ROOT/pve30-gnm           com.ubuntu.zsys:last-booted-kernel  vmlinuz-5.15.60-1-pve
rpool/ROOT/pve30-gnm           com.ubuntu.zsys:bootfs              yes
rpool/USERDATA/garuda          com.ubuntu.zsys:last-used           1665060644
rpool/USERDATA/garuda          com.ubuntu.zsys:bootfs-datasets     rpool/ROOT/garuda
rpool/USERDATA/kinetic         com.ubuntu.zsys:last-used           1664996078
rpool/USERDATA/kinetic         com.ubuntu.zsys:bootfs-datasets     rpool/ROOT/kinetic
rpool/USERDATA/pve30-cli       com.ubuntu.zsys:last-used           1664973489
rpool/USERDATA/pve30-cli       com.ubuntu.zsys:bootfs-datasets     rpool/ROOT/pve30-cli
rpool/USERDATA/pve30-gnm       com.ubuntu.zsys:last-used           1665060644
rpool/USERDATA/pve30-gnm       com.ubuntu.zsys:bootfs-datasets     rpool/ROOT/pve30-gnm
└─# zfs list -o name,used,dedup,secondarycache,sharesmb,acltype,overlay,compression,encryption,canmount,mountpoint,mounted
NAME                            USED  DEDUP          SECONDARYCACHE  SHARESMB  ACLTYPE   OVERLAY  COMPRESS        ENCRYPTION   CANMOUNT  MOUNTPOINT               MOUNTED
bpool                          1.94G  on             metadata        off       off       off      lz4             off          off       /bpool                   no
bpool/BOOT                     1.92G  on             metadata        off       off       on       lz4             off          off       none                     no
bpool/BOOT/garuda               250M  on             metadata        off       off       off      zstd-3          off          noauto    /boot                    no
bpool/BOOT/kinetic              782M  on             metadata        off       off       on       lz4             off          noauto    /boot                    no
bpool/BOOT/pve30-cli            273M  on             metadata        off       off       on       lz4             off          noauto    /boot                    no
bpool/BOOT/pve30-gnm            658M  on             metadata        off       off       on       lz4             off          noauto    /boot                    no
bpool/grub                     5.37M  on             metadata        off       off       on       lz4             off          noauto    /boot/grub               no
rpool                           176G  off            metadata        off       posix     off      lz4             off          off       /rpool                   no
rpool/LINUX                     772M  off            metadata        off       posix     off      lz4             off          off       /                        no
rpool/LINUX/opt                 765M  off            metadata        off       posix     off      lz4             off          noauto    /opt                     no
rpool/LINUX/usr-local          6.95M  off            metadata        off       posix     on       lz4             off          noauto    /usr/local               no
rpool/ROOT                     42.4G  off            metadata        off       posix     off      lz4             off          noauto    /rpool/ROOT              no
rpool/ROOT/garuda              19.7G  off            metadata        off       posix     off      zstd-3          off          noauto    /                        no
rpool/ROOT/garuda/root         3.56G  off            metadata        off       posix     off      zstd-3          off          noauto    /root                    no
rpool/ROOT/garuda/srv           208K  off            metadata        off       posix     off      zstd-3          off          noauto    /srv                     no
rpool/ROOT/garuda/var          5.49G  off            metadata        off       posix     off      zstd-3          off          off       /var                     no
rpool/ROOT/garuda/var/cache    5.46G  off            metadata        off       posix     off      zstd-3          off          noauto    /var/cache               no
rpool/ROOT/garuda/var/lib       192K  off            metadata        off       posix     off      zstd-3          off          off       /var/lib                 no
rpool/ROOT/garuda/var/log      10.1M  off            metadata        off       posix     off      zstd-3          off          noauto    /var/log                 no
rpool/ROOT/garuda/var/tmp      15.5M  off            metadata        off       posix     off      zstd-3          off          noauto    /var/tmp                 no
rpool/ROOT/kinetic             7.26G  off            metadata        off       posix     off      lz4             off          noauto    /                        no
rpool/ROOT/pve30-cli           6.18G  off            metadata        off       posix     off      lz4             off          noauto    /                        no
rpool/ROOT/pve30-gnm           9.28G  off            metadata        off       posix     off      lz4             off          noauto    /                        no
rpool/USERDATA                 13.8G  off            metadata        off       posix     on       lz4             off          off       none                     no
rpool/USERDATA/garuda          11.3G  off            metadata        off       posix     off      lz4             off          noauto    /home                    no
rpool/USERDATA/kinetic          791M  off            metadata        off       posix     on       lz4             off          noauto    /home                    no
rpool/USERDATA/pve30-cli       3.43M  off            metadata        off       posix     on       lz4             off          noauto    /home                    no
rpool/USERDATA/pve30-gnm       1.76G  off            metadata        off       posix     on       lz4             off          noauto    /home                    no
rpool/data                     98.9G  off            metadata        off       posix     off      lz4             off          on        /data                    yes
rpool/data/media               4.01G  off            metadata        off       posix     off      lz4             off          on        /data/media              yes
rpool/data/temp                 192K  off            metadata        off       posix     off      lz4             off          on        /data/temp               yes
rpool/data/vm-300-disk-0       29.9G  off            metadata        -         -         -        lz4             off          -         -                        -
rpool/data/vm-300-disk-1        312K  off            metadata        -         -         -        lz4             off          -         -                        -
rpool/data/vm-300-disk-2        128K  off            metadata        -         -         -        lz4             off          -         -                        -
rpool/data/zvol                65.0G  off            metadata        off       posix     off      lz4             off          on        /data/zvol               yes
rpool/data/zvol/vm-101-disk-0  3.15M  off            metadata        -         -         -        lz4             off          -         -                        -
rpool/data/zvol/vm-101-disk-1  65.0G  off            metadata        -         -         -        lz4             off          -         -                        -
rpool/data/zvol/vm-101-disk-2  6.12M  off            metadata        -         -         -        lz4             off          -         -                        -
rpool/pve                      20.2G  off            metadata        off       posix     off      lz4             off          off       /                        no
rpool/pve/var-lib-pve-cluster   912K  off            metadata        off       posix     on       lz4             off          noauto    /var/lib/pve-cluster     no
rpool/pve/var-lib-vz           16.4G  off            metadata        off       posix     on       lz4             off          on        /var/lib/vz              yes
rpool/pve/zfsys                3.73G  off            metadata        off       posix     off      lz4             off          on        /zfsys                   yes
vault                           759G  off            all             off       off       off      lz4             off          off       /vault                   no
vault/devops                    306G  off            all             off       off       off      lz4             off          off       /                        no
vault/devops/PVE               84.1G  off            all             off       off       off      lz4             off          off       /var/lib                 no
vault/devops/PVE/vz            84.1G  off            all             off       off       off      lz4             off          on        /var/lib/vvz             yes
vault/devops/vm                 222G  off            all             off       off       off      lz4             off          off       /vm                      no
vault/devops/vm/vm-502-disk-0    88K  off            all             -         -         -        lz4             off          -         -                        -
vault/devops/vm/vm-502-disk-1  12.7G  off            all             -         -         -        lz4             off          -         -                        -
vault/devops/vm/vm-502-disk-2    64K  off            all             -         -         -        lz4             off          -         -                        -
vault/devops/vm/vm-510-disk-0  3.08M  off            all             -         -         -        lz4             off          -         -                        -
vault/devops/vm/vm-510-disk-1   209G  off            all             -         -         -        lz4             off          -         -                        -
vault/devops/vm/vm-510-disk-2  6.07M  off            all             -         -         -        lz4             off          -         -                        -
vault/media                     453G  off            all             off       off       off      lz4             off          off       /vault/media             no
vault/media/APP                 192G  off            all             off       off       off      lz4             off          off       /share                   no
vault/media/APP/downloads      15.8G  off            all             off       off       off      lz4             off          on        /share/downloads         yes
vault/media/APP/library_pc      176G  off            all             off       off       off      lz4             off          on        /share/library_pc        yes
vault/media/DOCS               26.6G  off            all             off       off       off      lz4             off          off       /share                   no
vault/media/DOCS/personal      26.6G  off            all             off       off       off      lz4             off          noauto    /share/personal          no
vault/media/DOCS/reference       96K  off            all             off       off       off      lz4             off          noauto    /share/reference         no
vault/media/LINUX              1.29G  off            all             off       off       off      lz4             off          off       /share                   no
vault/media/LINUX/lxsteam      1.29G  off            all             off       off       on       lz4             off          on        /home/mike/.local/Steam  yes
vault/media/MUSIC               167G  off            all             off       off       off      lz4             off          off       /share                   no
vault/media/MUSIC/dj_bylabel    167G  off            all             off       off       off      lz4             off          on        /share/dj_bylabel        yes
vault/media/PHOTO               288K  off            all             off       off       off      lz4             off          off       /share                   no
vault/media/PHOTO/albums         96K  off            all             off       off       off      lz4             off          noauto    /share/albums            no
vault/media/PHOTO/public         96K  off            all             off       off       off      lz4             off          noauto    /share/public            no
vault/media/video              66.2G  off            all             off       off       off      lz4             off          off       /share                   no
vault/media/video/library      66.2G  off            all             off       off       off      lz4             off          on        /share/library           yes
