How do LVM snapshots work at the PE level? What is stored where, and what data is lost when the snapshot runs out of COW space?
(description + explanation below)
I was experimenting with LVM and snapshots in VirtualBox, and I noticed some strange behavior. I wanted to see how a system would react in various situations, so I installed Lubuntu 13.04 on a virtual machine with the LVM option checked. After the install, I added a second 8 GB drive to the virtual machine, used `vgextend` to extend the volume group `lubuntu-vg` onto `/dev/sdb`, and then took a snapshot of `lubuntu-vg/root` with `lvcreate`, with a size of 6.74G, creating `lubuntu-vg/rootsnap`. (Note: the `lvdisplay` output logged below was run before I created the snapshot.)
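In outline, my setup steps were something like this (reconstructed from memory, so the exact flags may have differed; the `pvcreate` may also have happened implicitly via `vgextend`):

```
# Reconstruction of my setup steps; exact invocations may have differed.
sudo pvcreate /dev/sdb                   # initialize the new disk as a physical volume
sudo vgextend lubuntu-vg /dev/sdb        # add it to the existing volume group
sudo lvcreate --snapshot --size 6.74G \
     --name rootsnap lubuntu-vg/root     # snapshot of root, named rootsnap
```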
```
user@user-VirtualBox:~$ sudo fdisk /dev/sda

Command (m for help): p

Disk /dev/sda: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000c4cee

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048      499711      248832   83  Linux
/dev/sda2          501758    16775167     8136705    5  Extended
/dev/sda5          501760    16775167     8136704   8e  Linux LVM

Command (m for help): q

user@user-VirtualBox:~$ sudo lvdisplay
  --- Logical volume ---
  LV Path                /dev/lubuntu-vg/root
  LV Name                root
  VG Name                lubuntu-vg
  LV UUID                JeyQ7Z-dtu1-Yr5R-hTTU-6Vya-Dr67-qSXwTf
  LV Write Access        read/write
  LV Creation host, time lubuntu, 2013-05-02 18:09:41 -0500
  LV Status              available
  # open                 1
  LV Size                6.73 GiB
  Current LE             1723
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0

  --- Logical volume ---
  LV Path                /dev/lubuntu-vg/swap_1
  LV Name                swap_1
  VG Name                lubuntu-vg
  LV UUID                ZkyAxG-mFB0-zhDH-GjfK-CHlz-RbMc-ilumbj
  LV Write Access        read/write
  LV Creation host, time lubuntu, 2013-05-02 18:09:41 -0500
  LV Status              available
  # open                 2
  LV Size                1020.00 MiB
  Current LE             255
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:1

user@user-VirtualBox:~$ sudo vgdisplay
  --- Volume group ---
  VG Name               lubuntu-vg
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  8
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               15.75 GiB
  PE Size               4.00 MiB
  Total PE              4033
  Alloc PE / Size       1978 / 7.73 GiB
  Free  PE / Size       2055 / 8.03 GiB
  VG UUID               2ZEhCz-Q988-oBAc-nE14-MdUs-j7un-2oicHD

user@user-VirtualBox:~$ sudo pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda5
  VG Name               lubuntu-vg
  PV Size               7.76 GiB / not usable 2.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              1986
  Free PE               8
  Allocated PE          1978
  PV UUID               OYCQrn-p7PH-4D52-4xRR-xphi-9DyL-Klys3t

  --- Physical volume ---
  PV Name               /dev/sdb
  VG Name               lubuntu-vg
  PV Size               8.00 GiB / not usable 4.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              2047
  Free PE               2047
  Allocated PE          0
  PV UUID               ErizVU-o1Vf-73GO-Pkwf-PeM9-xoWo-snSmm2
```
I then downloaded a few updates to fill up `/var/cache/apt/archives`, and then powered down the system. To simulate a drive failure, I removed `/dev/sdb` from the VirtualBox settings and booted the machine back up. It failed to mount, because it couldn't find `lubuntu-vg/root`. At this point, I tried to figure out what the LE and PE configuration "looked like" for my system.
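For future readers: on a working system, the actual LE-to-PE mapping can be dumped directly with standard LVM commands, for example:

```
sudo pvdisplay --maps                        # per-PV: which PE ranges back which LV
sudo lvdisplay --maps /dev/lubuntu-vg/root   # per-LV: LE ranges and the PVs behind them
sudo lvs -a -o +devices,seg_pe_ranges        # compact per-segment view, including hidden LVs
```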
I tried to imagine what the layout of my LVM setup would look like. On `/dev/sda`, it would have allocated a few PEs at the beginning for the swap, then allocated the rest of the PEs to `lubuntu-vg/root`. (The numbers support this: swap's 255 LEs plus root's 1723 LEs account for all 1978 allocated PEs on `/dev/sda5`, leaving only 8 of its 1986 PEs free.) Then I extended `lubuntu-vg` onto `/dev/sdb` and took a snapshot, which I imagined would allocate most of the PEs on `/dev/sdb`. My guess is that `lubuntu-vg/rootsnap` now uses the same PEs that `lubuntu-vg/root` did originally, and that `lubuntu-vg/root` (with the cached .debs in `/var/cache/apt/archives`) uses a mixture of the old PEs and the newly allocated PEs on `/dev/sdb` for COW purposes. So it made sense to me that when I removed `/dev/sdb` (supposedly holding the COW PEs), the machine failed to boot because it couldn't find `lubuntu-vg/root`.
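If I understand correctly, the snapshot's COW data lives in a hidden LV (conventionally named `[rootsnap-cow]` for a snapshot called `rootsnap`), so with both disks attached I could have checked where it actually sits:

```
# Show all LVs, including hidden ones, with the devices and PE ranges backing them:
sudo lvs -a -o lv_name,attr,devices,seg_pe_ranges lubuntu-vg
# The [rootsnap-cow] row would list the PV(s) and PE ranges holding the COW data.
```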
I then re-added `/dev/sdb`, booted up, and deleted the snapshot. At this point, I expected that if I were to remove `/dev/sdb` again, the system would fail to boot, since the COW PEs were on that drive. However, when I tried this, the system booted up successfully and the .debs were still in `/var/cache/apt/archives`, despite `/dev/sdb` not being attached.
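Reconstructed, that second experiment was roughly (exact invocations may have differed):

```
sudo lvremove lubuntu-vg/rootsnap            # delete the snapshot (and its COW area)
sudo pvs -o +pv_pe_count,pv_pe_alloc_count   # check whether /dev/sdb still holds allocated PEs
# ...then power off, detach /dev/sdb in VirtualBox, and boot again.
```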
How is this possible? I thought the COW PEs were on `/dev/sdb`, which I removed. Did LVM move the COW PEs to `/dev/sda` when I removed the snapshot, or does it perform some other fancy movement when I create the snapshot?
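For what it's worth, the only command I know of that explicitly relocates extents between PVs is `pvmove`, and I never ran anything like it:

```
# Explicit extent migration (which I did NOT do) would look like:
sudo pvmove /dev/sdb /dev/sda5    # move all allocated PEs off /dev/sdb onto /dev/sda5
```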
One thought I had was that maybe when LVM allocates PEs, it does so only virtually and doesn't physically reserve them when an LV is created, allowing the PEs of different LVs to be interleaved with one another. If that is the case, why wasn't the system able to boot when `/dev/sdb` was removed? Wouldn't the COW PEs have been on `/dev/sda` then?
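A way to check that hypothesis would be to dump the per-PV segment table, which shows exactly which runs of PEs are reserved for which LV (free runs show no LV name):

```
# Per-PV segment table: each row is a run of PEs, with the LV it is reserved for:
sudo pvs --segments -o pv_name,pvseg_start,pvseg_size,lv_name
```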