
This is not the same as importing a pool from a crashed system: the system has not crashed. The OS is fine (it lives on a different pool/set of drives/SATA bus).

I'd like to copy some key files before rebooting (in case it doesn't come up clean; there are no nearby smart hands in this client's underserviced remote datacenter).

In this case a USB caddy went away and then reattached (power failure/glitch). ZFS thinks the pool is still mounted, but all I/O to it wedges; sdd and sde have come back as sdf and sdg.

The pool was mounted by /dev/disk/by-id, and of course those IDs are unchanged (in /dev they now point at sd[fg]), and the old pool has not been exported.

Every zpool command wedges, I assume because it touches /dev/sdd and sde, which then hangs the whole shell (I'm now on my 10th bash shell in screen windows...).

The array itself works, however: dd if=/dev/sdf1 of=/dev/null runs fine, and iostat shows I/O on that drive (same for sdg). So the drives are readable without wedging.
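That read test can be sketched as follows. The device name defaults to /dev/sdf1 from the question; the fallback to /dev/zero is only there so the sketch runs on machines that don't have that device.

```shell
# Read-test a renamed member device without involving ZFS at all.
# DEV defaults to the post-reattach name from the question.
DEV=${DEV:-/dev/sdf1}

# Fall back to /dev/zero when the question's device isn't present,
# so this sketch is runnable anywhere.
[ -r "$DEV" ] || DEV=/dev/zero

# Pull a bounded amount of data and discard it. A wedged device would
# hang here; a healthy one returns promptly.
if dd if="$DEV" of=/dev/null bs=1M count=64 status=none; then
    echo "readable"
else
    echo "read failed"
fi
```

Watching `iostat -x 1` in another window while this runs confirms the I/O is really hitting the drive.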

But any zpool command, even zpool import -Nd /dev/sdf1 poolname newpoolname, touches something somewhere in the sd[de] world and wedges.

What zpool import command can I run that absolutely will not attempt to touch any other drive? zpool import -d /dev/sdf1 -N newname (or 'oldname newname') just wedges.

A last resort would be to dd the whole drive to another system as a raw image, then work on the pool there (via loop devices), but sending 4 TB will take forever.
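That last-resort path can be sketched like this. SRC and IMG default to scratch paths so the sketch runs anywhere; for the real recovery SRC would be /dev/sdf, the copy would be piped over ssh, and the hostname/paths shown in comments are placeholders, not from the question.

```shell
# Image a member device, verify the copy, then import it elsewhere
# via a loop device.
SRC=${SRC:-/tmp/member-src.img}
IMG=${IMG:-/tmp/member-copy.img}

# Stand in for the real 4 TB device with a small scratch file
# when SRC is absent, so the sketch is runnable anywhere.
[ -r "$SRC" ] || dd if=/dev/urandom of="$SRC" bs=1M count=8 status=none

# The copy itself; over the wire this would instead be something like:
#   dd if=/dev/sdf bs=64M | ssh recovery-host 'cat > /path/member.img'
dd if="$SRC" of="$IMG" bs=1M status=none

# Verify bit-for-bit before trusting the image for import.
[ "$(sha256sum < "$SRC")" = "$(sha256sum < "$IMG")" ] && echo "image verified"

# On the recovery host (root + ZFS required; shown for reference only):
#   losetup --find --show --partscan "$IMG"   # prints e.g. /dev/loop0
#   zpool import -d /dev -f poolname
```

The checksum step matters when the copy takes many hours: a silent truncation or transfer error is far cheaper to detect before attempting the import than after.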

  • Normally I should have gotten "/dev/sdf1: not a directory" from the -d option; not sure why I didn't. If one uses -d /dev, the whole of /dev is scanned, /dev/sdd and sde are touched, and things wedge. One could do mkdir ~/poolname; cd ~/poolname; ln -s /dev/sdf1; ln -s /dev/sdg1; zpool import -d . poolname newpoolname, but it says "cannot import poolname: no such pool available", i.e. the pool is still mounted/imported. Exporting doesn't work because of processes stuck on poolname's existing mountpoint (which also can't be killed with -9 or otherwise). So that's no help.
    – math
    Commented Feb 24 at 23:38
  • (Copying the 4 TB did in fact work, despite the very slow copy: zpool import -d /dev -f poolname once /dev/loop0 was set up.)
    – math
    Commented Feb 24 at 23:39
