This is not the same as importing a pool from a crashed system, because the system has not crashed: the OS is fine (it lives on a different pool, on different drives and a different SATA bus).
I'd like to copy some key files off before rebooting, in case the box doesn't come back up clean (there are no smart hands nearby at the client's underserviced remote datacenter).
In this case a USB caddy dropped off the bus and then reattached (power fail/glitch). ZFS still thinks the pool is mounted, but all IO to it wedges; sdd and sde have come back as sdf and sdg.
The pool was mounted by /dev/disk/by-id, but of course those IDs are unchanged (and in /dev they now point at sd[fg]), and the old pool was never exported.
Every zpool command wedges, I assume because it touches /dev/sdd and /dev/sde, which then hangs the whole shell (I'm on my 10th bash shell in screen windows now).
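A safe first check, before running any zpool command, is to see where the by-id symlinks now resolve; readlink does this without ever opening the block devices. A minimal self-contained sketch of that check, using a throwaway directory and made-up ID names standing in for /dev/disk/by-id (on the real box, just loop over /dev/disk/by-id/* instead):

```shell
#!/bin/sh
# Demo: see where by-id style symlinks point, without open()ing any device.
# $byid and the ID names are stand-ins for /dev/disk/by-id and its contents.
byid=$(mktemp -d)
ln -s /dev/sdf1 "$byid/ata-EXAMPLE_DISK_A-part1"   # hypothetical ID name
ln -s /dev/sdg1 "$byid/ata-EXAMPLE_DISK_B-part1"   # hypothetical ID name

for l in "$byid"/*; do
    # readlink -f resolves the link target; it never reads the device itself
    printf '%s -> %s\n' "$(basename "$l")" "$(readlink -f "$l")"
done
rm -rf "$byid"
```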
The array itself works, however:
dd if=/dev/sdf1 of=/dev/null
runs fine and iostat shows IO on that drive (same for sdg), so the drives are readable without wedging anything.
But any zpool command, even zpool import -Nd /dev/sdf1 poolname newpoolname,
touches something somewhere in the old sd[de] world and wedges.
What zpool import command can I run so that it absolutely does not attempt to touch any other drive? zpool import -d /dev/sdf1 -N newname
(or 'oldname newname') just wedges.
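One thing I was considering (untested in this state, so treat it as a sketch): zpool(8) documents -d as taking a directory and scanning only the devices found in it. So pointing it at a throwaway directory containing symlinks to just the reattached devices should, in theory, keep libzfs away from the dead sdd/sde nodes entirely. Device names are the ones from above; the directory is a made-up temp dir:

```shell
#!/bin/sh
# Sketch: restrict zpool's device scan to a private directory that links
# to ONLY the reattached devices, so nothing should ever open sdd/sde.
zdev=$(mktemp -d)                # any empty scratch directory works
ln -s /dev/sdf "$zdev/sdf"       # whole-disk links; use sdf1/sdg1 instead
ln -s /dev/sdg "$zdev/sdg"       # if the pool was built on partitions

# -N: import without mounting datasets; -d: scan only $zdev.
# Shown with echo as a dry run -- drop the echo on the real system.
echo zpool import -N -d "$zdev" poolname newpoolname
```

Whether the import itself still wedges on the stale in-kernel pool state is exactly what I don't know.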
A last resort would be to dd the whole drive to another system as a raw image and mess with zpool there (via loop devices), but sending 4TB will take forever.
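Back-of-the-envelope for that last resort, assuming a gigabit link with ~110 MB/s of usable throughput and a pipeline along the lines of dd if=/dev/sdf bs=64K | ssh otherhost 'dd of=sdf.img bs=64K' (hostnames and filenames made up), followed by losetup and a zpool import -d against the loop device over there:

```shell
#!/bin/sh
# Rough transfer-time estimate for shipping a raw 4 TB image over gigabit.
# Assumes ~110 MB/s of usable throughput (TCP/ssh overhead included).
bytes=4000000000000          # 4 TB
rate=110000000               # ~110 MB/s
secs=$((bytes / rate))
hours=$((secs / 3600))
echo "~${hours} hours at gigabit speeds"   # prints "~10 hours at gigabit speeds"
```

Call it half a day of copying before I can even start poking at the pool elsewhere, which is why this really is the last resort.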