Consider the following /etc/exports:
/verbatim 192.168.0.0/255.255.255.0(rw,sync,no_subtree_check)
/sandisk 192.168.0.0/255.255.255.0(rw,sync,no_subtree_check)
/verbatim
and /sandisk
are mountpoints of external hard drives defined in /etc/fstab
as follows:
/dev/disk/by-uuid/06b24834-a749-4d93-b0d5-a6da71eaf224 /verbatim ext4 defaults 0 1
/dev/disk/by-uuid/d7dbea69-0332-4d12-b905-b9a116e28422 /sandisk ext4 defaults 0 1
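For what it's worth, I'm aware that fstab supports the nofail option (and x-systemd.device-timeout=, per systemd.mount(5)) so that boot doesn't block on a missing device - a hypothetical variant of my entry would look like:

/dev/disk/by-uuid/06b24834-a749-4d93-b0d5-a6da71eaf224 /verbatim ext4 defaults,nofail,x-systemd.device-timeout=10 0 1

but I'm not sure whether that alone is enough to keep nfs-server.service from failing when the mount is absent.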
Now, if either of these hard drives fails to mount - for example, when it is powered off or unplugged - the NFS server fails to start. These are what I believe are the relevant journal entries:
Sep 23 09:45:26 lilo systemd[1]: dev-disk-by\x2duuid-06b24834\x2da749\x2d4d93\x2db0d5\x2da6da71eaf224.device: Job dev-disk-by\x2duuid-06b24834\x2da749\x2d4d93\x2db0d5\x2da6da71eaf224.device/start timed out.
Sep 23 09:45:26 lilo systemd[1]: Timed out waiting for device SAMSUNG_HD204UI 1.
Sep 23 09:45:26 lilo systemd[1]: Dependency failed for File System Check on /dev/disk/by-uuid/06b24834-a749-4d93-b0d5-a6da71eaf224.
Sep 23 09:45:26 lilo systemd[1]: Dependency failed for /verbatim.
Sep 23 09:45:26 lilo systemd[1]: Dependency failed for NFS server and services.
Sep 23 09:45:26 lilo systemd[1]: nfs-server.service: Job nfs-server.service/start failed with result 'dependency'.
Sep 23 09:45:26 lilo systemd[1]: verbatim.mount: Job verbatim.mount/start failed with result 'dependency'.
Sep 23 09:45:26 lilo systemd[1]: systemd-fsck@dev-disk-by\x2duuid-06b24834\x2da749\x2d4d93\x2db0d5\x2da6da71eaf224.service: Job systemd-fsck@dev-disk-by\x2duuid-06b24834\x2da749\x2d4d93\x2db0d5\x2da6da71eaf224.service/start failed with result 'dependency'.
Sep 23 09:45:26 lilo systemd[1]: dev-disk-by\x2duuid-06b24834\x2da749\x2d4d93\x2db0d5\x2da6da71eaf224.device: Job dev-disk-by\x2duuid-06b24834\x2da749\x2d4d93\x2db0d5\x2da6da71eaf224.device/start failed with result 'timeout'.
Is there any way to configure the NFS server so that it starts and serves the remaining drive (sandisk) even when one of the drives (verbatim) can't be mounted?