I recently ran an rsync from one AWS instance to another, after which the root file system went into read-only mode.
I can remount the file system read-write, but after a reboot it reverts to read-only:
mount
...
/dev/xvda1 on / type xfs (ro,relatime,attr2,inode64,noquota)
...
sudo mount -o remount,rw /dev/xvda1
mount
...
/dev/xvda1 on / type xfs (rw,relatime,attr2,inode64,noquota)
...
reboot
mount
...
/dev/xvda1 on / type xfs (ro,relatime,attr2,inode64,noquota)
...
This is a CentOS instance.
I couldn't find a similar post, but please redirect me if I have missed one. Any help is appreciated.
Update
journalctl
...
Jul 23 11:48:36 ip-xxx.compute.internal systemd-remount-fs[1773]: mount: can't find LABEL=root
Jul 23 11:48:36 ip-xxx.compute.internal systemd-remount-fs[1773]: /bin/mount for / exited with exit status 1.
Jul 23 11:48:36 ip-xxx.compute.internal systemd[1]: systemd-remount-fs.service: main process exited, code=exited, status=1/FAILURE
Jul 23 11:48:36 ip-xxx.compute.internal systemd[1]: Failed to start Remount Root and Kernel File Systems.
Jul 23 11:48:36 ip-xxx.compute.internal systemd[1]: Unit systemd-remount-fs.service entered failed state.
Jul 23 11:48:36 ip-xxx.compute.internal systemd[1]: systemd-remount-fs.service failed.
...
cat /etc/fstab | head -n 1
LABEL=root / xfs defaults,relatime 1 1
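The journalctl error suggests the label in /etc/fstab does not match any label actually on the disk. A quick way to confirm the mismatch is to compare the two (a sketch; /dev/xvda1 is the device from the mount output above, and blkid may need to be run as root to see the label):

```shell
# Label that /etc/fstab expects for / (strip the LABEL= prefix from field 1)
fstab_label=$(awk '$2 == "/" {sub(/^LABEL=/, "", $1); print $1}' /etc/fstab)

# Label actually present on the device (empty if none is set)
disk_label=$(blkid -s LABEL -o value /dev/xvda1 2>/dev/null)

echo "fstab expects: ${fstab_label:-<none>}"
echo "disk has:      ${disk_label:-<none>}"
```

If the two values differ (or the disk has none), systemd-remount-fs cannot resolve LABEL=root and leaves / mounted read-only.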
Solution
Please see the accepted answer from nKn. However, in my case I needed a couple of extra steps:
As this was the root/boot filesystem, I had to attach the volume to another instance and relabel the filesystem there before re-attaching it to the original instance. In AWS this can be done by stopping the two instances and going to Volumes, then Actions > Attach Volume > select the second instance.
As the filesystem was XFS, I needed to run xfs_admin -L "root" /dev/sdb (https://docs.oracle.com/cd/E37670_01/E37355/html/ol_admin_xfs.html) once the volume was attached to the second instance.
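The relabel was only needed because /etc/fstab refers to the root filesystem by label. An alternative sketch, assuming the device name /dev/xvda1 stays stable across reboots (as it does on Xen-based AWS instances), is to remount rw as shown earlier and point the fstab entry at the device node instead, so no label lookup is required:

```
/dev/xvda1 / xfs defaults,relatime 1 1
```

Relabeling via a second instance keeps the original fstab intact, which is why I went that route.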