FWIW I've been running a build server for five years and it only made a 4% dent in the wear indicator, despite the SSDs being the bottleneck for most of the time.
FWIW 845 Mbps is already suspiciously low. I mean, I collect old PCs, and everything built in the last ten years can saturate a 1 Gbps link with no trouble.
Exactly -- I have lots of data in the RAID6 anyway, and I've set up a small volume to follow the SSD. The (battery backed) write cache makes the scattered write performance bearable, and I can hot-swap the SSD when it fails.
Another useful configuration on Linux is a software RAID1 between an SSD and a RAID6 volume, with the --write-mostly flag applied to the RAID6 member. This gives you the read performance of the SSD while retaining very good fault tolerance.
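The setup described above can be sketched with mdadm; this is a sketch only, the device names (/dev/sda1 for the SSD, /dev/md0 for the existing RAID6) and the new array name /dev/md1 are placeholders, and it needs root on real hardware:

```
# Sketch only -- device names are placeholders; running this against
# the wrong devices will destroy data.

# Build a RAID1 from the SSD partition and the RAID6 array.
# --write-mostly marks the member that follows it (the RAID6 here),
# so reads are served from the SSD whenever possible, while writes
# still go to both mirrors.
mdadm --create /dev/md1 --level=1 --raid-devices=2 \
    /dev/sda1 --write-mostly /dev/md0

# Verify which member carries the write-mostly flag (shown as "(W)"):
cat /proc/mdstat
```

When the SSD dies, the array degrades to the RAID6 alone; a replacement SSD can then be added back with `mdadm --add`.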
I mean, both sdb3 and sdc3 have a PV on them, which is assigned to a volume group, and there are about 100 GB used on each and less than 100 GB free, so there isn't enough free space to keep all the data on just one. The dd command you suggested would overwrite the contents of one PV with that of the other, destroying data in the process. You need an additional storage medium with enough space. I think you should be able to use LVM to copy the data, which would solve the size difference problem, but I'm not entirely sure how it reacts to being unable to update metadata.
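The LVM route mentioned above would look roughly like this; a sketch, assuming an additional disk (here the hypothetical /dev/sdd1) large enough to hold the data, and a volume group named myvg -- both names are placeholders:

```
# Sketch only -- device and VG names are placeholders; needs root.

# Bring the new, larger disk into the volume group first:
pvcreate /dev/sdd1
vgextend myvg /dev/sdd1

# Migrate all allocated extents off the old PV. pvmove works online
# and can be restarted if interrupted.
pvmove /dev/sdc3 /dev/sdd1

# Once the old PV is empty, drop it from the VG:
vgreduce myvg /dev/sdc3
pvremove /dev/sdc3
```

Because pvmove copies extents rather than raw sectors, the size difference between the PVs doesn't matter as long as the target has enough free extents.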
That is the same for any correct implementation of su -- if the login shell is set to /bin/false, logging in is supposed to be impossible. Users should still be allowed to switch to a different shell if they want, though, so there is a list of valid shells (/etc/shells on Linux) that users may invoke directly.
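The check amounts to an exact-line lookup in /etc/shells. A minimal sketch -- is_valid_shell is our own hypothetical helper, not a real utility, and the optional second argument only exists so the logic can be tried against a file other than the system one:

```shell
# /etc/shells holds one absolute shell path per line; tools like su
# and chsh treat a shell as "valid" only if it appears there.
# is_valid_shell is a hypothetical helper for illustration.
is_valid_shell() {
    # grep -x matches the whole line exactly, -q suppresses output;
    # exit status signals whether the shell is listed.
    grep -qx -- "$1" "${2:-/etc/shells}"
}

is_valid_shell /bin/false && echo "listed" || echo "not listed"
```

A shell like /bin/false is typically not listed, which is exactly why assigning it blocks interactive logins.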