
If we have three SSDs installed in a system, each with a spare partition covering its unused space (/dev/sda2, /dev/sdb2, and /dev/sdc2), can bcache be configured to use all three of those partitions to cache a single backing device (such as /dev/md10, which is laid on top of /dev/sd[defghij]1)?

From what I have read elsewhere, a single SSD can be used by bcache to cache multiple hard drives or RAID arrays. But nothing explains whether one backing device can be cached by multiple SSDs at the same time.

For instance, you might have three 100 GB SSDs instead of a single 300 GB SSD, and you want to use them as caching devices for a single 12 TB array.

  • Not an answer, but have you considered using ZFS (with its SSD-backed log (ZIL) and cache (L2ARC) devices, and its efficient built-in RAID)? Commented Aug 27, 2014 at 12:26
  • I'm fairly certain it's one cache per backing device. That won't stop you from concatenating those disks by some other means before formatting the newly joined device as a single bcache cache. That may have changed since I set my own up a year ago, though. Certainly one cache can serve many backing devices, but, if I recall correctly, the opposite is not true.
    – mikeserv
    Commented Aug 27, 2014 at 12:27
  • I'm pretty sure that you are right, mikeserv. One caching device can service multiple backing devices, but you can't use multiple cache devices with one backend. However, if I had five SSDs, I could set up a RAID-10 or RAID-6 array across four of them (using the fifth as a spare) and use that device as the cache device for multiple backing devices; see the sketch after these comments.
    – tgharold
    Commented Aug 28, 2014 at 18:15
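
A minimal sketch of the layout tgharold describes, assuming five SSDs /dev/sd[abcde] each with a spare second partition; the array name /dev/md/ssd-cache is hypothetical:

    # RAID-10 across four SSD partitions, keeping the fifth as a hot spare
    mdadm --create /dev/md/ssd-cache --level=10 --raid-devices=4 \
          --spare-devices=1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2
    # Format the resulting array as a single bcache cache device
    make-bcache -C /dev/md/ssd-cache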

2 Answers


The bcache documentation clearly states that you can use one caching device for multiple backing devices, but not vice versa (at least not yet). You are free, however, to arrange your SSDs in a RAID-0, RAID-1, or RAID-5 array, initialize the caching volume on that array, and attach your backing devices to it, as sketched below.
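A minimal sketch of that arrangement under the question's device names (/dev/sd[abc]2 for the SSD partitions, /dev/md10 for the backing array); the array name /dev/md/bcache-cache is hypothetical:

    # Join the three SSD partitions into one array (RAID-5 here, for redundancy)
    mdadm --create /dev/md/bcache-cache --level=5 --raid-devices=3 \
          /dev/sda2 /dev/sdb2 /dev/sdc2
    # Initialize the array as the cache device; format /dev/md10 as the backing
    # device (destructive: -B writes a bcache superblock over existing data)
    make-bcache -C /dev/md/bcache-cache
    make-bcache -B /dev/md10
    # Attach the backing device to the cache set via the set's cset UUID
    UUID=$(bcache-super-show /dev/md/bcache-cache | awk '/cset.uuid/ {print $2}')
    echo "$UUID" > /sys/block/md10/bcache/attach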

Keep in mind that you will want at least one mirror or one parity disk in your SSD RAID if you prefer reliability over pure speed; I figure you want the reliable choice if you are backing a 12 TB data volume with it.

Please take into account that introducing a storage layer such as LVM or MD between bcache and the hardware can, and probably will, alter the write guarantees bcache relies on. In that case you should not use bcache's write-back mode, as such a setup can lead to severe structural problems within bcache on reboot, shutdown, and especially power loss, and you do not want outstanding writes then. I suggest putting the SSDs behind a battery-backed hardware RAID controller before using write-back mode. And while you are at it: such a setup usually allows using the SSDs as a CacheCade layer through the RAID controller, making bcache superfluous.
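
If you do layer bcache on top of MD or LVM, a safer cache mode can be set through sysfs; a minimal sketch, assuming the cached device shows up as /dev/bcache0:

    # Check the current mode (the active one is shown in brackets)
    cat /sys/block/bcache0/bcache/cache_mode
    # Stick to write-through so no dirty data lives only in the cache layer
    echo writethrough > /sys/block/bcache0/bcache/cache_mode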

  • Hmmm, I don't think this statement about an LVM layer over MD is accurate. If MD returned a success status for every write() without confirming that the corresponding slave volume had actually written the data, then MD would be breaking consistency rules. I don't think the kernel devs would release such a thing; it would not work at all.
    – Nulik
    Commented Jan 16, 2021 at 5:54

Why don't you just try it?

From the doc it looks like it's feasible:

    cache<0..n>
        Symlink to each of the cache devices comprising this cache set.

At the same time it says:

Cache devices are managed as sets; multiple caches per set isn't supported yet but will allow for mirroring of metadata and dirty data in the future.
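
So trying it and then inspecting sysfs should settle the question; a minimal sketch of that check:

    # Each registered cache set lists its member caches as cache<n> symlinks;
    # with current bcache you should only ever see cache0 per set
    ls -l /sys/fs/bcache/*/cache*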

