
I'm creating a dm-cache device using my script: http://pastebin.com/KTSzL6EA

Effectively, it's running these commands:

dmsetup create suse-cache-metadata --table 0 15872 linear /dev/mapper/suse-cache 0
dmsetup create suse-cache-blocks --table 0 125813248 linear /dev/mapper/suse-cache 15872
dmsetup create storagecached --table 0 2930266112 cache /dev/mapper/suse-cache-metadata /dev/mapper/suse-cache-blocks /dev/mapper/storage 512 1 writethrough default 0
dmsetup resume storagecached

on a 60 GB SSD LVM volume used to cache a 1.5 TB USB 2.0 HDD. I'm mounting the cached dm device with:

mount -t btrfs -o noatime,autodefrag,compress=lzo,space_cache /dev/mapper/storagecached /mnt/storage

However, it doesn't seem to work at all. I've noticed that the external HDD spins up every time I access any content on the cached device, and every time some bot accesses my website, even though that content should be cached by then. Besides being annoying, this eventually leads to I/O errors after about a week, because the external HDD can't handle continuous spin-ups and spin-downs.

I've decided to actually run some benchmarks: copying an 8 GB file to /dev/null with dd achieves only 40 MB/s. That's the same speed as the non-cached HDD, every time: both on a dry run with a wiped cache, and on the third or fourth read, which I'd expect to be served from the cache by then. The SSD used for the cache achieves 92 MB/s on its root partition. Of course, after every benchmark I dropped the Linux page cache to eliminate RAM caching from the results.
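For reference, the benchmark procedure can be sketched as a small helper; the test-file path in the usage comment is a placeholder, and it must run as root so the page-cache drop works:

```shell
# Minimal read benchmark helper, assuming a large test file exists.
# Writing 3 to /proc/sys/vm/drop_caches empties the Linux page cache, so
# the read has to come from dm-cache or the underlying disks.
bench_read() {
    file=$1
    sync
    echo 3 > /proc/sys/vm/drop_caches
    dd if="$file" of=/dev/null bs=1M 2>&1 | tail -n 1   # dd reports throughput on stderr
}
# usage (hypothetical path): for i in 1 2 3 4; do bench_read /mnt/storage/bigfile; done
```

Running it several times in a row shows whether repeated reads ever get faster, i.e. whether any promotion to the SSD is happening at all.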

I'm actually using this script on 2 PCs and neither of them seem to work. I know writethrough won't speed up writing but I'm more concerned on read anyways.

EDIT:

After examining dmsetup status output I've noticed that I'm getting terribly low cache hit ratios. Could this be btrfs's fault?
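To quantify this, the hit counters in the dmsetup status line can be turned into a ratio. The field positions below follow the dm-cache status format described in the kernel documentation; the sample line itself is invented for illustration:

```shell
# Compute the read hit ratio from a `dmsetup status storagecached` line.
# After the "cache" keyword the fields are: metadata block size,
# used/total metadata blocks, cache block size, used/total cache blocks,
# then read hits, read misses, write hits, write misses, ...
status='0 2930266112 cache 8 143/1984 512 4231/245729 1250 48700 900 3100 0 4200 0 1 writethrough 2 migration_threshold 2048 mq'
echo "$status" | awk '{
    printf "read hit ratio: %.1f%%\n", 100 * $8 / ($8 + $9)
}'
# prints: read hit ratio: 2.5%
```

On a live setup you would pipe the real `dmsetup status storagecached` output into the same awk; a ratio this low means nearly every read is still served from the HDD.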

2 Answers


dm-cache takes some time to promote blocks to the cache device. Unlike the Linux page cache, the default dm-cache policy requires at least a few reads of a given block before promoting it to the SSD, and usually many more, often over ten. Combined with a relatively large amount of spare RAM in the machine (which absorbs repeated reads before dm-cache ever sees them), it can take a long time to "train" dm-cache. If the cache size is similar to the amount of spare RAM and the frequently used data occupies a lot of space, it may never work well at all.
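If you want to speed up this training, the default mq policy exposes runtime tunables via dmsetup message. The tunable names below are the documented mq knobs, but the values are guesses and the device name comes from the question, so treat this as a sketch rather than a recommendation:

```shell
# Make the "mq" policy promote blocks after fewer reads (a lower
# promote adjustment means fewer hits are needed before promotion).
# Must be run against a live cache device, as root.
tune_mq() {
    dev=$1
    dmsetup message "$dev" 0 read_promote_adjustment 1
    dmsetup message "$dev" 0 write_promote_adjustment 2
}
# usage (hypothetical): tune_mq storagecached
```

Note also that mq deliberately avoids promoting long sequential streams (governed by its sequential_threshold tunable), which is one reason a sequential dd benchmark may never show a speedup even when the cache is otherwise working.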


Use lvmcache(7) for the setup; you will be much happier. The man page is also very helpful for getting up and running. Mind the writeback/writethrough cache modes and the smq caching policy, which is the default only on newer kernels (post-4.2); it is also the default on RHEL 7.2 and later.
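As a rough illustration of the lvmcache(7) workflow (the volume group name, LV names, sizes, and SSD device below are placeholders, not taken from the question):

```shell
# Sketch of an lvmcache setup: carve cache data and metadata LVs out of
# the SSD, combine them into a cache pool, then attach the pool to the
# slow origin LV. All names and sizes are hypothetical.
setup_lvmcache() {
    vg=$1 origin=$2 ssd=$3
    lvcreate -n cache0     -L 58G "$vg" "$ssd"
    lvcreate -n cache0meta -L 1G  "$vg" "$ssd"
    lvconvert --type cache-pool --poolmetadata "$vg/cache0meta" "$vg/cache0"
    lvconvert --type cache --cachepool "$vg/cache0" --cachemode writethrough "$vg/$origin"
}
# usage (hypothetical): setup_lvmcache vg storage /dev/sdb
```

LVM then manages the dm-cache tables for you, which avoids the hand-computed offsets in the question's dmsetup script.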

You can watch my talk: https://www.youtube.com/watch?v=6W_xK5Ks-Lw or read the slides: https://www.linuxdays.cz/2017/video/Adam_Kalisz-SSD_cache_testing.pdf
