I'm creating a dm-cache device using my script: http://pastebin.com/KTSzL6EA
Effectively it runs these commands:
dmsetup create suse-cache-metadata --table 0 15872 linear /dev/mapper/suse-cache 0
dmsetup create suse-cache-blocks --table 0 125813248 linear /dev/mapper/suse-cache 15872
dmsetup create storagecached --table 0 2930266112 cache /dev/mapper/suse-cache-metadata /dev/mapper/suse-cache-blocks /dev/mapper/storage 512 1 writethrough default 0
dmsetup resume storagecached
on a 60 GB SSD LVM volume, to cache a 1.5 TB USB 2.0 HDD. I'm mounting the cached dm device with:
mount -t btrfs -o noatime,autodefrag,compress=lzo,space_cache /dev/mapper/storagecached /mnt/storage
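For reference, here is the geometry those table numbers imply (all dmsetup sizes are in 512-byte sectors, and the 512 in the cache table is the cache block size in sectors, i.e. 256 KiB); this is just arithmetic, not additional configuration:

```shell
# Sanity-check the dm-cache geometry from the tables above.
# All dmsetup lengths are in 512-byte sectors.
META_SECTORS=15872          # suse-cache-metadata length
CACHE_SECTORS=125813248     # suse-cache-blocks length
BLOCK_SECTORS=512           # cache block size (512 sectors = 256 KiB)
ORIGIN_SECTORS=2930266112   # /dev/mapper/storage length

echo "metadata:     $(( META_SECTORS / 2 / 1024 )) MiB"
echo "cache:        $(( CACHE_SECTORS / 2 / 1024 / 1024 )) GiB"
echo "origin:       $(( ORIGIN_SECTORS / 2 / 1024 / 1024 )) GiB"
echo "cache blocks: $(( CACHE_SECTORS / BLOCK_SECTORS ))"
```

So the SSD cache is roughly 60 GiB split into 245729 blocks of 256 KiB, in front of a ~1.4 TiB origin device.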
However, it doesn't seem to work at all. I've noticed that the external HDD spins up EVERY time I access any content on the cached device, and every time some bot hits my website, even though that content should be cached, I'd think. This is quite annoying, and after about a week it leads to I/O errors, because the external HDD can't handle continuous spin-ups and spin-downs.
I decided to actually run some benchmarks. Copying an 8 GB file to /dev/null with dd achieves only 40 MB/s. That's the same speed as the non-cached HDD. Always: both on a dry run with a wiped cache and on the third or fourth read, which I'd expect to be cached by then. The SSD used for the cache achieves 92 MB/s on the root partition. Of course, after every benchmark I wiped the Linux RAM cache to eliminate any impact from RAM caching.
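My benchmark procedure is essentially the following (bench_read is just a helper name for this sketch; dropping the page cache needs root):

```shell
# bench_read DEVICE: drop the page cache so RAM caching doesn't mask
# the result, then read the device sequentially and let dd report speed.
bench_read() {
    sync
    # Needs root; skip quietly when not permitted.
    { echo 3 > /proc/sys/vm/drop_caches; } 2>/dev/null || true
    # Read up to 8 GiB in 1 MiB chunks; dd prints the throughput.
    dd if="$1" of=/dev/null bs=1M count=8192
}

# e.g.: bench_read /dev/mapper/storagecached
```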
I'm using this script on 2 PCs and neither of them seems to work. I know writethrough won't speed up writes, but I'm more concerned about reads anyway.
EDIT:
After examining the dmsetup status output I noticed that I'm getting terribly low cache hit ratios. Could btrfs be at fault?
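A quick way to turn the status line into a read hit ratio (field positions are taken from the dm-cache kernel documentation as I understand it, and hit_ratio is just a made-up helper name):

```shell
# Parse `dmsetup status <dev>` for a cache target and print the read hit
# ratio. Per the dm-cache docs, after the "cache" token the fields are:
#   <meta blk sz> <meta used/total> <cache blk sz> <cache used/total>
#   <read hits> <read misses> <write hits> <write misses> ...
hit_ratio() {
    awk '{
        for (i = 1; i <= NF; i++) if ($i == "cache") break;
        rh = $(i+5); rm = $(i+6);   # read hits / read misses
        if (rh + rm > 0)
            printf "read hit ratio: %.1f%%\n", 100 * rh / (rh + rm);
        else
            print "no reads recorded yet";
    }'
}

# e.g.: dmsetup status storagecached | hit_ratio
```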