Does it make sense to mount any of Docker's directories under /var/lib/docker as tmpfs to speed things up and reduce SSD wear?

~ # l /var/lib/docker/
total 56
drwx------  2 root root  4096 Jul 28 02:02 builder
drwx--x--x  4 root root  4096 Jul 28 02:02 buildkit
drwx------ 10 root root  4096 Aug  2 23:08 containers
drwx------  3 root root  4096 Jul 28 02:02 image
drwxr-x---  3 root root  4096 Jul 28 02:02 network
drwx------ 96 root root 12288 Aug  2 23:08 overlay2
drwx------  4 root root  4096 Jul 28 02:02 plugins
drwx------  2 root root  4096 Aug  2 07:20 runtimes
drwx------  2 root root  4096 Jul 28 02:02 swarm
drwx------  2 root root  4096 Aug  2 07:20 tmp
drwx------  2 root root  4096 Jul 28 02:02 trust
drwx------  2 root root  4096 Jul 28 02:02 volumes

Having to rebuild containers after a reboot would be a considerable cost, but data loss is not a concern.
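For context, making /var/lib/docker a tmpfs would typically be an fstab entry along these lines (a sketch; the 8g size is an assumption you would tune to your image footprint):

```shell
# /etc/fstab: mount /var/lib/docker as tmpfs (everything in it is lost at reboot)
tmpfs  /var/lib/docker  tmpfs  rw,size=8g  0  0
```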

1 Answer

You would lose all images, containers, and volumes with any reboot, and your entire build cache would be lost as well. The result is significantly more delay waiting for images to download; for local builds, the base images still need to be downloaded, and the builds themselves would start from scratch after every reboot. And if you push those images, then since the build cache isn't available, you'll push new layers to the registry and cause more delays for everyone pulling those images.
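To gauge how much would have to be re-downloaded and rebuilt after each reboot, the daemon can report current disk usage per category:

```shell
# Summarize disk usage of images, containers, local volumes, and build cache
docker system df
```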

More importantly, you need to ensure the graph driver supports tmpfs as the backing filesystem, and for overlay2, along with almost all other graph drivers, that's not the case. The only graph driver that supports tmpfs is vfs, which isn't any form of overlay filesystem at all: each layer is an entire copy of the parent plus that layer's changes, which means you'll have n copies of every file, where n is the number of layers the file is included in. That could mean many times the memory usage compared to what you currently see as disk usage, plus slower performance from all of the copy operations needed to create each layer.
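To check what your daemon is actually using, `docker info` reports the storage driver, and `df` shows the filesystem type backing the docker root (assuming a standard Linux install):

```shell
# Storage driver in use (typically "overlay2" on current installs)
docker info --format '{{.Driver}}'

# Filesystem type backing the docker root; would report "tmpfs" after such a mount
df --output=fstype /var/lib/docker | tail -n 1
```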
