
I’m running the latest version of Docker (1.8.2) on Mac OS X 10.10.5 (Yosemite) with the latest docker-machine (0.4.1). I have just two vanilla CentOS 6 containers running on a stock docker-machine host. There is no special configuration, the containers are actually not running anything at all… just bash shells.

Over time I see the disk utilization reported by df -h rising to 100% on both containers and on the host machine (i.e., the docker-machine VM). I can also hear the fan on my Mac coming on and speeding up until I shut down the containers.

I thought maybe some rogue process was causing the local filesystem to grow inside the containers but du -hs / reports only a few hundred MB.

I’m relatively new to Docker and I can’t seem to track down the source of this problem. Any idea what could cause the disk utilization to grow out of control like this?

Edit 1: added the output of df -h and df -i

Disk usage from a container

[root@99e23f7c4ae6 /]# df -h
Filesystem      Size  Used Avail Use% Mounted on
none             19G   18G     0 100% /
tmpfs           499M     0  499M   0% /dev
shm              64M     0   64M   0% /dev/shm
tmpfs           499M     0  499M   0% /sys/fs/cgroup
/dev/sda1        19G   18G     0 100% /etc/hosts
tmpfs           499M     0  499M   0% /proc/kcore
tmpfs           499M     0  499M   0% /proc/timer_stats

[root@99e23f7c4ae6 /]# df -i
Filesystem      Inodes IUsed   IFree IUse% Mounted on
none           1218224 28199 1190025    3% /
tmpfs           127518    17  127501    1% /dev
shm             127518     1  127517    1% /dev/shm
tmpfs           127518    11  127507    1% /sys/fs/cgroup
/dev/sda1      1218224 28199 1190025    3% /etc/hosts
tmpfs           127518    17  127501    1% /proc/kcore
tmpfs           127518    17  127501    1% /proc/timer_stats

[root@99e23f7c4ae6 /]# du -hs /
du: cannot access '/proc/348/task/348/fd/3': No such file or directory
du: cannot access '/proc/348/task/348/fdinfo/3': No such file or directory
du: cannot access '/proc/348/fd/4': No such file or directory
du: cannot access '/proc/348/fdinfo/4': No such file or directory
610M    /

Disk usage from the host (docker-machine VM)

docker@default:~$ df -h
Filesystem                Size      Used Available Use% Mounted on
tmpfs                   896.6M    115.3M    781.3M  13% /
tmpfs                   498.1M     72.0K    498.0M   0% /dev/shm
/dev/sda1                18.2G     18.2G         0 100% /mnt/sda1
cgroup                  498.1M         0    498.1M   0% /sys/fs/cgroup
none                    464.8G    224.6G    240.2G  48% /Users
/dev/sda1                18.2G     18.2G         0 100% /mnt/sda1/var/lib/docker/aufs
none                     18.2G     18.2G         0 100% /mnt/sda1/var/lib/docker/aufs/mnt/99e23f7c4ae608b2354c9375a0e3a7513692b44297c24d143a6b92dd73dae611
df: /var/run/docker/netns/99e23f7c4ae6: Permission denied

docker@default:~$ df -i
Filesystem              Inodes      Used Available Use% Mounted on
tmpfs                   124.5K      4.4K    120.2K   3% /
tmpfs                   124.5K         3    124.5K   0% /dev/shm
/dev/sda1                 1.2M     27.5K      1.1M   2% /mnt/sda1
cgroup                  124.5K        11    124.5K   0% /sys/fs/cgroup
none                      1000         0      1000   0% /Users
/dev/sda1                 1.2M     27.5K      1.1M   2% /mnt/sda1/var/lib/docker/aufs
none                      1.2M     27.5K      1.1M   2% /mnt/sda1/var/lib/docker/aufs/mnt/99e23f7c4ae608b2354c9375a0e3a7513692b44297c24d143a6b92dd73dae611
df: /var/run/docker/netns/99e23f7c4ae6: Permission denied

1 Answer


Every time you start a new container with docker run, Docker creates a new writable layer on top of the image you started from, so depending on what you do you can end up with a pile of stopped containers and near-identical image layers taking up space for no reason. You also have to remember that on a Mac, Docker runs inside a VirtualBox-based VM (the docker-machine host), so all of your images and containers share that VM's fixed-size disk.
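Before deleting anything, you can get a rough idea of where the space is going. This is only a sketch: the /mnt/sda1/var/lib/docker path is taken from the df output in the question, and the machine name default is an assumption, so adjust both for your setup.

docker ps -a -s       # SIZE column shows each container's writable-layer usage
docker images -a      # lists all image layers, including untagged <none> ones
docker-machine ssh default "sudo du -sh /mnt/sda1/var/lib/docker"   # total space Docker uses inside the VM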

I just started with Docker, so there may well be a better way of dealing with this, but this is how I resolved it. You may not necessarily want to remove all of your containers, but the steps below will at least give you some insight into the state of your Docker host:

  • List your containers: docker ps -a (-a includes the ones that aren't running)
  • Stop all existing containers: docker stop $(docker ps -aq)
  • Remove all existing containers and their writable layers: docker rm $(docker ps -aq)
  • Remove all "unnamed" (<none>) images. I had been experimenting with building my own images and most of them were unused: docker rmi $(docker images | grep "<none>" | awk '{print $3}')

After that, I got my space back.
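To confirm the space actually came back inside the docker-machine VM, you can re-check its data partition from your Mac. A minimal sketch, assuming the machine is named default as in the question:

docker-machine ssh default df -h /mnt/sda1   # Use% should drop well below 100% after the cleanup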
