Kubernetes device mapper out of space - pod stuck in ContainerCreating state
If you have a pod stuck in the ContainerCreating state, follow these steps to help determine the issue:
- get a wide pod listing (kubectl get pods -o wide) to determine which node the failing pod is scheduled on
- ssh to the node in question
- run the following (as root):
journalctl --unit kubelet --no-pager | grep <pod name>
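Putting these steps together, a minimal example session might look like this (the pod and node names are taken from the log message further down; substitute your own):

# Find which node is hosting the failing pod (it is in kube-system in this example)
kubectl get pods -n kube-system -o wide | grep kube-proxy-nwp6t
# SSH to that node and search the kubelet journal for the pod name
ssh jupiter.stack1.com
sudo journalctl --unit kubelet --no-pager | grep kube-proxy-nwp6t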
In this particular case, the following message was present:
Mar 15 12:35:49 jupiter.stack1.com kubelet[3146]: E0315 12:35:49.215454 3146 pod_workers.go:186] Error syncing pod ec706dba-2847-11e8-b8d8-0676d5f18210 ("kube-proxy-nwp6t_kube-system(ec706dba-2847-11e8-b8d8-0676d5f18210)"), skipping: failed to "CreatePodSandbox" for "kube-proxy-nwp6t_kube-system(ec706dba-2847-11e8-b8d8-0676d5f18210)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-proxy-nwp6t_kube-system(ec706dba-2847-11e8-b8d8-0676d5f18210) \" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod \"kube-proxy-nwp6t\": Error response from daemon: devmapper: Thin Pool has 152128 free data blocks which is less than minimum required 163840 free data blocks. Create more free space in thin pool or use dm.min_free_space option to change behavior"
This indicates that the Docker device mapper thin pool is out of space (its size is set in /etc/sysconfig/docker via the --storage-opt dm.loopdatasize=40G option).
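As a rough sketch only (the exact variable name and surrounding flags vary by distribution and Docker version; OPTIONS and the 10% threshold below are assumptions), the relevant line in /etc/sysconfig/docker might look like this:

# /etc/sysconfig/docker (excerpt): loopback thin pool capped at 40G;
# dm.min_free_space controls when devmapper starts refusing new allocations
OPTIONS="--storage-opt dm.loopdatasize=40G --storage-opt dm.min_free_space=10%"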
This can usually be resolved by removing exited containers, dangling volumes, and dangling images, using the following steps:
Clean up exited containers:
docker rm $(docker ps -q -f status=exited)
Clean up dangling volumes:
docker volume rm $(docker volume ls -qf dangling=true)
Clean up dangling images:
docker rmi $(docker images --filter "dangling=true" -q --no-trunc)
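On Docker 1.13 and later, the same cleanup can also be done with the prune subcommands; this is an alternative to the commands above rather than part of the original steps:

docker container prune -f   # remove all stopped containers
docker volume prune -f      # remove all unused volumes
docker image prune -f       # remove dangling images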
Now run docker info to check the Device Mapper available space (data and metadata).
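With the devicemapper storage driver, the relevant figures are reported as Data Space Available and Metadata Space Available. A quick way to show just those lines (a convenience filter, not part of the original steps):

# Show the thin pool data and metadata usage reported by the devicemapper driver
docker info 2>/dev/null | grep -i 'space'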