23

The cgroup driver configuration is correct in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf:

Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"

I also checked the environment with the CLI:

$ systemctl show --property=Environment kubelet | cat
Environment=KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf\x20--require-kubeconfig=true KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests\x20--allow-privileged=true KUBELET_NETWORK_ARGS=--network-plugin=cni\x20--cni-conf-dir=/etc/cni/net.d\x20--cni-bin-dir=/opt/cni/bin KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10\x20--cluster-domain=cluster.local KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook\x20--client-ca-file=/etc/kubernetes/pki/ca.crt KUBELET_CADVISOR_ARGS=--cadvisor-port=0 KUBELET_CGROUP_ARGS=--cgroup-driver=systemd

KUBELET_CGROUP_ARGS=--cgroup-driver=systemd

How to reproduce it:

  • yum install -y docker-1.12.6
  • systemctl enable docker && systemctl start docker
  • setenforce 0
  • yum install -y kubelet kubeadm
  • systemctl enable kubelet && systemctl start kubelet
  • systemctl daemon-reload
  • systemctl restart kubelet
  • check the kubelet log (see the command below)
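
On a systemd host, the kubelet log from the last step can be read with journalctl, for example:

journalctl -u kubelet --no-pager | tail -n 50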

Environment:

  • Kubernetes version (use kubectl version): 1.7.3
  • Cloud provider or hardware configuration: 4 core, 16G RAM
  • OS (e.g. from /etc/os-release): CentOS Linux 7 (Core)
  • Kernel (e.g. uname -a): Linux 10-8-108-92 3.10.0-327.22.2.el7.x86_64 #1 SMP Thu Jun 23 17:05:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools: kubeadm

11 Answers

9

In my environment it only worked the other way around; setting systemd always results in an error. Here is my current setup:

OS: CentOS 7.6.1810 
Minikube Version v1.0.0
Docker Version  18.06.2-ce

The solution for me was to check /etc/docker/daemon.json and change systemd to cgroupfs:

{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}

Then reload systemd with systemctl daemon-reload, delete the previous minikube config with minikube delete, and start minikube again with minikube start --vm-driver=none.

Now check on the command line; you should find cgroupfs in both outputs:

docker info | grep -i cgroup
cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

In the end you should see

   kubectl is now configured to use "minikube"
   Done! Thank you for using minikube!

Simple solution: Start your minikube with the Extra config parameter

--extra-config=kubelet.cgroup-driver=systemd

The complete command to start up minikube is:

minikube start --vm-driver=none --extra-config=kubelet.cgroup-driver=systemd

All the best and have fun

7

It may be better to do the reverse and make kubelet use systemd.

The Kubernetes site recommends using systemd; see https://kubernetes.io/docs/setup/production-environment/container-runtimes/ for more details.

You can change kubelet to use systemd by following https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/ and adding this to /etc/sysconfig/kubelet:


cat /etc/sysconfig/kubelet 
KUBELET_EXTRA_ARGS=--cgroup-driver=systemd
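
For the KUBELET_EXTRA_ARGS value to take effect, the kubelet has to be restarted after editing /etc/sysconfig/kubelet; a minimal sketch (run as root or with sudo):

systemctl daemon-reload
systemctl restart kubelet
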
1
  • This is what finally worked for me. After hours of searching found this at the bottom of a stackoverflow answer that finally made things work. Commented Apr 13, 2021 at 10:45
7

This is caused by misconfiguration during the initial startup, for example forgetting to change the Docker cgroup driver before executing the kubeadm init command.

To remedy this on CentOS, open /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf (or locate the file under your operating system) and find the entry EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env. Open that file and change the value of --cgroup-driver to systemd, or to match the Docker cgroup driver. Old content:

KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1

New Content:

KUBELET_KUBEADM_ARGS=--cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1
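
After changing kubeadm-flags.env, restart the kubelet and, if you want, confirm that the running process picked up the new value (a hedged check; the flag name comes from the file above, run as root or with sudo):

systemctl restart kubelet
ps -ef | grep '[k]ubelet' | grep -o 'cgroup-driver=[a-z]*'
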
6

Possible cause

kubelet 1.7.3 not reading config file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf #50748

Solution

Troubleshooting kubeadm

If you are using CentOS and encounter difficulty while setting up the master node, verify that your Docker cgroup driver matches the kubelet config:

docker info | grep -i cgroup
cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

If the Docker cgroup driver and the kubelet config don’t match, change the kubelet config to match the Docker cgroup driver. The flag you need to change is --cgroup-driver. If it’s already set, you can update like so:

sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

This can be replaced with:

CG=$(sudo docker info 2>/dev/null | sed -n 's/Cgroup Driver: \(.*\)/\1/p')
sed -i "s/cgroup-driver=systemd/cgroup-driver=$CG/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
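
To confirm that both sides now agree, a quick comparison can be scripted; a sketch, assuming docker info prints a "Cgroup Driver:" line as above:

CG_DOCKER=$(sudo docker info 2>/dev/null | sed -n 's/.*Cgroup Driver: *//p')
CG_KUBELET=$(grep -o 'cgroup-driver=[a-z]*' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf | cut -d= -f2)
echo "docker: $CG_DOCKER  kubelet: $CG_KUBELET"

A systemctl daemon-reload and a kubelet restart are still needed before the edited drop-in takes effect.
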
3

To get minikube running on CentOS 7, I needed to start it with --extra-config=kubelet.cgroup-driver=systemd, as suggested in https://github.com/kubernetes/minikube/issues/2192.

1
  • Hi, thanks, it worked for me using minikube on RHEL 7.5
    – Ijaz Ahmad
    Commented Mar 29, 2019 at 15:51
3

I followed the steps below on Ubuntu 18.04 LTS with Kubernetes v1.22.6 and the latest versions of Docker CE and containerd.

I changed the Docker service file to switch it to systemd. With older versions of kubeadm, kubectl and kubelet (up to 1.21.1) there was no problem.

Going forward, the Docker service should use systemd by default.

Step 1: Stop docker service

    `systemctl stop docker`

Step 2: Edit the files /etc/systemd/system/multi-user.target.wants/docker.service and /usr/lib/systemd/system/docker.service

Note: the file /usr/lib/systemd/system/docker.service is not available on my system.

From :

    `ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock`

TO:

    `ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd`

Step 3: Start the docker and kubelet services

    `systemctl daemon-reload`
    `systemctl start docker`
    `systemctl start kubelet`

Step 4: Since I had run kubeadm reset, I had to run kubeadm init again, and it worked.

Step 0: Before all of this, several commands need to be run to enable containerd to use systemd.

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

Set up the required sysctl params; these persist across reboots.

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

Apply sysctl params without reboot

sudo sysctl --system
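
Besides the kernel modules and sysctl settings, containerd's own cgroup driver is normally set in /etc/containerd/config.toml. A minimal sketch of the relevant fragment (the section path assumes the default CRI plugin config; restart containerd afterwards with systemctl restart containerd):

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true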

Kubernetes Container Runtime

2

It looks like the kubelet process did not load the right settings from /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, judging from the error message.

After getting more info from the chat, I think there are several possible ways to go:

  1. Switch both the kubelet and Docker cgroup drivers to cgroupfs. Download Docker from the repo below, which uses cgroupfs by default.

    [dockerrepo] 
    name=Docker Repository 
    baseurl=https://yum.dockerproject.org/repo/main/centos/7 
    enabled=1 
    gpgcheck=1 
    gpgkey=https://yum.dockerproject.org/gpg
    

    And change the cgroup driver in kubelet conf as well. Check whether the error happens again and what kubelet loads from its conf.

  2. Add more logs in kubelet code to debug it

    This is the logic kubelet uses to get conf from both sides

9
  • I don't think so; the error information is saying that the cgroup-driver of kubelet is already cgroupfs, which is different from the Docker one.
    – Yuwen Yan
    Commented Aug 16, 2017 at 8:33
  • @YuwenYan What is your docker's cgroup driver ? sudo docker info | grep Cgroup
    – ichbinblau
    Commented Aug 16, 2017 at 9:05
  • I am asking because, by default, docker 1.12.6 uses cgroupfs on CentOS 7.
    – ichbinblau
    Commented Aug 16, 2017 at 9:09
  • It's systemd. I didn't change it before, so I think the default config of docker is systemd too.
    – Yuwen Yan
    Commented Aug 16, 2017 at 9:13
  • It depends on which repo you used to download docker, I think. I am using the same environment as you do (same k8s version, same docker version, same OS). I downloaded docker from the repo yum.dockerproject.org/repo/main/centos/7 with cgroupfs pre-configured. I think it's worth a try to change both settings to cgroupfs to locate the cause.
    – ichbinblau
    Commented Aug 16, 2017 at 9:50
2

OS: CentOS 7.4. Since Kubernetes 1.23.1 recommends the systemd cgroup driver and Docker 20.10.20 uses cgroupfs, you have to change the Docker service file.

Step 1: Stop the docker service

systemctl stop docker

Step 2: Edit the files /etc/systemd/system/multi-user.target.wants/docker.service and /usr/lib/systemd/system/docker.service

From :

`ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock`

TO:

ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd

Step 3: Start the docker service and kubelet

systemctl daemon-reload
systemctl start docker
kubeadm init phase kubelet-start
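
Since this is a kubeadm-managed setup on Kubernetes 1.23, both sides can be checked afterwards; a hedged pair of checks (the config path assumes a kubeadm-managed kubelet):

docker info | grep -i "cgroup driver"
grep cgroupDriver /var/lib/kubelet/config.yaml

Both should report systemd.
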
2

https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/

Using the cgroupfs driver: as this guide explains, using the cgroupfs driver with kubeadm is not recommended.

To continue using cgroupfs and to prevent kubeadm upgrade from modifying the KubeletConfiguration cgroup driver on existing setups, you must be explicit about its value. This applies to a case where you do not wish future versions of kubeadm to apply the systemd driver by default.

See the below section on "Modify the kubelet ConfigMap" for details on how to be explicit about the value.

If you wish to configure a container runtime to use the cgroupfs driver, you must refer to the documentation of the container runtime of your choice
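
Being explicit about the value means setting cgroupDriver in the KubeletConfiguration used by kubeadm (or in the kubelet-config ConfigMap); a minimal sketch of the relevant fragment, with cgroupfs as the example value:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs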

To change the Docker config from cgroupfs to systemd:

Edit /etc/docker/daemon.json:

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Restart the docker service:

systemctl daemon-reload && systemctl restart docker && systemctl restart kubelet

Check the config with:

docker info |grep Cgroup
0

Edit the file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and change systemd to cgroupfs, then restart the kubelet with systemctl restart kubelet.

0

Changing Docker's cgroup driver as mentioned in this answer worked for me.
