k8s: completely cleaning up the previous initialization after kubeadm reset


Run kubeadm reset first, then clear out everything the previous initialization left behind:

kubeadm reset

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/cni/*
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/*
ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down
ip link delete cni0
ip link delete flannel.1
systemctl start docker

Then run kubeadm init again.
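With the state directories and leftover network interfaces gone and Docker back up, the cluster can be initialized from scratch. A minimal sketch of the re-initialization, assuming a single control-plane node and the flannel default pod CIDR 10.244.0.0/16 (adjust to your own network plan):

kubeadm init --pod-network-cidr=10.244.0.0/16

# make kubectl usable for the current user (kubeadm init prints these steps itself)
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config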

Check the kubelet logs with journalctl -u kubelet; the following error shows up:

Kubernetes startup error:

kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"

 

Cause of the error:

Docker and the kubelet (k8s) are configured with different cgroup drivers.
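A quick way to confirm which driver each side is using (the kubelet paths below assume a kubeadm-managed node):

docker info | grep -i "cgroup driver"               # e.g. Cgroup Driver: cgroupfs
grep -i cgroupdriver /var/lib/kubelet/config.yaml   # kubeadm stores the kubelet's cgroupDriver here
cat /var/lib/kubelet/kubeadm-flags.env              # older setups may pass --cgroup-driver as a flag instead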

Solution:

Make the two consistent: use either systemd or cgroupfs for resource management on both sides. The official Kubernetes docs note that using cgroupfs for Docker and the kubelet while systemd manages the rest of the node's processes can make the node unstable under resource pressure, so the recommended change is to switch both Docker and the kubelet to systemd.

Cgroup drivers

When systemd is chosen as the init system for a Linux distribution, the init process generates and consumes a root control group (cgroup) and acts as a cgroup manager. Systemd has a tight integration with cgroups and will allocate cgroups per process. It’s possible to configure your container runtime and the kubelet to use cgroupfs. Using cgroupfs alongside systemd means that there will then be two different cgroup managers.

Control groups are used to constrain resources that are allocated to processes. A single cgroup manager will simplify the view of what resources are being allocated and will by default have a more consistent view of the available and in-use resources. When we have two managers we end up with two views of those resources. We have seen cases in the field where nodes that are configured to use cgroupfs for the kubelet and Docker, and systemd for the rest of the processes running on the node becomes unstable under resource pressure.

Changing Docker's cgroup driver:

Edit or create /etc/docker/daemon.json and add the following, which switches Docker's cgroup driver to systemd:

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
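For the change to take effect, Docker has to be restarted, and restarting the kubelet afterwards lets it reconnect cleanly. A short check, assuming a systemd-managed host:

systemctl daemon-reload
systemctl restart docker
systemctl restart kubelet
docker info | grep -i "cgroup driver"   # should now report: Cgroup Driver: systemd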


