K8s cluster initialization error: [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz'

Overview

When installing K8s, the following error appears while initializing the cluster/node: [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
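
You can reproduce the failing health check by hand to confirm that nothing is listening on the kubelet's health port (a quick sanity check; the exact output may vary by distribution):

$ curl -sSL http://localhost:10248/healthz
curl: (7) Failed to connect to localhost port 10248: Connection refused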

Resolution

# While initializing the control-plane (master) node during K8s installation, the following error appears:
queena@queena-Lenovo:~$ sudo kubeadm init --apiserver-advertise-address=192.168.31.245 --pod-network-cidr=10.244.0.0/16  --kubernetes-version=v1.22.3
[init] Using Kubernetes version: v1.22.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local queena-lenovo] and IPs [10.96.0.1 192.168.31.245]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost queena-lenovo] and IPs [192.168.31.245 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost queena-lenovo] and IPs [192.168.31.245 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.

......
    Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
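
As the output suggests, inspect the kubelet service and its journal first. On a setup like this one, the journal usually names the root cause directly (the exact wording varies by kubelet version):

# Check whether the kubelet service is running.
$ systemctl status kubelet
# Show the most recent kubelet log entries; look for a message complaining that the
# kubelet cgroup driver ("systemd") differs from the docker cgroup driver ("cgroupfs").
$ journalctl -xeu kubelet --no-pager | tail -n 50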

The root cause is a cgroup driver mismatch between Docker and the kubelet, so check whether the two drivers agree.

# Check the Docker cgroup driver.
$ docker info | grep Cgroup
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
WARNING: No swap limit support

# Check the kubelet cgroup driver.
$ sudo cat /var/lib/kubelet/config.yaml | grep cgroup
cgroupDriver: systemd

# Change the Docker cgroup driver: check /etc/docker/daemon.json, create it manually if it does not exist, and add the following:
$ sudo vim /etc/docker/daemon.json
# Add "exec-opts": ["native.cgroupdriver=systemd"] to the file:
{
  "registry-mirrors": ["https://dpxn2pal.mirror.aliyuncs.com"],
  "exec-opts": [ "native.cgroupdriver=systemd" ]
}
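
If you prefer a non-interactive edit, the same file can be written with a heredoc (a minimal sketch; note that this overwrites any existing daemon.json, so merge by hand if you already have other settings):

$ sudo mkdir -p /etc/docker
$ sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
  "registry-mirrors": ["https://dpxn2pal.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF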

# Restart Docker.
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
# Restart the kubelet.
$ sudo systemctl restart kubelet
$ sudo kubeadm reset  # Resetting is safe here; the original init never came up anyway.

# The next two commands verify that the cgroup driver change took effect; both should report systemd.
$ docker info -f '{{.CgroupDriver}}'
systemd
$ docker info | grep -i cgroup
 Cgroup Driver: systemd
 Cgroup Version: 1
WARNING: No swap limit support

Run the K8s control-plane initialization again; this time it should succeed.
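
For completeness, re-run the same init command used earlier; on success, kubeadm prints follow-up steps for configuring kubectl, which typically look like this:

$ sudo kubeadm init --apiserver-advertise-address=192.168.31.245 --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.22.3
# After a successful init, set up kubectl access for your user as kubeadm instructs:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config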
