Troubleshooting a multi-node k8s / etcd cluster deployment
Deployment environment
IP configuration of the two nodes:
# cat /etc/hosts
192.168.1.5 vmnote0
192.168.1.12 vmnote1
Deployment guide: https://jimmysong.io/kubernetes-handbook/practice/install-kubernetes-on-centos.html
1. etcd cluster fails to start
etcd configuration for node 1 (vmnode0):
[root@k8s-master ~]# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
--name vmnode0 \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
--peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
--trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--initial-advertise-peer-urls https://192.168.1.5:2380 \
--listen-peer-urls https://192.168.1.5:2380 \
--listen-client-urls https://192.168.1.5:2379,http://127.0.0.1:2379 \
--advertise-client-urls https://192.168.1.5:2379 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster vmnode0=https://192.168.1.5:2380,vmnode1=https://192.168.1.12:2380 \
--initial-cluster-state new \
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
etcd configuration for node 2 (vmnode1):
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
--name vmnode1 \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
--peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
--trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--initial-advertise-peer-urls https://192.168.1.12:2380 \
--listen-peer-urls https://192.168.1.12:2380 \
--listen-client-urls https://192.168.1.12:2379,http://127.0.0.1:2379 \
--advertise-client-urls https://192.168.1.12:2379 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster vmnode0=https://192.168.1.5:2380,vmnode1=https://192.168.1.12:2380 \
--initial-cluster-state new \
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
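Both units are installed as /usr/lib/systemd/system/etcd.service and brought up the same way on each node — a minimal sketch using standard systemctl commands:
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd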
Problem:
Dec 14 12:30:30 k8s-node2.localdomain etcd[2560]: warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
Dec 14 12:30:30 k8s-node2.localdomain etcd[2560]: member 218d8bfb33a29c6 has already been bootstrapped
Dec 14 12:30:30 k8s-node2.localdomain systemd[1]: etcd.service: main process exited, code=exited, status=1/FAILURE
Dec 14 12:30:30 k8s-node2.localdomain systemd[1]: Failed to start Etcd Server.
Solution
The key message is member 218d8bfb33a29c6 has already been bootstrapped. The reason:
One of the member was bootstrapped via discovery service. You must remove the previous data-dir to clean up the member information. Or the member will ignore the new configuration and start with the old configuration. That is why you see the mismatch.
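Following that explanation, the fix on the affected member is to stop etcd, wipe its data directory so it re-bootstraps from --initial-cluster, and start it again — a sketch, assuming the data dir is /var/lib/etcd as configured above:
systemctl stop etcd
rm -rf /var/lib/etcd/*        # remove the stale member state
systemctl start etcd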
2. etcd health check fails
[root@k8s-master ~]# etcdctl \
> --ca-file=/etc/kubernetes/ssl/ca.pem \
> --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
> --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
> cluster-health
2018-12-14 13:13:15.280712 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2018-12-14 13:13:15.281964 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
failed to check the health of member 218d8bfb33a29c6 on https://192.168.1.12:2379: Get https://192.168.1.12:2379/health: net/http: TLS handshake timeout
member 218d8bfb33a29c6 is unreachable: [https://192.168.1.12:2379] are all unreachable
failed to check the health of member 499bc1bc6765950c on https://192.168.1.5:2379: Get https://192.168.1.5:2379/health: net/http: TLS handshake timeout
member 499bc1bc6765950c is unreachable: [https://192.168.1.5:2379] are all unreachable
cluster is unhealthy
Solution
The key error is https://192.168.1.12:2379/health: net/http: TLS handshake timeout. A timeout like this usually means http_proxy/https_proxy are set (here they point at a local proxy used to reach blocked sites), so requests to the node itself and to other LAN hosts are also routed through the proxy and never arrive. The fix is to exempt those addresses via no_proxy:
PROXY_HOST=127.0.0.1
export all_proxy=http://$PROXY_HOST:8118
export ftp_proxy=http://$PROXY_HOST:8118
export http_proxy=http://$PROXY_HOST:8118
export https_proxy=http://$PROXY_HOST:8118
export no_proxy='localhost,192.168.1.5,192.168.1.12'
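These exports were put in /etc/profile on this machine (it gets re-sourced again in section 6 below). After sourcing it, repeat the health check — a sketch using the same certificate flags as above:
source /etc/profile
etcdctl \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  cluster-health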
3. kube-apiserver fails to start
[root@k8s-master kubernetes]# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Service
Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Sat 2018-12-15 03:07:12 UTC; 4s ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Process: 3659 ExecStart=/usr/local/bin/kube-apiserver $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBE_ETCD_SERVERS $KUBE_API_ADDRESS $KUBE_API_PORT $KUBELET_PORT $KUBE_ALLOW_PRIV $KUBE_SERVICE_ADDRESSES $KUBE_ADMISSION_CONTROL $KUBE_API_ARGS (code=exited, status=1/FAILURE)
Main PID: 3659 (code=exited, status=1/FAILURE)
Dec 15 03:07:11 k8s-master.localdomain systemd[1]: kube-apiserver.service: main process exited, code=exited, status=1/FAILURE
Dec 15 03:07:11 k8s-master.localdomain systemd[1]: Failed to start Kubernetes API Service.
Dec 15 03:07:11 k8s-master.localdomain systemd[1]: Unit kube-apiserver.service entered failed state.
Dec 15 03:07:11 k8s-master.localdomain systemd[1]: kube-apiserver.service failed.
Dec 15 03:07:12 k8s-master.localdomain systemd[1]: kube-apiserver.service holdoff time over, scheduling restart.
Dec 15 03:07:12 k8s-master.localdomain systemd[1]: start request repeated too quickly for kube-apiserver.service
Dec 15 03:07:12 k8s-master.localdomain systemd[1]: Failed to start Kubernetes API Service.
Dec 15 03:07:12 k8s-master.localdomain systemd[1]: Unit kube-apiserver.service entered failed state.
Dec 15 03:07:12 k8s-master.localdomain systemd[1]: kube-apiserver.service failed.
The journal output does not reveal the cause, so go back to the service's systemd unit:
[Unit]
Description=Kubernetes API Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/local/bin/kube-apiserver \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_ETCD_SERVERS \
$KUBE_API_ADDRESS \
$KUBE_API_PORT \
$KUBELET_PORT \
$KUBE_ALLOW_PRIV \
$KUBE_SERVICE_ADDRESSES \
$KUBE_ADMISSION_CONTROL \
$KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Assemble the ExecStart command line by hand and run it directly:
[root@k8s-master ~]# /usr/local/bin/kube-apiserver --logtostderr=true --v=0 --advertise-address=192.168.1.5 --bind-address=192.168.1.5 --insecure-bind-address=192.168.1.5 --insecure-port=8080 --insecure-bind-address=127.0.0.1 --etcd-servers=https://192.168.1.5:2379,https://192.168.1.12:2379 --port=8080 --kubelet-port=10250 --allow-privileged=true --service-cluster-ip-range=10.254.0.0/16 --admission-control=ServiceAccount,NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota --authorization-mode=RBAC --runtime-config=rbac.authorization.k8s.io/v1beta1 --kubelet-https=true --experimental-bootstrap-token-auth --token-auth-file=/etc/kubernetes/token.csv --service-node-port-range=30000-32767 --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem --client-ca-file=/etc/kubernetes/ssl/ca.pem --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem --etcd-cafile=/etc/kubernetes/ssl/ca.pem --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem --enable-swagger-ui=true --apiserver-count=3 --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/lib/audit.log --event-ttl=1h
/usr/local/bin/kube-apiserver: line 30: /lib/lsb/init-functions: No such file or directory
There is the problem: /usr/local/bin/kube-apiserver: line 30: /lib/lsb/init-functions: No such file or directory.
Install the redhat-lsb package:
yum install redhat-lsb -y
Running it again still fails:
[root@k8s-master ~]# /usr/local/bin/kube-apiserver --logtostderr=true --v=0 --advertise-address=192.168.1.5 --bind-address=192.168.1.5 --insecure-bind-address=192.168.1.5 --insecure-port=8080 --insecure-bind-address=127.0.0.1 --etcd-servers=https://192.168.1.5:2379,https://192.168.1.12:2379 --port=8080 --kubelet-port=10250 --allow-privileged=true --service-cluster-ip-range=10.254.0.0/16 --admission-control=ServiceAccount,NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota --authorization-mode=RBAC --runtime-config=rbac.authorization.k8s.io/v1beta1 --kubelet-https=true --experimental-bootstrap-token-auth --token-auth-file=/etc/kubernetes/token.csv --service-node-port-range=30000-32767 --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem --client-ca-file=/etc/kubernetes/ssl/ca.pem --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem --etcd-cafile=/etc/kubernetes/ssl/ca.pem --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem --enable-swagger-ui=true --apiserver-count=3 --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/lib/audit.log --event-ttl=1h
/opt/bin/kube-apiserver not present or not executable [FAILED]
Still failing — this was turning into an endless rabbit hole. The kube-apiserver file I had pulled onto CentOS was in fact an Ubuntu init wrapper script rather than the real binary, so the programs and paths it depends on do not exist on this system. The way out was to download the official release binaries again.
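A quick way to confirm that suspicion — references to /lib/lsb/init-functions and /opt/bin/kube-apiserver only make sense inside a Debian/Ubuntu init wrapper — is to inspect the file (a sketch; a genuine release binary reports an ELF executable, not a shell script):
file /usr/local/bin/kube-apiserver        # wrapper: "POSIX shell script"; real binary: "ELF 64-bit LSB executable"
head -n 5 /usr/local/bin/kube-apiserver   # a wrapper sources /lib/lsb/init-functions near the top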
Solution
$ wget https://dl.k8s.io/v1.6.0/kubernetes-server-linux-amd64.tar.gz
$ tar -xzvf kubernetes-server-linux-amd64.tar.gz
$ cd kubernetes
$ tar -xzvf kubernetes-src.tar.gz
$ cp -r server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} /usr/local/bin/
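After copying the binaries, a version check and a restart confirm the replacement took effect — a sketch, assuming the v1.6.0 server tarball above:
/usr/local/bin/kube-apiserver --version   # should print Kubernetes v1.6.0
systemctl restart kube-apiserver
systemctl status kube-apiserver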
4. kubelet fails to start
[root@k8s-master opt]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet Server
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Sat 2018-12-15 08:09:53 UTC; 1s ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Process: 12585 ExecStart=/usr/local/bin/kubelet $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBELET_API_SERVER $KUBELET_ADDRESS $KUBELET_PORT $KUBELET_HOSTNAME $KUBE_ALLOW_PRIV $KUBELET_POD_INFRA_CONTAINER $KUBELET_ARGS (code=exited, status=200/CHDIR)
Main PID: 12585 (code=exited, status=200/CHDIR)
Dec 15 08:09:53 k8s-master.localdomain systemd[1]: Unit kubelet.service entered failed state.
Dec 15 08:09:53 k8s-master.localdomain systemd[1]: kubelet.service failed.
Dec 15 08:09:53 k8s-master.localdomain systemd[1]: kubelet.service holdoff time over, scheduling restart.
Dec 15 08:09:53 k8s-master.localdomain systemd[1]: Stopped Kubernetes Kubelet Server.
Dec 15 08:09:53 k8s-master.localdomain systemd[1]: start request repeated too quickly for kubelet.service
Dec 15 08:09:53 k8s-master.localdomain systemd[1]: Failed to start Kubernetes Kubelet Server.
Dec 15 08:09:53 k8s-master.localdomain systemd[1]: Unit kubelet.service entered failed state.
Dec 15 08:09:53 k8s-master.localdomain systemd[1]: kubelet.service failed.
Watching the journal output:
Dec 15 08:25:30 k8s-master.localdomain systemd[1]: Started Kubernetes Kubelet Server.
Dec 15 08:25:30 k8s-master.localdomain systemd[1]: kubelet.service: main process exited, code=exited, status=200/CHDIR
Dec 15 08:25:30 k8s-master.localdomain systemd[1]: Unit kubelet.service entered failed state.
Dec 15 08:25:30 k8s-master.localdomain systemd[1]: kubelet.service failed.
Dec 15 08:25:30 k8s-master.localdomain systemd[1]: kubelet.service holdoff time over, scheduling restart.
Dec 15 08:25:30 k8s-master.localdomain systemd[1]: Stopped Kubernetes Kubelet Server.
Dec 15 08:25:30 k8s-master.localdomain systemd[1]: Started Kubernetes Kubelet Server.
Dec 15 08:25:30 k8s-master.localdomain systemd[13491]: Failed at step CHDIR spawning /usr/local/bin/kubelet: No such file or directory
Dec 15 08:25:30 k8s-master.localdomain systemd[1]: kubelet.service: main process exited, code=exited, status=200/CHDIR
Dec 15 08:25:30 k8s-master.localdomain systemd[1]: Unit kubelet.service entered failed state.
Dec 15 08:25:30 k8s-master.localdomain systemd[1]: kubelet.service failed.
Dec 15 08:25:31 k8s-master.localdomain systemd[1]: kubelet.service holdoff time over, scheduling restart.
Dec 15 08:25:31 k8s-master.localdomain systemd[1]: Stopped Kubernetes Kubelet Server.
Dec 15 08:25:31 k8s-master.localdomain systemd[1]: Started Kubernetes Kubelet Server.
Dec 15 08:25:31 k8s-master.localdomain systemd[1]: kubelet.service: main process exited, code=exited, status=200/CHDIR
Dec 15 08:25:31 k8s-master.localdomain systemd[1]: Unit kubelet.service entered failed state.
Dec 15 08:25:31 k8s-master.localdomain systemd[1]: kubelet.service failed.
Dec 15 08:25:31 k8s-master.localdomain systemd[1]: kubelet.service holdoff time over, scheduling restart.
Dec 15 08:25:31 k8s-master.localdomain systemd[1]: Stopped Kubernetes Kubelet Server.
Dec 15 08:25:31 k8s-master.localdomain systemd[1]: Started Kubernetes Kubelet Server.
Dec 15 08:25:31 k8s-master.localdomain systemd[1]: kubelet.service: main process exited, code=exited, status=200/CHDIR
Dec 15 08:25:31 k8s-master.localdomain systemd[1]: Unit kubelet.service entered failed state.
Dec 15 08:25:31 k8s-master.localdomain systemd[1]: kubelet.service failed.
Dec 15 08:25:31 k8s-master.localdomain systemd[1]: kubelet.service holdoff time over, scheduling restart.
Dec 15 08:25:31 k8s-master.localdomain systemd[1]: Stopped Kubernetes Kubelet Server.
Dec 15 08:25:31 k8s-master.localdomain systemd[1]: Started Kubernetes Kubelet Server.
Dec 15 08:25:31 k8s-master.localdomain systemd[1]: kubelet.service: main process exited, code=exited, status=200/CHDIR
Dec 15 08:25:31 k8s-master.localdomain systemd[1]: Unit kubelet.service entered failed state.
Dec 15 08:25:31 k8s-master.localdomain systemd[1]: kubelet.service failed.
Dec 15 08:25:31 k8s-master.localdomain systemd[1]: kubelet.service holdoff time over, scheduling restart.
Dec 15 08:25:31 k8s-master.localdomain systemd[1]: Stopped Kubernetes Kubelet Server.
Dec 15 08:25:31 k8s-master.localdomain systemd[1]: start request repeated too quickly for kubelet.service
Dec 15 08:25:31 k8s-master.localdomain systemd[1]: Failed to start Kubernetes Kubelet Server.
Dec 15 08:25:31 k8s-master.localdomain systemd[1]: Unit kubelet.service entered failed state.
Dec 15 08:25:31 k8s-master.localdomain systemd[1]: kubelet.service failed.
Failed at step CHDIR spawning /usr/local/bin/kubelet: No such file or directory is the interesting line — check whether that path is correct. Strangely enough, the binary path is correct, so keep reading the systemd unit:
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/local/bin/kubelet \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBELET_API_SERVER \
$KUBELET_ADDRESS \
$KUBELET_PORT \
$KUBELET_HOSTNAME \
$KUBE_ALLOW_PRIV \
$KUBELET_POD_INFRA_CONTAINER \
$KUBELET_ARGS
Restart=on-failure
[Install]
WantedBy=multi-user.target
The unit also references /var/lib/kubelet as WorkingDirectory — and status=200/CHDIR means systemd could not change into the working directory, not that the binary is missing. A quick check shows the directory indeed does not exist.
Solution
mkdir /var/lib/kubelet
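With the working directory in place, kubelet restarts cleanly — a quick check:
systemctl restart kubelet
systemctl status kubelet      # should now show Active: active (running)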
5. CSRs are Approved but kubectl get nodes returns nothing
[root@k8s-master ~]# kubectl get csr
NAME AGE REQUESTOR CONDITION
csr-bv37w 19m kubelet-bootstrap Approved
csr-bwlxd 1m kubelet-bootstrap Approved
csr-f8w38 33m kubelet-bootstrap Approved
csr-g8927 47m kubelet-bootstrap Approved
csr-h7wph 4m kubelet-bootstrap Approved
csr-hpl81 50m kubelet-bootstrap Approved
csr-qxxsh 40m kubelet-bootstrap Approved
csr-r7vzl 51m kubelet-bootstrap Approved
csr-w1ccb 21m kubelet-bootstrap Approved
[root@k8s-master ~]# kubectl get nodes
No resources found.
Check the logs:
[root@k8s-master ~]# journalctl -xe -u kube* | grep error
Dec 15 08:50:15 k8s-master.localdomain kube-scheduler[14917]: E1215 08:50:15.200937 14917 leaderelection.go:229] error retrieving resource lock kube-system/kube-scheduler: Get http://192.168.1.5:8080/api/v1/namespaces/kube-system/endpoints/kube-scheduler: dial tcp 192.168.1.5:8080: getsockopt: connection refused
kube-scheduler (and likewise kube-controller-manager) cannot reach kube-apiserver at 192.168.1.5:8080, so check what the port is actually bound to:
[root@k8s-master ~]# netstat -lpntu
...
tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 14885/kube-apiserve
Port 8080 is bound to 127.0.0.1 rather than 192.168.1.5 — possibly a configuration problem. Check the config:
KUBE_API_ADDRESS="--advertise-address=192.168.1.5 --bind-address=192.168.1.5 --insecure-bind-address=192.168.1.5 --insecure-port=8080 --insecure-bind-address=127.0.0.1"
It is indeed bound to 127.0.0.1: --insecure-bind-address is given twice in KUBE_API_ADDRESS, and the later value (127.0.0.1) wins.
Solution
Edit the /etc/kubernetes/apiserver config file so the insecure address is 192.168.1.5:
KUBE_API_ADDRESS="--advertise-address=192.168.1.5 --bind-address=192.168.1.5 --insecure-bind-address=192.168.1.5 --insecure-port=8080 --insecure-bind-address=192.168.1.5"
Restart and the problem is solved.
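Only the EnvironmentFile changed, so restarting the apiserver unit is enough; the scheduler and controller-manager reconnect once 8080 is reachable — a sketch:
systemctl restart kube-apiserver
netstat -lpntu | grep 8080    # should now show 192.168.1.5:8080 instead of 127.0.0.1:8080
After the restart, the CSRs are issued and both nodes register: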
[root@k8s-master kubernetes]# kubectl get csr
NAME AGE REQUESTOR CONDITION
csr-96lj4 37s kubelet-bootstrap Approved,Issued
csr-bv37w 23m kubelet-bootstrap Approved,Issued
csr-bwlxd 5m kubelet-bootstrap Approved,Issued
csr-dpqgm 37s kubelet-bootstrap Approved,Issued
csr-f8w38 37m kubelet-bootstrap Approved,Issued
csr-g8927 52m kubelet-bootstrap Approved,Issued
csr-h7wph 9m kubelet-bootstrap Approved,Issued
csr-hpl81 55m kubelet-bootstrap Approved,Issued
csr-qxxsh 44m kubelet-bootstrap Approved,Issued
csr-r7vzl 56m kubelet-bootstrap Approved,Issued
csr-w1ccb 26m kubelet-bootstrap Approved,Issued
[root@k8s-master kubernetes]# kubectl get nodes
NAME STATUS AGE VERSION
192.168.1.12 NotReady 10s v1.6.0
192.168.1.5 Ready 7s v1.6.0
6. Cannot access the pod application
[root@k8s-master ~]# kubectl run nginx --replicas=2 --labels="run=load-balancer-example" --image=nginx --port=80
deployment "nginx" created
[root@k8s-master ~]# kubectl expose deployment nginx --type=NodePort --name=example-service
service "example-service" exposed
[root@k8s-master ~]# kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx 2 2 2 0 27s
[root@k8s-master ~]# kubectl describe svc example-service
Name: example-service
Namespace: default
Labels: run=load-balancer-example
Annotations: <none>
Selector: run=load-balancer-example
Type: NodePort
IP: 10.254.109.60
Port: <unset> 80/TCP
NodePort: <unset> 30019/TCP
Endpoints: 172.17.0.2:80
Session Affinity: None
Events: <none>
[root@k8s-master ~]# curl "10.254.109.60:80"
^C
[root@k8s-master ~]# curl "172.17.0.2:80"
^C
Solution
First instinct: this is probably the no_proxy issue again, so add the service IP and the pod IP and try:
export no_proxy='localhost,192.168.1.5,192.168.1.12,10.254.109.60,172.17.0.2'
Re-source the environment and both addresses respond:
[root@k8s-master kubernetes]# source /etc/profile
[root@k8s-master kubernetes]# curl "10.254.109.60:80"
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@k8s-master ~]# curl "172.17.0.2:80"
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>