Environment

Role          OS           IP              Hostname   Docker version
master,node   CentOS 7.4   192.168.0.210   node210    17.11.0-ce
node          CentOS 7.4   192.168.0.211   node211    17.11.0-ce
node          CentOS 7.4   192.168.0.212   node212    17.11.0-ce

1. Basic environment setup (run on all servers)
a. Disable SELinux

sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
setenforce 0
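
To verify (a small addition to the original steps): setenforce 0 only affects the running system, while the sed edit applies after a reboot.

getenforce
#prints Permissive now, and Disabled after the next reboot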

b. Install Docker

According to the official documentation, Docker comes in two editions: Community Edition (CE) and Enterprise Edition (EE). Since this is for learning and personal use, we naturally choose the Community Edition (CE). The installation steps are as follows:

// Step 1 - To make sure no older Docker version remains, remove any existing Docker packages from the host (be careful with this step on a production server):
$ sudo yum remove docker \
              docker-common \
              docker-selinux \
              docker-engine

// Step 2 - Install the packages Docker depends on:
$ sudo yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2

// Step 3 - Add the stable Docker CE repository:
$ sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

// Step 4 - Install Docker CE:
$ sudo yum install docker-ce

// Step 5 - Start the Docker service:
$ sudo systemctl start docker

// Step 6 - Verify the installation:
// Either by checking the version:
$ docker --version
// Or by running the hello-world container directly:
$ docker run hello-world

c. Configure a domestic (China) Docker registry mirror

curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://e2a6d434.m.daocloud.io
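
To confirm the accelerator is active once Docker has been restarted (a quick check, not in the original post), docker info should list the mirror:

docker info | grep -A 1 "Registry Mirrors"
#Registry Mirrors:
# http://e2a6d434.m.daocloud.io/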

d. Enable Docker to start on boot

systemctl enable docker.service
systemctl restart docker

2. Prepare the Kubernetes certificates (run on the master)
a. To copy files to the node machines without typing passwords (and save deployment time), set up passwordless SSH trust

ssh-keygen -t rsa
ssh-copy-id 192.168.0.211
ssh-copy-id 192.168.0.212
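
To confirm the trust works, run a remote command; it should print the hostname without prompting for a password:

ssh 192.168.0.211 hostname   #should print node211, no password prompt
ssh 192.168.0.212 hostname   #should print node212, no password prompt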

b. Download the certificate-generation tools (CFSSL)

yum -y install wget

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
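
As a sanity check, make sure the three tools are on the PATH and executable:

cfssl version
which cfssljson cfssl-certinfo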

c. Create the CA certificate
#Prepare a working directory

mkdir /root/ssl
cd /root/ssl

#Create the CA configuration file
vim ca-config.json

{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}

#Create the CA certificate signing request file
vim ca-csr.json

{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "JIANGXI",
      "L": "NANCHANG",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

#Generate the CA certificate and private key
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
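
If this succeeds, three new files appear next to the JSON inputs (the .csr file is an intermediate and is not needed later):

ls ca*
#ca.csr  ca-config.json  ca-csr.json  ca-key.pem  ca.pem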

#Create the kubernetes certificate signing request
#(the # notes below mark values to adapt; strip them from the real file, since JSON does not allow comments)
vim kubernetes-csr.json

{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.0.210",    #change to your own host IP
    "192.168.0.211",    #change to your own host IP
    "192.168.0.212",    #change to your own host IP
    "10.254.0.1",
    "kubernetes",
    "node210",          #change to your own hostname
    "node211",          #change to your own hostname
    "node212",          #change to your own hostname
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "JIANGXI",
      "L": "JIANGXI",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

#Generate the kubernetes certificate and private key
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
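
It is worth confirming that every IP and hostname made it into the certificate before distributing it; cfssl-certinfo prints the parsed certificate, and its sans field should list all the entries from the hosts array above:

cfssl-certinfo -cert kubernetes.pem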

#Create the admin certificate signing request
vim admin-csr.json

{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "JIANGXI",
      "L": "JIANGXI",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}

#Generate the admin certificate and private key
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

#Create the kube-proxy certificate signing request
vim kube-proxy-csr.json

{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "JIANGXI",
      "L": "JIANGXI",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

#Generate the kube-proxy certificate and private key
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

#Distribute the certificates

mkdir -p /etc/kubernetes/ssl
cp -r *.pem /etc/kubernetes/ssl
cd /etc
scp -r kubernetes/ 192.168.0.211:/etc/
scp -r kubernetes/ 192.168.0.212:/etc/

3. Install and configure the etcd cluster
a. Download etcd and distribute it to the nodes

wget https://github.com/coreos/etcd/releases/download/v3.2.11/etcd-v3.2.11-linux-amd64.tar.gz
tar zxf etcd-v3.2.11-linux-amd64.tar.gz
mv etcd-v3.2.11-linux-amd64/etcd* /usr/local/bin
scp -r /usr/local/bin/etc* 192.168.0.211:/usr/local/bin/
scp -r /usr/local/bin/etc* 192.168.0.212:/usr/local/bin/

b. Create the etcd systemd service file
vim /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/local/bin/etcd \
--name ${ETCD_NAME} \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
--peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
--trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--initial-advertise-peer-urls ${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--listen-peer-urls ${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls ${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls ${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-cluster-token ${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster infra1=https://192.168.0.210:2380,infra2=https://192.168.0.211:2380,infra3=https://192.168.0.212:2380 \
--initial-cluster-state new \
--data-dir=${ETCD_DATA_DIR}
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

c. Create the required directories

mkdir -p /var/lib/etcd/
mkdir /etc/etcd

d. Edit the etcd configuration file
vim /etc/etcd/etcd.conf
On node210, /etc/etcd/etcd.conf is:

# [member]
ETCD_NAME=infra1
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.0.210:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.210:2379"

#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.210:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.210:2379"

On node211, /etc/etcd/etcd.conf is:

# [member]
ETCD_NAME=infra2
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.0.211:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.211:2379"

#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.211:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.211:2379"

On node212, /etc/etcd/etcd.conf is:

# [member]
ETCD_NAME=infra3
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.0.212:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.212:2379"

#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.212:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.212:2379"

#Start etcd (run on all nodes)

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd

If it fails to start, check /var/log/messages to troubleshoot.

e. Test whether the etcd cluster is healthy

Verify that etcd started successfully:
etcdctl \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
cluster-health
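
On a healthy cluster the output looks roughly like this (the member IDs will differ):

member 6f2038a018db1103 is healthy: got healthy result from https://192.168.0.210:2379
member 9f6a434ba1dbf6d7 is healthy: got healthy result from https://192.168.0.211:2379
member c6dbad76bbff2dcc is healthy: got healthy result from https://192.168.0.212:2379
cluster is healthy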

4. Configure the Kubernetes components
a. Download the prebuilt Kubernetes binaries and distribute them

wget https://dl.k8s.io/v1.8.5/kubernetes-server-linux-amd64.tar.gz
tar zxf kubernetes-server-linux-amd64.tar.gz
cp -rf kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kubectl,kubefed,kubelet,kube-proxy,kube-scheduler} /usr/local/bin/
scp -r kubernetes/server/bin/{kubelet,kube-proxy} 192.168.0.211:/usr/local/bin/
scp -r kubernetes/server/bin/{kubelet,kube-proxy} 192.168.0.212:/usr/local/bin/

#To find the latest Kubernetes release, go to https://github.com/kubernetes/kubernetes/releases,
#then open the corresponding CHANGELOG-x.x.md to get the binary download links.

b. Create the TLS Bootstrapping Token

cd /etc/kubernetes

export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')

cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

c. Create the kubelet bootstrapping kubeconfig file

cd /etc/kubernetes
export KUBE_APISERVER="https://192.168.0.210:6443"

#Set the cluster parameters

kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig

#Set the client authentication parameters

kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig

#Set the context parameters

kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig

#Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
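
You can inspect the generated file without opening it; kubectl renders the kubeconfig and masks the embedded certificate data:

kubectl config view --kubeconfig=bootstrap.kubeconfig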

d. Create the kube-proxy kubeconfig file
export KUBE_APISERVER="https://192.168.0.210:6443"
#Set the cluster parameters

kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig

#Set the client authentication parameters

kubectl config set-credentials kube-proxy \
--client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
--client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig

#Set the context parameters

kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig

#Set the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

e. Create the kubectl kubeconfig file
export KUBE_APISERVER="https://192.168.0.210:6443"
#Set the cluster parameters

kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER}

#Set the client authentication parameters

kubectl config set-credentials admin \
--client-certificate=/etc/kubernetes/ssl/admin.pem \
--embed-certs=true \
--client-key=/etc/kubernetes/ssl/admin-key.pem

#Set the context parameters

kubectl config set-context kubernetes \
--cluster=kubernetes \
--user=admin

#Set the default context
kubectl config use-context kubernetes

f. Distribute the two files, bootstrap.kubeconfig and kube-proxy.kubeconfig, to the other servers

scp -r *.kubeconfig 192.168.0.211:/etc/kubernetes/
scp -r *.kubeconfig 192.168.0.212:/etc/kubernetes/

5. Install and configure the MASTER
a. Install and configure kube-apiserver
#kube-apiserver systemd service file
vim /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/local/bin/kube-apiserver \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_ETCD_SERVERS \
$KUBE_API_ADDRESS \
$KUBE_API_PORT \
$KUBELET_PORT \
$KUBE_ALLOW_PRIV \
$KUBE_SERVICE_ADDRESSES \
$KUBE_ADMISSION_CONTROL \
$KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

#Configure the common Kubernetes settings
vim /etc/kubernetes/config

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver
#KUBE_MASTER="--master=http://sz-pg-oam-docker-test-001.tendcloud.com:8080"
KUBE_MASTER="--master=http://192.168.0.210:8080"

#Configure the kube-apiserver parameters
vim /etc/kubernetes/apiserver

###
## kubernetes system config
##
## The following values are used to configure the kube-apiserver
##
#
## The address on the local server to listen to.
#KUBE_API_ADDRESS="--insecure-bind-address=sz-pg-oam-docker-test-001.tendcloud.com"
KUBE_API_ADDRESS="--advertise-address=192.168.0.210 --bind-address=192.168.0.210 --insecure-bind-address=192.168.0.210"
#
## The port on the local server to listen on.
#KUBE_API_PORT="--port=8080"
#
## Port minions listen on
#KUBELET_PORT="--kubelet-port=10250"
#
## Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=https://192.168.0.210:2379,https://192.168.0.211:2379,https://192.168.0.212:2379"
#
## Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
#
## default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=ServiceAccount,NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
#
## Add your own!
KUBE_API_ARGS="--authorization-mode=RBAC --runtime-config=rbac.authorization.k8s.io/v1beta1 --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/etc/kubernetes/token.csv --service-node-port-range=30000-32767 --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem --client-ca-file=/etc/kubernetes/ssl/ca.pem --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem --etcd-cafile=/etc/kubernetes/ssl/ca.pem --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem --enable-swagger-ui=true --apiserver-count=3 --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/lib/audit.log --event-ttl=1h"

#Start kube-apiserver

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver

#If it fails, check /var/log/messages
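
Because this setup keeps the insecure port open on 8080, the simplest liveness check is the /healthz endpoint (the secure port 6443 would require client certificates):

curl http://192.168.0.210:8080/healthz
#ok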

b. Install and configure kube-controller-manager
#kube-controller-manager systemd service file
vim /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/local/bin/kube-controller-manager \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

#Edit the kube-controller-manager configuration file
vim /etc/kubernetes/controller-manager

#The following values are used to configure the kubernetes controller-manager

#defaults from config and apiserver should be adequate

#Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--address=127.0.0.1 --service-cluster-ip-range=10.254.0.0/16 --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem --root-ca-file=/etc/kubernetes/ssl/ca.pem --leader-elect=true"

#Start the kube-controller-manager service
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager

c. Install and configure kube-scheduler
#kube-scheduler systemd service file
vim /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/local/bin/kube-scheduler \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

#Edit the kube-scheduler configuration file
vim /etc/kubernetes/scheduler

#kubernetes scheduler config
#default config should be adequate
#Add your own!
KUBE_SCHEDULER_ARGS="--leader-elect=true --address=127.0.0.1"

#Start the kube-scheduler service

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler

d. Test whether the master is healthy
kubectl get componentstatuses
#Output like the following means everything is working

NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}

6. Install the nodes (all nodes)
a. Install and configure flannel (we use flannel for the container network)
#Install flannel via yum
yum install -y flannel
#Check that the certificates are in place on the node
ls /etc/kubernetes/ssl
#Edit the flanneld.service file as follows
vi /usr/lib/systemd/system/flanneld.service

[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start \
-etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-prefix=${ETCD_PREFIX} \
$FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service

#Edit the flannel configuration file
vi /etc/sysconfig/flanneld

# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
ETCD_ENDPOINTS="https://192.168.0.210:2379,https://192.168.0.211:2379,https://192.168.0.212:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
ETCD_PREFIX="/kube-centos/network"

# Any additional options that you want to pass
FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem"

#Create the network configuration in etcd

etcdctl --endpoints=https://192.168.0.210:2379,https://192.168.0.211:2379,https://192.168.0.212:2379 \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
mkdir /kube-centos/network

etcdctl --endpoints=https://192.168.0.210:2379,https://192.168.0.211:2379,https://192.168.0.212:2379 \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
mk /kube-centos/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'

#Start the flannel service
systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
systemctl status flanneld
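
Once flanneld is running it writes its assigned subnet to /run/flannel/subnet.env (which mk-docker-opts.sh then turns into /run/flannel/docker). Checking it shows which /24 this node received out of 172.30.0.0/16; the values below are illustrative and differ per node:

cat /run/flannel/subnet.env
#FLANNEL_NETWORK=172.30.0.0/16
#FLANNEL_SUBNET=172.30.54.1/24
#FLANNEL_MTU=1450
#FLANNEL_IPMASQ=false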

b. Configure the Docker service file to integrate with flannel
vim /usr/lib/systemd/system/docker.service

Above the ExecStart line, add:
EnvironmentFile=-/run/flannel/docker
EnvironmentFile=-/run/docker_opts.env
EnvironmentFile=-/run/flannel/subnet.env
and change ExecStart to:
ExecStart=/usr/bin/dockerd --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}

The result looks like this:

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=-/run/flannel/docker
EnvironmentFile=-/run/docker_opts.env
EnvironmentFile=-/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

#Restart the Docker service

systemctl daemon-reload
systemctl restart docker
systemctl status docker

Remember: flannel must be started before Docker.
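
A quick way to confirm the ordering took effect is to check that docker0 received an address inside this node's flannel subnet (the addresses below are illustrative):

ip -4 addr show flannel.1 | grep inet
#inet 172.30.54.0/32 scope global flannel.1
ip -4 addr show docker0 | grep inet
#inet 172.30.54.1/24 scope global docker0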

c. Check whether etcd has allocated the pod subnets

etcdctl --endpoints=${ETCD_ENDPOINTS} \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
ls /kube-centos/network/subnets

The output should look roughly like this:

/kube-centos/network/subnets/172.30.1.0-24
/kube-centos/network/subnets/172.30.54.0-24
/kube-centos/network/subnets/172.30.99.0-24

etcdctl --endpoints=${ETCD_ENDPOINTS} \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
get /kube-centos/network/config

The output should look roughly like this:

{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}

d. Install and configure kubelet
#Create the kubelet systemd service file
vim /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/local/bin/kubelet \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBELET_API_SERVER \
$KUBELET_ADDRESS \
$KUBELET_PORT \
$KUBELET_HOSTNAME \
$KUBE_ALLOW_PRIV \
$KUBELET_POD_INFRA_CONTAINER \
$KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target

#kubelet authentication kubeconfig file
vim /etc/kubernetes/kubelet.kubeconfig

apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://192.168.0.210:8080
  name: local
contexts:
- context:
    cluster: local
  name: local
current-context: local

#kubelet configuration file
vim /etc/kubernetes/kubelet

On node210, /etc/kubernetes/kubelet contains:

###
## kubernetes kubelet (minion) config
#
## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.0.210"
#
## The port for the info server to serve on
#KUBELET_PORT="--port=10250"
#
## You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.0.210"
#
## location of the api-server
#KUBELET_API_SERVER="--api-servers=http://192.168.0.210:8080"
KUBELET_API_SERVER=" "
#
## pod infrastructure container
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=kubernetes/pause"
#
## Add your own!
#KUBELET_ARGS="--cgroup-driver=cgroupfs --cluster-dns=10.254.0.2 --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --require-kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local --hairpin-mode promiscuous-bridge --serialize-image-pulls=false"
KUBELET_ARGS="--cgroup-driver=cgroupfs --cluster-dns=10.254.0.2 --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local --hairpin-mode promiscuous-bridge --serialize-image-pulls=false --fail-swap-on=false"

On node211, the configuration file is:

###
## kubernetes kubelet (minion) config
#
## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.0.211"
#
## The port for the info server to serve on
#KUBELET_PORT="--port=10250"
#
## You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.0.211"
#
## location of the api-server
#KUBELET_API_SERVER="--api-servers=http://192.168.0.210:8080"
KUBELET_API_SERVER=" "
#
## pod infrastructure container
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=kubernetes/pause"
#
## Add your own!
#KUBELET_ARGS="--cgroup-driver=cgroupfs --cluster-dns=10.254.0.2 --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --require-kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local --hairpin-mode promiscuous-bridge --serialize-image-pulls=false"
KUBELET_ARGS="--cgroup-driver=cgroupfs --cluster-dns=10.254.0.2 --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local --hairpin-mode promiscuous-bridge --serialize-image-pulls=false --fail-swap-on=false"

On node212, the configuration file is:

###
## kubernetes kubelet (minion) config
#
## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.0.212"
#
## The port for the info server to serve on
#KUBELET_PORT="--port=10250"
#
## You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.0.212"
#
## location of the api-server
#KUBELET_API_SERVER="--api-servers=http://192.168.0.210:8080"
KUBELET_API_SERVER=" "
#
## pod infrastructure container
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=kubernetes/pause"
#
## Add your own!
#KUBELET_ARGS="--cgroup-driver=cgroupfs --cluster-dns=10.254.0.2 --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --require-kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local --hairpin-mode promiscuous-bridge --serialize-image-pulls=false"
KUBELET_ARGS="--cgroup-driver=cgroupfs --cluster-dns=10.254.0.2 --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local --hairpin-mode promiscuous-bridge --serialize-image-pulls=false --fail-swap-on=false"

#Start the kubelet service

mkdir -p /var/lib/kubelet

systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet

#This step is error-prone; if it fails, check /var/log/messages to troubleshoot

#Check that the kubelet service is working

kubectl get nodes
NAME            STATUS    ROLES     AGE       VERSION
192.168.0.210   Ready     <none>    14h       v1.8.5
192.168.0.211   Ready     <none>    14h       v1.8.5
192.168.0.212   Ready     <none>    14h       v1.8.5

e. Install and configure kube-proxy
#kube-proxy systemd service file
vim /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/local/bin/kube-proxy \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

#The kube-proxy configuration files for each node are as follows:
node210:
vim /etc/kubernetes/proxy

###
# kubernetes proxy config

# default config should be adequate

# Add your own!
KUBE_PROXY_ARGS="--bind-address=192.168.0.210 --hostname-override=192.168.0.210 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.254.0.0/16"

node211:
vim /etc/kubernetes/proxy

###
# kubernetes proxy config

# default config should be adequate

# Add your own!
KUBE_PROXY_ARGS="--bind-address=192.168.0.211 --hostname-override=192.168.0.211 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.254.0.0/16"

node212:
vim /etc/kubernetes/proxy

###
# kubernetes proxy config

# default config should be adequate

# Add your own!
KUBE_PROXY_ARGS="--bind-address=192.168.0.212--hostname-override=192.168.0.212 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.254.0.0/16"

#Start the kube-proxy service

systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
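
kube-proxy in its default iptables mode materializes services as NAT rules, so after it starts a KUBE-SERVICES chain should exist in the nat table (a quick sanity check, not in the original post):

iptables -t nat -L KUBE-SERVICES -n | head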

f. Set the default iptables FORWARD policy to ACCEPT on all nodes
vim /usr/lib/systemd/system/forward.service

[Unit]
Description=iptables forward
Documentation=http://iptables.org/
After=network.target docker.service

[Service]
# iptables -P applies the policy and exits immediately, so oneshot with
# RemainAfterExit (rather than forking) keeps the unit in the active state
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/iptables -P FORWARD ACCEPT
ExecReload=/usr/sbin/iptables -P FORWARD ACCEPT
ExecStop=/usr/sbin/iptables -P FORWARD ACCEPT
PrivateTmp=true

[Install]
WantedBy=multi-user.target

#Start the forward service

systemctl daemon-reload
systemctl enable forward
systemctl start forward
systemctl status forward

7. Test whether the cluster works
a. Create a deployment
kubectl run nginx --replicas=2 --labels="run=nginx-service" --image=nginx --port=80
b. Expose the service so it is reachable from outside the cluster
kubectl expose deployment nginx --type=NodePort --name=nginx-service
c. Check the service status

kubectl describe svc nginx-service
Name:              nginx-service
Namespace:         default
Labels:            run=nginx-service
Annotations:       <none>
Selector:          run=nginx-service
Type:              NodePort
IP:                10.254.84.99
Port:              <unset>  80/TCP
NodePort:          <unset>  30881/TCP
Endpoints:         172.30.1.2:80,172.30.54.2:80
Session Affinity:  None
Events:            <none>

d. Check that the pods are running

kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
nginx-2317272628-nsfrr   1/1       Running   0          1m
nginx-2317272628-qbbgg   1/1       Running   0          1m

e. From outside the cluster, the nginx page can be reached at any of:
http://192.168.0.210:30881
http://192.168.0.211:30881
http://192.168.0.212:30881
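
A header-only curl from any machine that can reach the nodes is enough to confirm the NodePort works (expect an HTTP 200 from nginx):

curl -I http://192.168.0.210:30881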

If the pages are not reachable, run iptables -nL to check whether the FORWARD chain policy is ACCEPT.

This article is reposted from rong341233's 51CTO blog. Original link: http://blog.51cto.com/fengwan/2049124
