1. Environment

Node name    IP address      Components installed                Pod CIDR        Service CIDR   OS
k8s-master   192.168.56.11   docker, kubeadm, kubectl, kubelet   10.244.0.0/16   10.96.0.0/12   CentOS 7.4
k8s-node01   192.168.56.12   docker, kubeadm, kubelet            10.244.0.0/16   10.96.0.0/12   CentOS 7.4
k8s-node02   192.168.56.13   docker, kubeadm, kubelet            10.244.0.0/16   10.96.0.0/12   CentOS 7.4
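The scp and kubeadm join steps below address the machines by hostname, so every node must be able to resolve the others' names. A minimal /etc/hosts sketch (the k8s-node1/k8s-node2 aliases are assumptions matching the short names used in the scp commands below; add the same entries on every node):

[root@k8s-master ~]# cat >> /etc/hosts <<EOF
192.168.56.11 k8s-master
192.168.56.12 k8s-node01 k8s-node1
192.168.56.13 k8s-node02 k8s-node2
EOF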

(1) Configure the yum repositories

[root@k8s-master ~]# cd /etc/yum.repos.d/

Configure the Alibaba Cloud mirrors: https://opsx.alibaba.com/mirror

[root@k8s-master yum.repos.d]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo  # configure the Docker repo

[root@k8s-master ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo  # configure the Kubernetes repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF
[root@k8s-master yum.repos.d]# yum repolist  # verify the repos are available

Copy the repo files to the node01 and node02 nodes:

[root@k8s-master yum.repos.d]# scp kubernetes.repo docker-ce.repo k8s-node1:/etc/yum.repos.d/
[root@k8s-master yum.repos.d]# scp kubernetes.repo docker-ce.repo k8s-node2:/etc/yum.repos.d/

(2) Install docker, kubelet, kubeadm, and the kubectl command-line tool

[root@k8s-master yum.repos.d]# yum install -y docker-ce kubelet kubeadm kubectl 
[root@k8s-master ~]# systemctl start docker
[root@k8s-master ~]# systemctl enable docker
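kubelet can also be enabled now so that it starts on boot; it will keep restarting until kubeadm init writes its configuration, which is expected at this stage:

[root@k8s-master ~]# systemctl enable kubelet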

Once Docker is running, kubeadm will try to pull the images the cluster depends on from their upstream registries. Because those registries are hosted abroad, the downloads often cannot complete, so it is best to pull the images in advance. kubeadm can also fetch images from a local private registry.
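Before mirroring anything, it helps to list exactly which images the installed kubeadm expects (the init output below also hints at this with 'kubeadm config images pull'). A quick check; the --kubernetes-version flag is optional and assumes kubeadm 1.11 or newer:

[root@k8s-master ~]# kubeadm config images list --kubernetes-version=v1.11.1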

(3) Pre-pull the images

On the master node, pull the images with docker pull and then retag them with docker tag:
docker pull xiyangxixia/k8s-proxy-amd64:v1.11.1
docker tag xiyangxixia/k8s-proxy-amd64:v1.11.1 k8s.gcr.io/kube-proxy-amd64:v1.11.1
docker pull xiyangxixia/k8s-scheduler:v1.11.1
docker tag xiyangxixia/k8s-scheduler:v1.11.1 k8s.gcr.io/kube-scheduler-amd64:v1.11.1
docker pull xiyangxixia/k8s-controller-manager:v1.11.1
docker tag xiyangxixia/k8s-controller-manager:v1.11.1 k8s.gcr.io/kube-controller-manager-amd64:v1.11.1
docker pull xiyangxixia/k8s-apiserver-amd64:v1.11.1
docker tag xiyangxixia/k8s-apiserver-amd64:v1.11.1 k8s.gcr.io/kube-apiserver-amd64:v1.11.1
docker pull xiyangxixia/k8s-etcd:3.2.18
docker tag xiyangxixia/k8s-etcd:3.2.18 k8s.gcr.io/etcd-amd64:3.2.18
docker pull xiyangxixia/k8s-coredns:1.1.3
docker tag xiyangxixia/k8s-coredns:1.1.3 k8s.gcr.io/coredns:1.1.3
docker pull xiyangxixia/k8s-pause:3.1
docker tag xiyangxixia/k8s-pause:3.1 k8s.gcr.io/pause:3.1
docker pull xiyangxixia/k8s-flannel:v0.10.0-s390x
docker tag xiyangxixia/k8s-flannel:v0.10.0-s390x quay.io/coreos/flannel:v0.10.0-s390x
docker pull xiyangxixia/k8s-flannel:v0.10.0-ppc64le
docker tag xiyangxixia/k8s-flannel:v0.10.0-ppc64le quay.io/coreos/flannel:v0.10.0-ppc64le
docker pull xiyangxixia/k8s-flannel:v0.10.0-arm
docker tag xiyangxixia/k8s-flannel:v0.10.0-arm quay.io/coreos/flannel:v0.10.0-arm
docker pull xiyangxixia/k8s-flannel:v0.10.0-amd64
docker tag xiyangxixia/k8s-flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64

Images to pull on the node machines:
docker pull xiyangxixia/k8s-pause:3.1
docker tag xiyangxixia/k8s-pause:3.1 k8s.gcr.io/pause:3.1
docker pull xiyangxixia/k8s-proxy-amd64:v1.11.1
docker tag xiyangxixia/k8s-proxy-amd64:v1.11.1 k8s.gcr.io/kube-proxy-amd64:v1.11.1
docker pull xiyangxixia/k8s-flannel:v0.10.0-amd64
docker tag xiyangxixia/k8s-flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
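Running each pull/tag pair by hand is tedious and error-prone. A small shell sketch that automates the same mapping (the xiyangxixia mirror and version tags are taken from the commands above; treat the script as an illustration, not part of the original procedure):

#!/bin/bash
# Pull each image from the mirror and retag it with the name kubeadm expects.
images=(
  "k8s-proxy-amd64:v1.11.1=k8s.gcr.io/kube-proxy-amd64:v1.11.1"
  "k8s-scheduler:v1.11.1=k8s.gcr.io/kube-scheduler-amd64:v1.11.1"
  "k8s-controller-manager:v1.11.1=k8s.gcr.io/kube-controller-manager-amd64:v1.11.1"
  "k8s-apiserver-amd64:v1.11.1=k8s.gcr.io/kube-apiserver-amd64:v1.11.1"
  "k8s-pause:3.1=k8s.gcr.io/pause:3.1"
  "k8s-flannel:v0.10.0-amd64=quay.io/coreos/flannel:v0.10.0-amd64"
)
for entry in "${images[@]}"; do
  src="xiyangxixia/${entry%%=*}"   # mirror image name
  dst="${entry#*=}"                # official name expected by kubeadm
  docker pull "$src"
  docker tag "$src" "$dst"
done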

(4) Initialize the cluster with kubeadm

[root@k8s-master ~]# vim /etc/sysconfig/kubelet   # allow kubelet to start even when swap is enabled
KUBELET_EXTRA_ARGS="--fail-swap-on=false"   # without this, kubelet reports an error when swap is on
Change the kubelet configuration so that it does not fail on the swap check; even so, it is best to disable swap entirely.
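Note that swapoff -a below only disables swap until the next reboot; to keep it off permanently, also comment out the swap entry in /etc/fstab. A minimal sketch (assumes the usual single swap line; review the file before editing):

[root@k8s-master ~]# sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab   # comment out active swap entries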
[root@k8s-master ~]# swapoff -a   # disable swap
[root@k8s-master ~]# kubeadm init --kubernetes-version=v1.11.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap   # initialize the cluster
[init] using Kubernetes version: v1.11.1
[preflight] running pre-flight checks
I0821 ::22.223765 kernel_validator.go:] Validating kernel version
I0821 ::22.223894 kernel_validator.go:] Validating kernel config
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.-ce. Max validated version: 17.03
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.56.11]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [127.0.0.1 ::]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.56.11 127.0.0.1 ::]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 51.033696 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node k8s-master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node k8s-master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master" as an annotation
[bootstraptoken] using token: dx7mko.j2ug1lqjra5bf6p2
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.56.11:6443 --token dx7mko.j2ug1lqjra5bf6p2 --discovery-token-ca-cert-hash sha256:93fe958796db44dcc23764cb8d9b6a2e67bead072e51a3d4d3c2d36b5d1007cf

If you are deploying as a regular user, configure kubectl's kubeconfig before using kubectl; these commands are also part of the kubeadm init output:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

If you are deploying as root, you can simply export the KUBECONFIG environment variable:

[root@k8s-master ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
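The export only applies to the current shell session; to make it persistent for root, it can be appended to the shell profile (assuming bash):

[root@k8s-master ~]# echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bash_profile
[root@k8s-master ~]# source ~/.bash_profile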

Also record the kubeadm join command from the output; the worker nodes will need it later to join the cluster:

kubeadm join 192.168.56.11:6443 --token dx7mko.j2ug1lqjra5bf6p2 --discovery-token-ca-cert-hash sha256:93fe958796db44dcc23764cb8d9b6a2e67bead072e51a3d4d3c2d36b5d1007cf

The token is used for mutual authentication between the master and joining nodes. Keep it secret: anyone who holds this token can add authenticated nodes to the cluster. Tokens can be listed, created, and deleted with the kubeadm token command. At this point the cluster initialization is complete, and the health of the cluster components can be checked with kubectl get cs:

[root@k8s-master ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
[root@k8s-master ~]# kubectl get node
NAME         STATUS     ROLES     AGE       VERSION
k8s-master   NotReady   master    43m       v1.11.2

From the output above, the master components controller-manager, scheduler, and etcd are all healthy. Where is the apiserver? kubectl talks to the apiserver to read the cluster state from etcd, so the fact that this status can be retrieved at all means the apiserver is running normally. kubectl get node shows the master in the NotReady state because the Pod network has not been deployed yet.

(5) Install the CNI network plugin

Install a Pod network plugin so that Pods can communicate with each other. A cluster can only have one Pod network. Before deploying flannel, the kernel must be configured to pass bridged IPv4 traffic to the iptables chains; this is a prerequisite for the CNI plugin to work. Both of the following values should be 1 (see the sysctl sketch after the checks):

[root@k8s-master ~]# cat /proc/sys/net/bridge/bridge-nf-call-iptables
1
[root@k8s-master ~]# cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
1
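If either value is 0, it can be set persistently through sysctl; a minimal sketch (the file name k8s.conf is just a convention, not required):

[root@k8s-master ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
[root@k8s-master ~]# sysctl --system   # reload all sysctl configuration files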

[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/c5d10c8/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds created

[root@k8s-master ~]# kubectl get node   # check the master node again; it should now be Ready
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 3h v1.11.2
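If the master stays NotReady for a while after applying the manifest, the flannel DaemonSet pod is usually still being pulled; its progress can be checked with (the DaemonSet name kube-flannel-ds comes from the apply output above):

[root@k8s-master ~]# kubectl -n kube-system get daemonset kube-flannel-ds
[root@k8s-master ~]# kubectl -n kube-system get pods -o wide | grep flannel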

(6) Join node01 to the cluster

[root@k8s-node01 ~]# yum install -y docker-ce kubeadm kubelet
[root@k8s-node01 ~]# systemctl enable docker kubelet
[root@k8s-node01 ~]# systemctl start docker

As before, the node images listed earlier must be pulled in advance (unless Docker is configured with a proxy). Then join the node to the cluster:

[root@k8s-node01 ~]# kubeadm join 192.168.56.11:6443 --token dx7mko.j2ug1lqjra5bf6p2 --discovery-token-ca-cert-hash sha256:93fe958796db44dcc23764cb8d9b6a2e67bead072e51a3d4d3c2d36b5d1007cf

[root@k8s-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 7h v1.11.2
k8s-node01 NotReady <none> 3h v1.11.2

After joining, the node status shows node01 as NotReady because the required images are not on node01 yet, or are still being pulled. Once the images finish pulling, the corresponding Pods start and the node becomes Ready.
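If a node stays NotReady for an unusually long time, looking at the kubelet logs and the local images on that node usually reveals the cause, for example:

[root@k8s-node01 ~]# journalctl -u kubelet -f    # follow kubelet logs
[root@k8s-node01 ~]# docker images               # confirm the pre-pulled images are present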

[root@k8s-node01 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5502c29b43df f0fad859c909 "/opt/bin/flanneld -…" minutes ago Up minutes k8s_kube-flannel_kube-flannel-ds-pgpr7_kube-system_23dc27e3-a5af-11e8-84d2-000c2972dc1f_1
db1cc0a6fec4 d5c25579d0ff "/usr/local/bin/kube…" minutes ago Up minutes k8s_kube-proxy_kube-proxy-vxckf_kube-system_23dc0141-a5af-11e8-84d2-000c2972dc1f_0
bc54ad3399e8 k8s.gcr.io/pause:3.1 "/pause" minutes ago Up minutes k8s_POD_kube-proxy-vxckf_kube-system_23dc0141-a5af-11e8-84d2-000c2972dc1f_0
cbfca066b71d k8s.gcr.io/pause:3.1 "/pause" minutes ago Up minutes k8s_POD_kube-flannel-ds-pgpr7_kube-system_23dc27e3-a5af-11e8-84d2-000c2972dc1f_0

[root@k8s-master ~]# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE
coredns-78fcdf6894-nmcmz / Running 1d 10.244.0.3 k8s-master
coredns-78fcdf6894-p5pfm / Running 1d 10.244.0.2 k8s-master
etcd-k8s-master / Running 1d 192.168.56.11 k8s-master
kube-apiserver-k8s-master / Running 1d 192.168.56.11 k8s-master
kube-controller-manager-k8s-master / Running 1d 192.168.56.11 k8s-master
kube-flannel-ds-n5c86 / Running 1d 192.168.56.11 k8s-master
kube-flannel-ds-pgpr7 / Running 1d 192.168.56.12 k8s-node01
kube-proxy-rxlt7 / Running 1d 192.168.56.11 k8s-master
kube-proxy-vxckf / Running 1d 192.168.56.12 k8s-node01
kube-scheduler-k8s-master / Running 1d 192.168.56.11 k8s-master

[root@k8s-master ~]# kubectl get node   # the node status has now changed to Ready
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 1d v1.11.2
k8s-node01 Ready <none> 1d v1.11.2

(7) Add node02 to the cluster

[root@k8s-node02 ~]# yum install -y docker-ce kubeadm kubelet
[root@k8s-node02 ~]# systemctl enable docker kubelet
[root@k8s-node02 ~]# systemctl start docker

Again, pre-pull the images the node needs. Another pitfall here: according to the official documentation, a bootstrap token is only valid for 24h by default, and because enough time had passed, the token here had expired. Running kubeadm token list shows that the token generated during initialization is no longer valid.

[root@k8s-master ~]# kubeadm token list
TOKEN                     TTL         EXPIRES              USAGES                   DESCRIPTION                                                EXTRA GROUPS
dx7mko.j2ug1lqjra5bf6p2   <invalid>   2018-08-22T18:…      authentication,signing   The default bootstrap token generated by 'kubeadm init'.  system:bootstrappers:kubeadm:default-node-token

So a new token needs to be generated, as follows:

[root@k8s-master ~]# kubeadm token create
1vxhuq.qi11t7yq2wj20cpe

If you do not have the --discovery-token-ca-cert-hash value, it can be obtained by running the following command chain on the master node:

[root@k8s-master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'
8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78

Now run kubeadm join to add node02 to the cluster; the --discovery-token-ca-cert-hash from the original initialization can still be used here:

[root@k8s-node02 ~]# kubeadm join 192.168.56.11:6443 --token 1vxhuq.qi11t7yq2wj20cpe --discovery-token-ca-cert-hash sha256:93fe958796db44dcc23764cb8d9b6a2e67bead072e51a3d4d3c2d36b5d1007cf
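As a convenience, kubeadm can also print a complete, ready-to-use join command in one step; availability of this flag depends on your kubeadm version, so treat it as an optional shortcut rather than part of the original procedure:

[root@k8s-master ~]# kubeadm token create --print-join-command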

[root@k8s-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 1d v1.11.2
k8s-node01 Ready <none> 1d v1.11.2
k8s-node02 Ready <none> 2h v1.11.2

If you run into other problems during the cluster installation, the following commands reset a node so you can start over:

$ kubeadm reset
$ ifconfig cni0 down && ip link delete cni0
$ ifconfig flannel.1 down && ip link delete flannel.1
$ rm -rf /var/lib/cni/
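kubeadm reset does not clean up the iptables or IPVS rules left behind by kube-proxy; when wiping a node for a clean reinstall, flushing them manually is commonly recommended as well (only do this on a node you are wiping):

$ iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X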
