k8s 1.15.2 Deployment
Contents
I. Environment preparation
II. Software installation
III. Deploying the master node
IV. Deploying the worker nodes
V. Cluster health checks
I. Environment preparation
| IP address | Node role | CPU | Memory | Hostname | Docker version |
|---|---|---|---|---|---|
| 192.168.56.110 | master | >=2c | >=2G | k8s-master | 19.03 |
| 192.168.56.120 | node | >=2c | >=2G | k8s-node01 | 19.03 |
| 192.168.56.130 | node | >=2c | >=2G | k8s-node02 | 19.03 |
Perform the following steps on all nodes:
1. Set each host's hostname (run the matching command on the corresponding host); the management node is k8s-master:
# hostnamectl set-hostname k8s-master
# hostnamectl set-hostname k8s-node01
# hostnamectl set-hostname k8s-node02
2. Edit /etc/hosts and add name-resolution entries for all three hosts:
# cat <<EOF >> /etc/hosts
192.168.56.110 k8s-master
192.168.56.120 k8s-node01
192.168.56.130 k8s-node02
EOF
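To confirm that name resolution now works from every node, a quick check such as the following can be used (a minimal sketch; assumes ping is permitted between the hosts):
# for h in k8s-master k8s-node01 k8s-node02; do ping -c 1 $h; done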
3. Disable the firewall, SELinux, and swap:
# systemctl stop firewalld
# systemctl disable firewalld
# setenforce 0
# sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
# swapoff -a
# sed -i 's/.*swap.*/#&/' /etc/fstab
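Swap should now be fully disabled; free can confirm it (the Swap line should report 0 totals):
# free -m | grep -i swap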
4. Configure kernel parameters so that bridged IPv4 traffic is passed to the iptables chains:
# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# sysctl --system
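If sysctl reports that the net.bridge.* keys do not exist, the br_netfilter kernel module is likely not loaded yet; loading it first and re-applying usually resolves this:
# modprobe br_netfilter
# sysctl --system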
5. Configure domestic (China-mirror) YUM repositories:
# mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
# yum clean all && yum makecache
6. Configure domestic Kubernetes and Docker repositories:
# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# cd /etc/yum.repos.d/ && wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
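To confirm that the new repositories are active, something like the following can be run:
# yum repolist | grep -Ei 'kubernetes|docker'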
II. Software installation
Note: perform the following on all nodes.
1. Install Docker
# yum list docker-ce.x86_64 --showduplicates | sort -r    # list available Docker versions
# yum install docker-ce    # install the latest version (default)
# yum install docker-ce-18.09.8-3.el7    # or install a specific version
# systemctl enable docker && systemctl start docker
# docker --version
2. Install kubeadm, kubelet, and kubectl
# yum install -y kubelet kubeadm kubectl
# systemctl enable kubelet
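Because the repository tracks the latest Kubernetes release, a plain yum install may pull a version newer than the 1.15.2 targeted here. Pinning the versions keeps the packages aligned (standard yum version syntax; the exact release suffix in the mirror may differ):
# yum install -y kubelet-1.15.2 kubeadm-1.15.2 kubectl-1.15.2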
Change the kubelet cgroup driver by appending "--cgroup-driver=cgroupfs" to the end of KUBELET_KUBECONFIG_ARGS:
# vim /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --cgroup-driver=cgroupfs"
III. Deploying the master node
1. Initialize the Kubernetes cluster on the master node
The pod network CIDR is defined as 10.244.0.0/16 (the default range flannel expects), and the api-server address is this machine's IP. Because images hosted abroad cannot be pulled from within China, --image-repository is used to point image pulls at the Aliyun registry mirror.
[root@k8s-master ~]# kubeadm init --kubernetes-version=1.15.2 --pod-network-cidr=10.244.0.0/16 --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.15.2
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.56.110]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.56.110 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.56.110 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 24.014258 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: klo2o3.77512ufwsjxzp9ws
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.56.110:6443 --token klo2o3.77512ufwsjxzp9ws \
--discovery-token-ca-cert-hash sha256:d8561c1deed76a67e6c665b3bbd9c59d076d6bcd93bc79291890aa49a5c7386e
Record this join command carefully; it is what the other nodes will use to join the cluster!
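If the join command is lost, or the bootstrap token expires (tokens are valid for 24 hours by default), a fresh join command can be generated on the master at any time:
[root@k8s-master ~]# kubeadm token create --print-join-command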
[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
2. Configure the kubectl tool
[root@k8s-master ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
# Without this environment variable, kubectl has no admin credentials for the cluster, and any query is rejected with: The connection to the server localhost:8080 was refused - did you specify the right host or port?
# Alternatively, use the approach suggested in the init output above:
[root@k8s-master ~]# mkdir -p /root/.kube
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf /root/.kube/config
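Note that the export in the first approach only lasts for the current shell session, while the ~/.kube/config copy above is persistent. To make the variable permanent instead, it can be appended to the shell profile:
[root@k8s-master ~]# echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bash_profile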
[root@k8s-master ~]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
[root@k8s-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 2m10s v1.15.2
3. Deploy the flannel network
The master shows NotReady above because no pod network add-on is installed yet. Since quay.io cannot be reached from within China and the Aliyun repository requires a login, the flannel image is downloaded from an alternative mirror site:
# mkdir k8s && cd k8s
# curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# docker pull quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
# docker tag quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
# kubectl apply -f kube-flannel.yml
# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-bccdc95cf-ghfrp 1/1 Running 0 129m
coredns-bccdc95cf-h4tch 1/1 Running 0 129m
etcd-k8s-master 1/1 Running 0 128m
kube-apiserver-k8s-master 1/1 Running 0 128m
kube-controller-manager-k8s-master 1/1 Running 0 128m
kube-flannel-ds-amd64-r2hmf 1/1 Running 0 111m
kube-flannel-ds-amd64-zwt6l 1/1 Running 0 36m
kube-proxy-czjzf 1/1 Running 0 129m
kube-proxy-ts4nf 1/1 Running 0 36m
kube-scheduler-k8s-master 1/1 Running 0 128m
Once all of the pods above are in the Running state, the cluster is operating normally. Note that cluster initialization taints the master node so that pods are not scheduled onto it; the relevant detail is: Taints: node-role.kubernetes.io/master:NoSchedule
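For a single-machine lab it is possible (though optional) to remove that taint so that ordinary pods can also be scheduled on the master; the trailing '-' removes the taint:
[root@k8s-master ~]# kubectl taint nodes k8s-master node-role.kubernetes.io/master:NoSchedule-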
IV. Deploying the worker nodes
Perform the following on all worker nodes.
Note that the worker nodes also run flannel, pause, and kube-proxy pods, so the corresponding images should be pulled in advance: k8s.gcr.io/kube-proxy:v1.15.2, quay.io/coreos/flannel:v0.11.0-amd64, and k8s.gcr.io/pause:3.1. A pull-and-retag sketch follows below.
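Since k8s.gcr.io and quay.io are both hard to reach from within China, one workable approach is to pull through the mirrors used earlier and retag (a sketch; the exact image list for this release can be printed with 'kubeadm config images list'):
# docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.15.2
# docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.15.2 k8s.gcr.io/kube-proxy:v1.15.2
# docker pull registry.aliyuncs.com/google_containers/pause:3.1
# docker tag registry.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
# docker pull quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
# docker tag quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64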
# kubeadm join 192.168.56.110:6443 --token klo2o3.77512ufwsjxzp9ws \
--discovery-token-ca-cert-hash sha256:d8561c1deed76a67e6c665b3bbd9c59d076d6bcd93bc79291890aa49a5c7386e
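Once the join completes on a node, the kubelet there should be active and the node's pods should start coming up; a quick node-side check:
# systemctl status kubelet
# docker ps | grep -E 'kube-proxy|flannel|pause'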
V. Cluster health checks
Perform the following on the master.
1. Check the cluster state on the master; output like the following indicates a healthy cluster. The key thing to verify is that every node's STATUS is Ready.
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 17h v1.15.2
k8s-node01 Ready <none> 16h v1.15.2
k8s-node02 Ready <none> 11s v1.15.2
2. Create a pod to validate the cluster
[root@k8s-master ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
[root@k8s-master ~]# kubectl get pod,svc
NAME READY STATUS RESTARTS AGE
pod/nginx-554b9c67f9-lw4jw 1/1 Running 0 2m54s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 139m
service/nginx NodePort 10.110.217.32 <none> 80:30282/TCP 2m42s
[root@k8s-master ~]# curl http://192.168.56.110:30282/
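Beyond reaching nginx through its NodePort, cluster DNS can be validated with a throwaway busybox pod (a sketch; busybox:1.28 is used because nslookup is known to be broken in some newer busybox images):
[root@k8s-master ~]# kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup nginx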