Deploying Kubernetes (k8s) 1.15.2
Contents
I. Environment Preparation
II. Software Installation
III. Deploying the Master Node
IV. Deploying the Worker Nodes
V. Cluster Status Verification
I. Environment Preparation
| IP Address | Role | CPU | Memory | Hostname | Docker Version |
|---|---|---|---|---|---|
| 192.168.56.110 | master | >=2 cores | >=2 GB | k8s-master | 19.03 |
| 192.168.56.120 | node | >=2 cores | >=2 GB | k8s-node01 | 19.03 |
| 192.168.56.130 | node | >=2 cores | >=2 GB | k8s-node02 | 19.03 |
Perform the following operations on all nodes:
1. Set the hostname of each host; the management node is k8s-master. Run the matching command on each node:
# hostnamectl set-hostname k8s-master    #on the master
# hostnamectl set-hostname k8s-node01    #on node01
# hostnamectl set-hostname k8s-node02    #on node02
2. Edit the /etc/hosts file to add name resolution entries:
cat <<EOF >> /etc/hosts
192.168.56.110 k8s-master
192.168.56.120 k8s-node01
192.168.56.130 k8s-node02
EOF
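A quick sanity check that the new entries resolve (a minimal test; run it from any node):
# ping -c 1 k8s-node01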
3. Disable the firewall, SELinux, and swap:
# systemctl stop firewalld
# systemctl disable firewalld
# setenforce 0
# sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
# swapoff -a
# sed -i 's/.*swap.*/#&/' /etc/fstab
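To confirm swap is fully disabled, check that the Swap line now shows all zeros:
# free -m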
4. Configure kernel parameters so that bridged IPv4 traffic is passed through iptables:
# modprobe br_netfilter    #the net.bridge.* keys only exist once this module is loaded
# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# sysctl --system    #sysctl -p alone only reads /etc/sysctl.conf, not /etc/sysctl.d/
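br_netfilter is not loaded automatically after a reboot. One way to make it persistent is systemd's modules-load.d directory (a sketch; the path is standard on CentOS 7):
# cat > /etc/modules-load.d/k8s.conf <<EOF
br_netfilter
EOF
Afterwards, verify the setting took effect:
# sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1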
5. Configure domestic (Aliyun) YUM mirrors:
# mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
# yum clean all && yum makecache
6. Configure domestic Kubernetes and Docker repositories:
# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# cd /etc/yum.repos.d/ && wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
II. Software Installation
Note: perform the following operations on all nodes.
1. Install Docker
# yum list docker-ce.x86_64 --showduplicates | sort -r    #list the available Docker versions
# yum install docker-ce    #install the latest version (the default)
# yum install docker-ce-18.09.8-3.el7    #or install a specific version (18.09 packages drop the ".ce" infix)
# systemctl enable docker && systemctl start docker
# docker --version
2. Install kubeadm, kubelet, and kubectl
# yum install -y kubelet kubeadm kubectl
# systemctl enable kubelet
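A bare yum install pulls the newest packages in the repo, which may be newer than the 1.15.2 deployed here; to pin the packages to the matching version, one option is:
# yum install -y kubelet-1.15.2 kubeadm-1.15.2 kubectl-1.15.2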
Change the cgroup driver by appending "--cgroup-driver=cgroupfs" to the end of the KUBELET_KUBECONFIG_ARGS line:
# vim /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --cgroup-driver=cgroupfs"
III. Deploying the Master Node
1. Initialize the Kubernetes cluster on the master node
Define the pod network as 10.244.0.0/16 and use the local IP address for the api-server. Since the upstream Google image registry cannot be reached from mainland China, --image-repository points kubeadm at the Aliyun mirror registry.
[root@k8s-master ~]# kubeadm init --kubernetes-version=1.15.2 --pod-network-cidr=10.244.0.0/16 --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.15.2
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.56.110]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.56.110 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.56.110 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 24.014258 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: klo2o3.77512ufwsjxzp9ws
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.56.110:6443 --token klo2o3.77512ufwsjxzp9ws \
--discovery-token-ca-cert-hash sha256:d8561c1deed76a67e6c665b3bbd9c59d076d6bcd93bc79291890aa49a5c7386e
Record the kubeadm join command printed above — it is what the other nodes will use to join the Kubernetes cluster!
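The bootstrap token embedded in that command is only valid for 24 hours by default; if it expires before a node joins, a fresh join command can be generated on the master:
[root@k8s-master ~]# kubeadm token create --print-join-command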
[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
2. Configure the kubectl tool
[root@k8s-master ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
#If this environment variable is not declared, kubectl has no credentials to manage the cluster, and any query is rejected with: The connection to the server localhost:8080 was refused - did you specify the right host or port?
#Alternatively, use the method suggested in the init output above:
[root@k8s-master ~]# mkdir -p /root/.kube
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf /root/.kube/config
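The export above only lasts for the current shell session; one way to make it permanent for root is appending it to the shell profile (a sketch):
[root@k8s-master ~]# echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bash_profile
[root@k8s-master ~]# source ~/.bash_profile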
[root@k8s-master ~]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
[root@k8s-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 2m10s v1.15.2
3. Deploy the flannel network
Since quay.io cannot be reached from mainland China and the Aliyun registry requires a login, pull the flannel image from an alternative mirror site and retag it:
# mkdir k8s && cd k8s
# curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# docker pull quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
# docker tag quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
# kubectl apply -f kube-flannel.yml
# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-bccdc95cf-ghfrp 1/1 Running 0 129m
coredns-bccdc95cf-h4tch 1/1 Running 0 129m
etcd-k8s-master 1/1 Running 0 128m
kube-apiserver-k8s-master 1/1 Running 0 128m
kube-controller-manager-k8s-master 1/1 Running 0 128m
kube-flannel-ds-amd64-r2hmf 1/1 Running 0 111m
kube-flannel-ds-amd64-zwt6l 1/1 Running 0 36m
kube-proxy-czjzf 1/1 Running 0 129m
kube-proxy-ts4nf 1/1 Running 0 36m
kube-scheduler-k8s-master 1/1 Running 0 128m
When all of the pods above are in the Running state, the cluster is operating normally. Note that the master node is tainted during cluster initialization, so ordinary pods will not be scheduled onto it. The relevant node attribute is: Taints: node-role.kubernetes.io/master:NoSchedule
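The taint can be inspected, and removed if pods should be allowed onto the master (e.g. in a small lab cluster); the trailing "-" removes the taint:
[root@k8s-master ~]# kubectl describe node k8s-master | grep Taints
[root@k8s-master ~]# kubectl taint nodes k8s-master node-role.kubernetes.io/master:NoSchedule-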
IV. Deploying the Worker Nodes
Perform the following operations on all worker nodes.
Note that the worker nodes also run flannel, pause, and kube-proxy pods, so the required images should be pulled in advance: k8s.gcr.io/kube-proxy-amd64:v1.15.2, quay.io/coreos/flannel:v0.11.0-amd64, and k8s.gcr.io/pause:3.1.
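A pre-pull sketch using the same mirrors as on the master — the qiniu mirror for flannel, and the Aliyun repository that kubeadm init was pointed at for the rest; the k8s.gcr.io retags are only needed where a manifest references those names:
# docker pull quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
# docker tag quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
# docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.15.2
# docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.15.2 k8s.gcr.io/kube-proxy-amd64:v1.15.2
# docker pull registry.aliyuncs.com/google_containers/pause:3.1
# docker tag registry.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
Then run the join command recorded earlier: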
# kubeadm join 192.168.56.110:6443 --token klo2o3.77512ufwsjxzp9ws \
--discovery-token-ca-cert-hash sha256:d8561c1deed76a67e6c665b3bbd9c59d076d6bcd93bc79291890aa49a5c7386e
V. Cluster Status Verification
Perform the following on the master.
1. Check the cluster status on the master; output like the following indicates a healthy cluster. The key point is that every node's STATUS shows Ready.
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 17h v1.15.2
k8s-node01 Ready <none> 16h v1.15.2
k8s-node02 Ready <none> 11s v1.15.2
2. Create a Pod to validate the cluster
[root@k8s-master ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
[root@k8s-master ~]# kubectl get pod,svc
NAME READY STATUS RESTARTS AGE
pod/nginx-554b9c67f9-lw4jw 1/1 Running 0 2m54s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 139m
service/nginx NodePort 10.110.217.32 <none> 80:30282/TCP 2m42s
[root@k8s-master ~]# curl http://192.168.56.110:30282/
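A NodePort service opens the same port on every node, so the nginx welcome page should be equally reachable through a worker's IP (note that 30282 is randomly allocated and will differ between deployments):
# curl http://192.168.56.120:30282/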