I. Environment Preparation

IP Address       Role     CPU    Memory   Hostname     Docker version
192.168.56.110   master   >=2c   >=2G     k8s-master   19.03
192.168.56.120   node     >=2c   >=2G     k8s-node01   19.03
192.168.56.130   node     >=2c   >=2G     k8s-node02   19.03

Perform the following steps on all nodes:

1. Set each host's hostname (run the matching command on the corresponding host); the management node is k8s-master:

# hostnamectl set-hostname k8s-master
# hostnamectl set-hostname k8s-node01
# hostnamectl set-hostname k8s-node02

2. Edit the /etc/hosts file to add name resolution for the three hosts:

# cat <<EOF >> /etc/hosts
192.168.56.110 k8s-master
192.168.56.120 k8s-node01
192.168.56.130 k8s-node02
EOF

3. Disable the firewall, SELinux, and swap:

# systemctl stop firewalld
# systemctl disable firewalld
# setenforce 0
# sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
# swapoff -a
# sed -i 's/.*swap.*/#&/' /etc/fstab
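
A quick sanity check that swap is really off (a verification sketch; a non-zero value means the commands above did not take effect):

# free -m | grep -i swap
Swap:             0           0           0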

4. Configure kernel parameters so that bridged IPv4 traffic is passed to iptables:

# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# modprobe br_netfilter
# sysctl --system
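
Note that plain "sysctl -p" only reads /etc/sysctl.conf, so "sysctl --system" is used to load /etc/sysctl.d/k8s.conf. To verify the settings took effect (the br_netfilter module must be loaded for these keys to exist):

# lsmod | grep br_netfilter
# sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1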

5. Configure domestic (Aliyun) YUM mirrors:

# mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
# yum clean all && yum makecache

6. Configure domestic Kubernetes and Docker package repositories:

# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# cd /etc/yum.repos.d/ && wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
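
A quick check that the new repositories are active (a sketch; output abbreviated, repo names as defined above):

# yum repolist | grep -E 'kubernetes|docker-ce'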

II. Software Installation

Note: perform the following steps on all nodes.

1. Install Docker

# yum list docker-ce.x86_64 --showduplicates | sort -r    # list the available Docker versions
# yum install docker-ce    # install the latest available version
# yum install docker-ce-18.09.8-3.el7    # or install a specific version
# systemctl enable docker && systemctl start docker
# docker --version

2. Install kubeadm, kubelet, and kubectl

# yum install -y kubelet kubeadm kubectl
# systemctl enable kubelet

Set the kubelet cgroup driver to match Docker's default (cgroupfs) by appending "--cgroup-driver=cgroupfs" to KUBELET_KUBECONFIG_ARGS:
# vim /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --cgroup-driver=cgroupfs"
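
Alternatively, the cgroup-driver warning that kubeadm prints during init (see below) can be avoided by switching Docker to the recommended systemd driver; a minimal sketch, assuming a default Docker install (if you take this route, set --cgroup-driver=systemd above instead of cgroupfs):

# cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# systemctl daemon-reload && systemctl restart docker
# docker info | grep -i cgroup    # should now report: Cgroup Driver: systemd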

III. Deploying the Master Node

1. Initialize the Kubernetes cluster on the master node

The pod network is defined as 10.244.0.0/16 (the default expected by flannel), and the api-server address is the master's own IP. Because the default k8s.gcr.io registry is unreachable from mainland China, --image-repository points kubeadm at the Aliyun mirror instead.
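
The control-plane images can also be pre-pulled before running init, which makes network failures easier to spot; a sketch using the same mirror:

[root@k8s-master ~]# kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.15.2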


[root@k8s-master ~]# kubeadm init --kubernetes-version=1.15.2 --pod-network-cidr=10.244.0.0/16 --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.15.2
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.56.110]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.56.110 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.56.110 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 24.014258 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: klo2o3.77512ufwsjxzp9ws
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.56.110:6443 --token klo2o3.77512ufwsjxzp9ws \
    --discovery-token-ca-cert-hash sha256:d8561c1deed76a67e6c665b3bbd9c59d076d6bcd93bc79291890aa49a5c7386e

Record the kubeadm join command above: it is what the worker nodes run to join the cluster!

[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

2. Configure the kubectl tool

[root@k8s-master ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
# Without this environment variable (or a kubeconfig copy), kubectl has no credentials to manage the cluster,
# and queries fail with: The connection to the server localhost:8080 was refused - did you specify the right host or port?
# Alternatively, use the scheme suggested by the init output above:
[root@k8s-master ~]# mkdir -p /root/.kube
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf /root/.kube/config
[root@k8s-master ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
[root@k8s-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 2m10s v1.15.2

3. Deploy the flannel network

The master reports NotReady above because no pod network add-on is installed yet. Since foreign registries such as quay.io are unreachable and the Aliyun registry requires a login, pull the flannel image from an alternative mirror site and retag it:

# mkdir k8s && cd k8s
# curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# docker pull quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
# docker tag quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
# kubectl apply -f kube-flannel.yml
# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-bccdc95cf-ghfrp 1/1 Running 0 129m
coredns-bccdc95cf-h4tch 1/1 Running 0 129m
etcd-k8s-master 1/1 Running 0 128m
kube-apiserver-k8s-master 1/1 Running 0 128m
kube-controller-manager-k8s-master 1/1 Running 0 128m
kube-flannel-ds-amd64-r2hmf 1/1 Running 0 111m
kube-flannel-ds-amd64-zwt6l 1/1 Running 0 36m
kube-proxy-czjzf 1/1 Running 0 129m
kube-proxy-ts4nf 1/1 Running 0 36m
kube-scheduler-k8s-master 1/1 Running 0 128m

Once all of the pods above are in the Running state, the cluster is operating normally. Note that the master node is tainted during initialization, so ordinary pods are not scheduled onto it; the relevant node attribute is: Taints: node-role.kubernetes.io/master:NoSchedule
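
For a single-node lab where pods should run on the master as well, the taint can be removed; a sketch (the trailing "-" deletes the taint):

[root@k8s-master ~]# kubectl taint nodes k8s-master node-role.kubernetes.io/master:NoSchedule-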

IV. Deploying the Worker Nodes

Perform the following on all worker nodes.

Note that the worker nodes also need to run flannel, pause, and kube-proxy pods, so the required images should be pulled in advance: k8s.gcr.io/kube-proxy-amd64:v1.15.2, quay.io/coreos/flannel:v0.11.0-amd64, and k8s.gcr.io/pause:3.1.
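
Because k8s.gcr.io is unreachable from these hosts too, one option is the same mirror-and-retag trick used for flannel; a sketch, assuming the Aliyun mirror carries the same tags (since init was run with --image-repository, kube-proxy and pause may in fact be pulled from the mirror automatically):

# docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.15.2
# docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.15.2 k8s.gcr.io/kube-proxy-amd64:v1.15.2
# docker pull registry.aliyuncs.com/google_containers/pause:3.1
# docker tag registry.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
# docker pull quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
# docker tag quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64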

# kubeadm join 192.168.56.110:6443 --token klo2o3.77512ufwsjxzp9ws \
--discovery-token-ca-cert-hash sha256:d8561c1deed76a67e6c665b3bbd9c59d076d6bcd93bc79291890aa49a5c7386e
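
Bootstrap tokens expire after 24 hours by default; if the token above is no longer valid, a fresh join command can be generated on the master:

[root@k8s-master ~]# kubeadm token create --print-join-command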

V. Checking Cluster Status

Perform the following on the master.

1. Check the cluster state on the master; output like the following is normal. The key column is STATUS: the cluster is healthy once every node reports Ready.

[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 17h v1.15.2
k8s-node01 Ready <none> 16h v1.15.2
k8s-node02 Ready <none> 11s v1.15.2
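
If a node stays NotReady, a useful first check is whether its flannel and kube-proxy pods are running; a sketch:

[root@k8s-master ~]# kubectl get pods -n kube-system -o wide | grep -E 'flannel|proxy'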

2. Create a Pod to verify the cluster

[root@k8s-master ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
[root@k8s-master ~]# kubectl get pod,svc
NAME READY STATUS RESTARTS AGE
pod/nginx-554b9c67f9-lw4jw 1/1 Running 0 2m54s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 139m
service/nginx NodePort 10.110.217.32 <none> 80:30282/TCP 2m42s
[root@k8s-master ~]# curl http://192.168.56.110:30282/
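
A NodePort service listens on the same port on every node, so the nginx welcome page should also be reachable through either worker (port 30282 taken from the output above):

[root@k8s-master ~]# curl -I http://192.168.56.120:30282/
[root@k8s-master ~]# curl -I http://192.168.56.130:30282/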
