Environment:
master, etcd   172.16.1.5
node1          172.16.1.6
node2          172.16.1.7
Prerequisites (a quick prep sketch follows this list):
1. Hosts can resolve each other by hostname via /etc/hosts
2. Time is synchronized across all nodes
3. firewalld and iptables.service are stopped and disabled
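A minimal prep sketch covering prerequisites 2 and 3, assuming CentOS 7 with chronyd available; run on all three nodes:

systemctl stop firewalld
systemctl disable firewalld    # prerequisite 3
systemctl start chronyd
systemctl enable chronyd       # prerequisite 2
chronyc sources                # verify that time sync is working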
Installation steps (manual deployment):
1. etcd cluster, on the master node only
2. flannel, on every node in the cluster
3. k8s master node:
   apiserver, scheduler, controller-manager
4. configure the k8s worker nodes:
   set up docker, kube-proxy and kubelet first

kubeadm-based deployment:
1. master and nodes: install kubelet, docker, kubeadm
2. master: initialize the master node with kubeadm init
3. nodes: join the cluster with kubeadm join
Design document for the init process:
https://github.com/kubernetes/kubeadm/blob/master/docs/design/design_v1.10.md

[root@node1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.1.5 master.xiaolizi.com master
172.16.1.6 node1.xiaolizi.com node1
172.16.1.7 node2.xiaolizi.com node2
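All nodes need the same three entries. A hypothetical one-liner to push the file from the master to the nodes, assuming root SSH access:

for h in 172.16.1.6 172.16.1.7; do scp /etc/hosts root@$h:/etc/hosts; done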

Kubernetes yum repo: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
GPG key: https://mirrors.aliyun.com/kubernetes/apt/doc/yum-key.gpg
Docker repo: https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

cd /etc/yum.repos.d/
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0   # put SELinux into permissive mode for the install
yum repolist

Install docker, kubeadm, kubectl and kubelet:

yum install docker-ce kubeadm kubectl kubelet -y
systemctl enable kubelet
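yum install without a version pulls whatever the repo currently carries; the walkthrough below assumes v1.15.1, so pinning the packages is safer. A sketch, assuming the aliyun repo still serves these versions:

yum install -y docker-ce kubelet-1.15.1 kubeadm-1.15.1 kubectl-1.15.1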

Many of the images k8s needs cannot be pulled from inside China. One workaround is to route the docker daemon through a proxy: before starting docker, define Environment variables in the [Service] section of its unit file so the daemon fetches the k8s images via that proxy. Once the images are cached locally, comment the variables out again and rely on a domestic registry mirror for non-k8s images, re-enabling the proxy only when it is needed again.

# Set the proxy address below according to your own environment
vim /usr/lib/systemd/system/docker.service
[Service]
# The images live on registries abroad; the address and port belong to your proxy
# service. An alternative is to pre-pull the images and push them to a local registry.
Environment="HTTPS_PROXY=http://192.168.2.208:10080"
Environment="HTTP_PROXY=http://192.168.2.208:10080"
Environment="NO_PROXY=127.0.0.0/8,192.168.2.0/25"
# Save and quit, then reload systemd:
systemctl daemon-reload
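Edits to the packaged unit file are lost when docker-ce is upgraded. A systemd drop-in achieves the same effect and survives package updates; a minimal sketch, reusing the same proxy address:

mkdir -p /etc/systemd/system/docker.service.d
cat <<EOF > /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTPS_PROXY=http://192.168.2.208:10080"
Environment="HTTP_PROXY=http://192.168.2.208:10080"
Environment="NO_PROXY=127.0.0.0/8,192.168.2.0/25"
EOF
systemctl daemon-reload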
# Make sure the following two kernel parameters are 1 (the default):
cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
cat /proc/sys/net/bridge/bridge-nf-call-iptables
# If either value is not 1, set it persistently:
vim /usr/lib/sysctl.d/00-system.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
sysctl --system
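To flip the values immediately without waiting for a reboot (non-persistent; the file edit above makes it stick):

sysctl -w net.bridge.bridge-nf-call-iptables=1
sysctl -w net.bridge.bridge-nf-call-ip6tables=1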
# Start docker-ce
systemctl start docker
# Enable docker at boot
systemctl enable docker.service
# Before starting kubelet, check which files its package installed
[root@master ~]# rpm -ql kubelet
/etc/kubernetes/manifests                 # manifest directory
/etc/sysconfig/kubelet                    # configuration file
/usr/bin/kubelet                          # main binary
/usr/lib/systemd/system/kubelet.service   # unit file
# Early versions refuse to run with swap enabled; the override goes in the config file:
[root@master ~]# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
# Start kubelet (it will not come up successfully yet; the master is not initialized)
systemctl start kubelet
systemctl stop kubelet
systemctl enable kubelet

On the master node, run kubeadm init to initialize. The command takes many options:
--apiserver-bind-port          # port the apiserver listens on, default 6443
--apiserver-advertise-address  # address the apiserver listens on, default 0.0.0.0
--cert-dir                     # directory to load certificates from, default /etc/kubernetes/pki
--config                       # path to a kubeadm configuration file
--ignore-preflight-errors      # preflight errors to ignore, e.g. 'IsPrivilegedUser,Swap'
--kubernetes-version           # which k8s version to deploy
--pod-network-cidr             # CIDR of the pod network
--service-cidr                 # CIDR of the service network, default 10.96.0.0/12

kubeadm init \
--kubernetes-version=v1.15.1 \
--ignore-preflight-errors=Swap \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12

[root@master ~]# docker image ls
REPOSITORY                           TAG       IMAGE ID       CREATED      SIZE
k8s.gcr.io/kube-apiserver            v1.15.1   68c3eb07bfc3   weeks ago    207MB
k8s.gcr.io/kube-proxy                v1.15.1   89a062da739d   weeks ago    82.4MB
k8s.gcr.io/kube-scheduler            v1.15.1   b0b3c4c404da   weeks ago    81.1MB
k8s.gcr.io/kube-controller-manager   v1.15.1   d75082f1d121   weeks ago    159MB
k8s.gcr.io/coredns                   1.3.1     eb516548c180   months ago   40.3MB
k8s.gcr.io/etcd                      3.3.10    2c4adeb21b4f   months ago   258MB
k8s.gcr.io/pause                     3.1       da86e6ba6ca1   months ago   742kB
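The init output below mentions that the images can be fetched ahead of time; a sketch using kubeadm's own image subcommands:

kubeadm config images list --kubernetes-version v1.15.1   # print the images init would need
kubeadm config images pull --kubernetes-version v1.15.1   # pre-pull them through the proxy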

Output of the master node initialization:

[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03. Latest validated version: 18.09
[WARNING Hostname]: hostname "master" could not be reached
[WARNING Hostname]: hostname "master": lookup master on 223.5.5.5:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [10.0.0.5 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [10.0.0.5 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.5]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.503552 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: xfmp2o.rg9vt1jojg8rcb01
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.5:6443 --token xfmp2o.rg9vt1jojg8rcb01 \
--discovery-token-ca-cert-hash sha256:8ce2a857cb3383cb3bf509335de43c78e8d569e091caadd74865e2179d625bbc

Run on the master:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl get --help   # view help for kubectl get
kubectl get cs       # component status (componentstatus)
kubectl get nodes    # node information

Run on each node:

kubeadm join 10.0.0.5:6443 --token xfmp2o.rg9vt1jojg8rcb01 \
--discovery-token-ca-cert-hash sha256:8ce2a857cb3383cb3bf509335de43c78e8d569e091caadd74865e2179d625bbc \
--ignore-preflight-errors=Swap

# Once the images below show up, the join is complete
[root@node1 ~]# docker image ls
REPOSITORY               TAG             IMAGE ID       CREATED      SIZE
k8s.gcr.io/kube-proxy    v1.15.1         89a062da739d   weeks ago    82.4MB
quay.io/coreos/flannel   v0.11.0-amd64   ff281650a721   months ago   52.6MB
k8s.gcr.io/pause         3.1             da86e6ba6ca1   months ago   742kB
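Back on the master, confirm that the nodes registered and eventually turn Ready (the node names assume the /etc/hosts entries from the top of this guide):

kubectl get nodes                         # node1/node2 show NotReady until flannel is up
kubectl get pods -n kube-system -o wide   # one kube-proxy and one flannel pod per node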

Install the flannel network add-on
Project page:
https://github.com/coreos/flannel

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

[root@master ~]# docker image ls
# Once the image below has been pulled, the download is done
quay.io/coreos/flannel   v0.11.0-amd64   ff281650a721   months ago   52.6MB

[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   66m   v1.15.1

[root@master ~]# kubectl get pods -n kube-system   # pods in the kube-system namespace
NAME                             READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-cg2rw         1/1     Running   0          66m
coredns-5c98db65d4-qqd2v         1/1     Running   0          66m
etcd-master                      1/1     Running   0          65m
kube-apiserver-master            1/1     Running   0          65m
kube-controller-manager-master   1/1     Running   0          66m
kube-flannel-ds-amd64-wszr5      1/1     Running   0          2m37s
kube-proxy-xw9gm                 1/1     Running   0          66m
kube-scheduler-master            1/1     Running   0          65m

[root@master ~]# kubectl get ns   # list namespaces
NAME              STATUS   AGE
default           Active   72m
kube-node-lease   Active   73m
kube-public       Active   73m
kube-system       Active   73m
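The bootstrap token from kubeadm init expires after 24 hours by default. If you add nodes later, print a fresh join command on the master:

kubeadm token create --print-join-command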
