Install Kubernetes with kubeadm, using kube-router in place of kube-proxy + Calico for networking.

In other words: kube-router provides the service proxy, firewall, and pod networking.

Version: Kubernetes v1.20.0

Installing the Kubernetes cluster

Upgrading the Linux kernel, disabling swap, disabling the firewall, tuning kernel parameters, and so on are left for you to do yourself.
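For reference, a minimal sketch of that preparation on a CentOS 7 host (the distribution is an assumption; adapt to your environment):

# Disable swap (the kubelet refuses to start with swap on)
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab

# Stop the firewall and set SELinux to permissive
systemctl disable --now firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Kernel parameters so bridged pod traffic is visible to iptables/ipvs
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system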

Main installation commands

yum install  kubectl-1.20.0-0.x86_64 kubeadm-1.20.0-0.x86_64 kubelet-1.20.0-0.x86_64
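The packages above come from a Kubernetes yum repository. A commonly used mirror definition (an assumption here; it matches the Aliyun image repository used in the init command below), followed by enabling the kubelet:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

systemctl enable --now kubelet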

kubeadm init --kubernetes-version=1.20.0 --apiserver-advertise-address=192.168.100.80 --image-repository registry.aliyuncs.com/google_containers  --pod-network-cidr=10.244.0.0/16

[root@k8s-master ~]# kubeadm init --kubernetes-version=1.20.0 --apiserver-advertise-address=192.168.100.80 --image-repository registry.aliyuncs.com/google_containers  --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Hostname]: hostname "k8s-master" could not be reached
[WARNING Hostname]: hostname "k8s-master": lookup k8s-master on 192.168.100.96:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.100.80]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.100.80 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.100.80 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 13.002683 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 68mv1r.5ljn91n2yms71dth
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.100.80:6443 --token 68mv1r.5ljn91n2yms71dth \
    --discovery-token-ca-cert-hash sha256:2a29deaa942a7eacb055f608caa686d9c59cb34abb0365b32c22d959b2327dc8
[root@k8s-master ~]# rm -rf .kube/
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master ~]# kubectl get node
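Right after init the master typically reports NotReady, because no pod network (CNI plugin) is installed yet. The output will look roughly like this (illustrative, not captured from the session):

NAME         STATUS     ROLES                  AGE   VERSION
k8s-master   NotReady   control-plane,master   1m    v1.20.0

It switches to Ready once kube-router is deployed below.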


That completes the cluster installation.

Installing the network

Source manifest: https://github.com/cloudnativelabs/kube-router/blob/master/daemonset/kubeadm-kuberouter.yaml

kubectl apply -f kubeadm-kuberouter.yaml
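To check that the daemonset came up (assuming the manifest's default k8s-app=kube-router label) and that the node has turned Ready:

kubectl -n kube-system get pods -l k8s-app=kube-router

kubectl get node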

Deploy a test service

kubectl run nginx --image=nginx --expose --port=80 --image-pull-policy='IfNotPresent'
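A quick way to confirm the service resolves and answers, a sketch using a throwaway busybox pod (curl-test is just an arbitrary name):

kubectl run curl-test --rm -it --image=busybox --image-pull-policy=IfNotPresent -- wget -qO- http://nginx

You should get the nginx welcome page HTML back.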

At this point kube-router only provides pod networking; the service proxy and firewall functions are still handled by kube-proxy. To have kube-router take over all three roles, switch to the all-features manifest.

kube-router providing service proxy, firewall and pod networking

Reference: https://github.com/cloudnativelabs/kube-router/blob/master/docs/kubeadm.md

kubectl delete -f kubeadm-kuberouter.yaml
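Note: per the kubeadm.md reference above, kubeadm-kuberouter-all-features.yaml ships with %APISERVER% and %CLUSTERCIDR% placeholders that must be filled in before applying. Something along these lines, using this cluster's values (check the current doc for the exact command):

CLUSTERCIDR=10.244.0.0/16 \
APISERVER=https://192.168.100.80:6443 \
sh -c 'curl https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter-all-features.yaml -o - | \
sed -e "s;%APISERVER%;$APISERVER;g" -e "s;%CLUSTERCIDR%;$CLUSTERCIDR;g"' > kubeadm-kuberouter-all-features.yaml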

kubectl apply -f kubeadm-kuberouter-all-features.yaml

kubectl -n kube-system delete ds kube-proxy

Then clean up the kube-proxy rules, running this on every node:

docker run --privileged -v /lib/modules:/lib/modules --net=host registry.aliyuncs.com/google_containers/kube-proxy:v1.20.0 kube-proxy --cleanup
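To confirm kube-proxy is gone:

kubectl -n kube-system get pods | grep kube-proxy   # should return nothing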

In this mode kube-router implements services with IPVS, using the round-robin (rr) scheduler.
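You can inspect what kube-router programmed with the ipvsadm tool; each service cluster IP should show up as a virtual server using the rr scheduler, balanced across its pod endpoints:

ipvsadm -Ln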

Finally, reboot the servers.

Test the service again (the busybox check above can be reused).

Tearing down the cluster

kubectl drain k8s-master --delete-local-data --force --ignore-daemonsets

kubectl drain k8s-node1 --delete-local-data --force --ignore-daemonsets

kubectl drain k8s-node2 --delete-local-data --force --ignore-daemonsets

kubectl delete node k8s-master

Then, on every node, remove the CNI config and reset:

rm -rf /etc/cni/net.d/10-kuberouter.conflist

kubeadm reset

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

ipvsadm -C

reboot
