Installing and Deploying a Kubernetes Cluster
This article follows https://www.cnblogs.com/zhenyuyaodidiao/p/6500830.html, with minor adjustments based on hands-on experience.
1. Environment and preparation
1.1 Host operating system
The hosts run 64-bit CentOS 7.6, as shown below.
[root@k8s-master ~]# uname -a
Linux k8s-master 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
[root@k8s-master ~]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
1.2 Host information
Three machines are used to deploy the k8s runtime environment:
| Role                   | Hostname    | IP            |
| Master, etcd, registry | k8s-master  | 192.168.44.60 |
| Node1                  | k8s-slave01 | 192.168.44.61 |
| Node2                  | k8s-slave02 | 192.168.44.62 |
Passwordless ssh login has also been set up among the three machines; a sketch is shown below.
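A minimal sketch of how that can be done, assuming root logins and the IPs from the table above (adjust to your environment). ssh-keygen accepts the defaults; ssh-copy-id installs the public key on each slave, after which the final ssh command should print k8s-slave01 without prompting for a password:
[root@k8s-master ~]# ssh-keygen -t rsa
[root@k8s-master ~]# ssh-copy-id root@192.168.44.61
[root@k8s-master ~]# ssh-copy-id root@192.168.44.62
[root@k8s-master ~]# ssh root@192.168.44.61 hostname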
Also add the following /etc/hosts entries (on all three machines):
[root@k8s-master ~]# cat /etc/hosts
127.0.0.1     localhost localhost.localdomain localhost4 localhost4.localdomain4
::1           localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.44.60 etcd
192.168.44.60 registry
192.168.44.60 k8s-master
192.168.44.61 k8s-slave01
192.168.44.62 k8s-slave02
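A quick sanity check that the new names resolve from every machine:
[root@k8s-master ~]# ping -c 1 etcd
[root@k8s-master ~]# ping -c 1 k8s-slave01
[root@k8s-master ~]# ping -c 1 k8s-slave02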
1.3 Disable the firewall on all three machines
systemctl disable firewalld.service
systemctl stop firewalld.service
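To confirm firewalld is really off on each machine:
[root@k8s-master ~]# systemctl is-active firewalld.service
inactive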
2. Deploying etcd
k8s depends on etcd at runtime, so etcd must be deployed first. Here it is installed via yum:
[root@k8s-master ~]# yum install etcd -y
The yum-installed etcd keeps its configuration in /etc/etcd/etcd.conf. Edit the file and change ETCD_NAME, ETCD_LISTEN_CLIENT_URLS, and ETCD_ADVERTISE_CLIENT_URLS as shown below:
[root@k8s-master ~]# vim /etc/etcd/etcd.conf
# [member]
ETCD_NAME=master
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
Start etcd and verify it is healthy:
[root@k8s-master ~]# systemctl start etcd
[root@k8s-master ~]# etcdctl set testdir/testkey0 0
0
[root@k8s-master ~]# etcdctl get testdir/testkey0
0
[root@k8s-master ~]# etcdctl -C http://etcd:4001 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://etcd:2379
cluster is healthy
[root@k8s-master ~]# etcdctl -C http://etcd:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://etcd:2379
cluster is healthy
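The services below are all enabled at boot; etcd deserves the same treatment, and the test key can be cleaned up (both optional):
[root@k8s-master ~]# systemctl enable etcd
[root@k8s-master ~]# etcdctl rm testdir/testkey0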
3. Deploying the master
3.1 Install Docker
[root@k8s-master ~]# yum install docker
Edit the Docker configuration so that it can pull images from the local registry.
[root@k8s-master ~]# vim /etc/sysconfig/docker
# /etc/sysconfig/docker
# Modify these options if you want to change the way the docker daemon runs
OPTIONS='--selinux-enabled=false --log-driver=journald --signature-verification=false --insecure-registry registry:5000'
if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi
Note: the --insecure-registry registry:5000 option points Docker at a local image registry over plain HTTP. Append it to the existing OPTIONS line rather than adding a second OPTIONS= assignment, which would silently override the first. Building and using the local registry itself is covered in the article "Docker私有仓库的搭建及使用" (Building and Using a Private Docker Registry).
Enable Docker at boot and start the service:
[root@k8s-master ~]# chkconfig docker on
[root@k8s-master ~]# service docker start
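Once the daemon is up, the setting can be confirmed in the daemon info (assuming the CentOS-packaged Docker 1.12+, which lists insecure registries there); registry:5000 should appear in the output:
[root@k8s-master ~]# docker info | grep -A 2 'Insecure Registries'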
3.2 Install kubernetes
[root@k8s-master ~]# yum install kubernetes
3.3 Configure and start kubernetes
The following components must run on the kubernetes master:
Kubernetes API Server
Kubernetes Controller Manager
Kubernetes Scheduler
Accordingly, change the settings shown below in these configuration files:
3.3.1 /etc/kubernetes/apiserver
[root@k8s-master ~]# vim /etc/kubernetes/apiserver

###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""
3.3.2 /etc/kubernetes/config
[root@k8s-master ~]# vim /etc/kubernetes/config

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://k8s-master:8080"
Enable the services at boot and start them:
[root@k8s-master ~]# systemctl enable kube-apiserver.service
[root@k8s-master ~]# systemctl start kube-apiserver.service
[root@k8s-master ~]# systemctl enable kube-controller-manager.service
[root@k8s-master ~]# systemctl start kube-controller-manager.service
[root@k8s-master ~]# systemctl enable kube-scheduler.service
[root@k8s-master ~]# systemctl start kube-scheduler.service
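With all three services running, the API server should answer on the insecure port configured above; /healthz returns a plain "ok" and /version returns a JSON build-info object:
[root@k8s-master ~]# curl http://k8s-master:8080/healthz
ok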
4. Deploying the nodes (note: these steps must be performed on both slave machines)
4.1 Install docker
See 3.1.
4.2 Install kubernetes
Install it via yum on each of the two slave machines:
yum install kubernetes
4.3 Configure and start kubernetes
The following components must run on each kubernetes node:
Kubelet
Kubernetes Proxy
Accordingly, change the settings shown below in these configuration files:
4.3.1 /etc/kubernetes/config
[root@K8s-slave01 ~]# vim /etc/kubernetes/config

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://k8s-master:8080"
4.3.2 /etc/kubernetes/kubelet
[root@K8s-slave01 ~]# vim /etc/kubernetes/kubelet

###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname; set it to this node's own name
KUBELET_HOSTNAME="--hostname-override=k8s-slave01"

# location of the api-server; set it to your master's hostname
KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"

# pod infrastructure container; remember this setting, it is discussed later
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""
Enable the services at boot and start them (on each node):
[root@K8s-slave01 ~]# systemctl enable kubelet.service
[root@K8s-slave01 ~]# systemctl start kubelet.service
[root@K8s-slave01 ~]# systemctl enable kube-proxy.service
[root@K8s-slave01 ~]# systemctl start kube-proxy.service
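On each node, confirm that both services came up:
[root@K8s-slave01 ~]# systemctl is-active kubelet.service kube-proxy.service
active
active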
4.4 Check status
On the master, list the cluster nodes and their status:
[root@k8s-master ~]# kubectl -s http://k8s-master:8080 get node
NAME STATUS AGE
k8s-slave01 Ready 39s
k8s-slave02 Ready 45s
[root@k8s-master ~]# kubectl get nodes
NAME STATUS AGE
k8s-slave01 Ready 50s
k8s-slave02 Ready 56s
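For more detail on an individual node (conditions, capacity, addresses), describe it from the master:
[root@k8s-master ~]# kubectl -s http://k8s-master:8080 describe node k8s-slave01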
At this point a kubernetes cluster is up, but it does not yet work properly; continue with the steps below.
5. Creating the overlay network: Flannel
5.1 Install Flannel
Run the following command on the master and on every node:
[root@k8s-master ~]# yum install flannel
5.2 Configure Flannel
On the master and every node, edit /etc/sysconfig/flanneld and change the FLANNEL_ETCD_ENDPOINTS line:
[root@k8s-master ~]# vi /etc/sysconfig/flanneld

# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
5.3 Configure the flannel key in etcd
Flannel stores its network configuration in etcd, which keeps the configuration consistent across all flannel instances, so the following key must be set there. (The key '/atomic.io/network/config' corresponds to the FLANNEL_ETCD_PREFIX setting in /etc/sysconfig/flanneld above, and the address range in the value should match the docker0 address shown by ifconfig; if it is wrong, flanneld will fail to start.)
For reference:
[root@k8s-slave01 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1472
inet 172.17.78.1 netmask 255.255.255.0 broadcast 0.0.0.0
inet6 fe80::42:d9ff:fe56:982c prefixlen 64 scopeid 0x20<link>
.....
Then run:
[root@k8s-master ~]# etcdctl mk /atomic.io/network/config '{ "Network": "172.17.0.1/16" }'
{ "Network": "172.17.0.1/16" }
5.4 Start
After starting Flannel, docker and the kubernetes services must be restarted in turn.
On the master:
systemctl enable flanneld.service
systemctl start flanneld.service
service docker restart
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
systemctl restart kube-scheduler.service
On each node:
systemctl enable flanneld.service
systemctl start flanneld.service
service docker restart
systemctl restart kubelet.service
systemctl restart kube-proxy.service
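To confirm the overlay is up, check that flanneld wrote its allocated subnet and that docker0 moved into the flannel range after the restart; a sketch, assuming the default udp backend and file locations of this flannel package:
[root@K8s-slave01 ~]# cat /run/flannel/subnet.env
[root@K8s-slave01 ~]# ip -4 addr show flannel0
[root@K8s-slave01 ~]# ip -4 addr show docker0
flannel0 and docker0 should both sit inside the 172.17.0.0/16 range configured in etcd above.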
The cluster is now essentially complete. Most organizations also want a web UI, so a follow-up article covers how to set one up on top of this cluster.