Kubernetes 1.5.1 Deployment
> Kubernetes 1.5.1 configuration notes
# 1 Initialize the environment
## 1.1 Environment
| Node | IP |
|--------|-------------|
|node-1|10.6.0.140|
|node-2|10.6.0.187|
|node-3|10.6.0.188|
## 1.2 Set the hostname
```
hostnamectl --static set-hostname <hostname>
```
| IP | hostname |
|-------------|-------------|
|10.6.0.140|k8s-node-1|
|10.6.0.187|k8s-node-2|
|10.6.0.188|k8s-node-3|
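On each machine, substitute the hostname from the table above; for example:
```
# run the matching line on the corresponding host
hostnamectl --static set-hostname k8s-node-1   # on 10.6.0.140
hostnamectl --static set-hostname k8s-node-2   # on 10.6.0.187
hostnamectl --static set-hostname k8s-node-3   # on 10.6.0.188
```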
## 1.3 Configure hosts
```
vi /etc/hosts
```
| IP | hostname |
|-------------|-------------|
|10.6.0.140|k8s-node-1|
|10.6.0.187|k8s-node-2|
|10.6.0.188|k8s-node-3|
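Appended to /etc/hosts on every node, the entries would look like:
```
10.6.0.140 k8s-node-1
10.6.0.187 k8s-node-2
10.6.0.188 k8s-node-3
```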
# 2.0 Deploy the Kubernetes master
## 2.1 Add the yum repo
```
# using a friend's yum mirror
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[mritdrepo]
name=Mritd Repository
baseurl=https://yum.mritd.me/centos/7/x86_64
enabled=1
gpgcheck=1
gpgkey=https://cdn.mritd.me/keys/rpm.public.key
EOF

yum makecache
yum install -y socat kubelet kubeadm kubectl kubernetes-cni
```
## 2.2 Install Docker
```
wget -qO- https://get.docker.com/ | sh
systemctl enable docker
systemctl start docker
```
## 2.3 Install the etcd cluster
```
yum -y install etcd

# create the etcd data directory
mkdir -p /opt/etcd/data
chown -R etcd:etcd /opt/etcd/
```
Edit /etc/etcd/etcd.conf and change the following parameters (values shown for node-1):
```
ETCD_NAME=etcd1
ETCD_DATA_DIR="/opt/etcd/data/etcd1.etcd"
ETCD_LISTEN_PEER_URLS="http://10.6.0.140:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.6.0.140:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.6.0.140:2380"
ETCD_INITIAL_CLUSTER="etcd1=http://10.6.0.140:2380,etcd2=http://10.6.0.187:2380,etcd3=http://10.6.0.188:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://10.6.0.140:2379"
```
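On the other two nodes only the member name and IPs change; a sketch of the node-2 (10.6.0.187) values, derived from the ETCD_INITIAL_CLUSTER line above:
```
ETCD_NAME=etcd2
ETCD_DATA_DIR="/opt/etcd/data/etcd2.etcd"
ETCD_LISTEN_PEER_URLS="http://10.6.0.187:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.6.0.187:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.6.0.187:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.6.0.187:2379"
# ETCD_INITIAL_CLUSTER, _STATE and _TOKEN stay identical on every node
```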
Patch the etcd unit file so the extra cluster flags are passed on startup:
```
sed -i 's/\\\"${ETCD_LISTEN_CLIENT_URLS}\\\"/\\\"${ETCD_LISTEN_CLIENT_URLS}\\\" --listen-client-urls=\\\"${ETCD_LISTEN_CLIENT_URLS}\\\" --advertise-client-urls=\\\"${ETCD_ADVERTISE_CLIENT_URLS}\\\" --initial-cluster-token=\\\"${ETCD_INITIAL_CLUSTER_TOKEN}\\\" --initial-cluster=\\\"${ETCD_INITIAL_CLUSTER}\\\" --initial-cluster-state=\\\"${ETCD_INITIAL_CLUSTER_STATE}\\\"/g' /usr/lib/systemd/system/etcd.service
```
```
# start etcd
systemctl enable etcd
systemctl start etcd
systemctl status etcd

# check cluster health
etcdctl cluster-health
```
## 2.4 Download the images
```
images=(kube-proxy-amd64:v1.5.1 kube-discovery-amd64:1.0 kubedns-amd64:1.9 kube-scheduler-amd64:v1.5.1 kube-controller-manager-amd64:v1.5.1 kube-apiserver-amd64:v1.5.1 etcd-amd64:3.0.14-kubeadm kube-dnsmasq-amd64:1.4 exechealthz-amd64:1.2 pause-amd64:3.0 kubernetes-dashboard-amd64:v1.5.0 dnsmasq-metrics-amd64:1.0)

for imageName in ${images[@]} ; do
  docker pull jicki/$imageName
  docker tag jicki/$imageName gcr.io/google_containers/$imageName
  docker rmi jicki/$imageName
done
```
If the pulls are slow, add `--registry-mirror="http://b438f72b.m.daocloud.io"` to the Docker startup file to pull through a mirror.
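With systemd this can be done through a drop-in file — a sketch (the drop-in path and the dockerd location are assumptions; adjust to your installation):
```
# override the Docker unit with a registry mirror (paths are assumptions)
mkdir -p /etc/systemd/system/docker.service.d
cat <<EOF > /etc/systemd/system/docker.service.d/registry-mirror.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --registry-mirror="http://b438f72b.m.daocloud.io"
EOF
systemctl daemon-reload
systemctl restart docker
```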
## 2.5 Start Kubernetes
```
systemctl enable kubelet
systemctl start kubelet
```
## 2.6 Create the cluster
```
kubeadm init --api-advertise-addresses=10.6.0.140 \
--external-etcd-endpoints=http://10.6.0.140:2379,http://10.6.0.187:2379,http://10.6.0.188:2379 \
--use-kubernetes-version v1.5.1 \
--pod-network-cidr 10.244.0.0/16
```
```
Flag --external-etcd-endpoints has been deprecated, this flag will be removed when componentconfig exists
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[init] Using Kubernetes version: v1.5.1
[tokens] Generated token: "c53ef2.d257d49589d634f0"
[certificates] Generated Certificate Authority key and certificate.
[certificates] Generated API Server key and certificate
[certificates] Generated Service Account signing keys
[certificates] Created keys and certificates in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 15.299235 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 1.002937 seconds
[apiclient] Creating a test deployment
[apiclient] Test deployment succeeded
[token-discovery] Created the kube-discovery deployment, waiting for it to become ready
[token-discovery] kube-discovery is ready after 2.502881 seconds
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully! You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node:

kubeadm join --token=c53ef2.d257d49589d634f0 10.6.0.140
```
## 2.7 Record the token
```
You can now join any number of machines by running the following on each node:

kubeadm join --token=c53ef2.d257d49589d634f0 10.6.0.140
```
## 2.8 Configure the network
```
# Pull the image beforehand, otherwise the in-cluster pull tends to fail
docker pull quay.io/coreos/flannel-git:v0.6.1-28-g5dde68d-amd64

# or pull it from the mirror and retag
docker pull jicki/flannel-git:v0.6.1-28-g5dde68d-amd64
docker tag jicki/flannel-git:v0.6.1-28-g5dde68d-amd64 quay.io/coreos/flannel-git:v0.6.1-28-g5dde68d-amd64
docker rmi jicki/flannel-git:v0.6.1-28-g5dde68d-amd64
```
```
# http://kubernetes.io/docs/admin/addons/ lists several network add-ons; pick one.
# Flannel is used here; note that choosing Flannel requires running
# kubeadm init with --pod-network-cidr, as done above.
kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```
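To verify the overlay network came up (kube-flannel runs as a DaemonSet, as the pod list in section 3.6 shows):
```
kubectl --namespace kube-system get daemonset kube-flannel-ds
kubectl --namespace kube-system get pods -o wide | grep flannel
```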
## 2.9 Check the kubelet status
```
systemctl status kubelet
```
# 3.0 Deploy the Kubernetes nodes
## 3.1 Install Docker
```
wget -qO- https://get.docker.com/ | sh
systemctl enable docker
systemctl start docker
```
## 3.2 Download the images
```
images=(kube-proxy-amd64:v1.5.1 kube-discovery-amd64:1.0 kubedns-amd64:1.9 kube-scheduler-amd64:v1.5.1 kube-controller-manager-amd64:v1.5.1 kube-apiserver-amd64:v1.5.1 etcd-amd64:3.0.14-kubeadm kube-dnsmasq-amd64:1.4 exechealthz-amd64:1.2 pause-amd64:3.0 kubernetes-dashboard-amd64:v1.5.0 dnsmasq-metrics-amd64:1.0)
for imageName in ${images[@]} ; do
  docker pull jicki/$imageName
  docker tag jicki/$imageName gcr.io/google_containers/$imageName
  docker rmi jicki/$imageName
done
```
## 3.3 Start Kubernetes
```
systemctl enable kubelet
systemctl start kubelet
```
## 3.4 Join the cluster
```
kubeadm join --token=c53ef2.d257d49589d634f0 10.6.0.140
```
```
Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.
```
## 3.5 Check the cluster status
```
[root@k8s-node-1 ~]# kubectl get node
NAME         STATUS         AGE
k8s-node-1   Ready,master   27m
k8s-node-2   Ready          6s
k8s-node-3   Ready          9s
```
## 3.6 Check the service status
```
[root@k8s-node-1 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE
kube-system   dummy--qrp68                         1/1       Running   0          1h
kube-system   kube-apiserver-k8s-node-1            1/1       Running   0          1h
kube-system   kube-controller-manager-k8s-node-1   1/1       Running   0          1h
kube-system   kube-discovery--g2lpc                1/1       Running   0          1h
kube-system   kube-dns--xbhv4                      4/4       Running   0          1h
kube-system   kube-flannel-ds-39g5n                2/2       Running   0          1h
kube-system   kube-flannel-ds-dwc82                2/2       Running   0          1h
kube-system   kube-flannel-ds-qpkm0                2/2       Running   0          1h
kube-system   kube-proxy-16c50                     1/1       Running   0          1h
kube-system   kube-proxy-5rkc8                     1/1       Running   0          1h
kube-system   kube-proxy-xwrq0                     1/1       Running   0          1h
kube-system   kube-scheduler-k8s-node-1            1/1       Running   0          1h
```
# 4.0 Configure Kubernetes
## 4.1 Control the cluster from another host
```
# Back up /etc/kubernetes/admin.conf from the master node and copy it
# to another machine; kubectl on that machine can then control the cluster:
kubectl --kubeconfig ./admin.conf get nodes
```
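A sketch of the full workflow from a workstation (the host and paths are illustrative):
```
scp root@10.6.0.140:/etc/kubernetes/admin.conf ./admin.conf
export KUBECONFIG=$PWD/admin.conf   # avoids passing --kubeconfig on every call
kubectl get nodes
```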
## 4.2 Configure the dashboard
```
# Download the yaml file; importing it as-is would pull the image from the official registry
curl -O https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml

# edit the yaml file
vi kubernetes-dashboard.yaml
# change
#   image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.4.0
# to
#   image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.0
# and change
#   imagePullPolicy: Always
# to
#   imagePullPolicy: IfNotPresent
```
```
kubectl create -f ./kubernetes-dashboard.yaml
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
```
```
# Look up the NodePort, i.e. the externally reachable port
kubectl describe svc kubernetes-dashboard --namespace=kube-system
NodePort:               <unset> 31736/TCP
```
The dashboard is then reachable at http://10.6.0.140:31736.
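A one-liner to fetch just the port (jsonpath output; the field path assumes a standard Service spec):
```
kubectl get svc kubernetes-dashboard --namespace=kube-system \
  -o jsonpath='{.spec.ports[0].nodePort}'
```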
# 5.0 Deploying applications on Kubernetes
## 5.1 Deploy an nginx RC
> Write an nginx yaml
```
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
spec:
  replicas: 2
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
```
```
[root@k8s-node-1 ~]# kubectl get rc
NAME       DESIRED   CURRENT   READY     AGE
nginx-rc   2         2         2         2m

[root@k8s-node-1 ~]# kubectl get pod -o wide
NAME             READY     STATUS    RESTARTS   AGE       IP          NODE
nginx-rc-2s8k9   1/1       Running   0          10m       10.32.0.3   k8s-node-2
nginx-rc-s16cm   1/1       Running   0          10m       10.40.0.1   k8s-node-3
```
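Scaling the RC afterwards is a single command, e.g.:
```
kubectl scale rc nginx-rc --replicas=3
kubectl get pod -o wide
```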
> Write an nginx service so that containers inside the cluster can reach it (ClusterIP)
```
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    name: nginx
```
```
[root@k8s-node-1 ~]# kubectl create -f nginx-svc.yaml
service "nginx-svc" created

[root@k8s-node-1 ~]# kubectl get svc -o wide
NAME         CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE       SELECTOR
kubernetes   10.0.0.1      <none>        443/TCP   2d        <none>
nginx-svc    10.6.164.79   <none>        80/TCP    29s       name=nginx
```
> Write a curl pod
```
apiVersion: v1
kind: Pod
metadata:
  name: curl
spec:
  containers:
  - name: curl
    image: radial/busyboxplus:curl
    command:
    - sh
    - -c
    - while true; do sleep 1; done
```
```
# Test pod-to-service communication inside the cluster
[root@k8s-node-1 ~]# kubectl exec curl curl nginx-svc
```
```
# From any node, the service is reachable directly by its cluster IP
[root@k8s-node-1 ~]# curl 10.6.164.79
```
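To confirm that kube-dns resolves the service name (assuming nslookup is available in the busyboxplus image):
```
kubectl exec curl -- nslookup nginx-svc
```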
> Write an nginx service that is reachable from outside the cluster (NodePort)
```
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc-node
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: NodePort
  selector:
    name: nginx
```
```
[root@k8s-node-1 ~]# kubectl get svc -o wide
NAME             CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE       SELECTOR
kubernetes       10.0.0.1       <none>        443/TCP   2d        <none>
nginx-svc        10.6.164.79    <none>        80/TCP    29m       name=nginx
nginx-svc-node   10.12.95.227   <nodes>       80/TCP    17s       name=nginx

[root@k8s-node-1 ~]# kubectl describe svc nginx-svc-node | grep NodePort
Type:                   NodePort
NodePort:               <unset> 32669/TCP
```
```
# Access via any node's physical IP plus the NodePort
http://10.6.0.140:32669
http://10.6.0.187:32669
http://10.6.0.188:32669
```
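A quick check that every node answers on the NodePort:
```
for ip in 10.6.0.140 10.6.0.187 10.6.0.188; do
  curl -s -o /dev/null -w "$ip -> %{http_code}\n" http://$ip:32669
done
```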
## 5.2 Deploy a ZooKeeper cluster
> Write a zookeeper-cluster.yaml
```
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: zookeeper-1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: zookeeper-1
    spec:
      containers:
      - name: zookeeper-1
        image: zk:alpine
        imagePullPolicy: IfNotPresent
        env:
        - name: NODE_ID
          value: "1"
        - name: NODES
          value: "0.0.0.0,zookeeper-2,zookeeper-3"
        ports:
        - containerPort: 2181
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: zookeeper-2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: zookeeper-2
    spec:
      containers:
      - name: zookeeper-2
        image: zk:alpine
        imagePullPolicy: IfNotPresent
        env:
        - name: NODE_ID
          value: "2"
        - name: NODES
          value: "zookeeper-1,0.0.0.0,zookeeper-3"
        ports:
        - containerPort: 2181
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: zookeeper-3
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: zookeeper-3
    spec:
      containers:
      - name: zookeeper-3
        image: zk:alpine
        imagePullPolicy: IfNotPresent
        env:
        - name: NODE_ID
          value: "3"
        - name: NODES
          value: "zookeeper-1,zookeeper-2,0.0.0.0"
        ports:
        - containerPort: 2181
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-1
  labels:
    name: zookeeper-1
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: followers
    port: 2888
    protocol: TCP
  - name: election
    port: 3888
    protocol: TCP
  selector:
    name: zookeeper-1
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-2
  labels:
    name: zookeeper-2
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: followers
    port: 2888
    protocol: TCP
  - name: election
    port: 3888
    protocol: TCP
  selector:
    name: zookeeper-2
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-3
  labels:
    name: zookeeper-3
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: followers
    port: 2888
    protocol: TCP
  - name: election
    port: 3888
    protocol: TCP
  selector:
    name: zookeeper-3
```
```
[root@k8s-node-1 ~]# kubectl create -f zookeeper-cluster.yaml --record
[root@k8s-node-1 ~]# kubectl get pods -o wide
NAME                 READY     STATUS    RESTARTS   AGE       IP          NODE
zookeeper-1--cfyt4   1/1       Running   0          51m       10.32.0.3   k8s-node-2
zookeeper-2--0bxee   1/1       Running   0          51m       10.40.0.1   k8s-node-3
zookeeper-3--5csqy   1/1       Running   0          51m       10.40.0.2   k8s-node-3

[root@k8s-node-1 ~]# kubectl get deployment -o wide
NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
zookeeper-1   1         1         1            1           51m
zookeeper-2   1         1         1            1           51m
zookeeper-3   1         1         1            1           51m

[root@k8s-node-1 ~]# kubectl get svc -o wide
NAME          CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE       SELECTOR
zookeeper-1   10.8.111.19    <none>        2181/TCP,2888/TCP,3888/TCP   51m       name=zookeeper-1
zookeeper-2   10.6.10.124    <none>        2181/TCP,2888/TCP,3888/TCP   51m       name=zookeeper-2
zookeeper-3   10.0.146.143   <none>        2181/TCP,2888/TCP,3888/TCP   51m       name=zookeeper-3
```
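A hedged liveness check from the curl pod, using ZooKeeper's four-letter `ruok` command (this assumes `nc` is present in the busyboxplus image):
```
kubectl exec curl -- sh -c 'echo ruok | nc zookeeper-1 2181'   # a healthy server replies "imok"
```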
## 5.3 Deploy a Kafka cluster
> Write a kafka-cluster.yaml
```
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kafka-deployment-1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: kafka-1
    spec:
      containers:
      - name: kafka-1
        image: kafka:alpine
        imagePullPolicy: IfNotPresent
        env:
        - name: NODE_ID
          value: "1"
        - name: ZK_NODES
          value: "zookeeper-1,zookeeper-2,zookeeper-3"
        ports:
        - containerPort: 9092
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kafka-deployment-2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: kafka-2
    spec:
      containers:
      - name: kafka-2
        image: kafka:alpine
        imagePullPolicy: IfNotPresent
        env:
        - name: NODE_ID
          value: "2"
        - name: ZK_NODES
          value: "zookeeper-1,zookeeper-2,zookeeper-3"
        ports:
        - containerPort: 9092
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kafka-deployment-3
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: kafka-3
    spec:
      containers:
      - name: kafka-3
        image: kafka:alpine
        imagePullPolicy: IfNotPresent
        env:
        - name: NODE_ID
          value: "3"
        - name: ZK_NODES
          value: "zookeeper-1,zookeeper-2,zookeeper-3"
        ports:
        - containerPort: 9092
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-1
  labels:
    name: kafka-1
spec:
  ports:
  - name: client
    port: 9092
    protocol: TCP
  selector:
    name: kafka-1
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-2
  labels:
    name: kafka-2
spec:
  ports:
  - name: client
    port: 9092
    protocol: TCP
  selector:
    name: kafka-2
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-3
  labels:
    name: kafka-3
spec:
  ports:
  - name: client
    port: 9092
    protocol: TCP
  selector:
    name: kafka-3
```
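Create and verify it the same way as the ZooKeeper cluster:
```
kubectl create -f kafka-cluster.yaml --record
kubectl get pods -o wide
kubectl get svc -o wide
```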
# FAQ
## kube-discovery error
If `kubeadm init` fails with:
```
failed to create "kube-discovery" deployment [deployments.extensions "kube-discovery" already exists]
```
reset and re-run the initialization:
```
kubeadm reset
kubeadm init
```