> Kubernetes 1.5.0 configuration guide

# 1 Initialize the environment

## 1.1 Environment

| Node   | IP         |
|--------|------------|
| node-1 | 10.6.0.140 |
| node-2 | 10.6.0.187 |
| node-3 | 10.6.0.188 |

## 1.2 Set the hostname

```
hostnamectl --static set-hostname <hostname>
```

| IP         | hostname   |
|------------|------------|
| 10.6.0.140 | k8s-node-1 |
| 10.6.0.187 | k8s-node-2 |
| 10.6.0.188 | k8s-node-3 |

## 1.3 Configure hosts

```
vi /etc/hosts
```

| IP         | hostname   |
|------------|------------|
| 10.6.0.140 | k8s-node-1 |
| 10.6.0.187 | k8s-node-2 |
| 10.6.0.188 | k8s-node-3 |
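
Concretely, the table translates into three lines appended to `/etc/hosts`; a minimal sketch of what to add on every node:

```
# Append to /etc/hosts on all three machines
10.6.0.140 k8s-node-1
10.6.0.187 k8s-node-2
10.6.0.188 k8s-node-3
```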

# 2.0 Deploy the Kubernetes master

## 2.1 Add the yum repository

```
# Use a friend's yum mirror for the kubeadm packages
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[mritdrepo]
name=Mritd Repository
baseurl=https://yum.mritd.me/centos/7/x86_64
enabled=1
gpgcheck=1
gpgkey=https://cdn.mritd.me/keys/rpm.public.key
EOF

yum makecache
yum install -y socat kubelet kubeadm kubectl kubernetes-cni
```

## 2.2 Install Docker

```
wget -qO- https://get.docker.com/ | sh

systemctl enable docker
systemctl start docker
```

## 2.3 Install the etcd cluster

```
yum -y install etcd

# Create the etcd data directory
mkdir -p /opt/etcd/data

chown -R etcd:etcd /opt/etcd/
```

```
# Edit /etc/etcd/etcd.conf; the following parameters need to change (values shown for node-1):
ETCD_NAME=etcd1
ETCD_DATA_DIR="/opt/etcd/data/etcd1.etcd"
ETCD_LISTEN_PEER_URLS="http://10.6.0.140:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.6.0.140:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.6.0.140:2380"
ETCD_INITIAL_CLUSTER="etcd1=http://10.6.0.140:2380,etcd2=http://10.6.0.187:2380,etcd3=http://10.6.0.188:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://10.6.0.140:2379"
```
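
The values above are for node-1 (etcd1). node-2 and node-3 need the same file with their own member name and IPs; a sketch for node-2, derived from the cluster table above (node-3 is analogous with etcd3 / 10.6.0.188):

```
# /etc/etcd/etcd.conf on node-2; only the member name and local IPs change
ETCD_NAME=etcd2
ETCD_DATA_DIR="/opt/etcd/data/etcd2.etcd"
ETCD_LISTEN_PEER_URLS="http://10.6.0.187:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.6.0.187:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.6.0.187:2380"
ETCD_INITIAL_CLUSTER="etcd1=http://10.6.0.140:2380,etcd2=http://10.6.0.187:2380,etcd3=http://10.6.0.188:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://10.6.0.187:2379"
```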
```
# Patch the etcd systemd unit so the extra cluster flags are passed at startup
sed -i 's/\\\"${ETCD_LISTEN_CLIENT_URLS}\\\"/\\\"${ETCD_LISTEN_CLIENT_URLS}\\\" --listen-client-urls=\\\"${ETCD_LISTEN_CLIENT_URLS}\\\" --advertise-client-urls=\\\"${ETCD_ADVERTISE_CLIENT_URLS}\\\" --initial-cluster-token=\\\"${ETCD_INITIAL_CLUSTER_TOKEN}\\\" --initial-cluster=\\\"${ETCD_INITIAL_CLUSTER}\\\" --initial-cluster-state=\\\"${ETCD_INITIAL_CLUSTER_STATE}\\\"/g' /usr/lib/systemd/system/etcd.service
```
```
# Start etcd
systemctl enable etcd
systemctl start etcd
systemctl status etcd
```

```
# Check cluster health
etcdctl cluster-health
```
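
Beyond cluster-health, a couple of extra read-only checks can confirm the cluster works end to end; a sketch using standard etcd v2 commands (the key name below is just an example):

```
# All three members should be listed
etcdctl member list

# Write, read back, and clean up a throwaway key
etcdctl set /sanity-check ok
etcdctl get /sanity-check
etcdctl rm /sanity-check
```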

## 2.4 Pull the images

```
images=(kube-proxy-amd64:v1.5.1 kube-discovery-amd64:1.0 kubedns-amd64:1.9 kube-scheduler-amd64:v1.5.1 kube-controller-manager-amd64:v1.5.1 kube-apiserver-amd64:v1.5.1 etcd-amd64:3.0.14-kubeadm kube-dnsmasq-amd64:1.4 exechealthz-amd64:1.2 pause-amd64:3.0 kubernetes-dashboard-amd64:v1.5.0 dnsmasq-metrics-amd64:1.0)
for imageName in ${images[@]} ; do
  docker pull jicki/$imageName
  docker tag jicki/$imageName gcr.io/google_containers/$imageName
  docker rmi jicki/$imageName
done
```
```
# If pulls are slow, configure a mirror: add
# --registry-mirror="http://b438f72b.m.daocloud.io"
# to the Docker startup file
```
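
One hedged way to wire that flag in, assuming a get.docker.com install recent enough to run the dockerd binary under systemd (a drop-in avoids editing the vendor unit directly):

```
# Override ExecStart via a systemd drop-in instead of patching docker.service
mkdir -p /etc/systemd/system/docker.service.d
cat <<EOF > /etc/systemd/system/docker.service.d/registry-mirror.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --registry-mirror="http://b438f72b.m.daocloud.io"
EOF
systemctl daemon-reload
systemctl restart docker
```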

## 2.5 Start Kubernetes

```
systemctl enable kubelet
systemctl start kubelet
```

## 2.6 Create the cluster

```
kubeadm init --api-advertise-addresses=10.6.0.140 \
--external-etcd-endpoints=http://10.6.0.140:2379,http://10.6.0.187:2379,http://10.6.0.188:2379 \
--use-kubernetes-version v1.5.1 \
--pod-network-cidr 10.244.0.0/16
```

```
Flag --external-etcd-endpoints has been deprecated, this flag will be removed when componentconfig exists
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[init] Using Kubernetes version: v1.5.1
[tokens] Generated token: "c53ef2.d257d49589d634f0"
[certificates] Generated Certificate Authority key and certificate.
[certificates] Generated API Server key and certificate
[certificates] Generated Service Account signing keys
[certificates] Created keys and certificates in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 15.299235 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 1.002937 seconds
[apiclient] Creating a test deployment
[apiclient] Test deployment succeeded
[token-discovery] Created the kube-discovery deployment, waiting for it to become ready
[token-discovery] kube-discovery is ready after 2.502881 seconds
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node:

kubeadm join --token=c53ef2.d257d49589d634f0 10.6.0.140
```

## 2.7 Record the token

You can now join any number of machines by running the following on each node:

```
kubeadm join --token=c53ef2.d257d49589d634f0 10.6.0.140
```

## 2.8 Configure the network

```
# Pull the image in advance, otherwise it often fails to download
docker pull quay.io/coreos/flannel-git:v0.6.1-28-g5dde68d-amd64

# Or pull from a mirror and retag
docker pull jicki/flannel-git:v0.6.1-28-g5dde68d-amd64
docker tag jicki/flannel-git:v0.6.1-28-g5dde68d-amd64 quay.io/coreos/flannel-git:v0.6.1-28-g5dde68d-amd64
docker rmi jicki/flannel-git:v0.6.1-28-g5dde68d-amd64
```

```
# http://kubernetes.io/docs/admin/addons/ lists several network add-ons; pick one.
# Flannel is used here; note that Flannel requires --pod-network-cidr at init time.
kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```
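
After applying the manifest, a quick sketch to confirm the pod network is coming up (pod name suffixes will differ):

```
# Flannel runs as a DaemonSet in kube-system; expect one pod per node to reach Running
kubectl get pods --namespace=kube-system -o wide | grep flannel

# kube-dns stays Pending until the network is ready, so it is a good second check
kubectl get pods --namespace=kube-system | grep kube-dns
```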

## 2.9 Check kubelet status

```
systemctl status kubelet
```

# 3.0 Deploy the Kubernetes nodes

## 3.1 Install Docker

```
wget -qO- https://get.docker.com/ | sh

systemctl enable docker
systemctl start docker
```

## 3.2 Pull the images

```
images=(kube-proxy-amd64:v1.5.1 kube-discovery-amd64:1.0 kubedns-amd64:1.9 kube-scheduler-amd64:v1.5.1 kube-controller-manager-amd64:v1.5.1 kube-apiserver-amd64:v1.5.1 etcd-amd64:3.0.14-kubeadm kube-dnsmasq-amd64:1.4 exechealthz-amd64:1.2 pause-amd64:3.0 kubernetes-dashboard-amd64:v1.5.0 dnsmasq-metrics-amd64:1.0)
for imageName in ${images[@]} ; do
  docker pull jicki/$imageName
  docker tag jicki/$imageName gcr.io/google_containers/$imageName
  docker rmi jicki/$imageName
done
```

## 3.3 Start Kubernetes

```
systemctl enable kubelet
systemctl start kubelet
```

## 3.4 Join the cluster

```
kubeadm join --token=c53ef2.d257d49589d634f0 10.6.0.140
```

```
Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.
```

## 3.5 Check cluster status

```
[root@k8s-node-1 ~]# kubectl get node
NAME         STATUS         AGE
k8s-node-1   Ready,master   27m
k8s-node-2   Ready          6s
k8s-node-3   Ready          9s
```

## 3.6 Check service status

```
[root@k8s-node-1 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE
kube-system   dummy--qrp68                         1/1       Running   0          1h
kube-system   kube-apiserver-k8s-node-1            1/1       Running   0          1h
kube-system   kube-controller-manager-k8s-node-1   1/1       Running   0          1h
kube-system   kube-discovery--g2lpc                1/1       Running   0          1h
kube-system   kube-dns--xbhv4                      4/4       Running   0          1h
kube-system   kube-flannel-ds-39g5n                2/2       Running   0          1h
kube-system   kube-flannel-ds-dwc82                2/2       Running   0          1h
kube-system   kube-flannel-ds-qpkm0                2/2       Running   0          1h
kube-system   kube-proxy-16c50                     1/1       Running   0          1h
kube-system   kube-proxy-5rkc8                     1/1       Running   0          1h
kube-system   kube-proxy-xwrq0                     1/1       Running   0          1h
kube-system   kube-scheduler-k8s-node-1            1/1       Running   0          1h
```

# 4.0 Configure Kubernetes

## 4.1 Control the cluster from another host

```
# Back up /etc/kubernetes/admin.conf from the master node and copy it
# to another machine; pass it as the kubeconfig to control the cluster
kubectl --kubeconfig ./admin.conf get nodes
```
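
A minimal sketch of that workflow, assuming SSH access from the workstation to the master:

```
# On the workstation: fetch the kubeconfig from the master, then use it explicitly
scp root@10.6.0.140:/etc/kubernetes/admin.conf ./admin.conf
kubectl --kubeconfig ./admin.conf get nodes
kubectl --kubeconfig ./admin.conf get pods --all-namespaces
```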

## 4.2 Configure the dashboard

```
# Download the yaml file; applying it as-is would pull the image from the official registry
curl -O https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml

# Edit the yaml file
vi kubernetes-dashboard.yaml

# change
#   image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.4.0
# to
#   image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.0
# and change
#   imagePullPolicy: Always
# to
#   imagePullPolicy: IfNotPresent
```

```
kubectl create -f ./kubernetes-dashboard.yaml
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
```

```
# Look up the NodePort, i.e. the externally reachable port
kubectl describe svc kubernetes-dashboard --namespace=kube-system
NodePort:               <unset> 31736/TCP
```

```
# Access the dashboard
http://10.6.0.140:31736
```
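
To script the lookup instead of reading describe output, a sketch using kubectl's jsonpath output:

```
# Print only the allocated NodePort of the dashboard service
kubectl get svc kubernetes-dashboard --namespace=kube-system \
  -o jsonpath='{.spec.ports[0].nodePort}'
```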

# 5.0 Deploying applications on Kubernetes

## 5.1 Deploy an nginx RC

> Write an nginx yaml

```
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
spec:
  replicas: 2
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
```

```
[root@k8s-node-1 ~]# kubectl get rc
NAME       DESIRED   CURRENT   READY     AGE
nginx-rc   2         2         2         2m

[root@k8s-node-1 ~]# kubectl get pod -o wide
NAME             READY     STATUS    RESTARTS   AGE       IP          NODE
nginx-rc-2s8k9   1/1       Running   0          10m       10.32.0.3   k8s-node-
nginx-rc-s16cm   1/1       Running   0          10m       10.40.0.1   k8s-node-
```

> Write an nginx service so containers inside the cluster can reach it (ClusterIP)

```
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    name: nginx
```

```
[root@k8s-node-1 ~]# kubectl create -f nginx-svc.yaml
service "nginx-svc" created

[root@k8s-node-1 ~]# kubectl get svc -o wide
NAME         CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE       SELECTOR
kubernetes   10.0.0.1      <none>        443/TCP   2d        <none>
nginx-svc    10.6.164.79   <none>        80/TCP    29s       name=nginx
```

> Write a curl pod

```
apiVersion: v1
kind: Pod
metadata:
  name: curl
spec:
  containers:
  - name: curl
    image: radial/busyboxplus:curl
    command:
    - sh
    - -c
    - while true; do sleep 1; done
```

```
# Test communication between pods through the service
[root@k8s-node-1 ~]# kubectl exec curl curl nginx-svc
```

```
# From any node, the service is reachable by its cluster IP
[root@k8s-node- ~]# curl 10.6.164.79
```
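
Name-based access can also be checked from inside the curl pod; a sketch that assumes kube-dns is healthy (nslookup comes with the busybox base image):

```
# Resolve the service through kube-dns, then fetch it by name
kubectl exec curl -- nslookup nginx-svc
kubectl exec curl -- curl -s nginx-svc
```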

> Write an nginx service that can be reached from outside (NodePort)

```
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc-node
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: NodePort
  selector:
    name: nginx
```

```
[root@k8s-node-1 ~]# kubectl get svc -o wide
NAME             CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE       SELECTOR
kubernetes       10.0.0.1       <none>        443/TCP        2d        <none>
nginx-svc        10.6.164.79    <none>        80/TCP         29m       name=nginx
nginx-svc-node   10.12.95.227   <nodes>       80:32669/TCP   17s       name=nginx

[root@k8s-node-1 ~]# kubectl describe svc nginx-svc-node | grep NodePort
Type:                   NodePort
NodePort:               <unset> 32669/TCP
```

```
# Use any node's physical IP plus the NodePort
http://10.6.0.140:32669
http://10.6.0.187:32669
http://10.6.0.188:32669
```
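
A one-liner sketch to confirm every node answers on the NodePort:

```
# Expect an HTTP 200 from each node IP
for ip in 10.6.0.140 10.6.0.187 10.6.0.188; do
  curl -s -o /dev/null -w "$ip -> %{http_code}\n" http://$ip:32669
done
```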

## 5.2 Deploy a ZooKeeper cluster

> Write a zookeeper-cluster.yaml

```
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: zookeeper-1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: zookeeper-1
    spec:
      containers:
      - name: zookeeper-1
        image: zk:alpine
        imagePullPolicy: IfNotPresent
        env:
        - name: NODE_ID
          value: "1"
        - name: NODES
          value: "0.0.0.0,zookeeper-2,zookeeper-3"
        ports:
        - containerPort: 2181
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: zookeeper-2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: zookeeper-2
    spec:
      containers:
      - name: zookeeper-2
        image: zk:alpine
        imagePullPolicy: IfNotPresent
        env:
        - name: NODE_ID
          value: "2"
        - name: NODES
          value: "zookeeper-1,0.0.0.0,zookeeper-3"
        ports:
        - containerPort: 2181
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: zookeeper-3
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: zookeeper-3
    spec:
      containers:
      - name: zookeeper-3
        image: zk:alpine
        imagePullPolicy: IfNotPresent
        env:
        - name: NODE_ID
          value: "3"
        - name: NODES
          value: "zookeeper-1,zookeeper-2,0.0.0.0"
        ports:
        - containerPort: 2181
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-1
  labels:
    name: zookeeper-1
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: followers
    port: 2888
    protocol: TCP
  - name: election
    port: 3888
    protocol: TCP
  selector:
    name: zookeeper-1
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-2
  labels:
    name: zookeeper-2
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: followers
    port: 2888
    protocol: TCP
  - name: election
    port: 3888
    protocol: TCP
  selector:
    name: zookeeper-2
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-3
  labels:
    name: zookeeper-3
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: followers
    port: 2888
    protocol: TCP
  - name: election
    port: 3888
    protocol: TCP
  selector:
    name: zookeeper-3
```

```
[root@k8s-node-1 ~]# kubectl create -f zookeeper-cluster.yaml --record

[root@k8s-node-1 ~]# kubectl get pods -o wide
NAME                READY     STATUS    RESTARTS   AGE       IP          NODE
zookeeper---cfyt4   1/1       Running   0          51m       10.32.0.3   k8s-node-
zookeeper---0bxee   1/1       Running   0          51m       10.40.0.1   k8s-node-
zookeeper---5csqy   1/1       Running   0          51m       10.40.0.2   k8s-node-

[root@k8s-node-1 ~]# kubectl get deployment -o wide
NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
zookeeper-1   1         1         1            1           51m
zookeeper-2   1         1         1            1           51m
zookeeper-3   1         1         1            1           51m

[root@k8s-node-1 ~]# kubectl get svc -o wide
NAME          CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE       SELECTOR
zookeeper-1   10.8.111.19    <none>        2181/TCP,2888/TCP,3888/TCP   51m       name=zookeeper-1
zookeeper-2   10.6.10.124    <none>        2181/TCP,2888/TCP,3888/TCP   51m       name=zookeeper-2
zookeeper-3   10.0.146.143   <none>        2181/TCP,2888/TCP,3888/TCP   51m       name=zookeeper-3
```
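
A hedged smoke test of the ensemble, reusing the curl pod from 5.1 and ZooKeeper's standard four-letter-word interface (this assumes busybox's nc is present in that image):

```
# Each member reports its Mode; exactly one of the three should say "leader"
for zk in zookeeper-1 zookeeper-2 zookeeper-3; do
  kubectl exec curl -- sh -c "echo stat | nc $zk 2181 | grep Mode"
done
```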

## 5.3 Deploy a Kafka cluster

> Write a kafka-cluster.yaml

```
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kafka-deployment-1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: kafka-1
    spec:
      containers:
      - name: kafka-1
        image: kafka:alpine
        imagePullPolicy: IfNotPresent
        env:
        - name: NODE_ID
          value: "1"
        - name: ZK_NODES
          value: "zookeeper-1,zookeeper-2,zookeeper-3"
        ports:
        - containerPort: 9092
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kafka-deployment-2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: kafka-2
    spec:
      containers:
      - name: kafka-2
        image: kafka:alpine
        imagePullPolicy: IfNotPresent
        env:
        - name: NODE_ID
          value: "2"
        - name: ZK_NODES
          value: "zookeeper-1,zookeeper-2,zookeeper-3"
        ports:
        - containerPort: 9092
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kafka-deployment-3
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: kafka-3
    spec:
      containers:
      - name: kafka-3
        image: kafka:alpine
        imagePullPolicy: IfNotPresent
        env:
        - name: NODE_ID
          value: "3"
        - name: ZK_NODES
          value: "zookeeper-1,zookeeper-2,zookeeper-3"
        ports:
        - containerPort: 9092
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-1
  labels:
    name: kafka-1
spec:
  ports:
  - name: client
    port: 9092
    protocol: TCP
  selector:
    name: kafka-1
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-2
  labels:
    name: kafka-2
spec:
  ports:
  - name: client
    port: 9092
    protocol: TCP
  selector:
    name: kafka-2
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-3
  labels:
    name: kafka-3
spec:
  ports:
  - name: client
    port: 9092
    protocol: TCP
  selector:
    name: kafka-3
```
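
After `kubectl create -f kafka-cluster.yaml`, a hedged smoke test, assuming the custom kafka:alpine image ships the stock Kafka CLI scripts on its PATH (`<kafka-pod>` is a placeholder for a real pod name):

```
# Create a replicated test topic against the ZooKeeper ensemble, then list it back
kubectl exec <kafka-pod> -- kafka-topics.sh --create \
  --zookeeper zookeeper-1:2181,zookeeper-2:2181,zookeeper-3:2181 \
  --replication-factor 3 --partitions 3 --topic smoke-test
kubectl exec <kafka-pod> -- kafka-topics.sh --list --zookeeper zookeeper-1:2181
```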

# FAQ:

## kube-discovery error

    failed to create "kube-discovery" deployment [deployments.extensions "kube-discovery" already exists]

To recover, reset and re-initialize the master:

```
kubeadm reset
kubeadm init
```
