A while back our department planned to move to parallel development, which required isolating the development and test environments, so we decided to use Kubernetes to manage versioned Docker containers. Below is the process I followed to set up the Kubernetes cluster:

This walkthrough uses the Alibaba Cloud registry accelerator; search online for how to configure it.

System settings (Ubuntu 14.04):

  Disable swap:

    sudo swapoff -a
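
  (swapoff -a only lasts until the next reboot; to keep swap off permanently you can also comment out the swap entry in /etc/fstab. A minimal sketch, assuming the usual fstab layout where the swap line contains " swap " surrounded by spaces:

    sudo sed -i '/ swap / s/^/#/' /etc/fstab)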

  Disable the firewall:

    $ systemctl stop firewalld

    $ systemctl disable firewalld

  Disable SELinux:

    $ setenforce 0

  (Note: firewalld and SELinux are RHEL/CentOS components; a stock Ubuntu 14.04 host ships ufw and AppArmor instead, so these two commands only apply if you installed those packages yourself.)
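
  On a stock Ubuntu host, the equivalent firewall step would be:

    sudo ufw disable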

First, install Docker on every node:

  apt-get update && apt-get install docker.io

  (If apt-get update fails with an error like Problem executing scripts APT::Update::Post-Invoke-Success 'if /usr/bin/test -w /var/, fix it as follows:

    sudo pkill -KILL appstreamcli

    wget -P /tmp https://launchpad.net/ubuntu/+archive/primary/+files/appstream_0.9.4-1ubuntu1_amd64.deb https://launchpad.net/ubuntu/+archive/primary/+files/libappstream3_0.9.4-1ubuntu1_amd64.deb

    sudo dpkg -i /tmp/appstream_0.9.4-1ubuntu1_amd64.deb /tmp/libappstream3_0.9.4-1ubuntu1_amd64.deb)

Then install the three components kubelet, kubeadm, and kubectl on all nodes:

  kubelet runs on every node in the cluster and is responsible for starting Pods and containers.

  kubeadm is used to bootstrap the cluster.

  kubectl is the Kubernetes command-line tool. With kubectl you can deploy and manage applications, inspect resources, and create, delete, and update components.

  Edit the apt sources list:

    sudo vi /etc/apt/sources.list

  Add the package source for kubeadm and the Kubernetes components:

    deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main
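
  The mirror also publishes an apt signing key; without it apt may warn about unauthenticated packages. Assuming the mirror's documented key path:

    curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -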

  Update the package index:

    apt-get update

  Install the kubeadm, kubectl, and kubelet packages pinned to a specific version (the images for the latest release were not mirrored, so 1.10.0 is installed instead):

    apt-get install kubelet=1.10.0-00

    apt-get install kubeadm=1.10.0-00

    apt-get install kubectl=1.10.0-00
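
  Optionally, hold the packages so a later apt-get upgrade does not move them off 1.10.0:

    apt-mark hold kubelet kubeadm kubectl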

Pull the images Kubernetes needs (to avoid being blocked by the firewall during initialization, I pushed the required images to an Alibaba Cloud registry). Create a script to run the pulls:

  touch docker.sh

  chmod 755 docker.sh

  Edit the script and add the following:

  #!/bin/bash

  # Pull the required images (note: no spaces around '=', otherwise $url stays empty and every pull fails)

  url=registry.cn-hangzhou.aliyuncs.com

  # Core components

  docker pull $url/sach-k8s/etcd-amd64:3.1.12

  docker pull $url/sach-k8s/kube-apiserver-amd64:v1.10.0

  docker pull $url/sach-k8s/kube-scheduler-amd64:v1.10.0

  docker pull $url/sach-k8s/kube-controller-manager-amd64:v1.10.0

  # Networking

  docker pull $url/sach-k8s/flannel:v0.10.0-amd64

  docker pull $url/sach-k8s/k8s-dns-dnsmasq-nanny-amd64:1.14.8

  docker pull $url/sach-k8s/k8s-dns-sidecar-amd64:1.14.8

  docker pull $url/sach-k8s/k8s-dns-kube-dns-amd64:1.14.8

  docker pull $url/sach-k8s/pause-amd64:3.1

  docker pull $url/sach-k8s/kube-proxy-amd64:v1.10.0

  #dashboard

  docker pull $url/sach-k8s/kubernetes-dashboard-amd64:v1.8.3

  #heapster

  docker pull $url/sach-k8s/heapster-influxdb-amd64:v1.3.3

  docker pull $url/sach-k8s/heapster-grafana-amd64:v4.4.3

  docker pull $url/sach-k8s/heapster-amd64:v1.4.2

  #ingress

  docker pull $url/sach-k8s/nginx-ingress-controller:0.15.0

  docker pull $url/sach-k8s/defaultbackend:1.4

  # Re-tag the images with the names Kubernetes expects

  docker tag $url/sach-k8s/etcd-amd64:3.1.12 k8s.gcr.io/etcd-amd64:3.1.12

  docker tag $url/sach-k8s/kube-apiserver-amd64:v1.10.0 k8s.gcr.io/kube-apiserver-amd64:v1.10.0

  docker tag $url/sach-k8s/kube-scheduler-amd64:v1.10.0 k8s.gcr.io/kube-scheduler-amd64:v1.10.0

  docker tag $url/sach-k8s/kube-controller-manager-amd64:v1.10.0 k8s.gcr.io/kube-controller-manager-amd64:v1.10.0

  docker tag $url/sach-k8s/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64

  docker tag $url/sach-k8s/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1

  docker tag $url/sach-k8s/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0

  docker tag $url/sach-k8s/k8s-dns-kube-dns-amd64:1.14.8 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8

  docker tag $url/sach-k8s/k8s-dns-dnsmasq-nanny-amd64:1.14.8 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8

  docker tag $url/sach-k8s/k8s-dns-sidecar-amd64:1.14.8 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8

  docker tag $url/sach-k8s/kubernetes-dashboard-amd64:v1.8.3 k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3

  docker tag $url/sach-k8s/heapster-influxdb-amd64:v1.3.3 k8s.gcr.io/heapster-influxdb-amd64:v1.3.3

  docker tag $url/sach-k8s/heapster-grafana-amd64:v4.4.3 k8s.gcr.io/heapster-grafana-amd64:v4.4.3

  docker tag $url/sach-k8s/heapster-amd64:v1.4.2 k8s.gcr.io/heapster-amd64:v1.4.2

  docker tag $url/sach-k8s/defaultbackend:1.4 gcr.io/google_containers/defaultbackend:1.4

  docker tag $url/sach-k8s/nginx-ingress-controller:0.15.0 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0

  # Remove the now-redundant mirror tags

  # Core components

  docker rmi $url/sach-k8s/etcd-amd64:3.1.12

  docker rmi $url/sach-k8s/kube-apiserver-amd64:v1.10.0

  docker rmi $url/sach-k8s/kube-scheduler-amd64:v1.10.0

  docker rmi $url/sach-k8s/kube-controller-manager-amd64:v1.10.0

  # Networking

  docker rmi $url/sach-k8s/flannel:v0.10.0-amd64

  docker rmi $url/sach-k8s/k8s-dns-dnsmasq-nanny-amd64:1.14.8

  docker rmi $url/sach-k8s/k8s-dns-sidecar-amd64:1.14.8

  docker rmi $url/sach-k8s/k8s-dns-kube-dns-amd64:1.14.8

  docker rmi $url/sach-k8s/pause-amd64:3.1

  docker rmi $url/sach-k8s/kube-proxy-amd64:v1.10.0

  #dashboard

  docker rmi $url/sach-k8s/kubernetes-dashboard-amd64:v1.8.3

  #heapster

  docker rmi $url/sach-k8s/heapster-influxdb-amd64:v1.3.3

  docker rmi $url/sach-k8s/heapster-grafana-amd64:v4.4.3

  docker rmi $url/sach-k8s/heapster-amd64:v1.4.2

  #ingress

  docker rmi $url/sach-k8s/nginx-ingress-controller:0.15.0

  docker rmi $url/sach-k8s/defaultbackend:1.4

  

  Run the script:

    ./docker.sh
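
  To sanity-check that the pulls and re-tags succeeded, list the resulting images:

    docker images | grep -E 'k8s.gcr.io|quay.io|google_containers'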

Once every node has pulled all of the images, initialize the cluster on the master node:

  Initialize (192.168.254.128 is this master's address; 10.244.0.0/16 is flannel's default pod CIDR):

    kubeadm init --kubernetes-version=v1.10.0  --apiserver-advertise-address 192.168.254.128 --pod-network-cidr=10.244.0.0/16

  Configure kubectl:

    mkdir -p $HOME/.kube

    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

    sudo chown $(id -u):$(id -g) $HOME/.kube/config

  If you are root, you can instead point KUBECONFIG at the admin config:

    export KUBECONFIG=/etc/kubernetes/admin.conf

  Optionally, enable kubectl bash completion:

    echo "source <(kubectl completion bash)" >> ~/.bashrc

  Install the pod network add-on (this article uses flannel):

    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml

  Check the node status:

    kubectl get nodes

    (the master shows NotReady until the network add-on is up and pods can be scheduled on it)

  Allow pods to be scheduled on the master (remove the master taint):

    kubectl taint nodes --all node-role.kubernetes.io/master-
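
  To confirm the taint was removed (the Taints line should show <none>):

    kubectl describe nodes | grep Taints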

  View the kubelet logs:

    journalctl -xeu kubelet    (or dump them to a file: journalctl -xeu kubelet > kubelet.log)

  Check pod status:

    kubectl get pods --all-namespaces

Join the worker nodes to the cluster:

  kubeadm join --token q500wd.kcjrb2zwvhwqt7su 192.168.254.128:6443 --discovery-token-ca-cert-hash sha256:29e091cca420e505d0c5e091e68f6b5c4ba3f2a54fdcd693c681307c8a041a8b

  (the token and CA certificate hash are printed on the master's console when the cluster is initialized; look for token and discovery-token-ca-cert-hash in the kubeadm init output)

  List the cluster nodes:

    kubectl get nodes

  If a worker fails to join, the token may have expired. Check the kubelet logs:

    journalctl -xeu kubelet

  Check whether the token has expired:

    kubeadm token list

  If it has expired, generate a new token and recompute the CA certificate hash:

    kubeadm token create

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

    (example output: 0fd95a9bc67a7bf0ef42da968a0d55d92e52898ec37c971bd77ee501d845b538)
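
    On kubeadm 1.9 and later, the full join command (token and hash included) can, as far as I know, be regenerated in one step, which avoids the openssl invocation above:

      kubeadm token create --print-join-command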

    Delete the certificate left over from the previous join attempt:

      rm -rf  /etc/kubernetes/pki/ca.crt

    Reset the worker node's state:

      kubeadm reset

    After the steps above, simply rejoin the cluster.

    (If kubectl commands fail on a worker node, e.g.:

      kubectl get nodes

      reports "The connection to the server localhost:8080 was refused - did you specify the right host or port?", the kubeconfig has not been applied. Fix it as follows:

        sudo cp /etc/kubernetes/kubelet.conf $HOME/

        sudo chown $(id -u):$(id -g) $HOME/kubelet.conf

        export KUBECONFIG=$HOME/kubelet.conf)

Deploy an application:

  Create a pod (kubectl run here actually creates a Deployment that manages it):

    kubectl run nginx --replicas=1 --labels="run=load-balancer-example" --image=nginx  --port=80

    (--replicas sets the number of replicas)
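
  For reference, the same workload can be written declaratively. A minimal sketch of an equivalent manifest (the file name nginx-deployment.yaml is arbitrary):

    apiVersion: apps/v1beta2
    kind: Deployment
    metadata:
      name: nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          run: load-balancer-example
      template:
        metadata:
          labels:
            run: load-balancer-example
        spec:
          containers:
          - name: nginx
            image: nginx
            ports:
            - containerPort: 80

  Apply it with:

    kubectl apply -f nginx-deployment.yaml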

  Inspect the deployment:

    kubectl get deployments nginx

    kubectl describe deployments nginx

    kubectl get replicasets

    kubectl describe replicasets

  Create a service:

    kubectl expose deployment nginx --type=NodePort --name=example-service

  Inspect the service:

    kubectl describe services example-service

  (If the service cannot be reached from outside, the cause is usually that the Service's selector does not match the pod labels. This is easy to verify from the service's endpoints: if the endpoints list is empty, the selector is wrong, and changing it to match the pod's labels fixes the problem. See https://blog.csdn.net/bluishglc/article/details/52440312)
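
  A quick way to perform that endpoints check on the service created above:

    kubectl get endpoints example-service

  If the ENDPOINTS column is empty, the selector does not match any pod.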

  

PS:

If the cluster is unreachable after a server reboot (e.g. kubectl get pods errors out), disable swap again and restart the kubelet:

    sudo swapoff -a

    systemctl daemon-reload

    systemctl restart kubelet

  Check kubelet status:

    systemctl status kubelet

  View kubelet logs:

    journalctl -xefu kubelet

  Check swap status:

    cat /proc/swaps

Install the Dashboard:

  Create the manifest file:

    touch kubernetes-dashboard.yaml

  Add the following configuration:

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.8.
#
# Example usage: kubectl create -f <this_file>

# ------------------- Dashboard Secret ------------------- #

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

  Install it:

    kubectl apply -f kubernetes-dashboard.yaml

  Change the service type so the Dashboard can be reached from outside the cluster:

    kubectl -n kube-system edit service kubernetes-dashboard

  In the editor, change the service's type field under spec from ClusterIP to NodePort (the original post showed this step as a screenshot).
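
  After the edit, the relevant part of the service spec should look roughly like this (only type changes):

    spec:
      ports:
      - port: 443
        targetPort: 8443
      selector:
        k8s-app: kubernetes-dashboard
      type: NodePort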

  View the modified service:

    kubectl get services kubernetes-dashboard -n kube-system

  The Dashboard can now be reached at https://<nodeIP>:<exposed NodePort>; when the page opens, choose token login.

  To obtain a token:

    kubectl -n kube-system get secret

  (find the secret whose Name is kubernetes-dashboard-token-XXXXX; here the suffix was 47psh)

    kubectl -n kube-system describe secret kubernetes-dashboard-token-47psh
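
  If you prefer not to look the secret name up by hand, the two steps can be combined into a one-liner (a sketch using grep and awk):

    kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kubernetes-dashboard-token | awk '{print $1}')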

    

  Copy the token into the login page to enter the Dashboard.

That completes the installation; you can now manage the Kubernetes cluster from the Dashboard.

 
