1. Pre-installation notes
The kubernetes package provides several services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, and kube-proxy. These services are managed by systemd, and their configuration lives under /etc/kubernetes.
The Kubernetes master will run kube-apiserver, kube-controller-manager, kube-scheduler, and etcd. The worker nodes run kubelet, kube-proxy, cAdvisor, and docker. Every node also runs flanneld to provide the cross-host pod network.
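Once the packages are installed (step 5 below), you can sanity-check which of these units exist and where their configuration files live. A minimal sketch, assuming the CentOS kubernetes/etcd/flannel RPM layout used in this guide:
systemctl list-unit-files | grep -E 'kube|etcd|flannel'   # units installed by the packages
ls /etc/kubernetes/        # config, apiserver, controller-manager, scheduler, kubelet, proxy
ls /etc/etcd/ /etc/sysconfig/flanneld                     # etcd and flannel configuration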
2. Prepare three machines
master:192.168.0.5
node1:192.168.0.6
node2:192.168.0.7
3. Remove the docker packages shipped with the system
yum list installed | grep docker
yum remove -y docker.x86_64 docker-client.x86_64 docker-common.x86_64
4. Disable the firewall
systemctl disable iptables-services firewalld
systemctl stop iptables-services firewalld
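Optionally, confirm that firewalld is stopped and stays disabled across reboots:
systemctl is-active firewalld    # expect "inactive"
systemctl is-enabled firewalld   # expect "disabled"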
5. Install and configure Kubernetes on the master
sh -x k8s-master.sh 172.31.17.187 # master_ip
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
#!/usr/bin/env bash
set -e
MASTER_IP=$1
if [ -z "$MASTER_IP" ]; then
    echo "MASTER_IP is null"
    exit 1
fi
echo "=================install ntpd==================="
yum -y install ntp
systemctl start ntpd
systemctl enable ntpd
echo "=================install docker, k8s, etcd, flannel==================="
cat <<EOF > /etc/yum.repos.d/virt7-docker-common-release.repo
[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0
EOF
yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel
echo "=================config kubernetes==================="
mv /etc/kubernetes/config /etc/kubernetes/config.bak
cat <<EOF >/etc/kubernetes/config
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the replication controller and scheduler find the kube-apiserver
KUBE_MASTER="--master=http://${MASTER_IP}:8080"
EOF
setenforce 0
#systemctl disable iptables-services firewalld
#systemctl stop iptables-services firewalld
echo "================= config etcd======================"
sed -i s#'ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"'#'ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"'#g /etc/etcd/etcd.conf
sed -i s#'ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"'#'ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"'#g /etc/etcd/etcd.conf
echo "================= config apiserver==================="
mv /etc/kubernetes/apiserver /etc/kubernetes/apiserver.bak
cat <<EOF >/etc/kubernetes/apiserver
# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# Port kubelets listen on
KUBELET_PORT="--kubelet-port=10250"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://${MASTER_IP}:2379"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# Add your own!
KUBE_API_ARGS=""
EOF
echo "=================start and set etcd==============="
systemctl start etcd
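# seed flannel's network configuration in etcd; the key must match FLANNEL_ETCD_PREFIX
# in /etc/sysconfig/flanneld below, and "Network" is the overlay CIDR that flannel
# carves per-node /24 subnets from (SubnetLen: 24), using the vxlan backend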
etcdctl mkdir /kube-centos/network
etcdctl mk /kube-centos/network/config "{ \"Network\": \"172.30.0.0/16\", \"SubnetLen\": 24, \"Backend\": { \"Type\": \"vxlan\" } }"
echo "=================config flannel==================="
mv /etc/sysconfig/flanneld /etc/sysconfig/flanneld.bak
cat <<EOF >/etc/sysconfig/flanneld
# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://${MASTER_IP}:2379"
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"
# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
EOF
echo "=================start etcd k8s ==================="
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler flanneld ; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
-------------------------------------------------------------------------------------------------------------------------------
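At this point the master's control-plane pieces should be up; a quick sanity check, assuming etcd listens on 2379 and the apiserver on 8080 as configured above:
etcdctl cluster-health                   # expect "cluster is healthy"
curl http://localhost:8080/healthz       # apiserver health endpoint, expect "ok"
curl http://localhost:8080/api/v1/nodes  # node list (empty until the nodes from step 6 register)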
Note: the script above does not start docker or the kubelet. If you want to run workloads on the master for testing, start docker and configure and start the kubelet there the same way as on a node (a sketch follows below).
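A minimal sketch of that extra step on the master, reusing the kubelet configuration from the node script in step 6 (the flag values and file paths below are copied from that script, with the hostname override pointing at the master itself):
MASTER_IP=172.31.17.187   # replace with your master IP
cat <<EOF >/etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=${MASTER_IP}"
KUBELET_API_SERVER="--api-servers=http://${MASTER_IP}:8080"
KUBELET_ARGS=""
EOF
for s in docker kubelet kube-proxy; do
    systemctl restart $s
    systemctl enable $s
done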
6. Install the nodes
sh install-k8s-node.sh 172.31.17.187 172.31.25.80 # master_ip node_ip
------------------------------------------------------------------------------------------------------------------------------------------------------
#!/usr/bin/env bash
set -e
MASTER_IP=$1
NODE_IP=$2
if [ -z "$MASTER_IP" ] || [ -z "$NODE_IP" ]; then
    echo "MASTER_IP or NODE_IP is null"
    exit 1
fi
echo '=================install ntpd==================='
yum -y install ntp
systemctl start ntpd
systemctl enable ntpd
echo "=================install docker, k8s, etcd, flannel==================="
cat <<EOF > /etc/yum.repos.d/virt7-docker-common-release.repo
[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0
EOF
yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel
setenforce 0
echo "===============config kubernetes================"
mv /etc/kubernetes/config /etc/kubernetes/config.bak
cat <<EOF >/etc/kubernetes/config
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the replication controller and scheduler find the kube-apiserver
KUBE_MASTER="--master=http://${MASTER_IP}:8080"
EOF
echo "===============install docker, k8s, etcd, flannel================"
cat <<EOF > /etc/yum.repos.d/virt7-docker-common-release.repo
[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0
EOF
yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel
echo "===============config kublet================"
mv /etc/kubernetes/kubelet /etc/kubernetes/kubelet.bak
cat <<EOF >/etc/kubernetes/kubelet
# The address for the info server to serve on
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
# Check the node number!
KUBELET_HOSTNAME="--hostname-override=${NODE_IP}"
# Location of the api-server
KUBELET_API_SERVER="--api-servers=http://${MASTER_IP}:8080"
# Add your own!
KUBELET_ARGS=""
EOF
echo "===============config flanneld================"
mv /etc/sysconfig/flanneld /etc/sysconfig/flanneld.bak
cat <<EOF >/etc/sysconfig/flanneld
# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://${MASTER_IP}:2379"
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"
# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
EOF
echo "==========start kube-proxy kubelet flanneld docker==========="
for SERVICES in kube-proxy kubelet flanneld docker; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
echo "==============set kubectl================"
kubectl config set-cluster default-cluster --server=http://${MASTER_IP}:8080
kubectl config set-context default-context --cluster=default-cluster --user=default-admin
kubectl config use-context default-context
-------------------------------------------------------------------------------------------------------------------------------------------
Check the node status:
$ kubectl get no
NAME           STATUS    AGE
172.31.16.52   Ready     1h
172.31.25.80   Ready     2h
------------------------------------------------------------------------------------------------------------------------------------------------------
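Beyond the node list, a couple of quick checks can confirm the control-plane components are healthy, using standard kubectl subcommands against the master configured above:
$ kubectl cluster-info             # should print the master URL http://<master_ip>:8080
$ kubectl get componentstatuses    # scheduler, controller-manager and etcd should report Healthy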
Test the cluster
Deploy two nginx pods from the master onto the nodes.
Create the file nginx-deployment.yml on the master:
-------------------------------------------------------------------------------------------------------------------------------------------------------------
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2 # tells deployment to run 2 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      # unlike pod-nginx.yaml, the name is not included in the meta data as a unique name is
      # generated from the deployment name
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Create the deployment:
$ kubectl create -f nginx-deployment.yml
deployment "nginx-deployment" created
Check the pods:
$ kubectl get pods -o wide
NAME                                READY     STATUS    RESTARTS   AGE       IP            NODE
nginx-deployment-4087004473-kbbgs   1/1       Running   0          1h        172.30.41.2   172.31.25.80
nginx-deployment-4087004473-m47bg   1/1       Running   0          1h        172.30.93.2   172.31.16.52
# access one of the nginx pods by its pod IP
$ curl 172.30.41.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
--------------------------------------------------------------------------------------------------------
If STATUS stays at ContainerCreating, the node is most likely still pulling the image. You can inspect the pod's details to see what it is waiting on (see below).
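A minimal way to check, using standard kubectl subcommands (the pod name is just the example from the output above):
$ kubectl describe pod nginx-deployment-4087004473-kbbgs   # the Events section shows image pulls, scheduling errors, etc.
$ kubectl get events -w                                     # or watch cluster-wide events as they arrive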