Kubernetes Multi-Node Deployment Walkthrough
Note: all of the following steps are performed on CentOS 7.
Installing Ansible
Ansible can be installed via yum or pip. Since kubernetes-ansible uses passwords, sshpass is also required:
pip install ansible
wget http://sourceforge.net/projects/sshpass/files/latest/download
tar zxvf download
cd sshpass-1.05
./configure && make && make install
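If the build succeeds, a quick sanity check confirms both tools are on the PATH:
# ansible --version
# sshpass -V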
Configuring kubernetes-ansible
# git clone https://github.com/eparis/kubernetes-ansible.git
# cd kubernetes-ansible
# # set the ssh user to root in group_vars/all.yml
# cat group_vars/all.yml | grep ssh
ansible_ssh_user: root
# # Each kubernetes service gets its own IP address. These are not real IPs.
# # You need only select a range of IPs which are not in use elsewhere in your
# # environment. This must be done even if you do not use the network setup
# # provided by the ansible scripts.
# cat group_vars/all.yml | grep kube_service_addresses
kube_service_addresses: 10.254.0.0/16
# # store the root password
# echo "password" > ~/rootpassword
Configure the IP addresses of the master, etcd, and minions:
# cat inventory
[masters]
192.168.0.7
[etcd]
192.168.0.7
[minions]
# kube_ip_addr is the Pod address pool on this minion (a /24 netmask by default)
192.168.0.3 kube_ip_addr=10.0.1.1
192.168.0.6 kube_ip_addr=10.0.2.1
Test connectivity to each machine and push the SSH keys:
# ansible-playbook -i inventory ping.yml # this prints some errors, which can be ignored
# ansible-playbook -i inventory keys.yml
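With the keys pushed, Ansible's built-in ping module is a quick way to confirm passwordless access to every host:
# ansible all -i inventory -m ping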
kubernetes-ansible does not yet handle all dependencies, so a few things have to be configured by hand first:
# # install iptables
# ansible all -i inventory --vault-password-file=~/rootpassword -a 'yum -y install iptables-services'
# # add the kubernetes yum repository for CentOS 7
# ansible all -i inventory --vault-password-file=~/rootpassword -a 'curl https://copr.fedoraproject.org/coprs/eparis/kubernetes-epel-7/repo/epel-7/eparis-kubernetes-epel-7-epel-7.repo -o /etc/yum.repos.d/eparis-kubernetes-epel-7-epel-7.repo'
# # configure ssh to avoid connection timeouts
# sed -i "s/GSSAPIAuthentication yes/GSSAPIAuthentication no/g" /etc/ssh/ssh_config
# ansible all -i inventory --vault-password-file=~/rootpassword -a 'sed -i "s/GSSAPIAuthentication yes/GSSAPIAuthentication no/g" /etc/ssh/ssh_config'
# ansible all -i inventory --vault-password-file=~/rootpassword -a 'sed -i "s/GSSAPIAuthentication yes/GSSAPIAuthentication no/g" /etc/ssh/sshd_config'
# ansible all -i inventory --vault-password-file=~/rootpassword -a 'systemctl restart sshd'
Configure the Docker network. This essentially creates the kbr0 bridge, assigns it an IP, and sets up routes between the minions' Pod subnets:
# ansible-playbook -i inventory hack-network.yml
PLAY [minions] ****************************************************************
GATHERING FACTS ***************************************************************
ok: [192.168.0.6]
ok: [192.168.0.3]
TASK: [network-hack-bridge | Create kubernetes bridge interface] **************
changed: [192.168.0.3]
changed: [192.168.0.6]
TASK: [network-hack-bridge | Configure docker to use the bridge inferface] ****
changed: [192.168.0.6]
changed: [192.168.0.3]
PLAY [minions] ****************************************************************
GATHERING FACTS ***************************************************************
ok: [192.168.0.6]
ok: [192.168.0.3]
TASK: [network-hack-routes | stat path=/etc/sysconfig/network-scripts/ifcfg-{{ ansible_default_ipv4.interface }}] ***
ok: [192.168.0.6]
ok: [192.168.0.3]
TASK: [network-hack-routes | Set up a network config file] ********************
skipping: [192.168.0.3]
skipping: [192.168.0.6]
TASK: [network-hack-routes | Set up a static routing table] *******************
changed: [192.168.0.3]
changed: [192.168.0.6]
NOTIFIED: [network-hack-routes | apply changes] *******************************
changed: [192.168.0.6]
changed: [192.168.0.3]
NOTIFIED: [network-hack-routes | upload script] *******************************
changed: [192.168.0.6]
changed: [192.168.0.3]
NOTIFIED: [network-hack-routes | run script] **********************************
changed: [192.168.0.3]
changed: [192.168.0.6]
NOTIFIED: [network-hack-routes | remove script] *******************************
changed: [192.168.0.3]
changed: [192.168.0.6]
PLAY RECAP ********************************************************************
192.168.0.3 : ok=10 changed=7 unreachable=0 failed=0
192.168.0.6 : ok=10 changed=7 unreachable=0 failed=0
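The result can be inspected on the minions: each should now have a kbr0 bridge holding the first address of its Pod pool (10.0.1.1 and 10.0.2.1 here), plus static routes to the other minions' Pod subnets. A quick check via ad-hoc commands:
# ansible minions -i inventory --vault-password-file=~/rootpassword -a 'ip addr show kbr0'
# ansible minions -i inventory --vault-password-file=~/rootpassword -a 'ip route'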
Finally, install and configure Kubernetes on all nodes:
# ansible-playbook -i inventory setup.yml
Once it finishes, you can see that all the kube services are running:
# # service status
# ansible all -i inventory -k -a 'bash -c "systemctl | grep -i kube"'
SSH password:
192.168.0.3 | success | rc=0 >>
kube-proxy.service loaded active running Kubernetes Kube-Proxy Server
kubelet.service loaded active running Kubernetes Kubelet Server
192.168.0.7 | success | rc=0 >>
kube-apiserver.service loaded active running Kubernetes API Server
kube-controller-manager.service loaded active running Kubernetes Controller Manager
kube-scheduler.service loaded active running Kubernetes Scheduler Plugin
192.168.0.6 | success | rc=0 >>
kube-proxy.service loaded active running Kubernetes Kube-Proxy Server
kubelet.service loaded active running Kubernetes Kubelet Server
# # listening ports
# ansible all -i inventory -k -a 'bash -c "netstat -tulnp | grep -E \"(kube)|(etcd)\""'
SSH password:
192.168.0.7 | success | rc=0 >>
tcp 0 0 192.168.0.7:7080 0.0.0.0:* LISTEN 14486/kube-apiserve
tcp 0 0 127.0.0.1:10251 0.0.0.0:* LISTEN 14544/kube-schedule
tcp 0 0 127.0.0.1:10252 0.0.0.0:* LISTEN 14515/kube-controll
tcp6 0 0 :::7001 :::* LISTEN 13986/etcd
tcp6 0 0 :::4001 :::* LISTEN 13986/etcd
tcp6 0 0 :::8080 :::* LISTEN 14486/kube-apiserve
192.168.0.3 | success | rc=0 >>
tcp 0 0 192.168.0.3:10250 0.0.0.0:* LISTEN 9500/kubelet
tcp6 0 0 :::46309 :::* LISTEN 9524/kube-proxy
tcp6 0 0 :::48500 :::* LISTEN 9524/kube-proxy
tcp6 0 0 :::38712 :::* LISTEN 9524/kube-proxy
192.168.0.6 | success | rc=0 >>
tcp 0 0 192.168.0.6:10250 0.0.0.0:* LISTEN 9474/kubelet
tcp6 0 0 :::52870 :::* LISTEN 9498/kube-proxy
tcp6 0 0 :::57961 :::* LISTEN 9498/kube-proxy
tcp6 0 0 :::40720 :::* LISTEN 9498/kube-proxy
Run the following commands to verify that everything is healthy:
# curl -s -L http://192.168.0.7:4001/version # check etcd
etcd 0.4.6
# curl -s -L http://192.168.0.7:8080/api/v1beta1/pods | python -m json.tool # check apiserver
{
"apiVersion": "v1beta1",
"creationTimestamp": null,
"items": [],
"kind": "PodList",
"resourceVersion": 8,
"selfLink": "/api/v1beta1/pods"
}
# curl -s -L http://192.168.0.7:8080/api/v1beta1/minions | python -m json.tool # check apiserver
# curl -s -L http://192.168.0.7:8080/api/v1beta1/services | python -m json.tool # check apiserver
# kubectl get minions
NAME
192.168.0.3
192.168.0.6
Deploying an Apache Service
First, create a Pod:
# cat ~/apache.json
{
  "id": "fedoraapache",
  "kind": "Pod",
  "apiVersion": "v1beta1",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "fedoraapache",
      "containers": [{
        "name": "fedoraapache",
        "image": "fedora/apache",
        "ports": [{
          "containerPort": 80,
          "hostPort": 80
        }]
      }]
    }
  },
  "labels": {
    "name": "fedoraapache"
  }
}
# kubectl create -f apache.json
# kubectl get pod fedoraapache
NAME IMAGE(S) HOST LABELS STATUS
fedoraapache fedora/apache 192.168.0.6/ name=fedoraapache Waiting
# # the image download is slow, so the Pod stays in Waiting for a while; once the image is pulled it starts quickly
# kubectl get pod fedoraapache
NAME IMAGE(S) HOST LABELS STATUS
fedoraapache fedora/apache 192.168.0.6/ name=fedoraapache Running
# # check the container status on 192.168.0.6
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
77dd7fe1b24f fedora/apache:latest "/run-apache.sh" 31 minutes ago Up 31 minutes k8s_fedoraapache.f14c9521_fedoraapache.default.etcd_1416396375_4114a4d0
1455249f2c7d kubernetes/pause:latest "/pause" About an hour ago Up About an hour 0.0.0.0:80->80/tcp k8s_net.e9a68336_fedoraapache.default.etcd_1416396375_11274cd2
# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
fedora/apache latest 2e11d8fd18b3 7 weeks ago 554.1 MB
kubernetes/pause latest 6c4579af347b 4 months ago 239.8 kB
# iptables-save | grep 2.2
-A DOCKER ! -i kbr0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.0.2.2:80
-A FORWARD -d 10.0.2.2/32 ! -i kbr0 -o kbr0 -p tcp -m tcp --dport 80 -j ACCEPT
# curl localhost # the Pod is up and the host port mapping works
Apache
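The DNAT rule above targets 10.0.2.2, the address this Pod received from minion 192.168.0.6's 10.0.2.0/24 pool. Because hack-network.yml installed routes for these subnets, the Pod IP should also be reachable directly from the other nodes (assuming the Pod still holds that IP):
# curl 10.0.2.2 # should return the same Apache page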
Replication Controllers
A replication controller ensures that the desired number of containers is running, to spread load and keep the service highly available:
A replication controller combines a template for pod creation (a “cookie-cutter” if you will) and a number of desired replicas, into a single API object. The replica controller also contains a label selector that identifies the set of objects managed by the replica controller. The replica controller constantly measures the size of this set relative to the desired size, and takes action by creating or deleting pods.
# cat replica.json
{
  "id": "apacheController",
  "kind": "ReplicationController",
  "apiVersion": "v1beta1",
  "labels": {"name": "fedoraapache"},
  "desiredState": {
    "replicas": 3,
    "replicaSelector": {"name": "fedoraapache"},
    "podTemplate": {
      "desiredState": {
        "manifest": {
          "version": "v1beta1",
          "id": "fedoraapache",
          "containers": [{
            "name": "fedoraapache",
            "image": "fedora/apache",
            "ports": [{
              "containerPort": 80
            }]
          }]
        }
      },
      "labels": {"name": "fedoraapache"}
    }
  }
}
# kubectl create -f replica.json
apacheController
# kubectl get replicationController
NAME IMAGE(S) SELECTOR REPLICAS
apacheController fedora/apache name=fedoraapache 3
# kubectl get pod
NAME IMAGE(S) HOST LABELS STATUS
fedoraapache fedora/apache 192.168.0.6/ name=fedoraapache Running
cf6726ae-6fed-11e4-8a06-fa163e3873e1 fedora/apache 192.168.0.3/ name=fedoraapache Running
cf679152-6fed-11e4-8a06-fa163e3873e1 fedora/apache 192.168.0.3/ name=fedoraapache Running
As you can see, three replicas are now running.
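You can confirm how the replicas are spread across the minions straight from Docker; each minion should be running its share of fedoraapache containers, plus one pause container per Pod:
# ansible minions -i inventory --vault-password-file=~/rootpassword -a 'docker ps'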
Services
With the replication controller, multiple Pods are now running. But each Pod is assigned a different IP, and those IPs may change as the system runs. So the question becomes: how do you access this service from the outside? That is exactly what a service is for.
A Kubernetes service is an abstraction which defines a logical set of pods and a policy by which to access them - sometimes called a micro-service. The goal of services is to provide a bridge for non-Kubernetes-native applications to access backends without the need to write code that is specific to Kubernetes. A service offers clients an IP and port pair which, when accessed, redirects to the appropriate backends. The set of pods targeted is determined by a label selector.
As an example, consider an image-process backend which is running with 3 live replicas. Those replicas are fungible - frontends do not care which backend they use. While the actual pods that comprise the set may change, the frontend client(s) do not need to know that. The service abstraction enables this decoupling.
Unlike pod IP addresses, which actually route to a fixed destination, service IPs are not actually answered by a single host. Instead, we use iptables (packet processing logic in Linux) to define “virtual” IP addresses which are transparently redirected as needed. We call the tuple of the service IP and the service port the portal. When clients connect to the portal, their traffic is automatically transported to an appropriate endpoint. The environment variables for services are actually populated in terms of the portal IP and port. We will be adding DNS support for services, too.
# cat service.json
{
  "id": "fedoraapache",
  "kind": "Service",
  "apiVersion": "v1beta1",
  "selector": {
    "name": "fedoraapache"
  },
  "protocol": "TCP",
  "containerPort": 80,
  "port": 8987
}
# kubectl create -f service.json
fedoraapache
# kubectl get service
NAME LABELS SELECTOR IP PORT
kubernetes-ro component=apiserver,provider=kubernetes 10.254.0.2 80
kubernetes component=apiserver,provider=kubernetes 10.254.0.1 443
fedoraapache name=fedoraapache 10.254.0.3 8987
# # run this on a minion
# curl 10.254.0.3:8987
Apache
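The portal IP 10.254.0.3 is not bound to any interface. kube-proxy implements it with iptables, redirecting the portal port to one of the local proxy ports seen in the netstat output earlier. You can look for the rule on a minion (the exact chain and rule format vary between kube-proxy versions):
# iptables-save | grep 10.254.0.3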
A service can also be given a public IP, provided a cloud provider has been configured. Currently supported cloud providers include GCE, AWS, OpenStack, oVirt, and Vagrant.
For some parts of your application (e.g. your frontend) you want to expose a service on an external (publicly visible) IP address. To achieve this, you can set the createExternalLoadBalancer flag on the service. This sets up a cloud provider specific load balancer (assuming that it is supported by your cloud provider) and also sets up IPTables rules on each host that map packets from the specified External IP address to the service proxy in the same manner as internal service IP addresses.
Note: OpenStack support is implemented using Rackspace's open-source github.com/rackspace/gophercloud library.
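As a sketch of what that looks like, here is the earlier service definition with the flag added (this only takes effect when a cloud provider is configured, so it does nothing on the bare setup above):
{
  "id": "fedoraapache",
  "kind": "Service",
  "apiVersion": "v1beta1",
  "selector": {
    "name": "fedoraapache"
  },
  "protocol": "TCP",
  "containerPort": 80,
  "port": 8987,
  "createExternalLoadBalancer": true
}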
Health Check
Currently, there are three types of application health checks that you can choose from:
- HTTP Health Checks - The Kubelet will call a web hook. If it returns between 200 and 399, it is considered success, failure otherwise.
- Container Exec - The Kubelet will execute a command inside your container. If it returns "ok" it will be considered a success.
- TCP Socket - The Kubelet will attempt to open a socket to your container. If it can establish a connection, the container is considered healthy; if it can't, it is considered a failure.
In all cases, if the Kubelet discovers a failure, the container is restarted. The container health checks are configured in the "LivenessProbe" section of your container config. There you can also specify an "initialDelaySeconds", a grace period from when the container starts to when health checks begin, giving your container time to perform any necessary initialization.
Here is an example config for a pod with an HTTP health check:
kind: Pod
apiVersion: v1beta1
desiredState:
  manifest:
    version: v1beta1
    id: php
    containers:
      - name: nginx
        image: dockerfile/nginx
        ports:
          - containerPort: 80
        # defines the health checking
        livenessProbe:
          # turn on application health checking
          enabled: true
          type: http
          # length of time to wait for a pod to initialize
          # after pod startup, before applying health checking
          initialDelaySeconds: 30
          # an http probe
          httpGet:
            path: /_status/healthz
            port: 8080
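A TCP-socket variant of the probe would look roughly like the sketch below; the tcpSocket field name is an assumption by analogy with httpGet, so verify it against the schema of the apiserver version you actually deployed:
livenessProbe:
  enabled: true
  type: tcp
  # grace period before the first check, as in the HTTP example
  initialDelaySeconds: 30
  # a tcp probe (assumed field name)
  tcpSocket:
    port: 80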
References
- https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/fedora/fedora_ansible_config.md
- https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/walkthrough
- https://cloud.google.com/container-engine/docs/services/
- https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/services.md
- https://github.com/rackspace/gophercloud
- http://wiki.mikejung.biz/index.php?title=Kubernetes