k8s Advanced Scheduling (Part 21)
Two categories:
- Node selectors: nodeSelector (label the nodes; the pod pre-selects nodes by label) and nodeName
- Node affinity scheduling: nodeAffinity
1. Node selectors (nodeSelector, nodeName)
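The nodeSelector demo below expects node01 to already carry the disktype=ssd label. Assuming a fresh cluster where it has not been applied yet, it could be added like this (labels are arbitrary user-defined key/value pairs):
[root@master ~]# kubectl label nodes node01 disktype=ssd
node/node01 labeled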
[root@master ~]# kubectl explain pods.spec.nodeSelector
[root@master schedule]# pwd
/root/manifests/schedule
[root@master schedule]# vim pod-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    mageedu.com/created-by: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  nodeSelector:        # node selector
    disktype: ssd      # run this pod on a node carrying the disktype=ssd label
[root@master schedule]# kubectl apply -f pod-demo.yaml
pod/pod-demo created
[root@master schedule]# kubectl get pods -o wide
NAME       READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
pod-demo   1/1     Running   0          8m13s   10.244.1.6   node01   <none>           <none>
[root@master schedule]# kubectl get nodes --show-labels | grep node01
node01   Ready   <none>   76d   v1.13.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/hostname=node01
# The new pod is already running on node01, because node01 carries the disktype=ssd label.
Next, label node02, change the node selector in the manifest to match node02's label, and create the pod again:
[root@master schedule]# kubectl delete -f pod-demo.yaml
[root@master ~]# kubectl label nodes node02 disktype=harddisk
node/node02 labeled
[root@master schedule]# vim pod-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    mageedu.com/created-by: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  nodeSelector:
    disktype: harddisk
[root@master schedule]# kubectl get nodes --show-labels | grep node02
node02   Ready   <none>   76d   v1.13.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=harddisk,kubernetes.io/hostname=node02
[root@master schedule]# kubectl apply -f pod-demo.yaml
pod/pod-demo created
[root@master schedule]# kubectl get pods -o wide
NAME       READY   STATUS    RESTARTS   AGE    IP           NODE     NOMINATED NODE   READINESS GATES
pod-demo   1/1     Running   0          104s   10.244.2.5   node02   <none>           <none>
The pod is now running on node02.
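The other selector mentioned at the top, nodeName, skips label matching entirely and binds the pod straight to a named node. A minimal sketch (the pod name here is made up for illustration; it was not part of the original session):
apiVersion: v1
kind: Pod
metadata:
  name: pod-nodename-demo   # hypothetical example pod, not from the session above
  namespace: default
spec:
  nodeName: node01          # bind directly to node01; label-based selection is bypassed
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1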
2. Node affinity scheduling
[root@master scheduler]# kubectl explain pods.spec.affinity
[root@master scheduler]# kubectl explain pods.spec.affinity.nodeAffinity
preferredDuringSchedulingIgnoredDuringExecution: soft affinity; a preference, not a requirement
requiredDuringSchedulingIgnoredDuringExecution: hard affinity; the rule must be satisfied
[root@master ~]# kubectl explain pods.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.matchExpressions   # hard affinity
[root@master schedule]# vim pod-nodeaffinity-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-node-affinity-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: zone
            operator: In
            values:
            - foo
            - bar
[root@master schedule]# kubectl apply -f pod-nodeaffinity-demo.yaml
pod/pod-node-affinity-demo created
[root@master schedule]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
pod-node-affinity-demo   0/1     Pending   0          76s
# The pod is Pending because no node satisfies the hard affinity rule.
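Because the rule is hard affinity, the pod stays Pending until some node carries a zone label with value foo or bar; labeling either node should let it schedule immediately, for example:
[root@master ~]# kubectl label nodes node01 zone=foo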
Next, create a pod with soft affinity:
# With soft affinity, even if no node matches the preference, the scheduler still picks a node to run the pod.
[root@master schedule]# kubectl delete -f pod-nodeaffinity-demo.yaml
pod "pod-node-affinity-demo" deleted
[root@master schedule]# vim pod-nodeaffinity-demo2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-node-affinity-demo2
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - preference:
          matchExpressions:
          - key: zone
            operator: In
            values:
            - foo
            - bar
        weight: 60   # 1-100; the original value was lost in formatting, 60 is an illustrative choice
[root@master schedule]# kubectl apply -f pod-nodeaffinity-demo2.yaml
pod/pod-node-affinity-demo2 created
[root@master schedule]# kubectl get pods   # the pod is running now
NAME                      READY   STATUS    RESTARTS   AGE
pod-node-affinity-demo2   1/1     Running   0          74s
# pod-node-affinity-demo2 is running: because the affinity is soft, the scheduler still finds a node for the pod even though no node matches the zone preference.
3. Pod affinity scheduling
For example, in a datacenter you can label every machine in one rack so that pods have affinity for that rack at scheduling time, or label just a few machines in the rack so that pods have affinity for those specific machines (see the sketch after the field reference below).
# Inspect the manifest fields
[root@master ~]# kubectl explain pods.spec.affinity.podAffinity
FIELDS:
   preferredDuringSchedulingIgnoredDuringExecution   <[]Object>   # soft affinity
   requiredDuringSchedulingIgnoredDuringExecution    <[]Object>   # hard affinity
[root@master ~]# kubectl explain pods.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution
FIELDS:
   labelSelector   <Object>     # selects the group of pods to be co-located with
   namespaces      <[]string>   # which namespace the target pods live in; cross-namespace references are uncommon
   topologyKey     <string> -required-   # the node-label key that defines the topology domain for co-location
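A sketch of the rack idea from above, assuming a made-up rack label (neither the key nor the values below exist in this demo cluster): label the nodes with their rack, then use the rack key as topologyKey so that all nodes in one rack count as a single placement domain:
[root@master ~]# kubectl label nodes node01 rack=rack1
[root@master ~]# kubectl label nodes node02 rack=rack1
# In the pod manifest, co-locate with app=myapp pods anywhere inside the same rack:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: myapp
        topologyKey: rack   # nodes sharing the same rack value form one topology domain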
Pod hard affinity scheduling:
[root@master ~]# kubectl get nodes --show-labels
NAME     STATUS   ROLES    AGE   VERSION   LABELS
master   Ready    master   77d   v1.13.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=master,node-role.kubernetes.io/master=
node01   Ready    <none>   77d   v1.13.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/hostname=node01
node02   Ready    <none>   76d   v1.13.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=harddisk,kubernetes.io/hostname=node02
# Manifest
[root@master schedule]# vim pod-requieed-affinity-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-first
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-second
  namespace: default
  labels:
    app: backend
    tier: db
spec:
  containers:
  - name: busybox        # the leading dash marks a list item; bracket ([]) notation would also work
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["sh","-c","sleep 3600"]
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   # hard affinity
      - labelSelector:
          matchExpressions:
          - {key: app, operator: In, values: ["myapp"]}   # this pod must be placed with a pod labeled app=myapp (matching pod-first's metadata labels above)
        topologyKey: kubernetes.io/hostname   # the topology domain is the node, keyed by kubernetes.io/hostname
# Create
[root@master schedule]# kubectl apply -f pod-requieed-affinity-demo.yaml
pod/pod-first created
pod/pod-second created
[root@master schedule]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE     IP            NODE     NOMINATED NODE   READINESS GATES
pod-first    1/1     Running   0          3m25s   10.244.2.9    node02   <none>           <none>
pod-second   1/1     Running   0          3m25s   10.244.2.10   node02   <none>           <none>
# Both pods run on the same node: pod-second's affinity rule ties it to pod-first, so it follows pod-first's placement.
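podAffinity also has a soft form. As a sketch, pod-second's rule rewritten as a preference rather than a requirement would look like this (the weight value is an illustrative choice); the pod would then still schedule even if no app=myapp pod existed:
  affinity:
    podAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 50                      # 1-100; higher weights count more during node scoring
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - {key: app, operator: In, values: ["myapp"]}
          topologyKey: kubernetes.io/hostname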
4. Pod anti-affinity scheduling
[root@master ~]# kubectl explain pods.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution.labelSelector
FIELDS:
   matchExpressions   <[]Object>
   matchLabels        <map[string]string>
[root@master schedule]# kubectl delete -f pod-requieed-affinity-demo.yaml   # remove the previous pods
# Manifest
[root@master schedule]# vim pod-requieed-Anti-affinity-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-first
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-second
  namespace: default
  labels:
    app: backend
    tier: db
spec:
  containers:
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["sh","-c","sleep 3600"]
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: app, operator: In, values: ["myapp"]}
        topologyKey: kubernetes.io/hostname
# Create
[root@master schedule]# kubectl apply -f pod-requieed-Anti-affinity-demo.yaml
pod/pod-first created
pod/pod-second created
[root@master schedule]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
pod-first    1/1     Running   0          53s   10.244.1.7    node01   <none>           <none>
pod-second   1/1     Running   0          53s   10.244.2.11   node02   <none>           <none>
# With anti-affinity, pod-first and pod-second are never scheduled onto the same node.
Now give both nodes the same label. Because the scheduling policy is podAntiAffinity, pod-first and pod-second cannot both run within the topology domain defined by the zone label. The expected outcome: pod-first runs, while pod-second, being anti-affine to it, has no eligible node left and stays Pending.
# Apply the same label to both nodes
[root@master ~]# kubectl label nodes node01 zone=foo
node/node01 labeled
[root@master ~]# kubectl label nodes node02 zone=foo
node/node02 labeled
[root@master schedule]# kubectl delete -f pod-requieed-Anti-affinity-demo.yaml   # remove the previous pods
# Manifest
[root@master schedule]# vim pod-requieed-Anti-affinity-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-first
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-second
  namespace: default
  labels:
    app: backend
    tier: db
spec:
  containers:
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["sh","-c","sleep 3600"]
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: app, operator: In, values: ["myapp"]}
        topologyKey: zone   # the topology key is now the zone node label
# Create
[root@master schedule]# kubectl apply -f pod-requieed-Anti-affinity-demo.yaml
pod/pod-first created
pod/pod-second created
[root@master schedule]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
pod-first    1/1     Running   0          4s    10.244.2.12   node02   <none>           <none>
pod-second   0/1     Pending   0          4s    <none>        <none>   <none>           <none>
# As expected, pod-first runs, while pod-second is anti-affine to it within the zone domain, has no eligible node, and stays Pending.
5. Taint-based scheduling
Taint-based scheduling reverses the direction of choice: the node picks which pods may run on it. Taints are set on nodes; tolerations are set on pods.
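A familiar built-in case of this mechanism: on a kubeadm cluster such as this one, the master is tainted so that ordinary pods stay off it. Checking it should show something like:
[root@master ~]# kubectl describe node master | grep Taints
Taints:             node-role.kubernetes.io/master:NoSchedule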
Taint definition:
[root@master ~]# kubectl explain nodes.spec.taints   # taints: defines the node's taints
FIELDS:
   effect   <string> -required-   # what happens to pods that cannot tolerate the taint; three possible behaviors:
   - NoSchedule: affects only the scheduling process, not existing pods. Intolerant pods are no longer scheduled here, but adding the taint leaves pods already on the node untouched.
   - NoExecute: affects both scheduling and existing pods. Intolerant pods are no longer scheduled here, and pods already on the node that cannot tolerate the new taint are evicted.
   - PreferNoSchedule: a soft NoSchedule. Intolerant pods should not be scheduled here, but may be if there is no other choice; existing pods are untouched.
   key   <string> -required-
   timeAdded   <string>
   value   <string>
# Check the nodes' taints
[root@master ~]# kubectl describe node node01 | grep Taints
Taints:             <none>
[root@master ~]# kubectl describe node node02 | grep Taints
Taints:             <none>
# Check a pod's tolerations
[root@master ~]# kubectl describe pods kube-apiserver-master -n kube-system | grep Tolerations
Tolerations:       :NoExecute
# Syntax for tainting a node
[root@master ~]# kubectl taint -h | grep -A 1 Usage
Usage:
  kubectl taint NODE NAME KEY_1=VAL_1:TAINT_EFFECT_1 ... KEY_N=VAL_N:TAINT_EFFECT_N [options]
Taints and tolerations are both user-defined key/value pairs.
Now taint node01 with node-type=production:NoSchedule:
[root@master ~]# kubectl taint node node01 node-type=production:NoSchedule
node/node01 tainted
# The manifest below defines no tolerations; since node01 is tainted, every pod should land on node02.
[root@master schedule]# vim deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 3          # 3 replicas (matches the three pods listed below)
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v2
        ports:
        - name: http
          containerPort: 80   # the myapp demo image serves HTTP on port 80
# Create
[root@master schedule]# kubectl apply -f deploy-demo.yaml
deployment.apps/myapp-deploy created
# All pods land on node02
[root@master schedule]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
myapp-deploy-6b56d98b6b-52hth   1/1     Running   0          9s    10.244.2.15   node02   <none>           <none>
myapp-deploy-6b56d98b6b-dr224   1/1     Running   0          9s    10.244.2.14   node02   <none>           <none>
myapp-deploy-6b56d98b6b-z278x   1/1     Running   0          9s    10.244.2.13   node02   <none>           <none>
Toleration definition:
[root@master ~]# kubectl explain pods.spec.tolerations
FIELDS:
   effect   <string>
   key   <string>
   operator   <string>   # two values: Exists means the pod tolerates the taint as long as the node has the taint's key, whatever the value; Equal means the taint's key and value must both match exactly
   tolerationSeconds   <integer>   # grace period before an intolerant pod is evicted
   value   <string>
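A sketch of how tolerationSeconds combines with a NoExecute taint (the values below are illustrative, reusing this section's node-type key): the pod tolerates the taint, but only for 60 seconds after a matching taint appears, after which it is evicted. tolerationSeconds is only meaningful with effect NoExecute:
tolerations:
- key: "node-type"
  operator: "Equal"
  value: "production"
  effect: "NoExecute"
  tolerationSeconds: 60   # evicted 60s after a matching NoExecute taint is added to the node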
[root@master ~]# kubectl taint node node02 node-type=dev:NoExecute   # add a different taint to node02
node/node02 tainted
[root@master schedule]# kubectl delete -f deploy-demo.yaml
# Manifest
[root@master schedule]# vim deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v2
        ports:
        - name: http
          containerPort: 80
      tolerations:
      - key: "node-type"
        operator: "Equal"        # the taint's key and value must match exactly
        value: "production"
        effect: "NoSchedule"
# Create the pods
[root@master schedule]# kubectl apply -f deploy-demo.yaml
deployment.apps/myapp-deploy created
[root@master schedule]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
myapp-deploy-779c578779-5vkbw   1/1     Running   0          12s   10.244.1.12   node01   <none>           <none>
myapp-deploy-779c578779-bh9td   1/1     Running   0          12s   10.244.1.11   node01   <none>           <none>
myapp-deploy-779c578779-dn52p   1/1     Running   0          12s   10.244.1.13   node01   <none>           <none>
# All pods now run on node01, because they tolerate node01's taint (but not node02's NoExecute taint).
Next, change operator: "Equal" to operator: "Exists".
With Exists, the pod tolerates the taint as long as the node has the taint's key, whatever the value.
[root@master schedule]# kubectl delete -f deploy-demo.yaml
[root@master schedule]# vim deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v2
        ports:
        - name: http
          containerPort: 80
      tolerations:
      - key: "node-type"
        operator: "Exists"
        value: ""
        effect: ""        # an empty effect tolerates all effects
# Create
[root@master schedule]# kubectl apply -f deploy-demo.yaml
deployment.apps/myapp-deploy created
[root@master schedule]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
myapp-deploy-69b95476c8-bfpgj   1/1     Running   0          13s   10.244.2.20   node02   <none>           <none>
myapp-deploy-69b95476c8-fhwbd   1/1     Running   0          13s   10.244.1.17   node01   <none>           <none>
myapp-deploy-69b95476c8-tzzlx   1/1     Running   0          13s   10.244.2.19   node02   <none>           <none>
# Pods now run on both node01 and node02: with an empty effect, every effect of the node-type taint is tolerated.
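Taken to the extreme, this is how system pods such as kube-apiserver tolerate everything (compare the "Tolerations: :NoExecute" output earlier): an empty key with operator Exists matches every taint key. A minimal sketch:
tolerations:
- operator: "Exists"   # empty key + Exists tolerates every taint on any node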
Finally, the taints can be removed from the nodes:
# Removing a taint: a trailing minus deletes all effects under the given key
[root@master ~]# kubectl taint node node02 node-type-
node/node02 untainted
[root@master ~]# kubectl taint node node01 node-type-
node/node01 untainted
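The trailing minus is what removes the taint; node-type- strips every effect under that key. To remove only a single effect, the effect can be named explicitly (a hypothetical command, not run in this session):
[root@master ~]# kubectl taint node node01 node-type:NoSchedule-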