Kubernetes Controllers, Part 3
StatefulSets
StatefulSet is the workload API object used to manage stateful applications.
Note: StatefulSets are stable (GA) in 1.9.
Manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of these Pods.
Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of its Pods. These Pods are created from the same spec, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.
A StatefulSet operates under the same pattern as any other Controller. You define your desired state in a StatefulSet object, and the StatefulSet controller makes any necessary updates to get there from the current state.
- Using StatefulSets
- Limitations
- Components
- Pod Selector
- Pod Identity
- Deployment and Scaling Guarantees
- Update Strategies
Using StatefulSets
StatefulSets are valuable for applications that require one or more of the following.
- Stable, unique network identifiers.
- Stable, persistent storage.
- Ordered, graceful deployment and scaling.
- Ordered, graceful deletion and termination.
- Ordered, automated rolling updates.
In the above, stable is synonymous with persistence across Pod (re)scheduling. If an application doesn’t require any stable identifiers or ordered deployment, deletion, or scaling, you should deploy your application with a controller that provides a set of stateless replicas. Controllers such as Deployment or ReplicaSet may be better suited to your stateless needs.
Limitations
- StatefulSet was a beta resource prior to 1.9 and not available in any Kubernetes release prior to 1.5.
- As with all alpha/beta resources, you can disable StatefulSet through the --runtime-config option passed to the apiserver.
- The storage for a given Pod must either be provisioned by a PersistentVolume Provisioner based on the requested storage class, or pre-provisioned by an admin.
- Deleting and/or scaling a StatefulSet down will not delete the volumes associated with the StatefulSet. This is done to ensure data safety, which is generally more valuable than an automatic purge of all related StatefulSet resources.
- StatefulSets currently require a Headless Service to be responsible for the network identity of the Pods. You are responsible for creating this Service.
Components
The example below demonstrates the components of a StatefulSet.
- A Headless Service, named nginx, is used to control the network domain.
- The StatefulSet, named web, has a Spec that indicates that 3 replicas of the nginx container will be launched in unique Pods.
- The volumeClaimTemplates will provide stable storage using PersistentVolumes provisioned by a PersistentVolume Provisioner.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: my-storage-class
      resources:
        requests:
          storage: 1Gi
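To try the example, you can save both manifests to a single file and apply them with kubectl. This is only a minimal sketch: the filename web.yaml is an assumption, and the output depends on your cluster and storage setup.
kubectl apply -f web.yaml
# Watch the Pods come up one at a time, in ordinal order: web-0, then web-1, then web-2
kubectl get pods -w -l app=nginx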
Pod Selector
You must set the .spec.selector field of a StatefulSet to match the labels of its .spec.template.metadata.labels. Prior to Kubernetes 1.8, the .spec.selector field was defaulted when omitted. In 1.8 and later versions, failing to specify a matching Pod Selector will result in a validation error during StatefulSet creation.
Pod Identity
StatefulSet Pods have a unique identity that is comprised of an ordinal, a stable network identity, and stable storage. The identity sticks to the Pod, regardless of which node it’s (re)scheduled on.
Ordinal Index
For a StatefulSet with N replicas, each Pod in the StatefulSet will be assigned an integer ordinal, in the range [0,N), that is unique over the Set.
Stable Network ID
Each Pod in a StatefulSet derives its hostname from the name of the StatefulSet and the ordinal of the Pod. The pattern for the constructed hostname is $(statefulset name)-$(ordinal). The example above will create three Pods named web-0, web-1, and web-2. A StatefulSet can use a Headless Service to control the domain of its Pods. The domain managed by this Service takes the form $(service name).$(namespace).svc.cluster.local, where "cluster.local" is the cluster domain. As each Pod is created, it gets a matching DNS subdomain of the form $(podname).$(governing service domain), where the governing service is defined by the serviceName field on the StatefulSet.
Here are some examples of choices for Cluster Domain, Service name, and StatefulSet name, and how they affect the DNS names of the StatefulSet's Pods:
- Cluster Domain cluster.local, Service default/nginx, StatefulSet default/web: StatefulSet domain nginx.default.svc.cluster.local, Pod DNS web-{0..N-1}.nginx.default.svc.cluster.local, Pod hostnames web-{0..N-1}.
- Cluster Domain cluster.local, Service foo/nginx, StatefulSet foo/web: StatefulSet domain nginx.foo.svc.cluster.local, Pod DNS web-{0..N-1}.nginx.foo.svc.cluster.local, Pod hostnames web-{0..N-1}.
- Cluster Domain kube.local, Service foo/nginx, StatefulSet foo/web: StatefulSet domain nginx.foo.svc.kube.local, Pod DNS web-{0..N-1}.nginx.foo.svc.kube.local, Pod hostnames web-{0..N-1}.
Note that Cluster Domain will be set to cluster.local unless otherwise configured.
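To see the stable network identities in action, you can resolve a Pod's DNS name from a temporary Pod in the same namespace. This is only a sketch that assumes the nginx/web example above is running in the default namespace; the busybox image tag is an assumption.
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup web-0.nginx
# Should resolve web-0.nginx.default.svc.cluster.local to the Pod's IP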
Stable Storage
Kubernetes creates one PersistentVolume for each VolumeClaimTemplate. In the nginx example above, each Pod will receive a single PersistentVolume with a StorageClass of my-storage-class and 1 GiB of provisioned storage. If no StorageClass is specified, then the default StorageClass will be used. When a Pod is (re)scheduled onto a node, its volumeMounts mount the PersistentVolumes associated with its PersistentVolume Claims. Note that the PersistentVolumes associated with the Pods' PersistentVolume Claims are not deleted when the Pods or the StatefulSet are deleted. This must be done manually.
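The claims created from volumeClaimTemplates are named after the template and the Pod, so the example above produces www-web-0, www-web-1, and www-web-2. A quick way to check; the output shape below is assumed and the volume names will differ per cluster:
kubectl get pvc
# NAME        STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS
# www-web-0   Bound    ...      1Gi        RWO            my-storage-class
# www-web-1   Bound    ...      1Gi        RWO            my-storage-class
# www-web-2   Bound    ...      1Gi        RWO            my-storage-class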
Pod Name Label
When the StatefulSet controller creates a Pod, it adds a label, statefulset.kubernetes.io/pod-name, that is set to the name of the Pod. This label allows you to attach a Service to a specific Pod in the StatefulSet.
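As a sketch of that idea, the Service below targets only the Pod web-0 from the example by selecting on the pod-name label; the Service name web-0-direct is hypothetical.
apiVersion: v1
kind: Service
metadata:
  name: web-0-direct   # hypothetical name, for illustration only
spec:
  selector:
    statefulset.kubernetes.io/pod-name: web-0   # matches exactly one StatefulSet Pod
  ports:
  - port: 80
    name: web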
Deployment and Scaling Guarantees
- For a StatefulSet with N replicas, when Pods are being deployed, they are created sequentially, in order from {0..N-1}.
- When Pods are being deleted, they are terminated in reverse order, from {N-1..0}.
- Before a scaling operation is applied to a Pod, all of its predecessors must be Running and Ready.
- Before a Pod is terminated, all of its successors must be completely shut down.
The StatefulSet should not specify a pod.Spec.TerminationGracePeriodSeconds of 0. This practice is unsafe and strongly discouraged. For further explanation, please refer to force deleting StatefulSet Pods.
When the nginx example above is created, three Pods will be deployed in the order web-0, web-1, web-2. web-1 will not be deployed before web-0 is Running and Ready, and web-2 will not be deployed until web-1 is Running and Ready. If web-0 fails after web-1 is Running and Ready, but before web-2 is launched, web-2 will not be launched until web-0 is successfully relaunched and becomes Running and Ready.
If a user scales the deployed example by patching the StatefulSet so that replicas=1, web-2 is terminated first. web-1 will not be terminated until web-2 is fully shut down and deleted. If web-0 fails after web-2 has been terminated and is completely shut down, but prior to web-1's termination, web-1 will not be terminated until web-0 is Running and Ready.
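A hedged sketch of the scale-down described above, assuming the web StatefulSet from the example is deployed:
kubectl scale statefulset web --replicas=1
# Pods are removed in reverse ordinal order: web-2 terminates first, then web-1
kubectl get pods -w -l app=nginx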
Pod Management Policies
In Kubernetes 1.7 and later, StatefulSet allows you to relax its ordering guarantees while preserving its uniqueness and identity guarantees via its .spec.podManagementPolicy field.
OrderedReady Pod Management
OrderedReady pod management is the default for StatefulSets. It implements the behavior described above.
Parallel Pod Management
Parallel pod management tells the StatefulSet controller to launch or terminate all Pods in parallel, and to not wait for Pods to become Running and Ready or completely terminated prior to launching or terminating another Pod.
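A minimal sketch of where the field goes; this is just the top of the StatefulSet spec from the example above with the policy added (the rest of the spec is unchanged, and the field must be set at creation time):
spec:
  podManagementPolicy: Parallel   # launch and terminate all Pods in parallel
  serviceName: "nginx"
  replicas: 3
  # selector, template and volumeClaimTemplates as in the example above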
Update Strategies
In Kubernetes 1.7 and later, StatefulSet’s .spec.updateStrategy field allows you to configure and disable automated rolling updates for containers, labels, resource request/limits, and annotations for the Pods in a StatefulSet.
On Delete
The OnDelete update strategy implements the legacy (1.6 and prior) behavior. It is the default strategy for the older apps/v1beta1 API when .spec.updateStrategy is left unspecified (in apps/v1, the default is RollingUpdate). When a StatefulSet's .spec.updateStrategy.type is set to OnDelete, the StatefulSet controller will not automatically update the Pods in a StatefulSet. Users must manually delete Pods to cause the controller to create new Pods that reflect modifications made to a StatefulSet's .spec.template.
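As a sketch, switching the example StatefulSet to OnDelete and then forcing one Pod to pick up a template change might look like this:
kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"OnDelete"}}}'
# After editing .spec.template, Pods are only replaced when you delete them yourself:
kubectl delete pod web-2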
Rolling Updates
The RollingUpdate update strategy implements automated, rolling updates for the Pods in a StatefulSet. When a StatefulSet's .spec.updateStrategy.type is set to RollingUpdate, the StatefulSet controller will delete and recreate each Pod in the StatefulSet. It proceeds in the same order as Pod termination (from the largest ordinal to the smallest), updating each Pod one at a time, and waits until an updated Pod is Running and Ready before updating its predecessor.
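A hedged sketch of triggering a rolling update on the example; the newer image tag 0.9 is an assumption.
kubectl patch statefulset web --type='json' \
  -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value": "k8s.gcr.io/nginx-slim:0.9"}]'
# The controller recreates web-2, then web-1, then web-0, waiting for each to be Running and Ready
kubectl get pods -w -l app=nginx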
Partitions
The RollingUpdate strategy can be partitioned by specifying a .spec.updateStrategy.rollingUpdate.partition. If a partition is specified, all Pods with an ordinal greater than or equal to the partition will be updated when the StatefulSet's .spec.template is updated. All Pods with an ordinal less than the partition will not be updated and, even if they are deleted, will be recreated at the previous version. If a StatefulSet's .spec.updateStrategy.rollingUpdate.partition is greater than its .spec.replicas, updates to its .spec.template will not be propagated to its Pods. In most cases you will not need to use a partition, but partitions are useful if you want to stage an update, roll out a canary, or perform a phased roll out. See the sketch below.
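For example, a partition of 2 on the three-replica example stages a canary: only web-2 receives template updates, while web-0 and web-1 stay on the old revision. A sketch of the relevant fields in the StatefulSet spec:
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2   # Pods with ordinal >= 2 are updated; web-0 and web-1 keep the old template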
DaemonSets
- What is a DaemonSet?
- Writing a DaemonSet Spec
- How Daemon Pods are Scheduled
- Communicating with Daemon Pods
- Updating a DaemonSet
- Alternatives to DaemonSet
What is a DaemonSet?
A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.
Some typical uses of a DaemonSet are:
- running a cluster storage daemon, such as glusterd or ceph, on each node.
- running a logs collection daemon on every node, such as fluentd or logstash.
- running a node monitoring daemon on every node, such as Prometheus Node Exporter, collectd, Datadog agent, New Relic agent, or Ganglia gmond.
In a simple case, one DaemonSet, covering all nodes, would be used for each type of daemon. A more complex setup might use multiple DaemonSets for a single type of daemon, but with different flags and/or different memory and cpu requests for different hardware types.
Writing a DaemonSet Spec
Create a DaemonSet
You can describe a DaemonSet in a YAML file. For example, the daemonset.yaml file below describes a DaemonSet that runs the fluentd-elasticsearch Docker image:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: gcr.io/google-containers/fluentd-elasticsearch:1.20
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
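To create it, apply the file and check that one Pod lands on each eligible node. This is a sketch: the filename daemonset.yaml follows the text above, and the exact output depends on your cluster.
kubectl apply -f daemonset.yaml
# One fluentd-elasticsearch Pod per node (including masters, thanks to the toleration)
kubectl get pods -n kube-system -l name=fluentd-elasticsearch -o wide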