1. System environment

| Server OS version | Docker version | Kubernetes (k8s) cluster version | CPU architecture |
| --- | --- | --- | --- |
| CentOS Linux release 7.4.1708 (Core) | Docker version 20.10.12 | v1.21.9 | x86_64 |

Kubernetes cluster architecture: k8scloude1 serves as the master node; k8scloude2 and k8scloude3 serve as worker nodes.

| Server | OS version | CPU architecture | Processes | Role |
| --- | --- | --- | --- | --- |
| k8scloude1/192.168.110.130 | CentOS Linux release 7.4.1708 (Core) | x86_64 | docker, kube-apiserver, etcd, kube-scheduler, kube-controller-manager, kubelet, kube-proxy, coredns, calico | k8s master node |
| k8scloude2/192.168.110.129 | CentOS Linux release 7.4.1708 (Core) | x86_64 | docker, kubelet, kube-proxy, calico | k8s worker node |
| k8scloude3/192.168.110.128 | CentOS Linux release 7.4.1708 (Core) | x86_64 | docker, kubelet, kube-proxy, calico | k8s worker node |

2. Preface

This article introduces taints and tolerations, which affect how pods are scheduled.

Using taints and tolerations presumes a working Kubernetes cluster. For how to install and deploy a Kubernetes (k8s) cluster, see the blog post 《Centos7 安装部署Kubernetes(k8s)集群》: https://www.cnblogs.com/renshengdezheli/p/16686769.html

3. Taints

3.1 Taint overview

Node affinity is a property of Pods that attracts them to a set of nodes (either as a preference or a hard requirement). Taints are the opposite: they allow a node to repel a set of Pods.

3.2 Adding a taint to a node

The syntax for adding a taint to a node is shown below. This example gives node node1 a taint with key key1, value value1, and effect NoSchedule, which means no Pod can be scheduled onto node1 unless it has a matching toleration.

# Taint format: key=value:effect
kubectl taint nodes node1 key1=value1:NoSchedule
# With only a key and no value, the format is key:effect
kubectl taint nodes node1 key1:NoSchedule

The syntax for removing a taint is:

kubectl taint nodes node1 key1=value1:NoSchedule-
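The `key[=value]:effect` spec format used above, with a trailing `-` meaning removal, can be sketched as a small parser. This is a hypothetical helper for illustration only, not kubectl's real implementation:

```python
def parse_taint(spec):
    """Parse a kubectl-style taint spec 'key[=value]:effect[-]'.

    Returns (key, value, effect, remove); value is '' when omitted.
    Illustrative sketch only, not kubectl's actual parsing code.
    """
    remove = spec.endswith("-")
    if remove:
        spec = spec[:-1]          # strip the removal marker
    keyval, _, effect = spec.rpartition(":")
    key, _, value = keyval.partition("=")
    return key, value, effect, remove

# key and value present
print(parse_taint("key1=value1:NoSchedule"))   # ('key1', 'value1', 'NoSchedule', False)
# key only, no value
print(parse_taint("key1:NoSchedule"))          # ('key1', '', 'NoSchedule', False)
# trailing '-' removes the taint
print(parse_taint("key1=value1:NoSchedule-"))  # ('key1', 'value1', 'NoSchedule', True)
```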

A node's description contains a Taints field, which shows whether the node carries any taints:

[root@k8scloude1 deploy]# kubectl get nodes -o wide
NAME         STATUS   ROLES                  AGE   VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
k8scloude1   Ready    control-plane,master   8d    v1.21.0   192.168.110.130   <none>        CentOS Linux 7 (Core)   3.10.0-693.el7.x86_64   docker://20.10.12
k8scloude2   Ready    <none>                 8d    v1.21.0   192.168.110.129   <none>        CentOS Linux 7 (Core)   3.10.0-693.el7.x86_64   docker://20.10.12
k8scloude3   Ready    <none>                 8d    v1.21.0   192.168.110.128   <none>        CentOS Linux 7 (Core)   3.10.0-693.el7.x86_64   docker://20.10.12
[root@k8scloude1 deploy]# kubectl describe nodes k8scloude1
Name:               k8scloude1
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k8scloude1
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: 192.168.110.130/24
                    projectcalico.org/IPv4IPIPTunnelAddr: 10.244.158.64
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sun, 09 Jan 2022 16:19:06 +0800
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
......

Check whether the nodes have taints. `Taints: node-role.kubernetes.io/master:NoSchedule` shows that the cluster's master node carries a taint. This taint exists by default, and it is the reason application pods do not run on the master node:

[root@k8scloude1 deploy]# kubectl describe nodes k8scloude2 | grep -i Taints
Taints:             <none>
[root@k8scloude1 deploy]# kubectl describe nodes k8scloude1 | grep -i Taints
Taints:             node-role.kubernetes.io/master:NoSchedule
[root@k8scloude1 deploy]# kubectl describe nodes k8scloude3 | grep -i Taints
Taints:             <none>

Create a pod. The nodeSelector `kubernetes.io/hostname: k8scloude1` means the pod must run on a node labeled kubernetes.io/hostname=k8scloude1.

For details on pod scheduling, see the blog post 《pod(八):pod的调度——将 Pod 指派给节点》: https://www.cnblogs.com/renshengdezheli/p/16863405.html

[root@k8scloude1 pod]# vim schedulepod4.yaml 

[root@k8scloude1 pod]# cat schedulepod4.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  nodeSelector:
    kubernetes.io/hostname: k8scloude1
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

The node labeled kubernetes.io/hostname=k8scloude1 is k8scloude1:

[root@k8scloude1 pod]# kubectl get nodes -l kubernetes.io/hostname=k8scloude1
NAME STATUS ROLES AGE VERSION
k8scloude1 Ready control-plane,master 8d v1.21.0

Create the pod. Because k8scloude1 carries a taint and pod1 does not tolerate it, pod1 cannot run on k8scloude1 and its status is Pending.

[root@k8scloude1 pod]# kubectl apply -f schedulepod4.yaml
pod/pod1 created
# k8scloude1 carries a taint, so pod1 cannot run on it and stays Pending
[root@k8scloude1 pod]# kubectl get pod -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
pod1   0/1     Pending   0          9s    <none>   <none>   <none>           <none>
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted
[root@k8scloude1 pod]# kubectl get pod -o wide
No resources found in pod namespace.

4. Tolerations

4.1 Toleration overview

Tolerations are applied to Pods. A toleration allows the scheduler to place a Pod onto a node with a matching taint. Tolerations allow scheduling but do not guarantee it: the scheduler also evaluates other parameters as part of its decision.

Taints and tolerations work together to keep Pods off unsuitable nodes. One or more taints can be applied to a node; the node will not accept any Pod that does not tolerate those taints.

4.2 Setting tolerations

Only a Pod whose tolerations match a node's taints can be scheduled onto that node.

Check the taint on node k8scloude1:

[root@k8scloude1 pod]# kubectl describe nodes k8scloude1 | grep -i taint
Taints: node-role.kubernetes.io/master:NoSchedule

You can specify tolerations for a Pod in its spec. Create a pod whose tolerations field tolerates the taint node-role.kubernetes.io/master:NoSchedule, and whose nodeSelector `kubernetes.io/hostname: k8scloude1` places it on the node labeled kubernetes.io/hostname=k8scloude1.

[root@k8scloude1 pod]# vim schedulepod4.yaml 

[root@k8scloude1 pod]# cat schedulepod4.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Equal"
    value: ""
    effect: "NoSchedule"
  nodeSelector:
    kubernetes.io/hostname: k8scloude1
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl get pods -o wide
No resources found in pod namespace.
[root@k8scloude1 pod]# kubectl apply -f schedulepod4.yaml
pod/pod1 created

Check the pod: even though node k8scloude1 carries a taint, the pod runs normally on it.

The difference between a taint and cordon/drain: a node with a taint can still run pods that declare a matching toleration, whereas a node that has been cordoned or drained cannot be assigned any new pods at all.

For details on cordon and drain, see the blog post 《cordon节点,drain驱逐节点,delete 节点》: https://www.cnblogs.com/renshengdezheli/p/16860674.html

[root@k8scloude1 pod]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          4s    10.244.158.84   k8scloude1   <none>           <none>
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted
[root@k8scloude1 pod]# kubectl get pods -o wide
No resources found in pod namespace.

Note that a toleration can be written in either of two forms; use whichever one fits:

tolerations:
- key: "key1"
  operator: "Equal"
  value: "value1"
  effect: "NoSchedule"

tolerations:
- key: "key1"
  operator: "Exists"
  effect: "NoSchedule"
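The difference between the two forms is the match rule: Equal requires the taint's key, value, and effect to all match, while Exists matches any taint with the same key (and effect, if one is given), ignoring the value. A minimal sketch of this matching logic, assuming dicts that mirror the YAML fields — an illustration of the semantics, not the kube-scheduler implementation:

```python
def toleration_matches(toleration, taint):
    """Return True if a single toleration tolerates a single taint.

    Sketch of the Equal/Exists semantics only; the real logic
    lives inside kube-scheduler.
    """
    if toleration.get("key") and toleration["key"] != taint["key"]:
        return False
    # A toleration with no effect matches taints with any effect
    if toleration.get("effect") and toleration["effect"] != taint["effect"]:
        return False
    if toleration.get("operator", "Equal") == "Exists":
        return True  # Exists ignores the value entirely
    return toleration.get("value", "") == taint.get("value", "")

taint = {"key": "key1", "value": "value1", "effect": "NoSchedule"}
equal_form  = {"key": "key1", "operator": "Equal", "value": "value1", "effect": "NoSchedule"}
exists_form = {"key": "key1", "operator": "Exists", "effect": "NoSchedule"}
print(toleration_matches(equal_form, taint))   # True
print(toleration_matches(exists_form, taint))  # True
```

This also shows why the earlier master-node example works: the default taint node-role.kubernetes.io/master:NoSchedule has an empty value, so an Equal toleration with value "" (or an Exists toleration) matches it.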

Label node k8scloude2:

[root@k8scloude1 pod]# kubectl label nodes k8scloude2 taint=T
node/k8scloude2 labeled
[root@k8scloude1 pod]# kubectl get node --show-labels
NAME         STATUS   ROLES                  AGE   VERSION   LABELS
k8scloude1   Ready    control-plane,master   8d    v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8scloude1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8scloude2   Ready    <none>                 8d    v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8scloude2,kubernetes.io/os=linux,taint=T
k8scloude3   Ready    <none>                 8d    v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8scloude3,kubernetes.io/os=linux

Set a taint on k8scloude2:

# Taint format: key=value:effect
[root@k8scloude1 pod]# kubectl taint node k8scloude2 wudian=true:NoSchedule
node/k8scloude2 tainted
[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep -i Taints
Taints:             wudian=true:NoSchedule

Create a pod. The tolerations field tolerates the taint wudian=true:NoSchedule, and the nodeSelector `taint: T` places the pod on the node labeled taint=T.

[root@k8scloude1 pod]# vim schedulepod4.yaml 

[root@k8scloude1 pod]# cat schedulepod4.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  tolerations:
  - key: "wudian"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  nodeSelector:
    taint: T
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl get pod -o wide
No resources found in pod namespace.
[root@k8scloude1 pod]# kubectl apply -f schedulepod4.yaml
pod/pod1 created

Check the pod: k8scloude2 runs the pod even though the node carries a taint.

[root@k8scloude1 pod]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          8s    10.244.112.177   k8scloude2   <none>           <none>
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted
[root@k8scloude1 pod]# kubectl get pods -o wide
No resources found in pod namespace.

The other way to write the toleration is operator: "Exists", which takes no value field.

[root@k8scloude1 pod]# vim schedulepod4.yaml 

[root@k8scloude1 pod]# cat schedulepod4.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  tolerations:
  - key: "wudian"
    operator: "Exists"
    effect: "NoSchedule"
  nodeSelector:
    taint: T
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl apply -f schedulepod4.yaml
pod/pod1 created

Check the pod: k8scloude2 runs the pod even though the node carries a taint.

[root@k8scloude1 pod]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          10s   10.244.112.178   k8scloude2   <none>           <none>
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted
[root@k8scloude1 pod]# kubectl get pods -o wide
No resources found in pod namespace.

Add a second taint to k8scloude2:

[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep Taints
Taints:             wudian=true:NoSchedule
[root@k8scloude1 pod]# kubectl taint node k8scloude2 zang=shide:NoSchedule
node/k8scloude2 tainted
# grep prints only the Taints line itself; use -A1/-A2 to see the second taint
[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep Taints
Taints:             wudian=true:NoSchedule
[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep -A2 Taints
Taints:             wudian=true:NoSchedule
                    zang=shide:NoSchedule
Unschedulable:      false
[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep -A1 Taints
Taints:             wudian=true:NoSchedule
                    zang=shide:NoSchedule

Create a pod whose tolerations field tolerates both taints, wudian=true:NoSchedule and zang=shide:NoSchedule, and whose nodeSelector `taint: T` places it on the node labeled taint=T.

[root@k8scloude1 pod]# vim schedulepod4.yaml 

[root@k8scloude1 pod]# cat schedulepod4.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  tolerations:
  - key: "wudian"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  - key: "zang"
    operator: "Equal"
    value: "shide"
    effect: "NoSchedule"
  nodeSelector:
    taint: T
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl apply -f schedulepod4.yaml
pod/pod1 created

Check the pod: k8scloude2 runs the pod even with two taints, because the pod tolerates both.

[root@k8scloude1 pod]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          6s    10.244.112.179   k8scloude2   <none>           <none>
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted

Now create a pod that tolerates only wudian=true:NoSchedule, with the same nodeSelector `taint: T`.

[root@k8scloude1 pod]# vim schedulepod4.yaml 

[root@k8scloude1 pod]# cat schedulepod4.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  tolerations:
  - key: "wudian"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  nodeSelector:
    taint: T
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl apply -f schedulepod4.yaml
pod/pod1 created

Check the pod: the node carries two taints but the YAML tolerates only one, so the pod cannot be scheduled and stays Pending.

[root@k8scloude1 pod]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
pod1   0/1     Pending   0          8s    <none>   <none>   <none>           <none>
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted
[root@k8scloude1 pod]# kubectl get pods -o wide
No resources found in pod namespace.
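The failed scheduling above demonstrates the general rule: a pod is only schedulable onto a node if every taint on the node is matched by at least one of the pod's tolerations. A minimal sketch of that rule, using illustrative (key, value, effect) tuples with simple Equal-style matching:

```python
def tolerates(tolerations, taints):
    """A node accepts a pod only if every one of its taints is tolerated.

    Both arguments are lists of (key, value, effect) tuples; this uses
    plain tuple equality as a stand-in for Equal-operator matching.
    """
    return all(t in tolerations for t in taints)

node_taints = [("wudian", "true", "NoSchedule"), ("zang", "shide", "NoSchedule")]

# Tolerating only one of the two taints: the pod stays Pending
print(tolerates([("wudian", "true", "NoSchedule")], node_taints))  # False
# Tolerating both taints: the pod can be scheduled
print(tolerates([("wudian", "true", "NoSchedule"),
                 ("zang", "shide", "NoSchedule")], node_taints))   # True
```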

Remove the taints from k8scloude2:

[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep -A2 Taints
Taints:             wudian=true:NoSchedule
                    zang=shide:NoSchedule
Unschedulable:      false
# Remove the taints
[root@k8scloude1 pod]# kubectl taint node k8scloude2 zang-
node/k8scloude2 untainted
[root@k8scloude1 pod]# kubectl taint node k8scloude2 wudian-
node/k8scloude2 untainted
[root@k8scloude1 pod]# kubectl describe nodes k8scloude1 | grep -A2 Taints
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep -A2 Taints
Taints:             <none>
Unschedulable:      false
Lease:
[root@k8scloude1 pod]# kubectl describe nodes k8scloude3 | grep -A2 Taints
Taints:             <none>
Unschedulable:      false
Lease:

Tip: if you only have a single machine, you can remove the master node's default taint (for example `kubectl taint node k8scloude1 node-role.kubernetes.io/master-`), after which pods can run on the master node.
