In the previous post we covered the use of CRD resources on k8s; for a refresher, see https://www.cnblogs.com/qiuhom-1874/p/14267400.html. Today we look at the second extension mechanism of k8s, the custom apiserver, along with the related APIService resource.

  Before we get into custom apiservers, let's first look at the native k8s apiserver. The apiserver is essentially an HTTPS server: with the kubectl tool we issue HTTPS requests against it to create, delete, and query resources. Each operation maps to a RESTful request method, and each resource maps to a URL path in the HTTP request. For example, to create a pod, kubectl POSTs the resource definition to the apiserver, and the pod corresponds to a pod resource in a specific namespace under a specific version of a specific group.

  How the apiserver organizes resources

  Tip: clients access the apiserver with paths organized as in the figure above. For example, a pod in the default namespace is reached at /api/v1/namespaces/default/pods/mypod; the core group is served under /api, while every other group is served under /apis/<group>/<version>/.... The resource portion of the path carries both the namespace and the resource type.
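  To see this mapping concretely, we can ask kubectl to issue a raw request against an apiserver path. A minimal sketch; the pod name mypod is hypothetical and only illustrates the path layout:

  # core-group resources live under /api/v1/...
  kubectl get --raw /api/v1/namespaces/default/pods/mypod
  # all other groups live under /apis/<group>/<version>/...
  kubectl get --raw /apis/apps/v1/namespaces/default/deployments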

  Components of the native k8s apiserver

  The native k8s apiserver consists of two components. The first, the aggregator, works like a web proxy server; the second is the apiserver proper. The workflow: a user request first reaches the aggregator, which routes it to an apiserver based on the resource being requested. In short, the aggregator's job is to route user requests; by default it routes everything to the native apiserver. If we want a custom apiserver, we must register it with the aggregator by creating an APIService resource on the native apiserver, so that matching user requests get routed to the custom apiserver for a response, as shown in the figure below.

  Tip: the apiserver is the sole entry point into k8s; by default every client operation is sent to it. For a custom apiserver to be reachable, the aggregator component inside the built-in apiserver must hold routing information for it, and that routing information is defined by the built-in APIService resource. Simply put, an APIService object defines a route on the aggregator that forwards access to a given group/version endpoint to the corresponding apiserver.
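  Each route on the aggregator is visible as an APIService object, so we can inspect them directly; built-in groups should show Local in the SERVICE column, since they are answered by the native apiserver itself rather than a backing Service:

  kubectl get apiservice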

  View the group/version information on the native apiserver

  [root@master01 ~]# kubectl api-versions
  admissionregistration.k8s.io/v1
  admissionregistration.k8s.io/v1beta1
  apiextensions.k8s.io/v1
  apiextensions.k8s.io/v1beta1
  apiregistration.k8s.io/v1
  apiregistration.k8s.io/v1beta1
  apps/v1
  authentication.k8s.io/v1
  authentication.k8s.io/v1beta1
  authorization.k8s.io/v1
  authorization.k8s.io/v1beta1
  autoscaling/v1
  autoscaling/v2beta1
  autoscaling/v2beta2
  batch/v1
  batch/v1beta1
  certificates.k8s.io/v1
  certificates.k8s.io/v1beta1
  coordination.k8s.io/v1
  coordination.k8s.io/v1beta1
  crd.projectcalico.org/v1
  discovery.k8s.io/v1beta1
  events.k8s.io/v1
  events.k8s.io/v1beta1
  extensions/v1beta1
  flowcontrol.apiserver.k8s.io/v1beta1
  mongodb.com/v1
  networking.k8s.io/v1
  networking.k8s.io/v1beta1
  node.k8s.io/v1
  node.k8s.io/v1beta1
  policy/v1beta1
  rbac.authorization.k8s.io/v1
  rbac.authorization.k8s.io/v1beta1
  scheduling.k8s.io/v1
  scheduling.k8s.io/v1beta1
  stable.example.com/v1
  storage.k8s.io/v1
  storage.k8s.io/v1beta1
  v1
  [root@master01 ~]#

  Tip: only the group/versions listed above can be accessed by clients; a group/version that does not appear in this list is simply unreachable.
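  The same list comes from the apiserver's discovery endpoints; for the curious, the raw JSON behind kubectl api-versions can be fetched like this:

  # APIGroupList for all named groups
  kubectl get --raw /apis
  # versions of the core (legacy) group, which appears above simply as "v1"
  kubectl get --raw /api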

  Using the APIService resource

  Example: create an APIService resource

  [root@master01 ~]# cat apiservice-demo.yaml
  apiVersion: apiregistration.k8s.io/v1
  kind: APIService
  metadata:
    name: v2beta1.auth.ilinux.io
  spec:
    insecureSkipTLSVerify: true
    group: auth.ilinux.io
    groupPriorityMinimum: 1000
    versionPriority: 15
    service:
      name: auth-api
      namespace: default
    version: v2beta1
  [root@master01 ~]#

  Tip: the APIService resource belongs to the apiregistration.k8s.io/v1 group and its kind is APIService. spec.insecureSkipTLSVerify controls whether TLS verification is skipped, i.e., whether the HTTPS certificate is checked: true skips verification, false enforces it. group names the API group of the corresponding custom apiserver; groupPriorityMinimum sets the group's priority, and versionPriority the priority of this version within the group. service points requests for this group/version at a Service, namely the Service fronting the custom apiserver, and version states which version of the group this apiserver serves. The manifest above registers the endpoint auth.ilinux.io/v2beta1 on the aggregator, backed by the auth-api Service in the default namespace; any client access to resources under auth.ilinux.io/v2beta1 will be routed to that Service for a response.
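  The field documentation for APIService is also available in-cluster via kubectl explain, which is handy for checking field names and types before writing a manifest:

  kubectl explain apiservice.spec
  kubectl explain apiservice.spec.service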

  Apply the manifest

  [root@master01 ~]# kubectl apply -f apiservice-demo.yaml
  apiservice.apiregistration.k8s.io/v2beta1.auth.ilinux.io created
  [root@master01 ~]# kubectl get apiservice |grep auth.ilinux.io
  v2beta1.auth.ilinux.io                 default/auth-api   False (ServiceNotFound)   16s
  [root@master01 ~]# kubectl api-versions |grep auth.ilinux.io
  auth.ilinux.io/v2beta1
  [root@master01 ~]#

  Tip: after applying the manifest, the corresponding endpoint shows up in api-versions.
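  If we probe the newly registered endpoint directly, the request is expected to fail, since nothing actually answers behind it yet:

  # routed to the default/auth-api Service, which does not exist, so an error is expected
  kubectl get --raw /apis/auth.ilinux.io/v2beta1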

  The manifest above only demonstrates how an APIService resource is used; applying it has no practical effect, because there is neither a matching Service in that namespace nor an actual custom apiserver behind it. In practice, the APIService resource is what integrates a custom apiserver into the native apiserver, so next let's deploy a genuine custom apiserver.

  Deploy metrics-server

  metrics-server is a third-party apiserver that extends k8s. Its job is to collect metrics such as CPU, memory, and storage from pods and nodes, and to provide an API for the kubectl top command to consume. Out of the box, kubectl top does not work, because the stock apiserver has no endpoint that serves pod/node CPU, memory, and storage core metrics. kubectl top displays the CPU, memory, and storage usage of pods and nodes, and it depends on the Metrics API to function.

  Without metrics-server deployed, try kubectl top pod/node to view pod or node CPU and memory usage:

  [root@master01 ~]# kubectl top
  Display Resource (CPU/Memory/Storage) usage.

  The top command allows you to see the resource consumption for nodes or pods.

  This command requires Metrics Server to be correctly configured and working on the server.

  Available Commands:
    node        Display Resource (CPU/Memory/Storage) usage of nodes
    pod         Display Resource (CPU/Memory/Storage) usage of pods

  Usage:
    kubectl top [flags] [options]

  Use "kubectl <command> --help" for more information about a given command.
  Use "kubectl options" for a list of global command-line options (applies to all commands).
  [root@master01 ~]# kubectl top pod
  error: Metrics API not available
  [root@master01 ~]# kubectl top node
  error: Metrics API not available
  [root@master01 ~]#

  Tip: with no metrics-server deployed, kubectl top pod/node reports that the Metrics API is not available.

  Deploying metrics-server

  Download the deployment manifest

  [root@master01 ~]# mkdir metrics-server
  [root@master01 ~]# cd metrics-server
  [root@master01 metrics-server]# wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.4.0/components.yaml
  --2021-01-14 23:54:30-- https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.4.0/components.yaml
  Resolving github.com (github.com)... 52.74.223.119
  Connecting to github.com (github.com)|52.74.223.119|:443... connected.
  HTTP request sent, awaiting response... 302 Found
  Location: https://github-production-release-asset-2e65be.s3.amazonaws.com/92132038/c700f080-1f7e-11eb-9e30-864a63f442f4?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20210114%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210114T155432Z&X-Amz-Expires=300&X-Amz-Signature=fc5a6f41ca50ec22e87074a778d2cb35e716ae6c3231afad17dfaf8a02203e35&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=92132038&response-content-disposition=attachment%3B%20filename%3Dcomponents.yaml&response-content-type=application%2Foctet-stream [following]
  --2021-01-14 23:54:32-- https://github-production-release-asset-2e65be.s3.amazonaws.com/92132038/c700f080-1f7e-11eb-9e30-864a63f442f4?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20210114%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210114T155432Z&X-Amz-Expires=300&X-Amz-Signature=fc5a6f41ca50ec22e87074a778d2cb35e716ae6c3231afad17dfaf8a02203e35&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=92132038&response-content-disposition=attachment%3B%20filename%3Dcomponents.yaml&response-content-type=application%2Foctet-stream
  Resolving github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)... 52.217.39.44
  Connecting to github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)|52.217.39.44|:443... connected.
  HTTP request sent, awaiting response... 200 OK
  Length: 3962 (3.9K) [application/octet-stream]
  Saving to: components.yaml

  100%[===========================================================================================>] 3,962 11.0KB/s in 0.4s

  2021-01-14 23:54:35 (11.0 KB/s) - components.yaml saved [3962/3962]

  [root@master01 metrics-server]# ls
  components.yaml
  [root@master01 metrics-server]#

  Edit the deployment manifest

  [root@master01 metrics-server]# cat components.yaml
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    labels:
      k8s-app: metrics-server
    name: metrics-server
    namespace: kube-system
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    labels:
      k8s-app: metrics-server
      rbac.authorization.k8s.io/aggregate-to-admin: "true"
      rbac.authorization.k8s.io/aggregate-to-edit: "true"
      rbac.authorization.k8s.io/aggregate-to-view: "true"
    name: system:aggregated-metrics-reader
  rules:
  - apiGroups:
    - metrics.k8s.io
    resources:
    - pods
    - nodes
    verbs:
    - get
    - list
    - watch
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    labels:
      k8s-app: metrics-server
    name: system:metrics-server
  rules:
  - apiGroups:
    - ""
    resources:
    - pods
    - nodes
    - nodes/stats
    - namespaces
    - configmaps
    verbs:
    - get
    - list
    - watch
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    labels:
      k8s-app: metrics-server
    name: metrics-server-auth-reader
    namespace: kube-system
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: extension-apiserver-authentication-reader
  subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    labels:
      k8s-app: metrics-server
    name: metrics-server:system:auth-delegator
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: system:auth-delegator
  subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    labels:
      k8s-app: metrics-server
    name: system:metrics-server
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: system:metrics-server
  subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
  ---
  apiVersion: v1
  kind: Service
  metadata:
    labels:
      k8s-app: metrics-server
    name: metrics-server
    namespace: kube-system
  spec:
    ports:
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
    selector:
      k8s-app: metrics-server
  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    labels:
      k8s-app: metrics-server
    name: metrics-server
    namespace: kube-system
  spec:
    selector:
      matchLabels:
        k8s-app: metrics-server
    strategy:
      rollingUpdate:
        maxUnavailable: 0
    template:
      metadata:
        labels:
          k8s-app: metrics-server
      spec:
        containers:
        - args:
          - --cert-dir=/tmp
          - --secure-port=4443
          - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
          - --kubelet-use-node-status-port
          - --kubelet-insecure-tls
          image: k8s.gcr.io/metrics-server/metrics-server:v0.4.0
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /livez
              port: https
              scheme: HTTPS
            periodSeconds: 10
          name: metrics-server
          ports:
          - containerPort: 4443
            name: https
            protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /readyz
              port: https
              scheme: HTTPS
            periodSeconds: 10
          securityContext:
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 1000
          volumeMounts:
          - mountPath: /tmp
            name: tmp-dir
        nodeSelector:
          kubernetes.io/os: linux
        priorityClassName: system-cluster-critical
        serviceAccountName: metrics-server
        volumes:
        - emptyDir: {}
          name: tmp-dir
  ---
  apiVersion: apiregistration.k8s.io/v1
  kind: APIService
  metadata:
    labels:
      k8s-app: metrics-server
    name: v1beta1.metrics.k8s.io
  spec:
    group: metrics.k8s.io
    groupPriorityMinimum: 100
    insecureSkipTLSVerify: true
    service:
      name: metrics-server
      namespace: kube-system
    version: v1beta1
    versionPriority: 100
  [root@master01 metrics-server]#

  Tip: in the Deployment, we add the --kubelet-insecure-tls option to spec.template.spec.containers[].args, which tells metrics-server not to verify the kubelet's serving certificate. Overall, the manifest runs metrics-server as a pod via a Deployment, grants the metrics-server ServiceAccount read-only access to pod/node resources, and registers metrics.k8s.io/v1beta1 on the native apiserver so that client access to resources under metrics.k8s.io is routed to the metrics-server Service for a response.
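  For reference, a more secure sketch would verify the kubelet serving certificates instead of skipping TLS verification; metrics-server supports a --kubelet-certificate-authority flag for this. The mount path /ca/ca.crt and the mechanism for getting the CA into the pod are assumptions here, not part of the stock manifest:

        containers:
        - args:
          - --cert-dir=/tmp
          - --secure-port=4443
          - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
          - --kubelet-use-node-status-port
          # verify kubelet certs against this CA instead of --kubelet-insecure-tls;
          # /ca/ca.crt is a hypothetical path supplied by an extra volume mount
          - --kubelet-certificate-authority=/ca/ca.crt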

  Apply the resource manifest

  [root@master01 metrics-server]# kubectl apply -f components.yaml
  serviceaccount/metrics-server created
  clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
  clusterrole.rbac.authorization.k8s.io/system:metrics-server created
  rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
  clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
  clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
  service/metrics-server created
  deployment.apps/metrics-server created
  apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
  [root@master01 metrics-server]#

  Verify: does the native apiserver now expose metrics.k8s.io/v1beta1?

  [root@master01 metrics-server]# kubectl api-versions|grep metrics
  metrics.k8s.io/v1beta1
  [root@master01 metrics-server]#

  Tip: the metrics.k8s.io/v1beta1 group has been registered on the native apiserver.

  Check whether the metrics-server pod is running properly:

  [root@master01 metrics-server]# kubectl get pods -n kube-system
  NAME                                       READY   STATUS    RESTARTS   AGE
  calico-kube-controllers-744cfdf676-kh6rm   1/1     Running   4          5d7h
  canal-5bt88                                2/2     Running   20         11d
  canal-9ldhl                                2/2     Running   22         11d
  canal-fvts7                                2/2     Running   20         11d
  canal-mwtg4                                2/2     Running   23         11d
  canal-rt8nn                                2/2     Running   21         11d
  coredns-7f89b7bc75-k9gdt                   1/1     Running   32         37d
  coredns-7f89b7bc75-kp855                   1/1     Running   31         37d
  etcd-master01.k8s.org                      1/1     Running   36         37d
  kube-apiserver-master01.k8s.org            1/1     Running   14         13d
  kube-controller-manager-master01.k8s.org   1/1     Running   43         37d
  kube-flannel-ds-fnd2w                      1/1     Running   5          5d5h
  kube-flannel-ds-k9l4k                      1/1     Running   7          5d5h
  kube-flannel-ds-s7w2j                      1/1     Running   4          5d5h
  kube-flannel-ds-vm4mr                      1/1     Running   6          5d5h
  kube-flannel-ds-zgq92                      1/1     Running   37         37d
  kube-proxy-74fxn                           1/1     Running   10         10d
  kube-proxy-fbl6c                           1/1     Running   8          10d
  kube-proxy-n82sf                           1/1     Running   10         10d
  kube-proxy-ndww5                           1/1     Running   11         10d
  kube-proxy-v8dhk                           1/1     Running   11         10d
  kube-scheduler-master01.k8s.org            1/1     Running   39         37d
  metrics-server-58fcfcc9d-drbw2             1/1     Running   0          32s
  [root@master01 metrics-server]#

  Tip: the pod is up and running normally.

  Check whether the pod's logs look normal:

  [root@master01 metrics-server]# kubectl logs metrics-server-58fcfcc9d-drbw2 -n kube-system
  I0114 17:52:03.601493 1 serving.go:325] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
  E0114 17:52:04.140587 1 pathrecorder.go:107] registered "/metrics" from goroutine 1 [running]:
  runtime/debug.Stack(0x1942e80, 0xc00069aed0, 0x1bb58b5)
          /usr/local/go/src/runtime/debug/stack.go:24 +0x9d
  k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).trackCallers(0xc00028afc0, 0x1bb58b5, 0x8)
          /go/pkg/mod/k8s.io/apiserver@v0.19.2/pkg/server/mux/pathrecorder.go:109 +0x86
  k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).Handle(0xc00028afc0, 0x1bb58b5, 0x8, 0x1e96f00, 0xc0006d88d0)
          /go/pkg/mod/k8s.io/apiserver@v0.19.2/pkg/server/mux/pathrecorder.go:173 +0x84
  k8s.io/apiserver/pkg/server/routes.MetricsWithReset.Install(0xc00028afc0)
          /go/pkg/mod/k8s.io/apiserver@v0.19.2/pkg/server/routes/metrics.go:43 +0x5d
  k8s.io/apiserver/pkg/server.installAPI(0xc00000a1e0, 0xc000589b00)
          /go/pkg/mod/k8s.io/apiserver@v0.19.2/pkg/server/config.go:711 +0x6c
  k8s.io/apiserver/pkg/server.completedConfig.New(0xc000589b00, 0x1f099c0, 0xc0001449b0, 0x1bbdb5a, 0xe, 0x1ef29e0, 0x2cef248, 0x0, 0x0, 0x0)
          /go/pkg/mod/k8s.io/apiserver@v0.19.2/pkg/server/config.go:657 +0xb45
  sigs.k8s.io/metrics-server/pkg/server.Config.Complete(0xc000589b00, 0xc000599440, 0xc000599b00, 0xdf8475800, 0xc92a69c00, 0x0, 0x0, 0xdf8475800)
          /go/src/sigs.k8s.io/metrics-server/pkg/server/config.go:52 +0x312
  sigs.k8s.io/metrics-server/cmd/metrics-server/app.runCommand(0xc00001c6e0, 0xc000114600, 0x0, 0x0)
          /go/src/sigs.k8s.io/metrics-server/cmd/metrics-server/app/start.go:66 +0x157
  sigs.k8s.io/metrics-server/cmd/metrics-server/app.NewMetricsServerCommand.func1(0xc0000d9340, 0xc0005a4cd0, 0x0, 0x5, 0x0, 0x0)
          /go/src/sigs.k8s.io/metrics-server/cmd/metrics-server/app/start.go:37 +0x33
  github.com/spf13/cobra.(*Command).execute(0xc0000d9340, 0xc00013a130, 0x5, 0x5, 0xc0000d9340, 0xc00013a130)
          /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:842 +0x453
  github.com/spf13/cobra.(*Command).ExecuteC(0xc0000d9340, 0xc00013a180, 0x0, 0x0)
          /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:950 +0x349
  github.com/spf13/cobra.(*Command).Execute(...)
          /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:887
  main.main()
          /go/src/sigs.k8s.io/metrics-server/cmd/metrics-server/metrics-server.go:38 +0xae
  I0114 17:52:04.266492 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
  I0114 17:52:04.267021 1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
  I0114 17:52:04.266641 1 secure_serving.go:197] Serving securely on [::]:4443
  I0114 17:52:04.266670 1 tlsconfig.go:240] Starting DynamicServingCertificateController
  I0114 17:52:04.266682 1 dynamic_serving_content.go:130] Starting serving-cert::/tmp/apiserver.crt::/tmp/apiserver.key
  I0114 17:52:04.266688 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
  I0114 17:52:04.267120 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
  I0114 17:52:04.266692 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
  I0114 17:52:04.267301 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
  I0114 17:52:04.367448 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
  I0114 17:52:04.367472 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
  I0114 17:52:04.367462 1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController
  [root@master01 metrics-server]#

  Tip: as long as the metrics-server pod shows no real error logs or registration failures, the container inside the pod is running fine; the 'registered "/metrics"' stack trace above is emitted once at startup and does not stop the server from serving securely, as the subsequent log lines show.
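  Besides kubectl top, we can query the Metrics API directly to confirm the aggregator is routing metrics.k8s.io to metrics-server; these are the standard list endpoints of the v1beta1 Metrics API:

  # NodeMetricsList / PodMetricsList as raw JSON
  kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
  kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods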

  Verify: use kubectl top to view pod CPU and memory usage and confirm the command now works:

  [root@master01 metrics-server]# kubectl top node
  NAME               CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
  master01.k8s.org   235m         11%    1216Mi          70%
  node01.k8s.org     140m         3%     747Mi           20%
  node02.k8s.org     120m         3%     625Mi           17%
  node03.k8s.org     133m         3%     594Mi           16%
  node04.k8s.org     125m         3%     700Mi           19%
  [root@master01 metrics-server]# kubectl top pods -n kube-system
  NAME                                       CPU(cores)   MEMORY(bytes)
  calico-kube-controllers-744cfdf676-kh6rm   2m           23Mi
  canal-5bt88                                50m          118Mi
  canal-9ldhl                                22m          86Mi
  canal-fvts7                                49m          106Mi
  canal-mwtg4                                57m          113Mi
  canal-rt8nn                                56m          113Mi
  coredns-7f89b7bc75-k9gdt                   3m           12Mi
  coredns-7f89b7bc75-kp855                   3m           15Mi
  etcd-master01.k8s.org                      25m          72Mi
  kube-apiserver-master01.k8s.org            99m          410Mi
  kube-controller-manager-master01.k8s.org   14m          88Mi
  kube-flannel-ds-fnd2w                      3m           45Mi
  kube-flannel-ds-k9l4k                      3m           27Mi
  kube-flannel-ds-s7w2j                      4m           46Mi
  kube-flannel-ds-vm4mr                      3m           45Mi
  kube-flannel-ds-zgq92                      2m           19Mi
  kube-proxy-74fxn                           1m           27Mi
  kube-proxy-fbl6c                           1m           23Mi
  kube-proxy-n82sf                           1m           25Mi
  kube-proxy-ndww5                           1m           25Mi
  kube-proxy-v8dhk                           2m           23Mi
  kube-scheduler-master01.k8s.org            3m           33Mi
  metrics-server-58fcfcc9d-drbw2             6m           23Mi
  [root@master01 metrics-server]#

  Tip: kubectl top now runs normally, which tells us metrics-server has been deployed successfully.
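  As a small usage note, kubectl top pod also takes a --containers flag to break the usage down per container:

  kubectl top pods -n kube-system --containers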

  That is how an APIService resource combined with a custom apiserver extends k8s. To summarize, the APIService resource's main job is to create routing information on the aggregator, and that routing information forwards access to a given endpoint to the Service backing the corresponding custom apiserver for a response.
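  A typical consumer of this Metrics API is the HorizontalPodAutoscaler: once metrics.k8s.io is served, an HPA can scale workloads on observed CPU or memory usage. A minimal sketch, assuming a Deployment named myapp exists (the name and targets are illustrative only):

  apiVersion: autoscaling/v2beta2
  kind: HorizontalPodAutoscaler
  metadata:
    name: myapp-hpa
  spec:
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: myapp              # hypothetical Deployment to scale
    minReplicas: 1
    maxReplicas: 5
    metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # scale out when average CPU use exceeds 80%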
