Handling pod errors after a Kubernetes PVC is deleted
The kubelet on node1 keeps logging the following errors:
Jun 19 17:15:18 node1 kubelet[1722]: E0619 17:15:18.381558 1722 desired_state_of_world_populator.go:312] Error processing volume "manual165" for pod "gr8333e7-0_672f06d5992f4b4580ae04289e33dde4(311dd5e7-1ce9-484d-9862-c7e60eeba6e5)": error processing PVC 672f06d5992f4b4580ae04289e33dde4/manual165-gr8333e7-0: PVC is being deleted
Jun 19 17:15:18 node1 kubelet[1722]: E0619 17:15:18.581422 1722 desired_state_of_world_populator.go:312] Error processing volume "manual165" for pod "gr8333e7-0_672f06d5992f4b4580ae04289e33dde4(311dd5e7-1ce9-484d-9862-c7e60eeba6e5)": error processing PVC 672f06d5992f4b4580ae04289e33dde4/manual165-gr8333e7-0: PVC is being deleted
Jun 19 17:15:18 node1 kubelet[1722]: E0619 17:15:18.781432 1722 desired_state_of_world_populator.go:312] Error processing volume "manual165" for pod "gr8333e7-0_672f06d5992f4b4580ae04289e33dde4(311dd5e7-1ce9-484d-9862-c7e60eeba6e5)": error processing PVC 672f06d5992f4b4580ae04289e33dde4/manual165-gr8333e7-0: PVC is being deleted
Jun 19 17:15:18 node1 kubelet[1722]: E0619 17:15:18.981401 1722 desired_state_of_world_populator.go:312] Error processing volume "manual165" for pod "gr8333e7-0_672f06d5992f4b4580ae04289e33dde4(311dd5e7-1ce9-484d-9862-c7e60eeba6e5)": error processing PVC 672f06d5992f4b4580ae04289e33dde4/manual165-gr8333e7-0: PVC is being deleted
Jun 19 17:15:19 node1 kubelet[1722]: E0619 17:15:19.181612 1722 desired_state_of_world_populator.go:312] Error processing volume "manual165" for pod "gr8333e7-0_672f06d5992f4b4580ae04289e33dde4(311dd5e7-1ce9-484d-9862-c7e60eeba6e5)": error processing PVC 672f06d5992f4b4580ae04289e33dde4/manual165-gr8333e7-0: PVC is being deleted
Jun 19 17:15:19 node1 kubelet[1722]: E0619 17:15:19.381434 1722 desired_state_of_world_populator.go:312] Error processing volume "manual165" for pod "gr8333e7-0_672f06d5992f4b4580ae04289e33dde4(311dd5e7-1ce9-484d-9862-c7e60eeba6e5)": error processing PVC 672f06d5992f4b4580ae04289e33dde4/manual165-gr8333e7-0: PVC is being deleted
Jun 19 17:15:19 node1 kubelet[1722]: E0619 17:15:19.581538 1722 desired_state_of_world_populator.go:312] Error processing volume "manual165" for pod "gr8333e7-0_672f06d5992f4b4580ae04289e33dde4(311dd5e7-1ce9-484d-9862-c7e60eeba6e5)": error processing PVC 672f06d5992f4b4580ae04289e33dde4/manual165-gr8333e7-0: PVC is being deleted
Jun 19 17:15:19 node1 kubelet[1722]: E0619 17:15:19.781372 1722 desired_state_of_world_populator.go:312] Error processing volume "manual165" for pod "gr8333e7-0_672f06d5992f4b4580ae04289e33dde4(311dd5e7-1ce9-484d-9862-c7e60eeba6e5)": error processing PVC 672f06d5992f4b4580ae04289e33dde4/manual165-gr8333e7-0: PVC is being deleted
Jun 19 17:15:19 node1 kubelet[1722]: E0619 17:15:19.981466 1722 desired_state_of_world_populator.go:312] Error processing volume "manual165" for pod "gr8333e7-0_672f06d5992f4b4580ae04289e33dde4(311dd5e7-1ce9-484d-9862-c7e60eeba6e5)": error processing PVC 672f06d5992f4b4580ae04289e33dde4/manual165-gr8333e7-0: PVC is being deleted
Jun 19 17:15:20 node1 kubelet[1722]: E0619 17:15:20.182079 1722 desired_state_of_world_populator.go:312] Error processing volume "manual165" for pod "gr8333e7-0_672f06d5992f4b4580ae04289e33dde4(311dd5e7-1ce9-484d-9862-c7e60eeba6e5)": error processing PVC 672f06d5992f4b4580ae04289e33dde4/manual165-gr8333e7-0: PVC is being deleted
Jun 19 17:15:20 node1 kubelet[1722]: E0619 17:15:20.381529 1722 desired_state_of_world_populator.go:312] Error processing volume "manual165" for pod "gr8333e7-0_672f06d5992f4b4580ae04289e33dde4(311dd5e7-1ce9-484d-9862-c7e60eeba6e5)": error processing PVC 672f06d5992f4b4580ae04289e33dde4/manual165-gr8333e7-0: PVC is being deleted
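The kubelet's desired_state_of_world_populator refuses to process a volume whose PVC already carries a deletionTimestamp, so it repeats this error every few hundred milliseconds. The PVC itself stays in Terminating because the kubernetes.io/pvc-protection finalizer holds it as long as some pod still mounts it, and the errors will not stop until that pod is gone. A quick check (a sketch using the namespace and claim name taken from the log above) confirms the stuck state:

kubectl get pvc manual165-gr8333e7-0 -n 672f06d5992f4b4580ae04289e33dde4
# If STATUS shows Terminating, inspect the deletion timestamp and finalizers:
kubectl get pvc manual165-gr8333e7-0 -n 672f06d5992f4b4580ae04289e33dde4 \
  -o jsonpath='{.metadata.deletionTimestamp}{"\n"}{.metadata.finalizers}{"\n"}'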
Resolution
Look up the details of the pod that still mounts the PVC, then delete its controller (here, the StatefulSet).
[root@master ~]# kubectl get po -A|grep gr8333e7
672f06d5992f4b4580ae04289e33dde4 gr8333e7-0
[root@master ~]# kubectl describe po gr8333e7-0 -n 672f06d5992f4b4580ae04289e33dde4
Name: gr8333e7-0
Namespace: 672f06d5992f4b4580ae04289e33dde4
Priority: 0
Node: node1/172.31.200.68
Start Time: Fri, 19 Jun 2020 14:54:34 +0800
Labels: controller-revision-hash=gr8333e7-595ff46986
creater_id=1592549674520418745
Annotations: rainbond.com/tolerate-unready-endpoints: true
Status: Running
IP: 10.244.3.196
IPs:
IP: 10.244.3.196
Controlled By: StatefulSet/gr8333e7
Containers:
f6719b6d0f2adace1d930dc5f48333e7:
Container ID: docker://2258ee0b766f2ce261563de4bda331f8bcb172ec474f0c78a9e0627eb6dbe708
Image: goodrain.me/f6719b6d0f2adace1d930dc5f48333e7:20200619145017
Image ID: docker-pullable://goodrain.me/0b8f5af437254bb55b4d8907a0bbb3ab@sha256:8e4eca55761ebadacc6503acced877fa69689389b27c56a15ba165810e563e31
Port: 3306/TCP
Host Port: 0/TCP
State: Running
Started: Fri, 19 Jun 2020 14:54:36 +0800
Ready: True
Restart Count: 0
Limits:
cpu: 1280m
memory: 1Gi
Requests:
cpu: 240m
memory: 1Gi
Readiness: tcp-socket :3306 delay=4s timeout=5s period=3s #success=1 #failure=3
Environment:
LOGGER_DRIVER_NAME: streamlog
REVERSE_DEPEND_SERVICE: gr512123:28d93ce6688d13325dc7986169512123,gr58ee27:599b46254ee0690d3ee750b5ab58ee27,gr5efe93:9c69edab427540f0aecc9bd0bb5efe93
DB_HOST: 127.0.0.1
DB_PORT: 3306
TENANT_ID: 672f06d5992f4b4580ae04289e33dde4
SERVICE_ID: f6719b6d0f2adace1d930dc5f48333e7
MEMORY_SIZE: large
SERVICE_NAME: gr8333e7
SERVICE_POD_NUM: 1
HOST_IP: (v1:status.hostIP)
POD_IP: (v1:status.podIP)
Mounts:
/var/lib/mysql from manual165 (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-lctjg (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
manual165:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: manual165-gr8333e7-0
ReadOnly: false
default-token-lctjg:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-lctjg
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
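The describe output above already contains the two facts that matter: Controlled By: StatefulSet/gr8333e7, and the manual165 volume bound to ClaimName manual165-gr8333e7-0, i.e. this pod is what keeps the terminating PVC alive. As an optional shortcut (a sketch, same namespace as above), every pod can be listed together with the PVCs it claims, to make sure nothing else mounts the claim before touching the controller:

# Print each pod name and the claim names it references (non-PVC volumes are simply skipped):
kubectl get po -n 672f06d5992f4b4580ae04289e33dde4 \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.volumes[*].persistentVolumeClaim.claimName}{"\n"}{end}'
# Or pull just the controller and claim for this one pod:
kubectl describe po gr8333e7-0 -n 672f06d5992f4b4580ae04289e33dde4 | grep -E 'Controlled By|ClaimName'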
[root@master ~]# grctl service get gr8333e7 -t npwrtv4l
Namespace: 672f06d5992f4b4580ae04289e33dde4
ServiceID: f6719b6d0f2adace1d930dc5f48333e7
ReplicationType: statefulset
ReplicationID: gr8333e7
Status: running
------------Service------------
+---------------------+----------------+------------+
| Name | IP | Port |
+---------------------+----------------+------------+
| gr8333e7 | None | (TCP:3306) |
| service-392-3306 | 10.108.223.74 | (TCP:3306) |
| service-392-3306out | 10.105.125.132 | (TCP:3306) |
+---------------------+----------------+------------+
------------Ingress------------
+------+------+
| Name | Host |
+------+------+
+------+------+
-------------------Pod_1-----------------------
PodName: gr8333e7-0
PodStatus: Initialized : True Ready : True ContainersReady : True PodScheduled : True
PodIP: 10.244.3.196
PodHostIP: 172.31.200.68
PodHostName: node1
PodVolumePath:
PodStratTime: 2020-06-19T14:54:34+08:00
Containers:
+--------------+----------------------------------+-------------------------------------------------------------+------------------------------------+
| ID | Name | Image | State |
+--------------+----------------------------------+-------------------------------------------------------------+------------------------------------+
| 2258ee0b766f | f6719b6d0f2adace1d930dc5f48333e7 | goodrain.me/0b8f5af437254bb55b4d8907a0bbb3ab:20200424215058 | Running(2020-06-19T14:54:36+08:00) |
+--------------+----------------------------------+-------------------------------------------------------------+------------------------------------+
[root@master ~]# kubectl delete sts gr8333e7 -n 672f06d5992f4b4580ae04289e33dde4
statefulset.apps "gr8333e7" deleted
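Deleting only the pod would not help here: the StatefulSet controller would immediately recreate gr8333e7-0, and since the claim name manual165-gr8333e7-0 follows the volumeClaimTemplates naming pattern, the recreated pod would bind to the same terminating PVC again. That is why the controller itself is removed. Once no pod mounts the claim, the kubernetes.io/pvc-protection finalizer releases and the PVC finishes deleting on its own. A quick verification sketch:

kubectl get po -n 672f06d5992f4b4580ae04289e33dde4 | grep gr8333e7     # expect no pods left
kubectl get pvc -n 672f06d5992f4b4580ae04289e33dde4 | grep manual165   # expect the claim to be gone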
Check the kubelet log again; the errors no longer appear.
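One way to confirm on node1 (assuming the kubelet runs as a systemd unit, as the log format above suggests):

# Follow the kubelet log; the "PVC is being deleted" lines should stop once the pod is gone:
journalctl -u kubelet -f | grep manual165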