Docker fails to start with an error

Docker failed to start, reporting: Failed to load environment files: No such file or directory

[root@mcwk8s05 ~]# systemctl start docker
Job for docker.service failed because a configured resource limit was exceeded. See "systemctl status docker.service" and "journalctl -xe" for details.
[root@mcwk8s05 ~]# journalctl -xe
-- Subject: Unit docker.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit docker.service has failed.
--
-- The result is failed.
.....
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit docker.service has begun starting up.
Apr 18 00:33:44 mcwk8s05 kube-proxy[1006]: I0418 00:33:44.786333 1006 reflector.go:160] Listing and watching *v1.Endpoints from k8s.io/client-go/informers/factory.go:133
Apr 18 00:33:44 mcwk8s05 kube-proxy[1006]: I0418 00:33:44.788405 1006 reflector.go:160] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:133
Apr 18 00:33:46 mcwk8s05 kube-proxy[1006]: I0418 00:33:46.143912 1006 proxier.go:748] Not syncing ipvs rules until Services and Endpoints have been received from master
Apr 18 00:33:46 mcwk8s05 kube-proxy[1006]: I0418 00:33:46.144004 1006 proxier.go:744] syncProxyRules took 185.651µs
Apr 18 00:33:46 mcwk8s05 kube-proxy[1006]: I0418 00:33:46.144024 1006 bounded_frequency_runner.go:221] sync-runner: ran, next possible in 0s, periodic in 30s
Apr 18 00:33:46 mcwk8s05 systemd[1]: docker.service holdoff time over, scheduling restart.
Apr 18 00:33:46 mcwk8s05 systemd[1]: Failed to load environment files: No such file or directory
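The failure means systemd could not read an `EnvironmentFile=` the unit references. As a quick sketch (the helper name `check_env_files` is ours, not a standard tool), you can list every `EnvironmentFile=` entry in a unit and flag missing ones — systemd only tolerates a missing file when the path is prefixed with `-`:

```shell
#!/bin/sh
# check_env_files UNIT: print each EnvironmentFile= entry of a systemd unit
# and whether the file exists. systemd fails unit startup when a
# non-optional entry (no leading "-") is missing -- exactly this error.
check_env_files() {
    grep '^EnvironmentFile=' "$1" | sed 's/^EnvironmentFile=//' |
    while read -r f; do
        case "$f" in
            -*) printf 'optional: %s\n' "${f#-}" ;;   # "-" = ignore if absent
            *)  if [ -e "$f" ]; then printf 'ok: %s\n' "$f"
                else printf 'MISSING: %s\n' "$f"; fi ;;
        esac
    done
}

# Usage: check_env_files /usr/lib/systemd/system/docker.service
```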

Inspect the environment file referenced by the docker unit

[root@mcwk8s05 ~]# cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target docker.socket firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket containerd.service

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/subnet.env
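For reference, this file is written when flanneld starts (in this style of binary deployment, flanneld's unit typically runs mk-docker-opts.sh after startup to turn the node's allocated subnet into Docker options). Its exact contents vary by setup; the sketch below uses illustrative values matching the 172.17.98.0/24 subnet this node receives later, written to a temp path so it is self-contained:

```shell
#!/bin/sh
# Illustrative contents of /run/flannel/subnet.env (values are examples
# matching this node's later subnet; your deployment's keys may differ).
cat > /tmp/subnet.env <<'EOF'
DOCKER_OPT_BIP="--bip=172.17.98.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.98.1/24 --ip-masq=false --mtu=1450"
EOF

# docker.service sources this file, so dockerd inherits the options:
. /tmp/subnet.env
echo "dockerd would start with:$DOCKER_NETWORK_OPTIONS"
```

Because /run is a tmpfs, the file vanishes on reboot and only reappears once flanneld runs again — which is why Docker cannot start before flannel here.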

This file turns out to be a runtime file written by flanneld, and flanneld is not running: there is no flannel entry under /run and no flannel.1 interface on the host. So start flannel first.

[root@mcwk8s05 ~]# ls /run/
abrt console crond.pid dbus faillock lock mount NetworkManager sepermit sshd.pid svnserve systemd tuned user vmware
auditd.pid containerd cron.reboot docker.sock initramfs log netreport plymouth setrans sudo syslogd.pid tmpfiles.d udev utmp
[root@mcwk8s05 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:25:ef:dd brd ff:ff:ff:ff:ff:ff
inet 10.0.0.35/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::3a1f:8b4:d1f1:9759/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:25:ef:e7 brd ff:ff:ff:ff:ff:ff
[root@mcwk8s05 ~]#

Start the flannel network first, then start Docker; both come up normally.

[root@mcwk8s05 ~]# systemctl start flanneld.service
[root@mcwk8s05 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:25:ef:dd brd ff:ff:ff:ff:ff:ff
inet 10.0.0.35/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::3a1f:8b4:d1f1:9759/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:25:ef:e7 brd ff:ff:ff:ff:ff:ff
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN
link/ether 4e:bb:c2:5c:bf:37 brd ff:ff:ff:ff:ff:ff
inet 172.17.98.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::4cbb:c2ff:fe5c:bf37/64 scope link
valid_lft forever preferred_lft forever
[root@mcwk8s05 ~]# systemctl start docker
[root@mcwk8s05 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:25:ef:dd brd ff:ff:ff:ff:ff:ff
inet 10.0.0.35/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::3a1f:8b4:d1f1:9759/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:25:ef:e7 brd ff:ff:ff:ff:ff:ff
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN
link/ether 4e:bb:c2:5c:bf:37 brd ff:ff:ff:ff:ff:ff
inet 172.17.98.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::4cbb:c2ff:fe5c:bf37/64 scope link
valid_lft forever preferred_lft forever
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:f6:d4:62:1b brd ff:ff:ff:ff:ff:ff
inet 172.17.98.1/24 brd 172.17.98.255 scope global docker0
valid_lft forever preferred_lft forever
[root@mcwk8s05 ~]#
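Note that docker0 came up as 172.17.98.1/24, inside the 172.17.98.0/24 subnet flanneld allocated to this node — that alignment is what keeps cross-node container traffic routable. A tiny sketch to verify two addresses share a /24 (`same_24` is our own helper, not a standard command):

```shell
#!/bin/sh
# same_24 A B: succeed when two IPv4 addresses (optionally with /prefix)
# share the same first three octets, i.e. fall in the same /24.
same_24() {
    a=$(printf '%s' "$1" | cut -d/ -f1 | cut -d. -f1-3)
    b=$(printf '%s' "$2" | cut -d/ -f1 | cut -d. -f1-3)
    [ "$a" = "$b" ]
}

# The two addresses from the output above:
if same_24 172.17.98.0/32 172.17.98.1/24; then
    echo "docker0 is inside this node's flannel subnet"
fi
```

On a live node the two addresses could be pulled from `ip -4 -o addr show flannel.1` and `ip -4 -o addr show docker0` instead of being hard-coded.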

Troubleshooting a k8s node in NotReady state

Checking the cluster shows the nodes are not ready (kubectl get cs only covers master components, so it can report Healthy while nodes are NotReady)

[root@mcwk8s03 ~]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
[root@mcwk8s03 ~]#
[root@mcwk8s03 ~]#
[root@mcwk8s03 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
mcwk8s05 NotReady <none> 166d v1.15.12
mcwk8s06 NotReady <none> 166d v1.15.12
Stop the firewall first (on a LAN, firewalld's default REJECT rule can surface as "connect: no route to host" even when routing is fine):
systemctl stop firewalld.service
Then on the node, check whether kubelet is running:
[root@mcwk8s05 ~]# systemctl status kubelet.service

On the node, inspect the error messages: kubelet and kube-proxy are trying to reach the apiserver through the nginx load balancer's VIP (10.0.0.30:6443) and failing.

[root@mcwk8s05 ~]# tail -100f /var/log/messages
Apr 18 01:14:35 mcwk8s05 kubelet: I0418 01:14:35.110814 2985 reflector.go:160] Listing and watching *v1.Node from k8s.io/kubernetes/pkg/kubelet/kubelet.go:454
Apr 18 01:14:35 mcwk8s05 kubelet: I0418 01:14:35.118520 2985 setters.go:753] Error getting volume limit for plugin kubernetes.io/azure-disk
Apr 18 01:14:35 mcwk8s05 kubelet: I0418 01:14:35.118562 2985 setters.go:753] Error getting volume limit for plugin kubernetes.io/gce-pd
Apr 18 01:14:35 mcwk8s05 kubelet: I0418 01:14:35.118568 2985 setters.go:753] Error getting volume limit for plugin kubernetes.io/cinder
Apr 18 01:14:35 mcwk8s05 kubelet: I0418 01:14:35.118573 2985 setters.go:753] Error getting volume limit for plugin kubernetes.io/aws-ebs
Apr 18 01:14:35 mcwk8s05 kubelet: I0418 01:14:35.118591 2985 kubelet_node_status.go:471] Recording NodeHasSufficientMemory event message for node mcwk8s05
Apr 18 01:14:35 mcwk8s05 kubelet: I0418 01:14:35.118605 2985 kubelet_node_status.go:471] Recording NodeHasNoDiskPressure event message for node mcwk8s05
Apr 18 01:14:35 mcwk8s05 kubelet: I0418 01:14:35.118628 2985 kubelet_node_status.go:471] Recording NodeHasSufficientPID event message for node mcwk8s05
Apr 18 01:14:35 mcwk8s05 kubelet: I0418 01:14:35.118644 2985 kubelet_node_status.go:72] Attempting to register node mcwk8s05
Apr 18 01:14:35 mcwk8s05 kubelet: I0418 01:14:35.118645 2985 event.go:258] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"mcwk8s05", UID:"mcwk8s05", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node mcwk8s05 status is now: NodeHasSufficientMemory
Apr 18 01:14:35 mcwk8s05 kubelet: I0418 01:14:35.118671 2985 event.go:258] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"mcwk8s05", UID:"mcwk8s05", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node mcwk8s05 status is now: NodeHasNoDiskPressure
Apr 18 01:14:35 mcwk8s05 kubelet: I0418 01:14:35.118701 2985 event.go:258] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"mcwk8s05", UID:"mcwk8s05", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientPID' Node mcwk8s05 status is now: NodeHasSufficientPID
Apr 18 01:14:35 mcwk8s05 kubelet: I0418 01:14:35.129924 2985 kubelet.go:1973] SyncLoop (housekeeping, skipped): sources aren't ready yet.
Apr 18 01:14:35 mcwk8s05 kubelet: E0418 01:14:35.194840 2985 kubelet.go:2252] node "mcwk8s05" not found
Apr 18 01:14:35 mcwk8s05 kubelet: E0418 01:14:35.295918 2985 kubelet.go:2252] node "mcwk8s05" not found
Apr 18 01:14:37 mcwk8s05 kubelet: E0418 01:14:37.012374 2985 kubelet.go:2252] node "mcwk8s05" not found
Apr 18 01:14:37 mcwk8s05 kube-proxy: E0418 01:14:37.109904 1006 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://10.0.0.30:6443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0: dial tcp 10.0.0.30:6443: connect: no route to host
Apr 18 01:14:37 mcwk8s05 kube-proxy: E0418 01:14:37.109992 1006 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://10.0.0.30:6443/api/v1/endpoints?labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0: dial tcp 10.0.0.30:6443: connect: no route to host
Apr 18 01:14:37 mcwk8s05 kubelet: E0418 01:14:37.110082 2985 kubelet_node_status.go:94] Unable to register node "mcwk8s05" with API server: Post https://10.0.0.30:6443/api/v1/nodes: dial tcp 10.0.0.30:6443: connect: no route to host
Apr 18 01:14:37 mcwk8s05 kubelet: E0418 01:14:37.110127 2985 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:454: Failed to list *v1.Node: Get https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmcwk8s05&limit=500&resourceVersion=0: dial tcp 10.0.0.30:6443: connect: no route to host
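All the failures point at one endpoint. When several components log similar errors, it helps to pull the failing endpoint out mechanically before going after the load balancer; the one-liner below is our own parsing, matched to the Go dialer's "dial tcp host:port" wording:

```shell
#!/bin/sh
# Extract the apiserver endpoint that kubelet/kube-proxy cannot reach
# from a log line, so you know which LB VIP to investigate.
line='Apr 18 01:14:37 mcwk8s05 kubelet: ... Post https://10.0.0.30:6443/api/v1/nodes: dial tcp 10.0.0.30:6443: connect: no route to host'
endpoint=$(printf '%s' "$line" | grep -oE 'dial tcp [0-9.]+:[0-9]+' | awk '{print $3}')
echo "failing endpoint: $endpoint"
```

With the endpoint in hand, "no route to host" against a host on the same LAN usually means the peer service is down or a firewall REJECT is in the way, not an actual routing problem — here the nginx load balancer holding the VIP was simply not running.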

Start the nginx process on both nginx servers, then start keepalived to bring the high-availability VIP back up.

[root@mcwk8s01 ~]# ps -ef|grep nginx
root 1575 1416 0 01:17 pts/0 00:00:00 grep --color=auto nginx
[root@mcwk8s01 ~]# nginx
[root@mcwk8s01 ~]# systemctl start keepalived.service
[root@mcwk8s01 ~]#

Check the nodes again: they are now Ready and the cluster is usable.

[root@mcwk8s03 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
mcwk8s05 Ready <none> 166d v1.15.12
mcwk8s06 Ready <none> 166d v1.15.12
[root@mcwk8s03 ~]#
