Docker fails to start with an error

Starting Docker failed with an error: Failed to load environment files: No such file or directory. Note that the "configured resource limit was exceeded" message from systemctl is misleading here; journalctl reveals the real cause.

[root@mcwk8s05 ~]# systemctl start docker
Job for docker.service failed because a configured resource limit was exceeded. See "systemctl status docker.service" and "journalctl -xe" for details.
[root@mcwk8s05 ~]# journalctl -xe
-- Subject: Unit docker.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit docker.service has failed.
--
-- The result is failed.
.....
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit docker.service has begun starting up.
Apr 18 00:33:44 mcwk8s05 kube-proxy[1006]: I0418 00:33:44.786333 1006 reflector.go:160] Listing and watching *v1.Endpoints from k8s.io/client-go/informers/factory.go:133
Apr 18 00:33:44 mcwk8s05 kube-proxy[1006]: I0418 00:33:44.788405 1006 reflector.go:160] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:133
Apr 18 00:33:46 mcwk8s05 kube-proxy[1006]: I0418 00:33:46.143912 1006 proxier.go:748] Not syncing ipvs rules until Services and Endpoints have been received from master
Apr 18 00:33:46 mcwk8s05 kube-proxy[1006]: I0418 00:33:46.144004 1006 proxier.go:744] syncProxyRules took 185.651µs
Apr 18 00:33:46 mcwk8s05 kube-proxy[1006]: I0418 00:33:46.144024 1006 bounded_frequency_runner.go:221] sync-runner: ran, next possible in 0s, periodic in 30s
Apr 18 00:33:46 mcwk8s05 systemd[1]: docker.service holdoff time over, scheduling restart.
Apr 18 00:33:46 mcwk8s05 systemd[1]: Failed to load environment files: No such file or directory

Check which environment file the docker.service unit references:

[root@mcwk8s05 ~]# cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target docker.socket firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket containerd.service

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/subnet.env
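For reference, here is a minimal sketch of what flannel writes to /run/flannel/subnet.env once it is running. The values are assumptions inferred from the 172.17.98.x subnet seen later in this post, and the file is written to /tmp so the sketch is runnable without flannel:

```shell
# Illustrative contents of flannel's subnet.env (values are assumptions,
# matching the flannel.1 address shown later in this post).
cat > /tmp/subnet.env <<'EOF'
FLANNEL_NETWORK=172.17.0.0/16
FLANNEL_SUBNET=172.17.98.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false
EOF

# systemd sources this file before starting dockerd; the variables are
# typically used to derive docker0's --bip and --mtu options.
. /tmp/subnet.env
echo "bip=$FLANNEL_SUBNET mtu=$FLANNEL_MTU"
```

If the file does not exist when the unit starts, systemd aborts the start, which is exactly the failure seen above.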

This file turns out to be a temporary file generated by flannel at runtime, and flannel has not been started, so flannel needs to be started first. Confirming below: there is no flannel entry under /run/ and no flannel.1 interface:

[root@mcwk8s05 ~]# ls /run/
abrt console crond.pid dbus faillock lock mount NetworkManager sepermit sshd.pid svnserve systemd tuned user vmware
auditd.pid containerd cron.reboot docker.sock initramfs log netreport plymouth setrans sudo syslogd.pid tmpfiles.d udev utmp
[root@mcwk8s05 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:25:ef:dd brd ff:ff:ff:ff:ff:ff
inet 10.0.0.35/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::3a1f:8b4:d1f1:9759/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:25:ef:e7 brd ff:ff:ff:ff:ff:ff
[root@mcwk8s05 ~]#

Start the flannel network, then start Docker; both come up normally:

[root@mcwk8s05 ~]# systemctl start flanneld.service
[root@mcwk8s05 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:25:ef:dd brd ff:ff:ff:ff:ff:ff
inet 10.0.0.35/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::3a1f:8b4:d1f1:9759/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:25:ef:e7 brd ff:ff:ff:ff:ff:ff
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN
link/ether 4e:bb:c2:5c:bf:37 brd ff:ff:ff:ff:ff:ff
inet 172.17.98.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::4cbb:c2ff:fe5c:bf37/64 scope link
valid_lft forever preferred_lft forever
[root@mcwk8s05 ~]# systemctl start docker
[root@mcwk8s05 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:25:ef:dd brd ff:ff:ff:ff:ff:ff
inet 10.0.0.35/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::3a1f:8b4:d1f1:9759/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:25:ef:e7 brd ff:ff:ff:ff:ff:ff
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN
link/ether 4e:bb:c2:5c:bf:37 brd ff:ff:ff:ff:ff:ff
inet 172.17.98.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::4cbb:c2ff:fe5c:bf37/64 scope link
valid_lft forever preferred_lft forever
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:f6:d4:62:1b brd ff:ff:ff:ff:ff:ff
inet 172.17.98.1/24 brd 172.17.98.255 scope global docker0
valid_lft forever preferred_lft forever
[root@mcwk8s05 ~]#

Troubleshooting a Kubernetes node stuck in NotReady

Check the status: the control-plane components are healthy, but the nodes are not ready:

[root@mcwk8s03 ~]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
[root@mcwk8s03 ~]#
[root@mcwk8s03 ~]#
[root@mcwk8s03 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
mcwk8s05 NotReady <none> 166d v1.15.12
mcwk8s06 NotReady <none> 166d v1.15.12
Stop the firewall:

systemctl stop firewalld.service

On the node, check whether kubelet is running:
[root@mcwk8s05 ~]# systemctl status kubelet.service

Check the error messages on the node: kubelet and kube-proxy are failing to reach the VIP of the nginx load balancer in front of the apiserver.

[root@mcwk8s05 ~]# tail -100f /var/log/messages
Apr 18 01:14:35 mcwk8s05 kubelet: I0418 01:14:35.110814 2985 reflector.go:160] Listing and watching *v1.Node from k8s.io/kubernetes/pkg/kubelet/kubelet.go:454
Apr 18 01:14:35 mcwk8s05 kubelet: I0418 01:14:35.118520 2985 setters.go:753] Error getting volume limit for plugin kubernetes.io/azure-disk
Apr 18 01:14:35 mcwk8s05 kubelet: I0418 01:14:35.118562 2985 setters.go:753] Error getting volume limit for plugin kubernetes.io/gce-pd
Apr 18 01:14:35 mcwk8s05 kubelet: I0418 01:14:35.118568 2985 setters.go:753] Error getting volume limit for plugin kubernetes.io/cinder
Apr 18 01:14:35 mcwk8s05 kubelet: I0418 01:14:35.118573 2985 setters.go:753] Error getting volume limit for plugin kubernetes.io/aws-ebs
Apr 18 01:14:35 mcwk8s05 kubelet: I0418 01:14:35.118591 2985 kubelet_node_status.go:471] Recording NodeHasSufficientMemory event message for node mcwk8s05
Apr 18 01:14:35 mcwk8s05 kubelet: I0418 01:14:35.118605 2985 kubelet_node_status.go:471] Recording NodeHasNoDiskPressure event message for node mcwk8s05
Apr 18 01:14:35 mcwk8s05 kubelet: I0418 01:14:35.118628 2985 kubelet_node_status.go:471] Recording NodeHasSufficientPID event message for node mcwk8s05
Apr 18 01:14:35 mcwk8s05 kubelet: I0418 01:14:35.118644 2985 kubelet_node_status.go:72] Attempting to register node mcwk8s05
Apr 18 01:14:35 mcwk8s05 kubelet: I0418 01:14:35.118645 2985 event.go:258] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"mcwk8s05", UID:"mcwk8s05", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node mcwk8s05 status is now: NodeHasSufficientMemory
Apr 18 01:14:35 mcwk8s05 kubelet: I0418 01:14:35.118671 2985 event.go:258] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"mcwk8s05", UID:"mcwk8s05", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node mcwk8s05 status is now: NodeHasNoDiskPressure
Apr 18 01:14:35 mcwk8s05 kubelet: I0418 01:14:35.118701 2985 event.go:258] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"mcwk8s05", UID:"mcwk8s05", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientPID' Node mcwk8s05 status is now: NodeHasSufficientPID
Apr 18 01:14:35 mcwk8s05 kubelet: I0418 01:14:35.129924 2985 kubelet.go:1973] SyncLoop (housekeeping, skipped): sources aren't ready yet.
Apr 18 01:14:35 mcwk8s05 kubelet: E0418 01:14:35.194840 2985 kubelet.go:2252] node "mcwk8s05" not found
Apr 18 01:14:35 mcwk8s05 kubelet: E0418 01:14:35.295918 2985 kubelet.go:2252] node "mcwk8s05" not found
Apr 18 01:14:37 mcwk8s05 kubelet: E0418 01:14:37.012374 2985 kubelet.go:2252] node "mcwk8s05" not found
Apr 18 01:14:37 mcwk8s05 kube-proxy: E0418 01:14:37.109904 1006 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://10.0.0.30:6443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0: dial tcp 10.0.0.30:6443: connect: no route to host
Apr 18 01:14:37 mcwk8s05 kube-proxy: E0418 01:14:37.109992 1006 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://10.0.0.30:6443/api/v1/endpoints?labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0: dial tcp 10.0.0.30:6443: connect: no route to host
Apr 18 01:14:37 mcwk8s05 kubelet: E0418 01:14:37.110082 2985 kubelet_node_status.go:94] Unable to register node "mcwk8s05" with API server: Post https://10.0.0.30:6443/api/v1/nodes: dial tcp 10.0.0.30:6443: connect: no route to host
Apr 18 01:14:37 mcwk8s05 kubelet: E0418 01:14:37.110127 2985 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:454: Failed to list *v1.Node: Get https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmcwk8s05&limit=500&resourceVersion=0: dial tcp 10.0.0.30:6443: connect: no route to host
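A quick way to pull the unreachable apiserver endpoint out of logs like these is to grep for the URL. Sketched here against a saved one-line excerpt so it is self-contained, rather than the live /var/log/messages:

```shell
# Save a representative log line (taken verbatim from the messages above).
cat > /tmp/messages.excerpt <<'EOF'
Apr 18 01:14:37 mcwk8s05 kubelet: E0418 01:14:37.110082 2985 kubelet_node_status.go:94] Unable to register node "mcwk8s05" with API server: Post https://10.0.0.30:6443/api/v1/nodes: dial tcp 10.0.0.30:6443: connect: no route to host
EOF

# Extract the apiserver address kubelet cannot reach.
grep -o 'https://[0-9.]*:6443' /tmp/messages.excerpt | sort -u
# prints https://10.0.0.30:6443
```

The address is the load balancer VIP rather than any node's own IP, which points the investigation at the nginx/keepalived layer instead of the node itself.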

Start the nginx process on both nginx servers, and start keepalived for high availability:

[root@mcwk8s01 ~]# ps -ef|grep nginx
root 1575 1416 0 01:17 pts/0 00:00:00 grep --color=auto nginx
[root@mcwk8s01 ~]# nginx
[root@mcwk8s01 ~]# systemctl start keepalived.service
[root@mcwk8s01 ~]#
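For context, the nginx in front of the apiserver in this kind of setup is typically a TCP (stream) proxy, with keepalived floating the 10.0.0.30 VIP between the two nginx servers. A hypothetical minimal config is sketched below; the backend master addresses are placeholders, not taken from this cluster:

```shell
# Hypothetical nginx stream config for load-balancing kube-apiserver.
# Backend IPs are placeholders; only the VIP 10.0.0.30 appears in this post.
cat > /tmp/nginx-apiserver.conf <<'EOF'
stream {
    upstream kube_apiserver {
        server 10.0.0.33:6443;   # master 1 (placeholder)
        server 10.0.0.34:6443;   # master 2 (placeholder)
    }
    server {
        listen 6443;             # keepalived assigns the VIP to this host
        proxy_pass kube_apiserver;
    }
}
EOF

grep -o 'listen [0-9]*' /tmp/nginx-apiserver.conf
```

With nginx stopped, nothing answers on the VIP's port 6443, which produces exactly the "no route to host" / connection failures kubelet logged above.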

Check the nodes again: they are now Ready and the cluster is usable.

[root@mcwk8s03 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
mcwk8s05 Ready <none> 166d v1.15.12
mcwk8s06 Ready <none> 166d v1.15.12
[root@mcwk8s03 ~]#
