Problem description:

When I deployed an internal Harbor with docker-compose, it automatically created a bridge network that conflicted with one of our internal subnets (172.18.0.0/16). Docker's default network mode is bridge, with docker0 defaulting to 172.17.0.1/16.

After running docker-compose up -d to deploy services several times, the automatically created bridges take the next subnets in sequence: 172.18.x.x, 172.19.x.x, and so on.

As it happened, one of the internal subnets was also 172.18.x.x, so this host could no longer reach any machine on 172.18.x.x at all.

Symptoms:

telnet fails, and ping fails as well.

Solution

1. Check the routing table:
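The output is not reproduced here; a minimal check (the grep pattern assumes the conflicting subnet 172.18.0.0/16 described above) would look like:

# route -n                 # kernel routing table; a 172.18.0.0 route on a br-xxxx interface indicates the conflict
# ip route | grep 172.18   # equivalent check with iproute2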

2. Check the Docker networks:
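Again, for example as follows: list the Docker networks and confirm which subnet the compose-created bridge is using (harbor_harbor is the network name used later in this article):

# docker network ls                                        # list all Docker networks
# docker network inspect harbor_harbor | grep -i subnet    # show the subnet assigned to the Harbor bridge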

3. Stop the docker-compose application:

docker-compose down
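docker-compose resolves docker-compose.yml from the current working directory, so run the command above from Harbor's installation directory; the path below is only illustrative:

# cd /opt/harbor && docker-compose down    # /opt/harbor is illustrative; use the directory containing Harbor's docker-compose.yml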

4. Edit the daemon.json file

The next time Docker starts, docker0 will move to 172.31.0.1/24, and the bridges that docker-compose creates automatically will be allocated from 172.31.x.x/24 as well.

# cat /etc/docker/daemon.json
{
  "debug": true,
  "default-address-pools": [
    {
      "base": "172.31.0.0/16",
      "size": 24
    }
  ]
}
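A malformed daemon.json will prevent the Docker daemon from starting, so it can be worth validating the file before the restart. A quick check, assuming python3 is available (any JSON validator works):

# python3 -m json.tool /etc/docker/daemon.json    # prints the parsed JSON, or a parse error if the syntax is broken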

5. Delete the old, conflicting bridge:

# docker network ls
NETWORK ID     NAME            DRIVER    SCOPE
e4c5724969c9   bridge          bridge    local
b3cd674d3894   harbor_harbor   bridge    local
7f20b3d56275   host            host      local
7b5f3000115b   none            null      local
# docker network rm b3cd674d3894
b3cd674d3894
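If the removal fails with a message about active endpoints, containers are still attached to the network; they can be force-disconnected first (the container name is whatever docker ps -a shows as attached), for example:

# docker network disconnect -f harbor_harbor <container_name>   # repeat for each attached container
# docker network rm b3cd674d3894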

6. Restart the Docker service:

# systemctl restart docker
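Optionally confirm that the daemon came back up before continuing:

# systemctl status docker --no-pager    # should report "active (running)"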

7. Check ip a and the routing table:

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:a3:f4:69 brd ff:ff:ff:ff:ff:ff
    inet 10.18.61.80/24 brd 10.18.61.255 scope global eth0
       valid_lft forever preferred_lft forever
64: br-6a82e7536981: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:4f:5a:cc:53 brd ff:ff:ff:ff:ff:ff
    inet 172.31.1.1/24 brd 172.31.1.255 scope global br-6a82e7536981
       valid_lft forever preferred_lft forever
65: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:91:20:87:bd brd ff:ff:ff:ff:ff:ff
    inet 172.31.0.1/24 brd 172.31.0.255 scope global docker0
       valid_lft forever preferred_lft forever
# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.18.61.1      0.0.0.0         UG    0      0        0 eth0
10.18.61.0      0.0.0.0         255.255.255.0   U     0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
172.31.0.0      0.0.0.0         255.255.255.0   U     0      0        0 docker0
172.31.1.0      0.0.0.0         255.255.255.0   U     0      0        0 br-6a82e7536981
# docker network ls
NETWORK ID     NAME            DRIVER    SCOPE
27b40217b79c   bridge          bridge    local
6a82e7536981   harbor_harbor   bridge    local
7f20b3d56275   host            host      local
7b5f3000115b   none            null      local

You can see that both 64: br-6a82e7536981 and 65: docker0 are now on 172.31.x.x subnets, so the configuration has taken effect.

8. Start docker-compose:

# docker-compose up -d

If the following error appears:

Starting log         ... error
Starting registry    ... error
Starting registryctl ... error
Starting postgresql  ... error
Starting portal      ... error
Starting redis       ... error
Starting core        ... error
Starting jobservice  ... error
Starting proxy       ... error

ERROR: for log  Cannot start service log: network b3cd674d38943c91c439ea8eafc0ecc4cea6d4e0df875d930e8342f6d678d135 not found
ERROR: No containers to start

This is because docker-compose does not switch over automatically: the existing containers still reference the old NETWORK ID b3cd674d3..., which no longer exists.

Each container needs to be connected to the new harbor_harbor bridge:

# docker network ls
NETWORK ID     NAME            DRIVER    SCOPE
27b40217b79c   bridge          bridge    local
6a82e7536981   harbor_harbor   bridge    local
7f20b3d56275   host            host      local
7b5f3000115b   none            null      local
# docker network connect <network_name> <container_name>
[root@mvxl74019 harbor]# docker network connect harbor_harbor nginx
[root@mvxl74019 harbor]# docker network connect harbor_harbor harbor-jobservice
[root@mvxl74019 harbor]# docker network connect harbor_harbor harbor-core
[root@mvxl74019 harbor]# docker network connect harbor_harbor registryctl
[root@mvxl74019 harbor]# docker network connect harbor_harbor registry
[root@mvxl74019 harbor]# docker network connect harbor_harbor redis
[root@mvxl74019 harbor]# docker network connect harbor_harbor harbor-db
[root@mvxl74019 harbor]# docker network connect harbor_harbor harbor-portal
[root@mvxl74019 harbor]# docker network connect harbor_harbor harbor-log
[root@mvxl74019 harbor]# docker-compose up -d
Starting log         ... done
Starting registry    ... done
Starting registryctl ... done
Starting postgresql  ... done
Starting portal      ... done
Starting redis       ... done
Starting core        ... done
Starting jobservice  ... done
Starting proxy       ... done
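
As an aside, the per-container docker network connect commands above can be collapsed into a single loop; a minimal sketch, using the container names from this Harbor deployment:

for c in nginx harbor-jobservice harbor-core registryctl registry redis harbor-db harbor-portal harbor-log; do
    docker network connect harbor_harbor "$c"    # attach each existing container to the recreated bridge
done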

9. Check that the services are healthy:

# docker-compose ps
Name                Command                          State          Ports
---------------------------------------------------------------------------------------------
harbor-core         /harbor/entrypoint.sh            Up (healthy)
harbor-db           /docker-entrypoint.sh            Up (healthy)
harbor-jobservice   /harbor/entrypoint.sh            Up (healthy)
harbor-log          /bin/sh -c /usr/local/bin/ ...   Up (healthy)   127.0.0.1:1514->10514/tcp
harbor-portal       nginx -g daemon off;             Up (healthy)
nginx               nginx -g daemon off;             Up (healthy)   0.0.0.0:80->8080/tcp
redis               redis-server /etc/redis.conf     Up (healthy)
registry            /home/harbor/entrypoint.sh       Up (healthy)
registryctl         /home/harbor/start.sh            Up (healthy)

10. Try again to telnet to the target IP 172.18.5.145:
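
A minimal re-check from this host (the port is only an example; substitute the service you actually need to reach):

# ping -c 3 172.18.5.145          # basic reachability
# telnet 172.18.5.145 22          # TCP check; 22 is an example port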

Both telnet and ping to the target now succeed.

Note: this article is adapted from https://zhuanlan.zhihu.com/p/379305319 and is intended for learning purposes only.
