Weave website: https://www.weave.works

1. Download and install

sudo curl -L git.io/weave -o /usr/local/bin/weave
sudo chmod a+x /usr/local/bin/weave

2. Deploy the weave network

(1) Run this on the first machine. With the default 10.0.*.* address range it is simply:

weave launch

This test uses a custom address range, so the launch command differs:

weave launch --ipalloc-range 168.108.0.0/16
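The container netmask seen later in this test (255.255.0.0) implies a /16 allocation range. When choosing a custom range, pick one that does not overlap any host network (here the hosts already use 192.168.0.0/20, 192.168.16.0/20, and 10.28.148.0/22). A /16 gives weave 2^(32-16) addresses to distribute across all peers, which a one-liner can confirm:

```shell
# Number of addresses in a /16 allocation range: 2^(32-16)
awk 'BEGIN { print 2^(32-16) }'   # prints 65536
```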

After a successful launch, three weave containers have been created (one running; the other two are data/exec helpers):

# docker ps -a
c9ed14e97dfd   weaveworks/weave:2.0.       "/home/weave/weave..."   ...   Up        weave
7db070b5f54e   weaveworks/weaveexec:2.0.   "/bin/false"             ...   Created   weavevolumes-2.0.
b6d603c8c7a8   weaveworks/weavedb          "data-only"              ...   Created   weavedb

You can also see the new virtual interface weave (output abridged):

# ifconfig
datapath: flags=<UP,BROADCAST,RUNNING,MULTICAST>

docker0: flags=<UP,BROADCAST,MULTICAST>
        inet 192.168.0.1  netmask 255.255.240.0  broadcast 0.0.0.0

docker_gwbridge: flags=<UP,BROADCAST,RUNNING,MULTICAST>
        inet 192.168.16.1  netmask 255.255.240.0  broadcast 0.0.0.0

eth0: flags=<UP,BROADCAST,RUNNING,MULTICAST>
        inet 10.28.148.61  netmask 255.255.252.0  broadcast 10.28.151.255

eth1: flags=<UP,BROADCAST,RUNNING,MULTICAST>
        inet 101.37.162.152  netmask 255.255.252.0  broadcast 101.37.163.255

lo: flags=<UP,LOOPBACK,RUNNING>
        inet 127.0.0.1  netmask 255.0.0.0

veth7720327: flags=<UP,BROADCAST,RUNNING,MULTICAST>

weave: flags=<UP,BROADCAST,RUNNING,MULTICAST>

(2) Join the other nodes to the weave network created above:

weave launch 10.28.148.61 --ipalloc-range 168.108.0.0/16

(3) If the network was created successfully, the weave network shows up in docker on every node:

# docker network ls
NETWORK ID NAME DRIVER SCOPE
7c19813ffbff bridge bridge local
a7a2188380ba docker_gwbridge bridge local
7f97ac1cfe6e host host local
z08xcdlswkbk ingress overlay swarm
dfa68b3918b3 none null local
42f695c8c061 weave weavemesh local

3. Docker launch test

(1) Starting a container is simple: just specify weave as the network in a normal docker command:

docker run -ti --network weave mytest

(2) Start a container on each of the two nodes

Running ifconfig inside the containers shows that they use the weave subnet; the two containers were assigned 168.108.0.1 and 168.108.192.0 respectively:

[root@f451f6736785 /]# ifconfig
ethwe0    Link encap:Ethernet
          inet addr:168.108.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST
eth0      Link encap:Ethernet
          inet addr:192.168.16.6  Bcast:0.0.0.0  Mask:255.255.240.0
          UP BROADCAST RUNNING MULTICAST
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING

[root@7c202270ff9f /]# ifconfig
ethwe0    Link encap:Ethernet  HWaddr F6:8D:A2:CB:EF:F5
          inet addr:168.108.192.0  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST
eth0      Link encap:Ethernet
          inet addr:192.168.16.3  Bcast:0.0.0.0  Mask:255.255.240.0
          UP BROADCAST RUNNING MULTICAST
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING
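The two addresses are not arbitrary: weave divides the allocation range among the peers, and with two peers each node apparently hands out addresses from its own half of the /16. A small bash sketch can confirm which half each container address falls in (the `ip_to_int` and `in_cidr` helpers here are our own illustration, not weave tools):

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# in_cidr IP NETWORK PREFIXLEN -> exit 0 if IP lies inside NETWORK/PREFIXLEN.
in_cidr() {
  local ip net mask
  ip=$(ip_to_int "$1")
  net=$(ip_to_int "$2")
  mask=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq $(( net & mask )) ]
}

in_cidr 168.108.0.1   168.108.0.0   17 && echo "node 1 allocates from 168.108.0.0/17"
in_cidr 168.108.192.0 168.108.128.0 17 && echo "node 2 allocates from 168.108.128.0/17"
```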

The two containers can ping each other:

[root@f451f6736785 /]# ping 168.108.192.0
PING 168.108.192.0 (168.108.192.0) 56(84) bytes of data.
64 bytes from 168.108.192.0: icmp_seq=1 ttl=64 time=0.935 ms
64 bytes from 168.108.192.0: icmp_seq=2 ttl=64 time=0.334 ms
64 bytes from 168.108.192.0: icmp_seq=3 ttl=64 time=0.257 ms
64 bytes from 168.108.192.0: icmp_seq=4 ttl=64 time=0.386 ms
^C
--- 168.108.192.0 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3845ms
rtt min/avg/max/mdev = 0.257/0.478/0.935/0.267 ms

[root@7c202270ff9f /]# ping 168.108.0.1
PING 168.108.0.1 (168.108.0.1) 56(84) bytes of data.
64 bytes from 168.108.0.1: icmp_seq=1 ttl=64 time=0.428 ms
64 bytes from 168.108.0.1: icmp_seq=2 ttl=64 time=0.274 ms
64 bytes from 168.108.0.1: icmp_seq=3 ttl=64 time=0.344 ms
64 bytes from 168.108.0.1: icmp_seq=4 ttl=64 time=0.341 ms
...
^C
--- 168.108.0.1 ping statistics ---
9 packets transmitted, 9 received, 0% packet loss, time 8592ms
rtt min/avg/max/mdev = 0.235/0.301/0.428/0.056 ms
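As a sanity check on the summary lines, the four round-trip times shown for the first ping do average to its reported 0.478 ms:

```shell
# Average of the four sampled RTTs (ms) from the first ping run:
awk 'BEGIN { printf "%.3f\n", (0.935 + 0.334 + 0.257 + 0.386) / 4 }'   # prints 0.478
```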

(3) Network speed test:

The test environment is Alibaba Cloud ECS instances with 1 Gbit/s internal bandwidth.

First install iperf3 (a network throughput testing tool):

curl "http://downloads.es.net/pub/iperf/iperf-3.0.6.tar.gz" -o iperf-3.0.6.tar.gz
tar xzvf iperf-3.0.6.tar.gz
cd iperf-3.0.6
./configure
make
make install

Start the iperf server on node 2:

# iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------

Run the speed test from node 1:

# iperf3 -c 168.108.192.0
Connecting to host 168.108.192.0, port 5201
[  4] local 168.108.0.1 connected to 168.108.192.0 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   169 MBytes  1.42 Gbits/sec
[  4]   1.00-2.00   sec  95.2 MBytes   799 Mbits/sec
[  4]   2.00-3.00   sec  95.0 MBytes   797 Mbits/sec
[  4]   3.00-4.00   sec  96.2 MBytes   807 Mbits/sec
[  4]   4.00-5.00   sec  93.8 MBytes   787 Mbits/sec
[  4]   5.00-6.00   sec  95.0 MBytes   797 Mbits/sec
[  4]   6.00-7.00   sec  95.0 MBytes   797 Mbits/sec
[  4]   7.00-8.00   sec  95.0 MBytes   797 Mbits/sec
[  4]   8.00-9.00   sec  93.8 MBytes   787 Mbits/sec
[  4]   9.00-10.00  sec  95.0 MBytes   797 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  1.00 GBytes   859 Mbits/sec    sender
[  4]   0.00-10.00  sec  1.00 GBytes   856 Mbits/sec    receiver

iperf Done.

Averaged over the run, the measured throughput was 859 Mbits/sec sending and 856 Mbits/sec receiving, which is quite satisfactory.
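iperf3 reports Transfer in MBytes (2^20 bytes), so a steady interval of about 95 MBytes/sec corresponds to roughly 797 Mbits/sec on the wire; the higher ~859 Mbits/sec sender average also includes the faster first interval. The conversion:

```shell
# Convert one ~95 MBytes/sec interval (MBytes = 2^20 bytes) to Mbits/sec:
awk 'BEGIN { printf "%.0f\n", 95.0 * 1048576 * 8 / 1e6 }'   # prints 797
```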
