Docker and the Linux network namespace
Docker containers get their network isolation from network namespaces, and those namespaces are wired together with a virtual network device called a veth pair. As the name suggests, a veth pair is a pair of virtual interfaces; unlike tap/tun devices, they always come in pairs: one end attaches to the protocol stack, and the two ends are connected to each other.
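As a quick way to see this in practice (a small sketch, not part of the original walkthrough), you can jump into a running container's network namespace from the host with nsenter, assuming the util-linux nsenter tool is installed and a container named test1 exists (one is created later in this post):

pid=$(docker inspect -f '{{.State.Pid}}' test1)   # PID of the container's main process
nsenter -t "$pid" -n ip addr                      # run `ip addr` inside that container's network namespace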

Let's run through some tests:
[root@localhost ~]# ip netns list       # list the existing network namespaces
test1
[root@localhost ~]# ip netns delete test1       # delete a namespace
[root@localhost ~]# ip netns list
[root@localhost ~]# ipnetns add test1
bash: ipnetns: command not found...
[root@localhost ~]# ip netns add test1       # create test1
[root@localhost ~]# ip netns add test2       # create test2
[root@localhost ~]# ip netns list
test2
test1
[root@localhost ~]# ip netns exec test1 ip a       # show test1's addresses; `exec test1` means "run `ip a` inside test1". There is only a lo loopback interface, it is DOWN, and it does not even have the 127.0.0.1 address yet
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
[root@localhost ~]# ip netns exec test2 ip a       # test2 looks exactly the same
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
[root@localhost ~]#
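Each namespace also has its own routing table, completely separate from the host's. A quick check along the same lines as the commands above (not part of the original transcript):

ip netns exec test1 ip route   # empty: the new namespace has no routes yet
ip route                       # the host's routing table is unaffected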
With the `ip link` command you can see which links a namespace has, and bring a link up.
[root@localhost ~]# ip link       # list the links on the host
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether :0c:::e1:eb brd ff:ff:ff:ff:ff:ff
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT qlen 1000
    link/ether ::::5a:be brd ff:ff:ff:ff:ff:ff
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN mode DEFAULT qlen 1000
    link/ether ::::5a:be brd ff:ff:ff:ff:ff:ff
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT
    link/ether ::e8::c7:6c brd ff:ff:ff:ff:ff:ff
7: vethd03ae3e@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT
    link/ether 9e:f7:f6:f6:fe: brd ff:ff:ff:ff:ff:ff link-netnsid 0
9: vethbb1dfcd@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT
    link/ether b2:::9f:e5: brd ff:ff:ff:ff:ff:ff link-netnsid 1
[root@localhost ~]# ip netns exec test1 ip link       # list test1's links
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
[root@localhost ~]# ip netns exec test1 ip link set dev lo up       # bring up test1's lo loopback interface
[root@localhost ~]# ip netns exec test1 ip link       # lo now shows state UNKNOWN, the same as the host's lo. A link only reports UP when both ends of a connection are present; a single endpoint cannot reach UP on its own, which is why these virtual interfaces come in pairs
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
[root@localhost ~]#
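Once lo is up, loopback traffic works inside the namespace. A quick verification step (not part of the original transcript):

ip netns exec test1 ping -c 3 127.0.0.1   # loopback now answers inside test1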

Add a veth pair on the host:
[root@localhost ~]# ip link add veth-test1 type veth peer name veth-test2       # create a veth pair on the host
[root@localhost ~]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether :0c:::e1:eb brd ff:ff:ff:ff:ff:ff
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT qlen 1000
    link/ether ::::5a:be brd ff:ff:ff:ff:ff:ff
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN mode DEFAULT qlen 1000
    link/ether ::::5a:be brd ff:ff:ff:ff:ff:ff
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT
    link/ether ::e8::c7:6c brd ff:ff:ff:ff:ff:ff
7: vethd03ae3e@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT
    link/ether 9e:f7:f6:f6:fe: brd ff:ff:ff:ff:ff:ff link-netnsid 0
9: vethbb1dfcd@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT
    link/ether b2:::9f:e5: brd ff:ff:ff:ff:ff:ff link-netnsid 1
10: veth-test2@veth-test1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000       # the newly created pair: both ends have MAC addresses but are still in state DOWN
    link/ether ::bb:4a:fc: brd ff:ff:ff:ff:ff:ff
11: veth-test1@veth-test2: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether 4a:0c:::: brd ff:ff:ff:ff:ff:ff
Move the two ends of the veth pair into the two namespaces:
[root@localhost ~]# ip link set veth-test1 netns test1       # move the host's veth-test1 into namespace test1
[root@localhost ~]# ip link set veth-test2 netns test2       # move the host's veth-test2 into namespace test2
[root@localhost ~]# ip link       # veth-test1 and veth-test2 no longer appear on the host
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether :0c:::e1:eb brd ff:ff:ff:ff:ff:ff
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT qlen 1000
    link/ether ::::5a:be brd ff:ff:ff:ff:ff:ff
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN mode DEFAULT qlen 1000
    link/ether ::::5a:be brd ff:ff:ff:ff:ff:ff
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT
    link/ether ::e8::c7:6c brd ff:ff:ff:ff:ff:ff
7: vethd03ae3e@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT
    link/ether 9e:f7:f6:f6:fe: brd ff:ff:ff:ff:ff:ff link-netnsid 0
9: vethbb1dfcd@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT
    link/ether b2:::9f:e5: brd ff:ff:ff:ff:ff:ff link-netnsid 1
[root@localhost ~]# ip netns exec test1 ip link       # namespace test1 now contains veth-test1; it is DOWN and has no IP address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
11: veth-test1@if10: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether 4a:0c:::: brd ff:ff:ff:ff:ff:ff link-netnsid 0
[root@localhost ~]# ip netns exec test2 ip link       # namespace test2 now contains veth-test2; it is DOWN and has no IP address
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
10: veth-test2@if11: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether ::bb:4a:fc: brd ff:ff:ff:ff:ff:ff link-netnsid 0
[root@localhost ~]#
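As a side note, on newer iproute2 versions the peer end of a veth pair can be placed into a namespace at creation time, which saves one `ip link set ... netns` step. A sketch, using example names veth-a and veth-b and assuming the namespaces already exist:

ip link add veth-a type veth peer name veth-b netns test2   # veth-b is created directly inside test2
ip link set veth-a netns test1                              # move the other end into test1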
Assign IP addresses to veth-test1 and veth-test2 inside their namespaces:
[root@localhost ~]# ip netns exec test1 ip addr add 192.168.1.1/24 dev veth-test1       # assign an IP address (a /24 here) to veth-test1
[root@localhost ~]# ip netns exec test2 ip addr add 192.168.1.2/24 dev veth-test2       # assign an IP address to veth-test2
[root@localhost ~]# ip netns exec test1 ip link       # the link is still DOWN and needs to be brought up
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
11: veth-test1@if10: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether 4a:0c:::: brd ff:ff:ff:ff:ff:ff link-netnsid 0
[root@localhost ~]# ip netns exec test2 ip link       # still DOWN as well
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
10: veth-test2@if11: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether ::bb:4a:fc: brd ff:ff:ff:ff:ff:ff link-netnsid 0
[root@localhost ~]#
Bring up veth-test1 and veth-test2:
[root@localhost ~]# ip netns exec test1 ip link set dev veth-test1 up       # bring up veth-test1
[root@localhost ~]# ip netns exec test2 ip link set dev veth-test2 up       # bring up veth-test2
[root@localhost ~]# ip netns exec test1 ip link       # check the link state
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
11: veth-test1@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT qlen 1000
    link/ether 4a:0c:::: brd ff:ff:ff:ff:ff:ff link-netnsid 0
[root@localhost ~]# ip netns exec test2 ip link
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
10: veth-test2@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT qlen 1000
    link/ether ::bb:4a:fc: brd ff:ff:ff:ff:ff:ff link-netnsid 0
[root@localhost ~]# ip netns exec test2 ip a       # confirm the IP address is there
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
10: veth-test2@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether ::bb:4a:fc: brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.1.2/24 scope global veth-test2
       valid_lft forever preferred_lft forever
    inet6 fe80:::bbff:fe4a:fc76/64 scope link
       valid_lft forever preferred_lft forever
[root@localhost ~]# ip netns exec test1 ip a       # confirm the IP address is there
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
11: veth-test1@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 4a:0c:::: brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.1.1/24 scope global veth-test1
       valid_lft forever preferred_lft forever
    inet6 fe80::480c:80ff:fe98:/64 scope link
       valid_lft forever preferred_lft forever
[root@localhost ~]#
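On newer iproute2 versions, `ip -br` gives a compact one-line-per-interface view of the same information, which is handy when checking several namespaces (a convenience sketch, not from the original transcript):

ip netns exec test1 ip -br addr show   # name, state, and addresses on one line
ip netns exec test2 ip -br addr show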
Test whether the two namespaces can reach each other with ping:
[root@localhost ~]# ip netns exec test1 ping 192.168.1.2
PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
64 bytes from 192.168.1.2: icmp_seq=1 ttl=64 time=0.050 ms
64 bytes from 192.168.1.2: icmp_seq=2 ttl=64 time=0.080 ms
64 bytes from 192.168.1.2: icmp_seq=3 ttl=64 time=0.078 ms
64 bytes from 192.168.1.2: icmp_seq=4 ttl=64 time=0.080 ms
^C
--- 192.168.1.2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.050/0.072/0.080/0.012 ms
[root@localhost ~]# ip netns exec test2 ping 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.037 ms
64 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=0.036 ms
64 bytes from 192.168.1.1: icmp_seq=3 ttl=64 time=0.034 ms
64 bytes from 192.168.1.1: icmp_seq=4 ttl=64 time=0.034 ms
^C
--- 192.168.1.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.034/0.035/0.037/0.004 ms
[root@localhost ~]#
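A veth pair only connects two namespaces point to point. To connect more than two, the host-side ends are usually plugged into a Linux bridge instead, which is exactly what Docker does with docker0. A rough sketch of that pattern, using made-up names (br-test, veth-ns1, veth-br1) and an example address, assuming namespace test1 exists:

ip link add br-test type bridge                      # create a bridge on the host
ip link set br-test up
ip link add veth-ns1 type veth peer name veth-br1    # one end for the namespace, one for the bridge
ip link set veth-ns1 netns test1
ip link set veth-br1 master br-test                  # attach the host end to the bridge
ip link set veth-br1 up
ip netns exec test1 ip addr add 192.168.2.1/24 dev veth-ns1
ip netns exec test1 ip link set veth-ns1 up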
Now look at how Docker does it, using the busybox image as an example:
[root@localhost ~]# docker pull busybox
Using default tag: latest
latest: Pulling from library/busybox
8e674ad76dce: Pull complete
Digest: sha256:c94cf1b87ccb80f2e6414ef913c748b105060debda482058d2b8d0fce39f11b9
Status: Downloaded newer image for busybox:latest
Create two containers, test1 and test2, that keep running in the background:
docker run -d --name test1 busybox /bin/sh -c "while true;do sleep 3600;done"
docker run -d --name test2 busybox /bin/sh -c "while true;do sleep 3600;done"
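A quick way to read the assigned addresses without poking inside the containers (a convenience sketch; this template field applies to containers on the default bridge network):

docker inspect -f '{{.NetworkSettings.IPAddress}}' test1   # e.g. 172.17.0.2
docker inspect -f '{{.NetworkSettings.IPAddress}}' test2   # e.g. 172.17.0.3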
[root@localhost ~]# docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu latest 4c108a37151f weeks ago .2MB
busybox latest e4db68de4ff2 weeks ago .22MB
ubuntu 14.04 2c5e00d77a67 months ago 188MB
centos latest 9f38484d220f months ago 202MB
[root@localhost ~]# docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
811f815caa94 busybox "/bin/sh -c 'while t…" hours ago Up hours test2
18bd8b5f3841 busybox "/bin/sh -c 'while t…" hours ago Up hours test1
[root@localhost ~]# docker exec test1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
[root@localhost ~]# docker exec test2 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
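The `eth0@if7` / `eth0@if9` suffixes are the key to matching each container to its host-side veth: the number after `@if` is the interface index of the peer device. One way to confirm the pairing (a small sketch; the index values below are the ones from this example):

docker exec test1 cat /sys/class/net/eth0/iflink   # prints 7, the ifindex of the peer on the host
ip link | grep '^7:'                               # shows vethd03ae3e@if6, the host end of test1's veth pair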
[root@localhost ~]# docker exec test2 ping 172.17.0.2       # test2 pings test1
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.045 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.104 ms
64 bytes from 172.17.0.2: seq=2 ttl=64 time=0.112 ms
^C
[root@localhost ~]# docker exec test1 ping 172.17.0.3       # test1 pings test2
PING 172.17.0.3 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.043 ms
64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.103 ms
64 bytes from 172.17.0.3: seq=2 ttl=64 time=0.105 ms
^C
[root@localhost ~]#
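Both containers sit on the default bridge network, so the host-side ends of their veth pairs are attached to docker0. Two ways to see that (a sketch using standard Docker and iproute2 commands):

docker network inspect bridge   # the Containers section lists test1 and test2 with their 172.17.0.x addresses
ip link show master docker0     # lists the vethXXXXXXX interfaces attached to the docker0 bridge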
Docker containers use network namespaces in exactly the same way as the manual example above: each container gets its own network namespace, and a veth pair connects the container's eth0 to the docker0 bridge on the host.