I've recently started working with Open vSwitch again. There is plenty of material online, but it mostly covers installation and basic usage, or goes straight into source-code analysis, which falls short of what I'm really after: material on the overall architecture of Open vSwitch combined with real hardware, and on the architecture and key points of running Open vSwitch in actual production. I did find a very thorough Japanese PDF on the subject, but apart from the diagrams I couldn't understand any of it, which was frustrating. So for a while I plan to repost some of the better research-oriented Open vSwitch articles, in the hope of deepening my own understanding of Open vSwitch.

  Reposted from: http://blog.scottlowe.org/2013/05/15/examining-open-vswitch-traffic-patterns/

  The link above is to a blog by someone overseas who has studied Open vSwitch in depth. The address should be reachable; if it isn't, you may need to get around the firewall to access it.

  Examining Open vSwitch Traffic Patterns

  In this post, I want to provide some additional insight on how the use of Open vSwitch (OVS) affects—or doesn’t affect, in some cases—how a Linux host directs traffic through physical interfaces, OVS internal interfaces, and OVS bridges. This is something that I had a hard time understanding as I started exploring more advanced OVS configurations, and hopefully the information I share here will be helpful to others.

  To help structure this discussion, I’m going to walk through a few different OVS configurations and scenarios. In these scenarios, I’ll use the following assumptions:

  • The physical host has four interfaces (eth0, eth1, eth2, and eth3)
  • The host is running Linux with KVM, libvirt, and OVS installed

Scenario 1: Simple OVS Configuration

In this first scenario let’s look at a relatively simple OVS configuration, and examine how Linux host and guest domain traffic moves into or out of the network.

Let’s assume that our OVS configuration looks something like this (this is the output from ovs-vsctl show):

bc6b9e64-11d6-415f-a82b-5d8a61ed3fbd
    Bridge "br0"
        Port "br0"
            Interface "br0"
                type: internal
        Port "eth0"
            Interface "eth0"
    Bridge "br1"
        Port "br1"
            Interface "br1"
                type: internal
        Port "eth1"
            Interface "eth1"
    ovs_version: "1.7.1"

This is a pretty simple configuration; there are two bridges, each with a single physical interface. Let's further assume, for the purposes of this scenario, that eth2 has an IP address and is working properly to communicate with other hosts on the network. The eth3 interface is shut down.

So, in this scenario, how does traffic move into or out of the host?

  1. Traffic from a guest domain: Traffic from a guest domain will travel through the OVS bridge to which it is attached (you’d see an additional “vnet0” port and interface appear on that bridge when you start the guest domain). So, a guest domain attached to br0 would communicate via eth0, and a guest domain attached to br1 would communicate via eth1. No real surprises here.

  2. Traffic from the Linux host: Traffic from the Linux host itself will not communicate over any of the configured OVS bridges, but will instead use its native TCP/IP stack and any configured interfaces. Thus, since eth2 is configured and operational, all traffic to/from the Linux host itself will travel through eth2.

The interesting point (to me, at least) about #2 above is that this includes traffic from the OVS process itself. In other words, if the OVS process(es) need to communicate across the network, they won’t use the bridges—they’ll use whatever interfaces the Linux host uses to communicate. This is one thing that threw me off: because OVS is itself a Linux process, when OVS needs to communicate across the network it will use the Linux network stack to do so. In this scenario, then, OVS would not communicate over any configured bridge, but would instead use eth2. (This makes perfect sense now, but I recall that it didn’t earlier. Maybe it’s just me.)
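To make this concrete, here is a minimal sketch of how a configuration like the one above might be built and then checked from the host. The address 10.0.0.50 is purely hypothetical, used only to illustrate querying the host's routing table:

# Create the two bridges and attach one physical interface to each
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth0
ovs-vsctl add-br br1
ovs-vsctl add-port br1 eth1

# Host traffic (including traffic from the OVS process) follows the
# normal routing table; with eth2 carrying the host's IP address, a
# lookup like this should show eth2 being used (10.0.0.50 is made up)
ip route get 10.0.0.50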

Scenario 2: Adding Bonding

In this second scenario, our OVS configuration changes only slightly:

bc6b9e64-11d6-415f-a82b-5d8a61ed3fbd
    Bridge "br0"
        Port "br0"
            Interface "br0"
                type: internal
        Port "bond0"
            Interface "eth0"
            Interface "eth1"
    ovs_version: "1.7.1"

In this case, we’re now leveraging a bond that contains two physical interfaces (eth0 and eth1). (By the way, I have a write-up on configuring OVS and bonds, if you need/want more information.) The eth2 interface still has an IP address assigned and is up and communicating properly. The physical eth3 interface is shut down.

How does this affect the way in which traffic is handled? It doesn’t, really. Traffic from guest domains will still travel across br0 (since this is the only configured OVS bridge), and traffic from the Linux host—including traffic from OVS itself—will still use whatever interfaces are determined by the host’s TCP/IP stack. In this case, that would be eth2.
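For reference, a bond like the one shown above can be created with ovs-vsctl's add-bond command. This is only a sketch; the bond mode shown is an assumption, since the scenario doesn't specify one:

# Create br0 with a bond containing eth0 and eth1
ovs-vsctl add-br br0
ovs-vsctl add-bond br0 bond0 eth0 eth1

# Optionally pick a bonding mode (active-backup, balance-slb, or
# balance-tcp with LACP); balance-slb is just an example here
ovs-vsctl set port bond0 bond_mode=balance-slb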

Scenario 3: The Isolated Bridge

Let’s look at another OVS configuration, the so-called “isolated bridge”. This is a configuration that is commonly found in implementations using NVP, OpenStack, and others, and it’s a configuration that I recently discussed in my post on GRE tunnels and OVS.

Here’s the configuration:

bc6b9e64-11d6-415f-a82b-5d8a61ed3fbd
    Bridge "br0"
        Port "br0"
            Interface "br0"
                type: internal
        Port "bond0"
            Interface "eth0"
            Interface "eth1"
    Bridge "br-int"
        Port "br-int"
            Interface "br-int"
                type: internal
        Port "gre0"
            Interface "gre0"
                type: gre
                options: {remote_ip="192.168.1.100"}
    ovs_version: "1.7.1"

As with previous configurations, we’ll assume that eth2 is up and operational, and eth3 is shut down. So how does traffic get directed in this configuration?

  1. Traffic from guest domains attached to br0: This is as before—traffic will go out one of the physical interfaces in the bond, according to the bonding configuration (active-standby, LACP, etc.). Nothing unusual here.

  2. Traffic from the Linux host: As before, traffic from processes on the Linux host will travel out according to the host’s TCP/IP stack. There are no changes from previous configurations.

  3. Traffic from guest domains attached to br-int: Now, this is where it gets interesting. Guest domains attached to br-int (named “br-int” because in this configuration the isolated bridge is often called the “integration bridge”) don’t have any physical interfaces they can use; they can only use the GRE tunnel. Here’s the “gotcha”, so to speak: the GRE tunnel is created and maintained by the OVS process, and therefore it uses the host’s TCP/IP stack to communicate across the network. Thus, traffic from guest domains attached to br-int would hit the GRE tunnel, which would travel through eth2.

I’ll give you a second to let that sink in.

Ready now? Good! The key to understanding #3 is, in my opinion, understanding that the tunnel (a GRE tunnel in this case, but the same would apply to a VXLAN or STT tunnel) is created and maintained by the OVS process. Thus, because it is created and maintained by a process on the Linux host (OVS itself), the traffic for the tunnel is directed according to the host’s TCP/IP stack and IP routing table(s). In this configuration, the tunnels don’t travel through any of the configured OVS bridges.
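As a rough sketch, the isolated bridge and its GRE port from the configuration above could be created like this (the remote_ip matches the value shown in the ovs-vsctl output):

# Create the isolated (integration) bridge; note it has no physical ports
ovs-vsctl add-br br-int

# Add a GRE tunnel port; the encapsulated tunnel traffic itself is
# routed by the host's TCP/IP stack, so in this scenario it leaves via eth2
ovs-vsctl add-port br-int gre0 -- set interface gre0 type=gre options:remote_ip=192.168.1.100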

Scenario 4: Leveraging an OVS Internal Interface

Let’s keep ramping up the complexity. For this scenario, we’ll use an OVS configuration that is the same as in the previous scenario:

bc6b9e64-11d6-415f-a82b-5d8a61ed3fbd
    Bridge "br0"
        Port "br0"
            Interface "br0"
                type: internal
        Port "bond0"
            Interface "eth0"
            Interface "eth1"
    Bridge "br-int"
        Port "br-int"
            Interface "br-int"
                type: internal
        Port "gre0"
            Interface "gre0"
                type: gre
                options: {remote_ip="192.168.1.100"}
    ovs_version: "1.7.1"

The difference, this time, is that we’ll assume that eth2 and eth3 are both shut down. Instead, we’ve assigned an IP address to the br0 interface on bridge br0. OVS internal interfaces, like br0, can appear as “physical” interfaces to the Linux host, and therefore can be assigned IP addresses and used for communication. This is the approach I used in describing how to run host management across OVS.
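A minimal sketch of what that change might look like on the host follows; the address 192.168.1.10/24 is an assumption, included only to show the general shape of the change:

# Shut down the physical interfaces the host previously used
ip link set eth2 down
ip link set eth3 down

# Bring up the br0 internal interface and give it the host's address
# (192.168.1.10/24 is an assumed, example address)
ip link set br0 up
ip addr add 192.168.1.10/24 dev br0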

Here’s how this configuration affects traffic flow:

  1. Traffic from guest domains attached to br0: No change here. Traffic from guest domains attached to br0 will continue to travel across the physical interfaces in the bond (eth0 and eth1, in this case).

  2. Traffic from the Linux host: This time, the only interface on which the Linux host has an IP address is the br0 internal interface. The br0 internal interface is attached to br0, so all traffic from the Linux host will travel across the physical interfaces attached to the bond (again, eth0 and eth1).

  3. Traffic from guest domains attached to br-int: Because Linux host traffic is directed through br0 by virtue of using the br0 internal interface, this means that tunnel traffic is also directed through br0, as dictated by the Linux host’s TCP/IP stack and IP routing table(s).

As you can see, assigning an IP address to an OVS internal interface has a real impact on the way in which the Linux host directs traffic through OVS. This has both positive and negative impacts:

  • One positive impact is that it allows for Linux host traffic (such as management or tunnel traffic) to take advantage of OVS bonds, thus gaining some level of NIC redundancy.
  • A negative impact is that OVS is now “in band,” so upgrades to OVS will be disruptive to all traffic moving through OVS—which could potentially include host management traffic.

Let’s take a look at one final scenario.

Scenario 5: Using Multiple Bridges and Internal Interfaces

In this configuration, we’ll use an OVS configuration that is very similar to the configuration I showed in my post on GRE tunnels with OVS:

bc6b9e64-11d6-415f-a82b-5d8a61ed3fbd
    Bridge "br0"
        Port "br0"
            Interface "br0"
                type: internal
        Port "mgmt0"
            Interface "mgmt0"
                type: internal
        Port "bond0"
            Interface "eth0"
            Interface "eth1"
    Bridge "br1"
        Port "br1"
            Interface "br1"
                type: internal
        Port "tep0"
            Interface "tep0"
                type: internal
        Port "bond1"
            Interface "eth2"
            Interface "eth3"
    Bridge "br-int"
        Port "br-int"
            Interface "br-int"
                type: internal
        Port "gre0"
            Interface "gre0"
                type: gre
                options: {remote_ip="192.168.1.100"}
    ovs_version: "1.7.1"

In this configuration, we have three bridges. br0 uses a bond that contains eth0 and eth1; br1 uses a bond that contains eth2 and eth3; and br-int is an isolated bridge with no physical interfaces. We have two “custom” internal interfaces, mgmt0 (on br0) and tep0 (on br1), to which IP addresses have been assigned and which are successfully communicating across the network. We’ll assume that mgmt0 and tep0 are on different subnets, and that tep0 is assigned to the 192.168.1.0/24 subnet.
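As a sketch, mgmt0 and tep0 could be added as internal interfaces roughly like this; the addresses are assumptions, chosen so that tep0 lands on the 192.168.1.0/24 subnet as described:

# Add a management interface on br0 and a tunnel endpoint interface on br1
ovs-vsctl add-port br0 mgmt0 -- set interface mgmt0 type=internal
ovs-vsctl add-port br1 tep0 -- set interface tep0 type=internal

# Assign addresses on different subnets (example values only)
ip addr add 10.1.1.10/24 dev mgmt0
ip addr add 192.168.1.10/24 dev tep0
ip link set mgmt0 up
ip link set tep0 up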

How does traffic flow in this scenario?

  1. Traffic from guest domains attached to br0: The behavior here is as it has been in previous configurations—guest domains attached to br0 will communicate across the physical interfaces in the bond.

  2. Traffic from the Linux host: As it has been in previous scenarios, traffic from the Linux host is driven by the host’s TCP/IP stack and IP routing table(s). Because mgmt0 and tep0 are on different subnets, traffic from the Linux host will go out either br0 (for traffic moving through mgmt0) or br1 (for traffic moving through tep0), and thus will utilize the corresponding physical interfaces in the bonds on those bridges.

  3. Traffic from guest domains attached to br-int: Because the GRE tunnel is on the 192.168.1.0/24 subnet, traffic for the GRE tunnel—which is created and maintained by the OVS process on the Linux host itself—will travel through tep0, which is attached to br1. Thus, the physical interfaces eth2 and eth3 would be leveraged for the GRE tunnel traffic.
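A quick way to confirm item 3 is to ask the host's routing table which interface it would use to reach the GRE tunnel endpoint; with the addressing assumed above, the output should point at tep0:

# 192.168.1.100 is on tep0's subnet, so the routing table should
# select tep0 (and therefore br1 and its bond of eth2/eth3)
ip route get 192.168.1.100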

Summary

The key takeaway from this post, in my mind, is understanding where traffic originates, and separating the idea of OVS as a switching mechanism (handling guest domain traffic) from OVS as a process on the Linux host itself (creating and maintaining tunnels between hosts).

Hopefully this information is helpful. I am, of course, completely open to your comments, questions, and corrections, so feel free to speak up in the comments below. Courteous comments are always welcome!
