http://tropicaldevel.wordpress.com/2013/07/15/quality-of-service-in-openstack/

In this post I will be exploring the current state of quality of service (QoS) in OpenStack.  I will be looking at both what is possible now and what is on the horizon and targeted for the Havana release.  Note that I am truly only intimately familiar with Glance and thus part of the intention of this post is to gather information from the community.  Please let me know what I have missed, what I have gotten incorrect, and what else might be out there.

Introduction

The term quality of service traditionally refers to a user's reservation, or guarantee, of a certain amount of network bandwidth.  Instead of letting current network traffic and TCP flow-control and back-off algorithms dictate the rate of a user's transfer across a network, the user would request N bits/second over a period of time.  If the request is granted the user can expect to have that amount of bandwidth at their disposal.  It is quite similar to resource reservation.

When considering quality of service in OpenStack we really should look beyond networks and at all of the resources on which there is contention, the most important of which are:

  • CPU
  • Memory
  • Disk IO
  • Network IO
  • System bus

Let us take a look at QoS in some of the prominent OpenStack components.

Keystone and Quotas

While quotas are quite different from QoS they do have some overlapping concepts and thus will be discussed here briefly.  A quota is a set maximum amount of a resource that a user is allowed to use.  This does not necessarily mean that the user is guaranteed that much of the given resource; it just means that is the most they can have.  That said, quotas can sometimes be manipulated to provide a type of QoS (ex: set a bandwidth quota to 50% of your network resources per user and then only allow two users at a time).
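
The quota-as-crude-QoS idea in the parenthetical can be sketched in a few lines.  This is purely illustrative (the class and numbers are hypothetical, not an OpenStack API): the per-user cap alone guarantees nothing, but combined with an admission limit it starts to behave like a reservation.

```python
# Hypothetical sketch: a quota caps usage; pairing it with an admission
# limit (at most two users, each capped at 50% of the link) turns the cap
# into a crude bandwidth guarantee for the admitted users.

class BandwidthQuota:
    """Caps each user at a fixed share of a link; no reservation by itself."""

    def __init__(self, link_bps, share=0.5, max_users=2):
        self.link_bps = link_bps
        self.share = share          # per-user cap as a fraction of the link
        self.max_users = max_users  # admission limit that makes the cap
                                    # behave like a guarantee
        self.active = set()

    def admit(self, user):
        if len(self.active) >= self.max_users:
            return 0                # refused: no bandwidth granted
        self.active.add(user)
        return int(self.link_bps * self.share)

quota = BandwidthQuota(link_bps=1_000_000_000)
print(quota.admit("alice"))  # 500000000
print(quota.admit("bob"))    # 500000000
print(quota.admit("carol"))  # 0 -- refused, so alice and bob keep their share
```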

Currently there is an effort in the Keystone community to add centralized quota management for all OpenStack components to Keystone.  Keystone will provide management interfaces to the quota information.  When a user attempts to use a resource OpenStack components will query Keystone for the particular resource’s quota.  Enforcement of the quota will be done by that OpenStack service, not by Keystone.

The design for quota management in keystone seems fairly complete and is described here.  The implementation does not appear to be targeted for the Havana release but hopefully we will see it some time in the I cycle.  Note that once this is in Keystone the other OpenStack components must be modified to use it so it will likely be some time before this is available across OpenStack.
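
The division of labor described above can be sketched as follows.  All names here are hypothetical (the Keystone quota API is not yet implemented): the point is only that Keystone serves the limit while the consuming service (Glance, Nova, etc.) does the enforcement.

```python
# Sketch of the proposed split: Keystone stores and serves quota limits;
# each OpenStack service enforces them locally.  Names are illustrative.

def get_quota_from_keystone(project_id, resource):
    """Stand-in for a call to Keystone's (future) quota API."""
    limits = {("demo", "images"): 10}   # canned response for the sketch
    return limits.get((project_id, resource), 0)

def enforce_locally(project_id, resource, current_usage, requested):
    """Enforcement happens in the service itself, not in Keystone."""
    limit = get_quota_from_keystone(project_id, resource)
    return current_usage + requested <= limit

print(enforce_locally("demo", "images", current_usage=9, requested=1))  # True
print(enforce_locally("demo", "images", current_usage=9, requested=2))  # False
```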

Glance

Glance is the image registry and delivery component of OpenStack.  The main resources that it uses are network bandwidth, when uploading/downloading images, and the storage capacity of backend storage systems (like Swift and GlusterFS).  A user of Glance may wish to get a guarantee from the server that when it starts uploading or downloading an image the server will deliver N bits/second.  In order to achieve this Glance not only has to reserve bandwidth on the worker's NIC and the local network, it also has to get a similar QoS guarantee from the storage system which houses its data (Swift, GlusterFS, etc.).

Current State

Glance provides no first-class QoS features.  There is no way at all for a client to negotiate or discover the amount of bandwidth which can be dedicated to them.  Even using outside OS-level services to work around this issue is unlikely to help.  The main problem is reserving the end-to-end path (from the network all the way through to the storage system).

Looking forward

In my opinion the solution to adding QoS to Glance is to get Glance out of the image delivery business.  Efforts are well underway (and should be available in the Havana release) to expose the underlying physical locations of a given image (things like http:// and swift://).  In this way the user can negotiate directly with the storage system for some level of QoS, or it can use Staccato to handle the transfer for it.
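
A minimal sketch of that idea, assuming the image's backing location is exposed to the client as a URL: the client inspects the scheme and negotiates with the storage system directly instead of streaming bytes through Glance.  The function and the dispatch targets here are illustrative, not a Glance API.

```python
# Hypothetical client-side dispatch: given an image's exposed location URL,
# decide which system to negotiate QoS (and perform the transfer) with.

from urllib.parse import urlparse

def plan_transfer(location_url):
    scheme = urlparse(location_url).scheme
    if scheme == "swift":
        return "negotiate with Swift"
    if scheme in ("http", "https"):
        return "negotiate with the HTTP server"
    return "fall back to downloading through Glance"

print(plan_transfer("swift://account/container/image-id"))
print(plan_transfer("http://mirror.example.org/cirros.img"))
```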

Cinder

QoS for Cinder appears to be underway for the Havana release.  Users of Cinder can ask for a specific volume type.  Part of that volume type is a string that defines the QoS of the volume IO (fast, normal, or slow).  Backends that can handle all of the demands of the volume type become candidates for scheduling.
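
The scheduling idea above can be sketched as a simple capability filter.  The tier names come from the post; the IOPS thresholds and backend capabilities are made up for illustration and are not Cinder's actual data model.

```python
# Sketch: backends that can satisfy the volume type's QoS tier become
# candidates for scheduling.  Thresholds and backends are hypothetical.

TIER_MIN_IOPS = {"fast": 10_000, "normal": 1_000, "slow": 0}

backends = [
    {"name": "ssd-pool", "iops": 20_000},
    {"name": "sata-pool", "iops": 500},
]

def candidates(tier):
    """Names of backends meeting the tier's minimum capability."""
    need = TIER_MIN_IOPS[tier]
    return [b["name"] for b in backends if b["iops"] >= need]

print(candidates("fast"))   # ['ssd-pool']
print(candidates("slow"))   # ['ssd-pool', 'sata-pool']
```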

More information about QoS in cinder can be found in the following links:

Quantum/Neutron

Neutron (formerly known as Quantum) provides network connectivity as a service.  A blueprint for QoS in Neutron can be found here and additional information can be found here.

This effort is targeted for the Havana release.  In the presence of Neutron plugins that support QoS (Cisco, Nicira, ?) this will allow users to reserve network bandwidth.

Nova

In Nova all of the resources in the above list are used.  User VMs necessarily use some amount of CPU, memory, IO, and network resources.  Users truly interested in a guaranteed level of quality of service need a way to pin all of those resources.  An effort for this in Nova is documented here with this blueprint.

While this effort appears to be what is needed in Nova it is unfortunately quite old and currently marked as obsolete.  However, the effort seems to have found new life recently, as shown by this email exchange.  A definition of work can be found here, with the blueprint here.

This effort will operate similarly to how Cinder is proposing QoS.  A set of strings will be defined: High (1 vCPU per CPU), Normal (2 vCPUs per CPU), Low (4 vCPUs per CPU).  This type string would then be added as part of the instance type when requesting a new VM instance.  Memory commitment is not addressed in this effort, nor is network and disk IO (however those are best handled by Neutron and Cinder respectively).
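
The proposed tiers reduce to simple arithmetic.  Assuming the ratios above, each tier fixes how many vCPUs may share one physical CPU, which bounds how many vCPUs a host can accept at that tier:

```python
# The High/Normal/Low tiers, expressed as vCPU-per-physical-CPU ratios.
VCPUS_PER_PCPU = {"high": 1, "normal": 2, "low": 4}

def schedulable_vcpus(physical_cpus, tier):
    """vCPUs a host can accept at a given QoS tier."""
    return physical_cpus * VCPUS_PER_PCPU[tier]

for tier in ("high", "normal", "low"):
    print(tier, schedulable_vcpus(16, tier))   # 16, 32, 64 for a 16-core host
```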

Unfortunately nothing seems to be scheduled for Havana.

Current State

Currently in Nova there is the following configuration option:

# cpu_allocation_ratio=16.0

This sets the ratio of virtual CPUs to physical CPUs.  If this value is set to 1.0 then the user will know that the number of CPUs in its requested instance type maps to full system CPUs.  Similarly there is:

# ram_allocation_ratio=1.5

which does the same thing for RAM.  While these do give a notion of QoS to the user they are too coarse-grained and can be inefficient when considering users that do not need/want such QoS.
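
A worked example of what those two ratios mean for a host (the host size is hypothetical): with the default values, a 16-core, 64 GB machine is advertised to the scheduler as having far more capacity than it physically owns.

```python
# Effect of the default allocation ratios on one (hypothetical) host.
cpu_allocation_ratio = 16.0
ram_allocation_ratio = 1.5

physical_cpus = 16
physical_ram_mb = 64 * 1024

schedulable_vcpus = physical_cpus * cpu_allocation_ratio     # 256.0
schedulable_ram_mb = physical_ram_mb * ram_allocation_ratio  # 98304.0
print(schedulable_vcpus, schedulable_ram_mb)

# Setting cpu_allocation_ratio=1.0 instead gives every vCPU a full physical
# CPU -- a QoS guarantee of sorts -- at the cost of idle capacity for
# tenants who never needed it.
```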

Swift

Swift does not have any explicit QoS options.  However it does have a rate limiting middleware which provides a sort of quota on bandwidth for users.  How to set these values can be found here.
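
Conceptually, that middleware delays requests that arrive faster than the configured rate rather than rejecting them.  A minimal sketch of the idea (the class and its fields are illustrative, not Swift's actual configuration keys or implementation):

```python
# Illustrative rate limiter: allow at most `rate` requests/second by
# computing a delay for each excess request instead of rejecting it.

class RateLimiter:
    def __init__(self, rate):
        self.min_interval = 1.0 / rate   # seconds between allowed requests
        self.next_allowed = 0.0          # earliest time the next one may run

    def throttle(self, now):
        """Return the delay (seconds) this request must wait."""
        delay = max(0.0, self.next_allowed - now)
        self.next_allowed = max(now, self.next_allowed) + self.min_interval
        return delay

rl = RateLimiter(rate=2)   # 2 requests/second
print(rl.throttle(0.0))    # 0.0 -- first request passes immediately
print(rl.throttle(0.0))    # 0.5 -- second must wait half a second
print(rl.throttle(0.0))    # 1.0
```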
