[Repost] Quality Of Service In OpenStack
http://tropicaldevel.wordpress.com/2013/07/15/quality-of-service-in-openstack/
In this post I will explore the current state of quality of service (QoS) in OpenStack, looking both at what is possible now and at what is on the horizon and targeted for the Havana release. Note that Glance is the only component I am intimately familiar with, so part of the intention of this post is to gather information from the community. Please let me know what I have missed, what I have gotten wrong, and what else might be out there.
Introduction
The term quality of service traditionally refers to a user's reservation, or guarantee, of a certain amount of network bandwidth. Instead of letting current network traffic and TCP's flow-control and back-off algorithms dictate the rate of a user's transfer across a network, the user requests N bits/second over a period of time. If the request is granted the user can expect to have that amount of bandwidth at their disposal. It is quite similar to resource reservation.
When considering quality of service in OpenStack we really should look beyond networks and at all of the resources on which there is contention, the most important of which are:
- CPU
- Memory
- Disk IO
- Network IO
- System bus
Let us take a look at QoS in some of the prominent OpenStack components.
Keystone and Quotas
While quotas are quite different from QoS, they share some overlapping concepts and thus will be discussed here briefly. A quota is a set maximum amount of a resource that a user is allowed to use. This does not necessarily mean that the user is guaranteed that much of the given resource, just that it is the most they can have. That said, quotas can sometimes be manipulated to provide a type of QoS (for example: set a bandwidth quota of 50% of your network resources per user and then only allow two users at a time).
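The quota-as-QoS trick from that example can be sketched in a couple of lines (the function name is mine, purely illustrative):

```python
def per_user_bandwidth_quota(total_bandwidth, max_concurrent_users):
    """Setting the per-user quota to capacity / max-users turns a cap
    into a crude guarantee: with admission limited to that many users,
    each one is assured their full quota is actually deliverable."""
    return total_bandwidth // max_concurrent_users

# The example from the text: 50% of the network per user, two users max.
assert per_user_bandwidth_quota(1000, 2) == 500  # Mb/s each
```

The guarantee only holds because admission control keeps the sum of quotas at or below capacity; a quota alone promises nothing.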
Currently there is an effort in the Keystone community to add centralized quota management for all OpenStack components. Keystone will provide management interfaces for the quota information. When a user attempts to use a resource, the owning OpenStack component will query Keystone for that resource's quota. Enforcement of the quota will be done by that OpenStack service, not by Keystone.
The design for quota management in Keystone seems fairly complete and is described here. The implementation does not appear to be targeted for the Havana release, but hopefully we will see it some time in the I cycle. Note that once this lands in Keystone the other OpenStack components must be modified to use it, so it will likely be some time before it is available across OpenStack.
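The proposed division of labor — Keystone stores the quota values, each service enforces them locally — might look roughly like this. All class and method names here are hypothetical stand-ins; the real Keystone API was still being designed:

```python
class KeystoneQuotaStore:
    """Stand-in for Keystone's proposed centralized quota storage."""
    def __init__(self):
        self._quotas = {}  # (tenant, resource) -> limit

    def set_quota(self, tenant, resource, limit):
        self._quotas[(tenant, resource)] = limit

    def get_quota(self, tenant, resource):
        return self._quotas.get((tenant, resource), 0)


class Service:
    """An OpenStack service queries Keystone for the limit
    but performs the enforcement itself."""
    def __init__(self, quota_store):
        self.quota_store = quota_store
        self.usage = {}  # (tenant, resource) -> amount used

    def consume(self, tenant, resource, amount):
        limit = self.quota_store.get_quota(tenant, resource)
        used = self.usage.get((tenant, resource), 0)
        if used + amount > limit:
            return False  # enforcement happens here, not in Keystone
        self.usage[(tenant, resource)] = used + amount
        return True


store = KeystoneQuotaStore()
store.set_quota("tenant-a", "volumes", 10)
svc = Service(store)
assert svc.consume("tenant-a", "volumes", 8)       # within quota
assert not svc.consume("tenant-a", "volumes", 3)   # would exceed 10
```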
Glance
Glance is the image registry and delivery component of OpenStack. The main resources it uses are network bandwidth, when uploading/downloading images, and the storage capacity of backend storage systems (like Swift and GlusterFS). A user of Glance may wish to get a guarantee from the server that when it starts uploading or downloading an image the server will deliver N bits/second. To achieve this Glance not only has to reserve bandwidth on the worker's NIC and the local network, it also has to get a similar QoS guarantee from the storage system which houses its data (Swift, GlusterFS, etc.).
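The end-to-end nature of such a guarantee is what makes it hard: the rate that can actually be promised is bounded by the slowest hop on the path. A sketch, with hypothetical numbers:

```python
def achievable_rate(requested_bps, *segment_capacities_bps):
    """An end-to-end bandwidth guarantee must be honored on every
    segment of the path (worker NIC, local network, storage backend),
    so the deliverable rate is the minimum of what each can reserve."""
    return min(requested_bps, *segment_capacities_bps)

# Requesting 1 Gb/s is pointless if the storage backend can only
# commit 400 Mb/s to this transfer.
nic, network, storage = 10_000_000_000, 1_000_000_000, 400_000_000
assert achievable_rate(1_000_000_000, nic, network, storage) == 400_000_000
```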
Current State
Glance provides no first-class QoS features. There is no way at all for a client to negotiate or discover the amount of bandwidth which can be dedicated to it. Even using outside OS-level services to work around this issue is unlikely to help: the main problem is reserving the end-to-end path (from the network all the way through to the storage system).
Looking forward
In my opinion the solution to adding QoS to Glance is to get Glance out of the image delivery business. Efforts are well underway (and should be available in the Havana release) to expose the underlying physical locations of a given image (things like http:// and swift://). In this way the user can negotiate directly with the storage system for some level of QoS, or it can use Staccato to handle the transfer for it.
Cinder
QoS for Cinder appears to be underway for the Havana release. Users of Cinder can ask for a specific volume type. Part of that volume type is a string that defines the QoS of the volume IO (fast, normal, or slow). Backends that can handle all of the demands of the volume type become candidates for scheduling.
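The scheduling idea can be sketched as follows (the names are illustrative, not Cinder's actual filter code): backends advertise which QoS levels they can satisfy, and only those matching the requested volume type remain candidates.

```python
def candidate_backends(volume_type_qos, backends):
    """Filter storage backends down to those that can honor the
    volume type's QoS string ('fast', 'normal', or 'slow')."""
    return sorted(name for name, supported in backends.items()
                  if volume_type_qos in supported)

# Hypothetical backends and their advertised capabilities.
backends = {
    "ssd-pool":  {"fast", "normal", "slow"},
    "sata-pool": {"normal", "slow"},
    "nfs-pool":  {"slow"},
}
assert candidate_backends("fast", backends) == ["ssd-pool"]
assert candidate_backends("normal", backends) == ["sata-pool", "ssd-pool"]
```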
More information about QoS in cinder can be found in the following links:
- https://etherpad.openstack.org/grizzly-cinder-volumetypes
- https://blueprints.launchpad.net/cinder/+spec/cinder-nfs-driver-qos
- https://blueprints.launchpad.net/cinder/+spec/3par-qos-support
- https://blueprints.launchpad.net/cinder/+spec/cinder-rbd-driver-qos
Quantum/Neutron
Neutron (formerly known as Quantum) provides network connectivity as a service. A blueprint for QoS in Neutron can be found here and additional information can be found here.
This effort is targeted for the Havana release. In the presence of Neutron plugins that support QoS (Cisco, Nicira, ?) it will allow users to reserve network bandwidth.
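Bandwidth reservation ultimately reduces to admission control on each link: a new reservation is granted only if it fits alongside the reservations already promised. A minimal sketch of the mechanism (not Neutron code):

```python
class Link:
    """Admission control for bandwidth reservations on a single link."""
    def __init__(self, capacity_mbps):
        self.capacity = capacity_mbps
        self.reserved = 0

    def reserve(self, mbps):
        if self.reserved + mbps > self.capacity:
            return False  # granting this would oversubscribe the guarantee
        self.reserved += mbps
        return True

link = Link(1000)              # a 1 Gb/s link
assert link.reserve(600)
assert not link.reserve(500)   # only 400 Mb/s of guarantee remains
assert link.reserve(400)
```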
Nova
In Nova all of the resources in the above list are used. User VMs necessarily consume some amount of CPU, memory, IO, and network resources. Users truly interested in a guaranteed level of quality of service need a way to pin all of those resources. An effort for this in Nova is documented here with this blueprint.
While this effort appears to be what is needed in Nova, it is unfortunately quite old and currently marked as obsolete. However, the effort seems to have gained new life recently, as shown by this email exchange. A definition of work can be found here, with the blueprint here.
This effort will operate similarly to how Cinder is proposing QoS. A set of strings will be defined: High (1 vCPU per CPU), Normal (2 vCPUs per CPU), and Low (4 vCPUs per CPU). The type string would then be included as part of the instance type when requesting a new VM instance. Memory commitment is not addressed in this effort, nor is network or disk IO (however those are best handled by Neutron and Cinder respectively).
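Under that scheme, the QoS class fixes an effective vCPU:pCPU overcommit ratio, which in turn bounds how many vCPUs a host can promise at that class. A sketch using the ratios quoted above (High = 1:1, Normal = 2:1, Low = 4:1):

```python
# vCPUs allowed to share one physical CPU, per QoS class (from the text).
VCPUS_PER_PCPU = {"high": 1, "normal": 2, "low": 4}

def schedulable_vcpus(physical_cpus, qos_class):
    """How many vCPUs a host can commit at a given QoS class."""
    return physical_cpus * VCPUS_PER_PCPU[qos_class]

# A 16-core host can promise 16 'high' vCPUs but 64 'low' ones.
assert schedulable_vcpus(16, "high") == 16
assert schedulable_vcpus(16, "normal") == 32
assert schedulable_vcpus(16, "low") == 64
```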
Unfortunately nothing seems to be scheduled for Havana.
Current State
Currently in nova there is the following configuration option:
# cpu_allocation_ratio=16.0
This sets the ratio of virtual CPUs to physical CPUs. If this value is set to 1.0 then the user knows that the number of CPUs in their requested instance type maps to full physical CPUs. Similarly there is:
# ram_allocation_ratio=1.5
which does the same thing for RAM. While these do give a notion of QoS to the user, they are too coarse-grained and can be inefficient when considering users that do not need/want such QoS.
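The scheduler treats these ratios as multipliers on physical capacity, so the schedulable capacity of a host works out as below (a simplified sketch of the calculation, not Nova's scheduler code):

```python
def schedulable_capacity(physical, allocation_ratio):
    """Nova-style overcommit: the scheduler treats the host as having
    physical * ratio units of the resource available for instances."""
    return physical * allocation_ratio

# With the default cpu_allocation_ratio=16.0, an 8-core host accepts up
# to 128 vCPUs; ram_allocation_ratio=1.5 lets a 64 GB host place 96 GB.
assert schedulable_capacity(8, 16.0) == 128.0
assert schedulable_capacity(64, 1.5) == 96.0
```

Because the options are global per compute node, they cannot distinguish a tenant who needs dedicated CPUs from one who is happy to overcommit — which is exactly the coarseness complained about above.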
Swift
Swift does not have any explicit QoS options. However, it does have rate-limiting middleware which provides a sort of quota on request rates (and thus, indirectly, bandwidth) for users. How to set these values can be found here.
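Rate limiting of this sort is commonly implemented with a token bucket; the sketch below shows the general mechanism, not Swift's actual middleware:

```python
class TokenBucket:
    """Classic token-bucket rate limiter: tokens accrue at `rate` per
    second up to `capacity`; each request spends one token or is denied."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now):
        # Refill in proportion to elapsed time, capped at capacity.
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=2, capacity=2)   # sustain 2 requests/second
assert bucket.allow(0.0)
assert bucket.allow(0.0)
assert not bucket.allow(0.0)   # bucket drained: request rejected
assert bucket.allow(0.5)       # half a second later, one token refilled
```

A cap like this bounds a user's consumption but, as with quotas, guarantees nothing — which is why it is a sort of quota rather than QoS.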