Integrating Ceph with OpenStack
1. Data provided to OpenStack over RBD:
(1) images (Glance): stores Glance images;
(2) volumes (Cinder): stores Cinder volumes, i.e. the disks of VMs created with "create new volume" selected;
(3) vms (Nova): stores the ephemeral disks of VMs created without "create new volume" selected;

2. Implementation steps:
(1) The client nodes also need the cent user. (With 100+ nodes in an OpenStack environment there is no need to create the cent user everywhere; create it selectively, on the nodes that run the cinder, nova, and glance services.)
useradd cent && echo "" | passwd --stdin cent
echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph
chmod 440 /etc/sudoers.d/ceph
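To confirm passwordless sudo actually works for cent, a quick check (assuming the compute node is reachable as compute-node and key-based SSH is set up; both are assumptions here) might be:
ssh cent@compute-node sudo whoami
# should print: root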
(2) On the OpenStack nodes that will use Ceph (e.g. compute and storage nodes), install the downloaded packages:
yum localinstall ./* -y
Alternatively, install the clients on every node that needs to access the Ceph cluster:
yum install python-rbd
yum install ceph-common    # the ceph command-line tools
If the client was installed with the localinstall method above, these two packages are already present in that RPM set.
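As a sanity check, you can verify the client tools are present and report the same Ceph release as the cluster:
ceph --version
rbd --version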
(3) On the deploy node, install Ceph onto the OpenStack node(s):
ceph-deploy install controller
ceph-deploy admin controller
(4) On the client, make the admin keyring readable:
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
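With the keyring readable, the client node should now be able to talk to the cluster; a quick smoke test:
ceph -s    # should print cluster health plus mon/osd status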
(5) Create pools; this only needs to be done on one Ceph node.
Create three pools in the Ceph cluster, holding the OpenStack platform's images, VMs, and volumes respectively (128 PGs per pool follows the upstream example; size this to your own cluster):
ceph osd pool create images 128
ceph osd pool create vms 128
ceph osd pool create volumes 128
List the pools to check:
ceph osd lspools
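You can also inspect the placement-group count and usage of the new pools afterwards, for example:
ceph osd pool get images pg_num
ceph df    # shows per-pool usage across the cluster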
(6) In the Ceph cluster, create the glance and cinder users; this only needs to be done on one Ceph node.
The Ceph cluster will serve the OpenStack platform's glance and cinder services.
Create the glance and cinder system users on the deploy node:
useradd glance
useradd cinder
Then grant the Ceph capabilities:
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
The controller, compute, and storage nodes already have these two system users.
Nova uses the cinder user, so no separate user is created for it.
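To double-check the capabilities that were actually granted, print the entries back from the cluster:
ceph auth get client.glance
ceph auth get client.cinder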
(7) Generate and copy the keyrings; this only needs to be done on one Ceph node:
ceph auth get-or-create client.glance > /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.cinder > /etc/ceph/ceph.client.cinder.keyring

Use scp to copy them to the other nodes (the Ceph cluster nodes plus the OpenStack nodes that will use Ceph, e.g. compute and storage nodes; this walkthrough targets an all-in-one environment, so copying to the controller node is enough):
[root@yunwei ceph]# ls
ceph.client.admin.keyring ceph.client.cinder.keyring ceph.client.glance.keyring ceph.conf rbdmap tmpR3uL7W
[root@yunwei ceph]#
[root@yunwei ceph]# scp ceph.client.glance.keyring ceph.client.cinder.keyring controller:/etc/ceph/
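On the receiving side, confirm the keyrings landed where the services expect them:
ls -l /etc/ceph/ceph.client.glance.keyring /etc/ceph/ceph.client.cinder.keyring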
(8) Change the file ownership (run on all client nodes):
chown glance:glance /etc/ceph/ceph.client.glance.keyring
chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
(9) Set up the libvirt secret (on the nova-compute nodes; do this on every compute node):
uuidgen
940f0485-e206-4b49-b878-dcd0cb9c70a4
In the /etc/ceph/ directory (the directory does not actually matter; /etc/ceph is simply convenient for housekeeping):
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>940f0485-e206-4b49-b878-dcd0cb9c70a4</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
Copy secret.xml to all compute nodes and run:
virsh secret-define --file secret.xml
ceph auth get-key client.cinder > ./client.cinder.key
virsh secret-set-value --secret 940f0485-e206-4b49-b878-dcd0cb9c70a4 --base64 $(cat ./client.cinder.key)
In the end client.cinder.key and secret.xml are identical on all compute nodes; note down the uuid generated earlier: 940f0485-e206-4b49-b878-dcd0cb9c70a4
If you hit the following error:
[root@controller ceph]# virsh secret-define --file secret.xml
error: Failed to set attributes from secret.xml
error: internal error: a secret with UUID d448a6ee-60f3-42a3-b6fa-6ec69cab2378 is already defined for use with client.cinder secret
undefine the stale secret, then define the new one:
[root@controller ~]# virsh secret-list
 UUID                                  Usage
--------------------------------------------------------------------------------
 d448a6ee-60f3-42a3-b6fa-6ec69cab2378  ceph client.cinder secret
[root@controller ~]# virsh secret-undefine d448a6ee-60f3-42a3-b6fa-6ec69cab2378
Secret d448a6ee-60f3-42a3-b6fa-6ec69cab2378 deleted
[root@controller ~]# virsh secret-list
 UUID                                  Usage
--------------------------------------------------------------------------------
[root@controller ceph]# virsh secret-define --file secret.xml
Secret 940f0485-e206-4b49-b878-dcd0cb9c70a4 created
[root@controller ~]# virsh secret-list
 UUID                                  Usage
--------------------------------------------------------------------------------
 940f0485-e206-4b49-b878-dcd0cb9c70a4  ceph client.cinder secret
virsh secret-set-value --secret 940f0485-e206-4b49-b878-dcd0cb9c70a4 --base64 $(cat ./client.cinder.key)
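To verify the secret is wired up correctly, the value stored in libvirt should match the cinder key held by Ceph:
virsh secret-get-value 940f0485-e206-4b49-b878-dcd0cb9c70a4
ceph auth get-key client.cinder
# the two outputs should be identical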
(10) Configure Glance; make the following changes on all controller nodes:
vim /etc/glance/glance-api.conf
[DEFAULT]
default_store = rbd
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
[image_format]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance
[matchmaker_redis]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
flavor = keystone
[profiler]
[store_type_location_strategy]
[task]
[taskflow_executor]
Then restart the Glance API service on all controller nodes:
systemctl restart openstack-glance-api.service
systemctl status openstack-glance-api.service
Create an image to verify:
[root@controller ~]# openstack image create "cirros" --file cirros-0.3.-x86_64-disk.img --disk-format qcow2 --container-format bare --public
[root@controller ~]# rbd ls images
9ce5055e--44b4-a237-e7b577a20dac
If the image shows up in the pool, the integration works.
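You can cross-check from the RBD side as well, substituting the image id printed above (shown here as a placeholder):
rbd info images/<image-id>    # shows the size and format of the RBD image backing the Glance image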
(11) Configure Cinder:
vim /etc/cinder/cinder.conf
[DEFAULT]
my_ip =                 # IP address of this host
glance_api_servers = http://controller:9292
auth_strategy = keystone
enabled_backends = ceph
state_path = /var/lib/cinder
transport_url = rabbit://openstack:admin@controller
[backend]
[barbican]
[brcd_fabric_example]
[cisco_fabric_example]
[coordination]
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
[fc-zone-manager]
[healthcheck]
[key_manager]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder
[matchmaker_redis]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[profiler]
[ssl]
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = 940f0485-e206-4b49-b878-dcd0cb9c70a4
volume_backend_name=ceph
Restart the Cinder services:
# on the controller node
systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service
# on the storage node
systemctl restart openstack-cinder-volume.service
systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service
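Before creating a volume it is worth confirming that the ceph backend reports as up:
openstack volume service list    # cinder-volume on <host>@ceph should be in state 'up'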
Create a volume to verify:
[root@controller gfs]# rbd ls volumes
volume-43b7c31d-a773--8e4a-9ed78ec18996
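The volume above was created in the usual way; a minimal example (name and size are arbitrary):
openstack volume create --size 1 test-vol
rbd ls volumes    # a new volume-<uuid> object should appear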
(12) Configure Nova:
vim /etc/nova/nova.conf
[DEFAULT]
my_ip =                 # IP address of this host
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
enabled_apis=osapi_compute,metadata
transport_url = rabbit://openstack:admin@controller
[api]
auth_strategy = keystone
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[barbican]
[cache]
[cells]
[cinder]
os_region_name = RegionOne
[cloudpipe]
[conductor]
[console]
[consoleauth]
[cors]
[cors.subdomain]
[crypto]
[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://controller:9292
[guestfs]
[healthcheck]
[hyperv]
[image_file_url]
[ironic]
[key_manager]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[libvirt]
virt_type=qemu
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 940f0485-e206-4b49-b878-dcd0cb9c70a4
[matchmaker_redis]
[metrics]
[mks]
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionOne
auth_type = password
auth_url = http://controller:35357/v3
project_name = service
project_domain_name = Default
username = placement
password = placement
user_domain_name = Default
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[ssl]
[trusted_computing]
[upgrade_levels]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled=true
vncserver_listen=$my_ip
vncserver_proxyclient_address=$my_ip
novncproxy_base_url = http://172.16.254.63:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
Restart the Nova services:
# on the controller node
systemctl restart openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-compute.service
# on the compute nodes
systemctl restart openstack-nova-compute.service
# on the storage node
systemctl restart openstack-nova-compute.service
systemctl status openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-compute.service
Create a VM to verify:
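A minimal boot test might look like the following (flavor, network, and image names are assumptions; substitute your own):
openstack server create --flavor m1.tiny --image cirros --nic net-id=<network-id> vm-ceph-test
rbd ls vms    # with images_type = rbd, the instance's ephemeral disk should appear here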