Preface: This deployment uses CentOS 8-Stream with the OpenStack Victoria package repository. Apart from the base configuration, the time synchronization service (one of the five core services) and the nova, neutron, and cinder components (of the seven core components) must be configured on both nodes; all other services are configured on the controller node only. For neutron, pick either the provider (public) network or the self-service (private) network option; in most cases the provider network is the one to use. Every password in this deployment is 111111; change them to suit your own environment.

Installation Environment

  • Virtualization software: VMware Workstation 16 Pro
  • Operating system: CentOS 8-Stream
  • Controller node: 4 GB RAM, 4 CPU cores, 100 GB disk, virtualization engine enabled
  • Compute node: 4 GB RAM, 4 CPU cores, 100 GB disk, virtualization engine enabled

Base Configuration (both nodes)

Yum Repository Configuration

Aliyun mirror site: https://mirrors.aliyun.com. You can configure other repositories from it if you need them, but they are not required here.

(1) To configure the CentOS 8 repositories, edit the .repo files in the yum repository directory as follows

#Edit CentOS-Stream-AppStream.repo and change the address in the baseurl parameter to https://mirrors.aliyun.com

[root@localhost ~]# cd /etc/yum.repos.d/
[root@localhost yum.repos.d]# vi CentOS-Stream-AppStream.repo
[appstream]
name=CentOS Stream $releasever - AppStream
#mirrorlist=http://mirrorlist.centos.org/?release=$stream&arch=$basearch&repo=AppStream&infra=$infra
baseurl=https://mirrors.aliyun.com/$contentdir/$stream/AppStream/$basearch/os/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial

#Edit CentOS-Stream-BaseOS.repo and change the address in the baseurl parameter to https://mirrors.aliyun.com

[root@localhost yum.repos.d]# vi CentOS-Stream-BaseOS.repo
[baseos]
name=CentOS Stream $releasever - BaseOS
#mirrorlist=http://mirrorlist.centos.org/?release=$stream&arch=$basearch&repo=BaseOS&infra=$infra
baseurl=https://mirrors.aliyun.com/$contentdir/$stream/BaseOS/$basearch/os/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial

#Edit CentOS-Stream-Extras.repo and change the address in the baseurl parameter to https://mirrors.aliyun.com

[root@localhost yum.repos.d]# vi CentOS-Stream-Extras.repo
[extras]
name=CentOS Stream $releasever - Extras
#mirrorlist=http://mirrorlist.centos.org/?release=$stream&arch=$basearch&repo=extras&infra=$infra
baseurl=https://mirrors.aliyun.com/$contentdir/$stream/extras/$basearch/os/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial

(2) Configure the OpenStack repository

#Create an openstack-victoria.repo file in the yum repository directory

[root@localhost ~]# vi /etc/yum.repos.d/openstack-victoria.repo
#Add the following content
[victoria]
name=victoria
baseurl=https://mirrors.aliyun.com/centos/8-stream/cloud/x86_64/openstack-victoria/
gpgcheck=0
enabled=1

(3) Clear and rebuild the package cache

[root@controller ~]# yum clean all
[root@controller ~]# yum makecache
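The same baseurl substitution has to be repeated in every CentOS-Stream-*.repo file, which is tedious and error-prone by hand. The sed-based sketch below batches the edit. It runs against a scratch directory containing one sample file, so the directory path and stock file contents here are assumptions; on a real node you would point `repodir` at /etc/yum.repos.d instead.

```shell
# Sketch: point every CentOS-Stream-*.repo at the Aliyun mirror in one pass.
# Assumes the stock files have mirrorlist enabled and baseurl commented out.
repodir=$(mktemp -d)   # stand-in for /etc/yum.repos.d on a real node

# Sample stock repo file (quoted EOF keeps $releasever etc. literal).
cat > "$repodir/CentOS-Stream-BaseOS.repo" <<'EOF'
[baseos]
name=CentOS Stream $releasever - BaseOS
mirrorlist=http://mirrorlist.centos.org/?release=$stream&arch=$basearch&repo=BaseOS&infra=$infra
#baseurl=http://mirror.centos.org/$contentdir/$stream/BaseOS/$basearch/os/
gpgcheck=1
enabled=1
EOF

for f in "$repodir"/CentOS-Stream-*.repo; do
  sed -i -e 's|^mirrorlist=|#mirrorlist=|' \
         -e 's|^#baseurl=http://mirror.centos.org|baseurl=https://mirrors.aliyun.com|' "$f"
done

grep ^baseurl "$repodir/CentOS-Stream-BaseOS.repo"
```

After the loop, each file has its mirrorlist commented out and an active baseurl pointing at mirrors.aliyun.com, matching the hand-edited files shown above.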

Network Configuration

  • Controller node, dual NICs -------> host-only IP: 10.10.10.10  NAT (external) IP: 10.10.20.10
  • Compute node, dual NICs -------> host-only IP: 10.10.10.20  NAT (external) IP: 10.10.20.20

(1) Install the network service

#CentOS 8 ships with NetworkManager, which conflicts with the neutron service, so install network-scripts, then stop NetworkManager and disable it at boot

[root@localhost ~]# dnf -y install network-scripts
[root@localhost ~]# systemctl disable --now NetworkManager

#Start the network service and enable it at boot
[root@localhost ~]# systemctl enable --now network

(2) Configure static IPs

#ens33, using the controller node as the example
[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33
BOOTPROTO=static #modify
ONBOOT=yes #modify
IPADDR=10.10.10.10 #add
NETMASK=255.255.255.0 #add

#ens34, using the controller node as the example
[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens34
BOOTPROTO=static #modify
ONBOOT=yes #modify
IPADDR=10.10.20.10 #add
NETMASK=255.255.255.0 #add
GATEWAY=10.10.20.2 #add
DNS1=8.8.8.8 #add
DNS2=114.114.114.114 #add
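The two ifcfg files differ only in the interface name, the address, and whether gateway/DNS lines are present, so they can be generated from one small helper. A minimal sketch, assuming a /24 netmask throughout; it writes into a scratch directory rather than /etc/sysconfig/network-scripts.

```shell
netdir=$(mktemp -d)   # stand-in for /etc/sysconfig/network-scripts on a real node

# write_ifcfg <iface> <ip> [gateway] -- emit a static-IP ifcfg file;
# gateway and DNS lines are added only for the NIC with external access.
write_ifcfg() {
  local iface=$1 ip=$2 gw=${3-}
  {
    echo "DEVICE=$iface"
    echo "BOOTPROTO=static"
    echo "ONBOOT=yes"
    echo "IPADDR=$ip"
    echo "NETMASK=255.255.255.0"
    if [ -n "$gw" ]; then
      echo "GATEWAY=$gw"
      echo "DNS1=8.8.8.8"
      echo "DNS2=114.114.114.114"
    fi
  } > "$netdir/ifcfg-$iface"
}

write_ifcfg ens33 10.10.10.10               # host-only NIC: no gateway
write_ifcfg ens34 10.10.20.10 10.10.20.2    # NAT NIC: gateway + DNS
```

On the compute node the same two calls would use 10.10.10.20 and 10.10.20.20.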

(3) Restart the network and test external connectivity

[root@localhost ~]# systemctl restart network
[root@localhost ~]# ping -c 3 www.baidu.com

Host Configuration

(1) Set the hostnames

#Controller node
[root@localhost ~]# hostnamectl set-hostname controller
[root@localhost ~]# bash
[root@controller ~]#

#Compute node
[root@localhost ~]# hostnamectl set-hostname compute
[root@localhost ~]# bash
[root@compute ~]#

(2) Disable the firewall

#Stop firewalld and disable it at boot
[root@controller ~]# systemctl disable --now firewalld

(3) Disable the SELinux security subsystem

#Set SELinux to disabled so it stays off across reboots
[root@controller ~]# vi /etc/selinux/config
SELINUX=disabled
#Check the SELinux state with getenforce (the config change takes effect after a reboot)
[root@controller ~]# getenforce
Disabled

(4) Configure host name mappings

#Controller node
[root@controller ~]# cat >>/etc/hosts<<EOF
> 10.10.10.10 controller
> 10.10.10.20 compute
> EOF

#Compute node
[root@compute ~]# cat >>/etc/hosts<<EOF
> 10.10.10.10 controller
> 10.10.10.20 compute
> EOF
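Running the heredoc above twice leaves duplicate entries in /etc/hosts. A grep guard makes the append idempotent; here is a sketch against a scratch file (on a real node, `hosts` would be /etc/hosts).

```shell
hosts=$(mktemp)       # stand-in for /etc/hosts on a real node

# add_host <ip> <name> -- append the mapping only if <name> is not present yet
add_host() {
  grep -qw "$2" "$hosts" || echo "$1 $2" >> "$hosts"
}

add_host 10.10.10.10 controller
add_host 10.10.10.20 compute
add_host 10.10.10.10 controller   # second call is a no-op, no duplicate line
```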

OpenStack Repository

#Install the OpenStack Victoria repository
[root@controller ~]# dnf -y install centos-release-openstack-victoria

#Upgrade all packages on the node
[root@controller ~]# dnf -y upgrade

#Install the OpenStack client and openstack-selinux
[root@controller ~]# dnf -y install python3-openstackclient openstack-selinux

Five Core Services

Chrony Time Synchronization (both nodes)

(1) Check whether chrony is installed

[root@controller ~]# rpm -qa |grep chrony

#Install it if it is missing
[root@controller ~]# dnf -y install chrony

(2) Edit the chrony configuration file

#Controller node
[root@controller ~]# vim /etc/chrony.conf
server ntp6.aliyun.com iburst #add: synchronize with the Aliyun NTP server
allow 10.10.10.0/24 #add

#Compute node
[root@compute ~]# vim /etc/chrony.conf
server controller iburst #add: synchronize with the controller node

(3) Restart the time sync service and enable it at boot

[root@controller ~]# systemctl restart chronyd && systemctl enable chronyd

MariaDB Database

(1) Install the MariaDB database

[root@controller ~]# dnf -y install mariadb mariadb-server python3-PyMySQL

#Start the MariaDB database
[root@controller ~]# systemctl start mariadb

(2) Create and edit the openstack.cnf file

[root@controller ~]# vim /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 10.10.10.10 #bind IP; remove this line if the IP changes later
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

(3) Initialize the database

[root@controller ~]# mysql_secure_installation
Enter current password for root (enter for none): #enter the current root password; press Enter if it is empty
OK, successfully used password, moving on...
Set root password? [Y/n] y # set a root password
New password: # enter the new password
Re-enter new password: # re-enter the new password
Remove anonymous users? [Y/n] y # remove anonymous users
Disallow root login remotely? [Y/n] n # keep remote root login allowed
Remove test database and access to it? [Y/n] y # remove the test database and access to it
Reload privilege tables now? [Y/n] y # reload the privilege tables

(4) Restart the database service and enable it at boot

[root@controller ~]# systemctl restart mariadb && systemctl enable mariadb

RabbitMQ Message Queue

Note: installing rabbitmq-server may fail because the repositories do not provide the SDL library it depends on. Download the SDL2 package first, then install rabbitmq-server.

Download: wget http://rpmfind.net/linux/centos/8-stream/PowerTools/x86_64/os/Packages/SDL2-2.0.10-2.el8.x86_64.rpm

Install: dnf -y install SDL2-2.0.10-2.el8.x86_64.rpm

(1) Install the rabbitmq package

[root@controller ~]# dnf -y install rabbitmq-server

(2) Start the message queue service and enable it at boot

[root@controller ~]# systemctl start rabbitmq-server && systemctl enable rabbitmq-server

(3) Add the openstack user and set its password

[root@controller ~]# rabbitmqctl add_user openstack 111111

(4) Grant permissions to the openstack user

[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"

(5) Enable the web management plugin

[root@controller ~]# rabbitmq-plugins enable rabbitmq_management

#After this step, ss -antlu shows port 15672 listening. You can log in to the RabbitMQ web UI at http://10.10.10.10:15672; the default user and password are both guest

Memcached Cache

(1) Install the memcached packages

[root@controller ~]# dnf -y install memcached python3-memcached

(2) Edit the memcached configuration file

[root@controller ~]# vim /etc/sysconfig/memcached
..........
OPTIONS="-l 127.0.0.1,::1,controller" #modify this line

(3) Start the cache service and enable it at boot

[root@controller ~]# systemctl start memcached && systemctl enable memcached

Etcd Cluster

(1) Install the etcd package

[root@controller ~]# dnf -y install etcd

(2) Edit the etcd configuration file

[root@controller ~]# vim /etc/etcd/etcd.conf
#Modify as follows
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://10.10.10.10:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.10.10.10:2379"
ETCD_NAME="controller"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.10.10.10:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.10.10.10:2379"
ETCD_INITIAL_CLUSTER="controller=http://10.10.10.10:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

(3) Start the etcd service and enable it at boot

[root@controller ~]# systemctl start etcd && systemctl enable etcd

Seven Core Components

Keystone Identity Service

(1) Create the database and grant privileges

#Enter the database
[root@controller ~]# mysql -u root -p111111

#Create the keystone database
MariaDB [(none)]> CREATE DATABASE keystone;

#Grant privileges
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '111111';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '111111';

(2) Install the keystone packages

[root@controller ~]# dnf -y install openstack-keystone httpd python3-mod_wsgi

(3) Edit the configuration file

#Back up the configuration file, then strip blank and comment lines
[root@controller ~]# cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.bak
[root@controller ~]# grep -Ev '^$|#' /etc/keystone/keystone.conf.bak >/etc/keystone/keystone.conf

#Edit
[root@controller ~]# vim /etc/keystone/keystone.conf
[database]
connection = mysql+pymysql://keystone:111111@controller/keystone

[token]
provider = fernet

(4) Initialize the database

[root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone

(5) Inspect the keystone database tables

[root@controller ~]# mysql -uroot -p111111

MariaDB [(none)]> use keystone;
MariaDB [keystone]> show tables;
MariaDB [keystone]> quit

(6) Initialize the Fernet key repositories

[root@controller ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
[root@controller ~]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

(7) Bootstrap the identity service

[root@controller ~]# keystone-manage bootstrap --bootstrap-password 111111 \
--bootstrap-admin-url http://controller:5000/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne

(8) Configure the Apache HTTP service

#Edit httpd.conf
[root@controller ~]# vim /etc/httpd/conf/httpd.conf
ServerName controller #add this line

<Directory />
AllowOverride none
Require all granted #change this line to read like this
</Directory>

#Create a link to the wsgi-keystone.conf file
[root@controller ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

(9) Restart httpd and enable it at boot

[root@controller ~]# systemctl restart httpd && systemctl enable httpd

(10) Create the admin environment variable script

[root@controller ~]# vim /admin-openrc.sh
export OS_USERNAME=admin
export OS_PASSWORD=111111
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

#Load the variables with source /admin-openrc.sh (or . /admin-openrc.sh). To avoid loading them by hand in every session, add the line to .bashrc
[root@controller ~]# vim .bashrc
source /admin-openrc.sh #add this line
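A quick way to confirm the variables actually landed in the shell is to source the script and print one of them. A sketch using a temporary copy of the script; note the space in `. file`, which sources into the current shell, whereas `./file` would try to execute it as a program.

```shell
rc=$(mktemp)          # stand-in for /admin-openrc.sh

cat > "$rc" <<'EOF'
export OS_USERNAME=admin
export OS_PASSWORD=111111
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF

# Source the script into the current shell and check one variable.
. "$rc"
echo "$OS_AUTH_URL"
```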

(11) Create a domain, projects, a user, and a role

#Create a domain (a default domain named "default" already exists; this one is just an example)
[root@controller ~]# openstack domain create --description "An Example Domain" example

#Create the service project
[root@controller ~]# openstack project create --domain default --description "Service Project" service

#Create a demo project
[root@controller ~]# openstack project create --domain default --description "Demo Project" myproject

#Create a user (this command prompts for a password; enter it twice)
[root@controller ~]# openstack user create --domain default --password-prompt myuser

#Create a role
[root@controller ~]# openstack role create myrole

#Bind the role to the project and user
[root@controller ~]# openstack role add --project myproject --user myuser myrole

(12) Verify token issuance

[root@controller ~]# openstack token issue

Glance Image Service

(1) Create the database and grant privileges

#Enter the database
[root@controller ~]# mysql -u root -p111111

#Create the glance database
MariaDB [(none)]> CREATE DATABASE glance;

#Grant privileges
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '111111';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '111111';

(2) Install the glance package

Note: if the installation fails, set enabled=1 in the CentOS-Stream-PowerTools.repo file and install again

[root@controller ~]# dnf install -y openstack-glance

(3) Edit the configuration file

#Back up the configuration file, then strip blank and comment lines
[root@controller ~]# cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.bak
[root@controller ~]# grep -Ev '^$|#' /etc/glance/glance-api.conf.bak >/etc/glance/glance-api.conf

#Edit
[root@controller ~]# vim /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:111111@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = 111111

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

(4) Initialize the database

[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance

(5) Inspect the glance database tables

[root@controller ~]# mysql -uroot -p111111

MariaDB [(none)]> use glance;
MariaDB [glance]> show tables;
MariaDB [glance]> quit

(6) Create the glance user and service, and assign the admin role

#Create the glance user
[root@controller ~]# openstack user create --domain default --password 111111 glance

#Assign the admin role
[root@controller ~]# openstack role add --project service --user glance admin

#Create the glance service
[root@controller ~]# openstack service create --name glance --description "OpenStack Image" image

(7) Register the API endpoints

#public
[root@controller ~]# openstack endpoint create --region RegionOne image public http://controller:9292

#internal
[root@controller ~]# openstack endpoint create --region RegionOne image internal http://controller:9292

#admin
[root@controller ~]# openstack endpoint create --region RegionOne image admin http://controller:9292
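Glance, and later placement, nova, and neutron, all register the same three endpoint interfaces against a single URL. The repetition can be generated from a loop; this sketch only prints the three openstack commands rather than running them (on the controller, with admin credentials loaded, the output could be piped to sh).

```shell
# make_endpoints <service-type> <url> -- print the three endpoint-create
# commands (public, internal, admin) for one service.
make_endpoints() {
  for iface in public internal admin; do
    echo "openstack endpoint create --region RegionOne $1 $iface $2"
  done
}

make_endpoints image http://controller:9292
```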

(8) List the service endpoints

[root@controller ~]# openstack endpoint list

(9) Start the glance service and enable it at boot

[root@controller ~]# systemctl start openstack-glance-api && systemctl enable openstack-glance-api

(10) Test the image service

#This deployment uses the cirros-0.5.1-x86_64-disk.img image; create it as follows
[root@controller ~]# openstack image create "cirros" --file cirros-0.5.1-x86_64-disk.img --disk-format qcow2 --container-format bare --public

#After creation, list the images with the openstack client
[root@controller ~]# openstack image list

#Inspect the glance database; images are recorded in the images table
[root@controller ~]# mysql -uroot -p111111
MariaDB [(none)]> use glance;
MariaDB [glance]> select * from images\G

#The image file itself is stored under /var/lib/glance/images/. To delete an image, remove its database record first, then the image file
[root@controller ~]# ls /var/lib/glance/images/

Placement Service

(1) Create the database and grant privileges

#Enter the database
[root@controller ~]# mysql -u root -p111111

#Create the placement database
MariaDB [(none)]> CREATE DATABASE placement;

#Grant privileges
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY '111111';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY '111111';

(2) Install the placement package

[root@controller ~]# dnf install -y openstack-placement-api

(3) Edit the configuration file

#Back up the configuration file, then strip blank and comment lines
[root@controller ~]# cp /etc/placement/placement.conf /etc/placement/placement.conf.bak
[root@controller ~]# grep -Ev '^$|#' /etc/placement/placement.conf.bak >/etc/placement/placement.conf

#Edit
[root@controller ~]# vim /etc/placement/placement.conf
[placement_database]
connection = mysql+pymysql://placement:111111@controller/placement

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = 111111

(4) Initialize the database

[root@controller ~]# su -s /bin/sh -c "placement-manage db sync" placement

(5) Inspect the placement database tables

[root@controller ~]# mysql -uroot -p111111

MariaDB [(none)]> use placement;
MariaDB [placement]> show tables;
MariaDB [placement]> quit

(6) Create the placement user and service, and assign the admin role

#Create the placement user
[root@controller ~]# openstack user create --domain default --password 111111 placement

#Assign the admin role
[root@controller ~]# openstack role add --project service --user placement admin

#Create the placement service
[root@controller ~]# openstack service create --name placement --description "Placement API" placement

(7) Register the API endpoints

#public
[root@controller ~]# openstack endpoint create --region RegionOne placement public http://controller:8778

#internal
[root@controller ~]# openstack endpoint create --region RegionOne placement internal http://controller:8778

#admin
[root@controller ~]# openstack endpoint create --region RegionOne placement admin http://controller:8778

(8) List the service endpoints

[root@controller ~]# openstack endpoint list

(9) Restart the httpd service

[root@controller ~]# systemctl restart httpd

(10) Check the placement service status

[root@controller ~]# placement-status upgrade check

Nova Compute Service

1. Controller node (part 1)

(1) Create the databases and grant privileges

#Enter the database
[root@controller ~]# mysql -u root -p111111

#Create the nova_api, nova, and nova_cell0 databases
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;

#Grant privileges
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '111111';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '111111';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '111111';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '111111';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '111111';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '111111';
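The six GRANT statements follow one pattern across three databases, so they can be generated instead of copy-pasted, which avoids the easiest kind of slip (granting on the wrong database or host). A sketch that just prints the SQL; on the controller the output could be fed to mysql -u root -p111111.

```shell
# Print the GRANT statements for nova's three databases and two hosts.
nova_grants() {
  for db in nova_api nova nova_cell0; do
    for host in localhost '%'; do
      echo "GRANT ALL PRIVILEGES ON $db.* TO 'nova'@'$host' IDENTIFIED BY '111111';"
    done
  done
}

nova_grants
```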
(2) Install the nova packages

[root@controller ~]# dnf install -y openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler

(3) Edit the configuration file

#Back up the configuration file, then strip blank and comment lines
[root@controller ~]# cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
[root@controller ~]# grep -Ev '^$|#' /etc/nova/nova.conf.bak >/etc/nova/nova.conf

#Edit
[root@controller ~]# vim /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:111111@controller:5672/
my_ip = 10.10.10.10 #this node's IP; be sure to update it if the IP ever changes

[api_database]
connection = mysql+pymysql://nova:111111@controller/nova_api

[database]
connection = mysql+pymysql://nova:111111@controller/nova

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 111111

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 111111
(4) Initialize the databases

#Sync the nova_api database
[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova

#Register the nova_cell0 database
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

#Create the cell1 cell
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

#Sync the nova database
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova

(5) Create the nova user and service, and assign the admin role

#Create the nova user
[root@controller ~]# openstack user create --domain default --password 111111 nova

#Assign the admin role
[root@controller ~]# openstack role add --project service --user nova admin

#Create the nova service
[root@controller ~]# openstack service create --name nova --description "OpenStack Compute" compute

(6) Register the API endpoints

#public
[root@controller ~]# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1

#internal
[root@controller ~]# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1

#admin
[root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

(7) List the service endpoints

[root@controller ~]# openstack endpoint list

(8) Verify that nova_cell0 and cell1 were registered

[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova

(9) Start all nova services and enable them at boot

[root@controller ~]# systemctl enable --now openstack-nova-api openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy

(10) Check that the nova services are running

[root@controller ~]# nova service-list

#Normally only two services are listed: nova-scheduler and nova-conductor. The command above is served by nova-api, which fronts those two services; if nova-api is down, they will show as down as well. nova-novncproxy does not appear in this list, so check it by its port instead:
[root@controller ~]# netstat -lntup | grep 6080
tcp 0 0 0.0.0.0:6080 0.0.0.0:* LISTEN 1456/python3
[root@controller ~]# ps -ef | grep 1456
nova 1456 1 0 18:29 ? 00:00:05 /usr/bin/python3 /usr/bin/nova-novncproxy --web /usr/share/novnc/
root 27724 26054 0 20:51 pts/0 00:00:00 grep --color=auto 1456

(11) Viewing the console in a web browser

#Without name resolution, use the IP directly
http://10.10.10.10:6080

#To use name resolution instead, add the following to the hosts file under C:\Windows\System32\drivers\etc on your PC
10.10.10.10 controller
10.10.10.20 compute

#Then visit
http://controller:6080

2. Compute node

(1) Install the nova package

[root@compute ~]# dnf install -y openstack-nova-compute

(2) Edit the configuration file

#Back up the configuration file, then strip blank and comment lines
[root@compute ~]# cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
[root@compute ~]# grep -Ev '^$|#' /etc/nova/nova.conf.bak >/etc/nova/nova.conf

#Edit
[root@compute ~]# vim /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:111111@controller
my_ip = 10.10.10.20 #this node's IP; be sure to update it if the IP ever changes

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 111111

[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 111111
(3) Determine whether the compute node supports hardware acceleration

#If this command returns a number greater than zero, the compute node supports hardware acceleration; if it returns 0, it does not, and you must configure [libvirt] to fall back to QEMU
[root@compute ~]# egrep -c '(vmx|svm)' /proc/cpuinfo

#Configure [libvirt]
[root@compute ~]# vim /etc/nova/nova.conf
[libvirt]
virt_type = qemu
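The cpuinfo check and the resulting [libvirt] choice can be combined into one snippet. A sketch under the assumption of a Linux /proc/cpuinfo; it only prints the virt_type value to put in nova.conf (kvm when the CPU exposes VT-x/AMD-V flags, qemu otherwise).

```shell
# Count CPU flags indicating hardware virtualization support.
# grep -c exits nonzero on zero matches, so guard it with || true.
count=$(grep -Ec '(vmx|svm)' /proc/cpuinfo 2>/dev/null || true)
count=${count:-0}

if [ "$count" -gt 0 ]; then
  virt_type=kvm     # CPU exposes VT-x/AMD-V: full hardware acceleration
else
  virt_type=qemu    # no acceleration: fall back to plain QEMU emulation
fi
echo "virt_type = $virt_type"
```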
(4) Start the compute node nova services and enable them at boot

[root@compute ~]# systemctl enable --now libvirtd.service openstack-nova-compute.service

Controller node (part 2)

(5) Add the compute node to the cell database

#Confirm that the compute host is present in the database
[root@controller ~]# openstack compute service list --service nova-compute

#Discover the compute node from the controller
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

(6) Set the discovery interval

[root@controller ~]# vim /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval = 300

Neutron Networking Service

(1) Create the database and grant privileges

#Enter the database
[root@controller ~]# mysql -u root -p111111

#Create the neutron database
MariaDB [(none)]> CREATE DATABASE neutron;

#Grant privileges
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '111111';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '111111';

(2) Create the neutron user and service, and assign the admin role

#Create the neutron user
[root@controller ~]# openstack user create --domain default --password 111111 neutron

#Assign the admin role
[root@controller ~]# openstack role add --project service --user neutron admin

#Create the neutron service
[root@controller ~]# openstack service create --name neutron --description "OpenStack Networking" network

(3) Register the API endpoints

#public
[root@controller ~]# openstack endpoint create --region RegionOne network public http://controller:9696

#internal
[root@controller ~]# openstack endpoint create --region RegionOne network internal http://controller:9696

#admin
[root@controller ~]# openstack endpoint create --region RegionOne network admin http://controller:9696

(4) List the service endpoints

[root@controller ~]# openstack endpoint list

Controller Node: Provider (Public) Network

(1) Install the neutron packages

[root@controller ~]# dnf -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables

(2) Edit the neutron configuration file

#Back up the configuration file, then strip blank and comment lines
[root@controller ~]# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/neutron.conf.bak >/etc/neutron/neutron.conf

#Edit
[root@controller ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins =
transport_url = rabbit://openstack:111111@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]
connection = mysql+pymysql://neutron:111111@controller/neutron

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 111111

[nova] #if this section is missing from the file, add it
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 111111

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
(3) Edit the ml2 plugin

#Back up the configuration file, then strip blank and comment lines
[root@controller ~]# cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/ml2_conf.ini.bak >/etc/neutron/plugins/ml2/ml2_conf.ini

#Edit
[root@controller ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[securitygroup]
enable_ipset = true

(4) Configure the Linux bridge agent

#Back up the configuration file, then strip blank and comment lines
[root@controller ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini

#Edit
[root@controller ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens34 #use the NAT NIC that gives instances external access

[vxlan]
enable_vxlan = false

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

(5) Configure the DHCP agent

#Back up the configuration file, then strip blank and comment lines
[root@controller ~]# cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/dhcp_agent.ini.bak >/etc/neutron/dhcp_agent.ini

#Edit
[root@controller ~]# vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

(6) Enable bridge network filtering

#Modify the kernel parameter file
[root@controller ~]# echo 'net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.conf

#Load the br_netfilter module
[root@controller ~]# modprobe br_netfilter

#Verify
[root@controller ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1 #this output confirms the setting
net.bridge.bridge-nf-call-ip6tables = 1 #this output confirms the setting
(7) Configure the metadata agent

#Back up the configuration file, then strip blank and comment lines
[root@controller ~]# cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/metadata_agent.ini.bak >/etc/neutron/metadata_agent.ini

#Edit
[root@controller ~]# vim /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET #'METADATA_SECRET' is a secret of your choosing; it must match the metadata secret configured in nova below

(8) Configure the compute service to use the networking service

#In the [neutron] section, configure the access parameters and enable the metadata proxy
[root@controller ~]# vim /etc/nova/nova.conf
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 111111
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET #must match the secret above

(9) Create the networking service initialization script link

[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

(10) Initialize the database

[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

(11) Restart the nova API service

[root@controller ~]# systemctl restart openstack-nova-api.service

(12) Start the neutron services and enable them at boot

[root@controller ~]# systemctl enable --now neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service

Compute Node: Provider (Public) Network

(1) Install the neutron packages

[root@compute ~]# dnf install -y openstack-neutron-linuxbridge ebtables ipset

(2) Edit the neutron configuration file

#Back up the configuration file, then strip blank and comment lines
[root@compute ~]# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
[root@compute ~]# grep -Ev '^$|#' /etc/neutron/neutron.conf.bak >/etc/neutron/neutron.conf

#Edit
[root@compute ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:111111@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 111111

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

(3) Configure the Linux bridge agent

#Back up the configuration file, then strip blank and comment lines
[root@compute ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
[root@compute ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini

#Edit
[root@compute ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens34 #use the NAT NIC that gives instances external access

[vxlan]
enable_vxlan = false

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

(4) Enable bridge network filtering

#Modify the kernel parameter file
[root@compute ~]# echo 'net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.conf

#Load the br_netfilter module
[root@compute ~]# modprobe br_netfilter

#Verify
[root@compute ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1 #this output confirms the setting
net.bridge.bridge-nf-call-ip6tables = 1 #this output confirms the setting
(5) Configure the compute service to use the networking service

#In the [neutron] section, configure the access parameters
[root@compute ~]# vim /etc/nova/nova.conf
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 111111

(6) Restart the nova compute service

[root@compute ~]# systemctl restart openstack-nova-compute.service

(7) Start the Linux bridge agent and enable it at boot

[root@compute ~]# systemctl enable --now neutron-linuxbridge-agent.service

Verify that the provider network services are running

(8) List the network agents on the controller node

[root@controller ~]# openstack network agent list

#On success there are four agents: a Metadata agent, a DHCP agent, and two Linux bridge agents, one on controller and one on compute

Controller Node: Self-Service (Private) Network

(1) Install the neutron packages

[root@controller ~]# dnf -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables

(2) Edit the neutron configuration file

#Back up the configuration file, then strip blank and comment lines
[root@controller ~]# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/neutron.conf.bak >/etc/neutron/neutron.conf

#Edit
[root@controller ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:111111@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]
connection = mysql+pymysql://neutron:111111@controller/neutron

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 111111

[nova] #if this section is missing from the file, add it
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 111111

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
(3) 编辑ml2插件
#复制备份配置文件并去掉注释
[root@controller ~]# cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/ml2_conf.ini.bak >/etc/neutron/plugins/ml2/ml2_conf.ini #编辑
[root@controller ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security [ml2_type_flat]
flat_networks = provider [ml2_type_vxlan]
vni_ranges = 1:1000 [securitygroup]
enable_ipset = true
(4) 配置Linux网桥代理
#复制备份配置文件并去掉注释
[root@controller ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini #编辑
[root@controller ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens34 #这里选择提供给实例的net网卡 [vxlan]
enable_vxlan = true
local_ip = 10.10.10.10
l2_population = true [securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
(5) Enable the bridge filter
#Add the kernel parameters to the sysctl configuration
[root@controller ~]# echo 'net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.conf
#Load the br_netfilter module
[root@controller ~]# modprobe br_netfilter
#Verify
[root@controller ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1     #both keys echoed back means the
net.bridge.bridge-nf-call-ip6tables = 1    #configuration took effect
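One caveat with `echo ... >> /etc/sysctl.conf`: re-running the step appends the two keys a second time. A small guard keeps the step idempotent (`append_once` is a hypothetical helper, not part of the deployment; it is demonstrated here against a temporary file, but the target in practice would be /etc/sysctl.conf):

```shell
# append_once: add a line to a file only if that exact line is not already there.
append_once() {
    line=$1 file=$2
    grep -qxF -- "$line" "$file" 2>/dev/null || echo "$line" >> "$file"
}

f=$(mktemp)
append_once 'net.bridge.bridge-nf-call-iptables = 1' "$f"
append_once 'net.bridge.bridge-nf-call-iptables = 1' "$f"   # second call is a no-op
wc -l < "$f"    # prints: 1
```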
(6) Configure the DHCP agent
#Back up the config file, then strip blank and comment lines from the working copy
[root@controller ~]# cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/dhcp_agent.ini.bak >/etc/neutron/dhcp_agent.ini
#Edit
[root@controller ~]# vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
(7) Configure the layer-3 agent
#Back up the config file, then strip blank and comment lines from the working copy
[root@controller ~]# cp /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/l3_agent.ini.bak > /etc/neutron/l3_agent.ini
#Edit
[root@controller ~]# vim /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = linuxbridge
(8) Configure the metadata agent
#Back up the config file, then strip blank and comment lines from the working copy
[root@controller ~]# cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/metadata_agent.ini.bak >/etc/neutron/metadata_agent.ini
#Edit
[root@controller ~]# vim /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET    #'METADATA_SECRET' is a shared secret of your choosing; it must match the value set later in nova's metadata configuration
(9) Configure the compute service to use the network service
#In the [neutron] section, set the access parameters and enable the metadata proxy
[root@controller ~]# vim /etc/nova/nova.conf
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 111111
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET    #must match the secret set in the metadata agent
(10) Create the network service initialization symlink
[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
(11) Populate the database
[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
(12) Restart the nova API service
[root@controller ~]# systemctl restart openstack-nova-api.service
(13) Start the neutron services and enable them at boot
[root@controller ~]# systemctl enable --now neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service neutron-l3-agent.service

Compute node (self-service network)

(1) Install the neutron packages
[root@compute ~]# dnf install -y openstack-neutron-linuxbridge ebtables ipset
(2) Edit the neutron configuration file
#Back up the config file, then strip blank and comment lines from the working copy
[root@compute ~]# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
[root@compute ~]# grep -Ev '^$|#' /etc/neutron/neutron.conf.bak >/etc/neutron/neutron.conf
#Edit
[root@compute ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:111111@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 111111

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
(3) Configure the Linux bridge agent
#Back up the config file, then strip blank and comment lines from the working copy
[root@compute ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
[root@compute ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini
#Edit
[root@compute ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens34    #the NAT NIC that provides connectivity to instances

[vxlan]
enable_vxlan = true
local_ip = 10.10.10.20
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
(4) Enable the bridge filter
#Add the kernel parameters to the sysctl configuration
[root@compute ~]# echo 'net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.conf
#Load the br_netfilter module
[root@compute ~]# modprobe br_netfilter
#Verify
[root@compute ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1     #both keys echoed back means the
net.bridge.bridge-nf-call-ip6tables = 1    #configuration took effect
(5) Configure the compute service to use the network service
#In the [neutron] section, set the access parameters
[root@compute ~]# vim /etc/nova/nova.conf
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 111111
(6) Restart the compute service (nova-api does not run on this node; it is nova-compute that must pick up the change)
[root@compute ~]# systemctl restart openstack-nova-compute.service
(7) Start the Linux bridge agent and enable it at boot
[root@compute ~]# systemctl enable --now neutron-linuxbridge-agent.service

Verify that the self-service network is working

(8) List the network agents from the controller node
[root@controller ~]# openstack network agent list
#On success the list shows five agents: a Metadata agent, a DHCP agent, an L3 agent,
#and two Linux bridge agents, one on controller and one on compute

Dashboard

(1) Install the dashboard package

[root@controller ~]# dnf install -y openstack-dashboard

(2) Edit the dashboard configuration file

#For every option below, search for it in command mode: modify it if present, add it if missing
[root@controller ~]# vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"    #use the IP address if name resolution is not configured
ALLOWED_HOSTS = ['controller', 'compute', '10.10.10.10', '10.10.10.20']
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    },
}
OPENSTACK_KEYSTONE_URL = "http://%s/identity/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}
TIME_ZONE = "Asia/Shanghai"

(3) Configure the httpd service

[root@controller ~]# vi /etc/httpd/conf.d/openstack-dashboard.conf

WSGIApplicationGroup %{GLOBAL}			#add this line

#Edit the dashboard configuration file again
[root@controller ~]# vim /etc/openstack-dashboard/local_settings
WEBROOT = '/dashboard/'    #add this line

(4) Restart the httpd and memcached services

[root@controller ~]# systemctl restart httpd.service memcached.service

(5) Log in to the web interface

#Without name resolution, browse straight to the IP
http://10.10.10.10/dashboard
#To use name resolution instead, add these entries to the hosts file
#under C:\Windows\System32\drivers\etc on your PC
10.10.10.10 controller
10.10.10.20 compute
#then browse to
http://controller/dashboard

Cinder block storage

Controller node

(1) Create the database and grant privileges
#Enter the database
[root@controller ~]# mysql -u root -p111111
#Create the cinder database
MariaDB [(none)]> CREATE DATABASE cinder;
#Grant privileges
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '111111';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '111111';
(2) Edit the configuration file
#Back up the config file, then strip blank and comment lines from the working copy
[root@controller ~]# cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
[root@controller ~]# grep -Ev '^$|#' /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf
#Edit
[root@controller ~]# vim /etc/cinder/cinder.conf
[DEFAULT]
transport_url = rabbit://openstack:111111@controller
auth_strategy = keystone
my_ip = 10.10.10.10

[database]
connection = mysql+pymysql://cinder:111111@controller/cinder

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 111111

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
(3) Populate the database
[root@controller ~]# su -s /bin/sh -c "cinder-manage db sync" cinder
(4) Check the cinder database tables
[root@controller ~]# mysql -uroot -p111111

MariaDB [(none)]> use cinder;
MariaDB [cinder]> show tables;
MariaDB [cinder]> quit
(5) Create the cinder user and services, and assign the admin role
#Create the cinder user
[root@controller ~]# openstack user create --domain default --password 111111 cinder
#Assign the admin role
[root@controller ~]# openstack role add --project service --user cinder admin
#Create the cinderv2 and cinderv3 services
[root@controller ~]# openstack service create --name cinderv2 \
> --description "OpenStack Block Storage" volumev2
[root@controller ~]# openstack service create --name cinderv3 \
> --description "OpenStack Block Storage" volumev3
(6) Register the API endpoints
cinderv2 endpoints
#public
[root@controller ~]# openstack endpoint create --region RegionOne \
> volumev2 public http://controller:8776/v2/%\(project_id\)s
#internal
[root@controller ~]# openstack endpoint create --region RegionOne \
> volumev2 internal http://controller:8776/v2/%\(project_id\)s
#admin
[root@controller ~]# openstack endpoint create --region RegionOne \
> volumev2 admin http://controller:8776/v2/%\(project_id\)s
cinderv3 endpoints
#public
[root@controller ~]# openstack endpoint create --region RegionOne \
> volumev3 public http://controller:8776/v3/%\(project_id\)s
#internal
[root@controller ~]# openstack endpoint create --region RegionOne \
> volumev3 internal http://controller:8776/v3/%\(project_id\)s
#admin
[root@controller ~]# openstack endpoint create --region RegionOne \
> volumev3 admin http://controller:8776/v3/%\(project_id\)s
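The six commands above differ only in API version and interface, so when scripting the deployment a nested loop can generate them from one template (a sketch; the real commands would be run after sourcing admin credentials). Here the commands are only echoed, so the list can be reviewed before piping it to `sh`:

```shell
# Build the six endpoint-create commands from one template.
# Echoed rather than executed, so the list can be inspected first.
cmds=$(
    for ver in v2 v3; do
        for iface in public internal admin; do
            echo "openstack endpoint create --region RegionOne" \
                 "volume${ver} ${iface} http://controller:8776/${ver}/%\\(project_id\\)s"
        done
    done
)
echo "$cmds"
```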
(7) List the service endpoints
[root@controller ~]# openstack endpoint list
(8) Configure the compute service to use block storage
#Edit the nova configuration file
[root@controller ~]# vi /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne
#Restart nova
[root@controller ~]# systemctl restart openstack-nova-api.service
(9) Start the cinder services and enable them at boot
[root@controller ~]# systemctl enable --now openstack-cinder-api.service openstack-cinder-scheduler.service

Compute node (shut down the VM and add a second 50 GB disk)

(1) Check the disks
[root@compute ~]# fdisk --list
(2) Install the LVM packages
[root@compute ~]# dnf -y install lvm2 device-mapper-persistent-data
(3) Create the LVM physical volume /dev/sdb
[root@compute ~]# pvcreate /dev/sdb
(4) Create the LVM volume group cinder-volumes
[root@compute ~]# vgcreate cinder-volumes /dev/sdb
(5) Edit the LVM configuration
#Back up the config file, then strip blank and comment lines from the working copy
[root@compute ~]# cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.bak
[root@compute ~]# grep -Ev '^$|#' /etc/lvm/lvm.conf.bak > /etc/lvm/lvm.conf
#Edit
[root@compute ~]# vi /etc/lvm/lvm.conf
devices {
filter = [ "a/sda/", "a/sdb/", "r/.*/" ]
}
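LVM evaluates the `filter` list left to right and the first matching pattern wins: `a/…/` accepts the device, `r/…/` rejects it, and the trailing `r/.*/` rejects everything not accepted earlier. The helper below (a hypothetical sketch, not part of LVM itself) emulates that walk so a filter line can be sanity-checked before it is written into lvm.conf:

```shell
# Emulate LVM's filter walk: the first pattern whose regex matches the device
# decides accept (a) or reject (r); LVM accepts when nothing matches.
lvm_filter() {
    dev=$1; shift
    for pat in "$@"; do
        typ=${pat%%/*}              # leading 'a' or 'r'
        re=${pat#??}; re=${re%/}    # regex between the slashes
        if echo "$dev" | grep -Eq "$re"; then
            echo "$typ"; return
        fi
    done
    echo a                          # default: accept
}

lvm_filter /dev/sdb 'a/sda/' 'a/sdb/' 'r/.*/'   # prints: a (accepted)
lvm_filter /dev/sdc 'a/sda/' 'a/sdb/' 'r/.*/'   # prints: r (rejected)
```

If `/dev/sdb` came out rejected here, cinder-volume would later fail to see the `cinder-volumes` volume group, so this is worth checking whenever the filter line is edited.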
(6) Install the cinder packages
[root@compute ~]# dnf install -y openstack-cinder targetcli python3-keystone
(7) Edit the cinder configuration file
#Back up the config file, then strip blank and comment lines from the working copy
[root@compute ~]# cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
[root@compute ~]# grep -Ev '^$|#' /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf
#Edit
[root@compute ~]# vim /etc/cinder/cinder.conf
[DEFAULT]
transport_url = rabbit://openstack:111111@controller
auth_strategy = keystone
my_ip = 10.10.10.20
enabled_backends = lvm
glance_api_servers = http://controller:9292

[database]
connection = mysql+pymysql://cinder:111111@controller/cinder

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 111111

[lvm]                             #add this section if missing
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes     #must match the volume group created above
target_protocol = iscsi
target_helper = lioadm

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
(8) Start the cinder volume service and enable it at boot
[root@compute ~]# systemctl enable --now openstack-cinder-volume.service target.service
(9) Back on the controller node, list the volume services
[root@controller ~]# openstack volume service list
#Output like this means everything is up
+------------------+-------------+------+---------+-------+----------------------------+
| Binary           | Host        | Zone | Status  | State | Updated At                 |
+------------------+-------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller  | nova | enabled | up    | 2023-05-11T08:12:03.000000 |
| cinder-volume    | compute@lvm | nova | enabled | up    | 2023-05-11T08:12:02.000000 |
+------------------+-------------+------+---------+-------+----------------------------+

This completes the Victoria-release OpenStack cloud deployment.
