Source: http://www.cnblogs.com/yaohong/p/7601470.html


Deploying OpenStack Ocata on CentOS 7: detailed configuration notes

 


I previously wrote "openstack mitaka configuration in detail", but Aliyun no longer provides a repository for the Mitaka release, so recently I have been working with the Ocata release instead, and wrote up my notes as the following document.

Official OpenStack Ocata installation guide: https://docs.openstack.org/ocata/install-guide-rdo/environment.html

If you would rather not install step by step, an installation script is available at http://www.cnblogs.com/yaohong/p/7251852.html

Part 1: Environment

1.1 Host networking

  • OS: CentOS 7

  • Controller node: 1 CPU, 4 GB RAM, 5 GB disk

  • Compute node: 1 CPU, 2 GB RAM, 10 GB disk

   Notes:

  1: Install two machines from a CentOS 7 image (see http://www.cnblogs.com/yaohong/p/7240387.html for installation details), give each machine two NICs, and set the memory as above.

  2: Set the hostnames to controller and compute1 respectively:

#hostnamectl set-hostname hostname

  3: Edit the /etc/hosts file on both controller and compute1:

#vi /etc/hosts

  4: Verify:

Ping the two hosts from each other, and ping an outside site such as www.baidu.com.
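For reference, /etc/hosts on both nodes might look like the fragment below (the addresses are placeholders; use your own management-network IPs):

```
# management network
192.168.1.73   controller
192.168.1.74   compute1
```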

 

1.2 Network Time Protocol (NTP)

[Install NTP on the controller node]

NTP keeps the nodes' clocks synchronized; if the clocks drift apart, you may be unable to create instances.

#yum install chrony              (install the package)

#vi /etc/chrony.conf              add:

server NTP_SERVER iburst

allow <your subnet>          (optional; lets hosts in that subnet use this machine as their NTP server)

#systemctl enable chronyd.service     (start at boot)

#systemctl start chronyd.service       (start the NTP service)

[Install NTP on the compute node]

# yum install chrony

#vi /etc/chrony.conf             remove every `server` line and add one that references the controller node:

server controller iburst

# systemctl enable chronyd.service     (start at boot)

# systemctl start chronyd.service       (start the NTP service)

[Verify NTP]

Run #chronyc sources on both the controller and compute nodes; each should list its time source.


1.3 OpenStack packages

[Install the OpenStack packages on both the controller and compute nodes]
  Install the latest OpenStack release repository:
  #yum install centos-release-openstack-ocata
  #yum install https://rdoproject.org/repos/rdo-release.rpm

#yum upgrade                          (upgrade the packages on the host)
  #yum install python-openstackclient         (install the OpenStack client)
  #yum install openstack-selinux

1.4 SQL database

    Installed on the controller node. Depending on the distribution, the guide uses MariaDB or MySQL; OpenStack services also support other SQL databases.
    #yum install mariadb mariadb-server python2-PyMySQL

         #vi /etc/my.cnf.d/openstack.cnf

    add:
        [mysqld]
      bind-address = 192.168.1.73                         (the IP address of the machine running MySQL; here, the controller's address)
      default-storage-engine = innodb
      innodb_file_per_table
      collation-server = utf8_general_ci
      character-set-server = utf8

    #systemctl enable mariadb.service     (start the database service at boot)
    #systemctl start mariadb.service          (start the database service)
    Secure the installation:
    #mysql_secure_installation  (see pitfall 1 in http://www.cnblogs.com/yaohong/p/7352386.html)

1.5 Message queue

    The message queue is the traffic hub of the whole OpenStack architecture. Precisely because an OpenStack deployment is flexible, its modules loosely coupled, and its architecture flat, it leans heavily on the message queue (not necessarily RabbitMQ;

    other message-queue products work too), so the queue's message throughput and HA capability directly affect OpenStack's performance. If rabbitmq is not running, the whole platform is unusable. RabbitMQ listens on port 5672.
    #yum install rabbitmq-server
    #systemctl enable rabbitmq-server.service     (start at boot)
    #systemctl start rabbitmq-server.service      (start the service)
    #rabbitmqctl add_user openstack RABBIT_PASS                       (add the user openstack; replace RABBIT_PASS with a password of your own)
    #rabbitmqctl set_permissions openstack ".*" ".*" ".*"                   (grant the new user configure/write/read permissions; without them it cannot send or receive messages)

1.6 Memcached

Memcached is optional. It uses port 11211.

[Controller node]
  #yum install memcached python-memcached

Set OPTIONS in /etc/sysconfig/memcached to:

OPTIONS="-l 127.0.0.1,::1,controller"

 

#systemctl enable memcached.service

 #systemctl start memcached.service

Part 2: Identity service

2.1 Install and configure

Log in to the database and create the keystone database.

[Controller node only]
  #mysql -u root -p
  #CREATE DATABASE keystone;
Grant access to a keystone user with a password of your choosing (KEYSTONE_DBPASS below):
  #GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
  #GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
Install and configure the components:
#yum install openstack-keystone httpd mod_wsgi
#vi /etc/keystone/keystone.conf
  

[database]

connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
provider = fernet
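The `connection` value is a standard SQLAlchemy URL of the form dialect+driver://user:password@host/database. As a quick sketch (with a placeholder password), its pieces can be pulled apart with plain shell parameter expansion:

```shell
# Placeholder URL in the same format as the keystone.conf setting above
url="mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone"
rest="${url#*://}"        # strip the dialect+driver prefix
creds="${rest%%@*}"       # user:password
hostdb="${rest#*@}"       # host/database
echo "user=${creds%%:*} host=${hostdb%%/*} db=${hostdb#*/}"
```

Running it prints `user=keystone host=controller db=keystone`, which is a handy way to double-check a connection string before writing it into a config file.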

Populate the Identity service database:

# su -s /bin/sh -c "keystone-manage db_sync" keystone   (be sure to check that the tables were actually created)
  Initialize the Fernet keys:
  #keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
  Bootstrap the Identity service:

keystone-manage bootstrap --bootstrap-password ADMIN_PASS \

--bootstrap-admin-url http://controller:35357/v3/ \

--bootstrap-internal-url http://controller:5000/v3/ \

--bootstrap-public-url http://controller:5000/v3/ \

--bootstrap-region-id RegionOne

Configure Apache:
  #vi  /etc/httpd/conf/httpd.conf

ServerName controller    (set ServerName to the hostname to prevent a startup error)

Create a link to the /usr/share/keystone/wsgi-keystone.conf file:

#ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

Start httpd:
  #systemctl enable httpd.service
  #systemctl start httpd.service

Configure the administrative account

#vi admin and add:

export OS_USERNAME=admin

export OS_PASSWORD=123456

export OS_PROJECT_NAME=admin

export OS_USER_DOMAIN_NAME=Default

export OS_PROJECT_DOMAIN_NAME=Default

export OS_AUTH_URL=http://controller:35357/v3

export OS_IDENTITY_API_VERSION=3
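As a sanity check of the pattern (a sketch: /tmp/admin-openrc and the values are placeholders), writing such a file and sourcing it makes the variables visible to the OpenStack client in the current shell:

```shell
# Write a throwaway copy of the rc file and load it into the current shell
cat > /tmp/admin-openrc <<'EOF'
export OS_USERNAME=admin
export OS_IDENTITY_API_VERSION=3
EOF
. /tmp/admin-openrc
echo "$OS_USERNAME $OS_IDENTITY_API_VERSION"
```

This prints `admin 3`; note the leading `.` (source) — running the file as a subprocess would not export anything into your shell.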

2.2 Create a domain, projects, users, and roles

Create the Service project:
  #openstack project create --domain default \

--description "Service Project" service
  Create the Demo project:
  #openstack project create --domain default \

--description "Demo Project" demo

Create the demo user:
  #openstack user create --domain default \

--password-prompt demo
  Create the user role:
  #openstack role create user
  Add the user role to the demo project and user:
  #openstack role add --project demo --user demo user

2.3 Verify

vi /etc/keystone/keystone-paste.ini

Remove admin_token_auth from the [pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3] sections.
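What the edit amounts to, sketched on a single illustrative pipeline line (the exact pipeline contents vary by release, so this line is only an example):

```shell
# Hypothetical pipeline line containing the filter to be removed
line="pipeline = cors request_id admin_token_auth build_auth_context token_auth json_body public_service"
echo "$line" | sed 's/admin_token_auth //'
```

Only the `admin_token_auth` token disappears; every other filter and its order stay unchanged.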

Unset the temporary OS_AUTH_URL and OS_PASSWORD environment variables:

unset OS_AUTH_URL OS_PASSWORD

As the admin user, request an authentication token:
  #openstack --os-auth-url http://controller:35357/v3 \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name admin --os-username admin token issue

You may hit an error here:

Since it is an HTTP error, go back to the Apache HTTP server configuration, restart the Apache service, and set the admin environment variables again:

  # systemctl restart httpd.service

  $ export OS_USERNAME=admin

  $ export OS_PASSWORD=ADMIN_PASS

  $ export OS_PROJECT_NAME=admin

  $ export OS_USER_DOMAIN_NAME=Default

  $ export OS_PROJECT_DOMAIN_NAME=Default

  $ export OS_AUTH_URL=http://controller:35357/v3

  $ export OS_IDENTITY_API_VERSION=3

Then run the request again:

#openstack --os-auth-url http://controller:35357/v3 \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name admin --os-username admin token issue

 If the command prompts for the password and then prints a token, the configuration is correct.

Figure 2.4 Verifying the admin identity service

As the demo user, request an authentication token:

#openstack --os-auth-url http://controller:5000/v3 \

--os-project-domain-name default --os-user-domain-name default \

--os-project-name demo --os-username demo token issue

2.4 Create OpenStack client environment scripts

The environment variables can be kept in scripts:
  #vi admin-openrc and add:

export OS_PROJECT_DOMAIN_NAME=default
  export OS_USER_DOMAIN_NAME=default
  export OS_PROJECT_NAME=admin
  export OS_USERNAME=admin
  export OS_PASSWORD=123456          (the password set for admin)
  export OS_AUTH_URL=http://controller:35357/v3
  export OS_IDENTITY_API_VERSION=3
  export OS_IMAGE_API_VERSION=2

#vi demo-openrc and add:

 export OS_PROJECT_DOMAIN_NAME=default
   export OS_USER_DOMAIN_NAME=default
   export OS_PROJECT_NAME=demo
   export OS_USERNAME=demo
   export OS_PASSWORD=123456          (the password set for demo)
   export OS_AUTH_URL=http://controller:35357/v3
   export OS_IDENTITY_API_VERSION=3
   export OS_IMAGE_API_VERSION=2

  

 

#. admin-openrc   (source admin-openrc to load the Identity service endpoint and the admin project and user credentials)
   #openstack token issue   (request an authentication token)

Figure 2.6 Requesting an authentication token

Part 3: Image service

3.1 Install and configure

Create the glance database
  Log in to mysql:
  #mysql -u root -p   (connect to the database server as root)
  #CREATE DATABASE glance;   (create the glance database)
  Grant access:
   GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';   (grant proper access to the glance database)
   GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';   (grant proper access to the glance database)
  Load the environment variables:
  #. admin-openrc
  Create the glance user:
  #openstack user create --domain default --password-prompt glance

Add the admin role to the glance user and service project:
    #openstack role add --project service --user glance admin
  Create the glance service entity:
  #openstack service create --name glance \
 --description "OpenStack Image" image

Figure 3.1 Creating the glance service entity

Create the Image service API endpoints:
  #openstack endpoint create --region RegionOne \
         image public http://controller:9292

Figure 3.2 Creating the Image service public endpoint

#openstack endpoint create --region RegionOne \
image internal http://controller:9292

Figure 3.3 Creating the Image service internal endpoint

  #openstack endpoint create --region RegionOne \
image admin http://controller:9292

Figure 3.4 Creating the Image service admin endpoint

  Install:
  #yum install openstack-glance
  #vi  /etc/glance/glance-api.conf and configure:


[database]

connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
  [keystone_authtoken]   (configure Identity service access)
  add:
   auth_uri = http://controller:5000
   auth_url = http://controller:35357
   memcached_servers = controller:11211
   auth_type = password
   project_domain_name = default
   user_domain_name = default
   project_name = service
   username = glance
   password = GLANCE_PASS   (the password chosen for the glance user)
   [paste_deploy]
   flavor = keystone
  [glance_store]
   stores = file,http
   default_store = file
   filesystem_store_datadir = /var/lib/glance/images/
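Glance writes uploaded images under `filesystem_store_datadir`, so that directory must exist and be writable by the service. A minimal sketch of the check (using a temporary path in place of /var/lib/glance/images/, which the package normally creates for you):

```shell
d=/tmp/glance-images-demo     # stands in for /var/lib/glance/images/
mkdir -p "$d"
[ -d "$d" ] && [ -w "$d" ] && echo "store directory is usable"
```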
 

#vi /etc/glance/glance-registry.conf


[database]
   connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
   [keystone_authtoken]   (configure Identity service access)
   add:
      auth_uri = http://controller:5000
      auth_url = http://controller:35357
      memcached_servers = controller:11211
      auth_type = password
      project_domain_name = default
      user_domain_name = default
      project_name = service
      username = glance
      password = GLANCE_PASS   (the password chosen for the glance user)
  [paste_deploy]
      flavor = keystone

   

 Populate the database:
      #su -s /bin/sh -c "glance-manage db_sync" glance
    Start glance:
      #systemctl enable openstack-glance-api.service \
openstack-glance-registry.service
      # systemctl start openstack-glance-api.service \
openstack-glance-registry.service

3.2 Verify

Load the environment variables:
  #. admin-openrc
  Download a small test image:
  #wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img

If wget is not installed the download fails; install it first:

yum -y install wget

then run the download again:

wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img

Upload the image:
  #openstack image create "cirros" \

--file cirros-0.3.5-x86_64-disk.img \

--disk-format qcow2 --container-format bare \

--public

Figure 3.5 Uploading the image

  Check:
   #openstack image list

Figure 3.6 Confirming the image upload

If the image appears in the output, glance is configured correctly.

Part 4: Compute service

4.1 Install and configure the controller node

Create the nova databases:
  #mysql -u root -p   (connect to the database server as root)
  #CREATE DATABASE nova_api;
  #CREATE DATABASE nova;   (create the nova_api and nova databases)

#CREATE DATABASE nova_cell0;

  Grant proper access to the databases:
  #GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
  #GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
  #GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
  #GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';

#GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \

IDENTIFIED BY 'NOVA_DBPASS';

#GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \

IDENTIFIED BY 'NOVA_DBPASS';

Load the environment variables:
  #. admin-openrc
  Create the nova user:
  #openstack user create --domain default \
 --password-prompt nova
  Add the admin role to the nova user:
  #openstack role add --project service --user nova admin
  Create the nova service entity:
  #openstack service create --name nova \
--description "OpenStack Compute" compute
  Create the Compute service API endpoints:
  #openstack endpoint create --region RegionOne \

compute public http://controller:8774/v2.1

#openstack endpoint create --region RegionOne \

compute internal http://controller:8774/v2.1

#openstack endpoint create --region RegionOne \

compute admin http://controller:8774/v2.1

Create the placement user, service entity, and endpoints:

#openstack user create --domain default --password-prompt placement

#openstack role add --project service --user placement admin

#openstack service create --name placement --description "Placement API" placement

#openstack endpoint create --region RegionOne placement public http://controller:8778

# openstack endpoint create --region RegionOne placement internal http://controller:8778

#openstack endpoint create --region RegionOne placement admin http://controller:8778

Install:
  # yum install openstack-nova-api openstack-nova-conductor \

openstack-nova-console openstack-nova-novncproxy \

openstack-nova-scheduler openstack-nova-placement-api
  #vi /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller
my_ip = 10.0.0.11    (the controller's management IP address)
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = true
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = PLACEMENT_PASS

#vi  /etc/httpd/conf.d/00-nova-placement-api.conf

add:

<Directory /usr/bin>

<IfVersion >= 2.4>

Require all granted

</IfVersion>

<IfVersion < 2.4>

Order allow,deny

Allow from all

</IfVersion>

</Directory>

Restart the httpd service:

#systemctl restart httpd

Populate the nova-api database:

#su -s /bin/sh -c "nova-manage api_db sync" nova

Register the cell0 database:

#su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

Create the cell1 cell:

#su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

Populate the nova database:

su -s /bin/sh -c "nova-manage db sync" nova

Verify that cell0 and cell1 are registered correctly:

nova-manage cell_v2 list_cells

#systemctl enable openstack-nova-api.service \

openstack-nova-consoleauth.service openstack-nova-scheduler.service \

openstack-nova-conductor.service openstack-nova-novncproxy.service

# systemctl start openstack-nova-api.service \

openstack-nova-consoleauth.service openstack-nova-scheduler.service \

openstack-nova-conductor.service openstack-nova-novncproxy.service

4.2 Install and configure the compute node

#yum install openstack-nova-compute

Edit:

#vi /etc/nova/nova.conf

[DEFAULT]

enabled_apis = osapi_compute,metadata

transport_url = rabbit://openstack:RABBIT_PASS@controller

my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS    (the compute node's management IP address)

use_neutron = True

firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]

auth_strategy = keystone

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = nova

password = NOVA_PASS

[vnc]

enabled = True

vncserver_listen = 0.0.0.0

vncserver_proxyclient_address = $my_ip

novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]

api_servers = http://controller:9292

[oslo_concurrency]

lock_path = /var/lib/nova/tmp

[placement]

os_region_name = RegionOne

project_domain_name = Default

project_name = service

auth_type = password

user_domain_name = Default

auth_url = http://controller:35357/v3

username = placement

password = PLACEMENT_PASS

#egrep -c '(vmx|svm)' /proc/cpuinfo    (check whether the compute node supports hardware acceleration for virtual machines)

  If the result is 0, edit #vi /etc/nova/nova.conf and set:

[libvirt]
  virt_type = qemu
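The decision the text describes can be sketched as a small shell test (`count` is hard-coded here; on a real node it would come from the egrep command above):

```shell
count=0   # substitute: count=$(egrep -c '(vmx|svm)' /proc/cpuinfo)
if [ "$count" -eq 0 ]; then
  virt_type=qemu    # no vmx/svm flags: no hardware acceleration, fall back to plain QEMU
else
  virt_type=kvm     # vmx/svm present: use KVM (the default)
fi
echo "virt_type = $virt_type"
```

With `count=0` this prints `virt_type = qemu`; only in that case do you need to change nova.conf, since kvm is the default.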

Start the Compute service and its dependency, and configure them to start at boot:
 #systemctl enable libvirtd.service openstack-nova-compute.service
 #systemctl start libvirtd.service openstack-nova-compute.service
Add the compute node to the cell database

Run the following on the controller node:

#. admin-openrc

# openstack hypervisor list

#su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

To have new compute hosts discovered automatically instead, set the discovery interval:

vi /etc/nova/nova.conf

  [scheduler]

  discover_hosts_in_cells_interval = 300

4.3 Verify

On the controller node:
  Load the environment variables:
#. admin-openrc
#openstack compute service list
 Normal output here means the configuration is correct.

#openstack catalog list

#openstack image list

#nova-status upgrade check

Part 5: Networking service

5.1 Install and configure the controller node

Create the neutron database
  #mysql -u root -p
  #CREATE DATABASE neutron;

Grant proper access to the neutron database, replacing NEUTRON_DBPASS with a suitable password:
  #GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
 IDENTIFIED BY 'NEUTRON_DBPASS';
  #GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
  Load the environment variables:
  #. admin-openrc
  Create the neutron user:
  #openstack user create --domain default --password-prompt neutron
  Add the admin role to the neutron user:
  #openstack role add --project service --user neutron admin
  Create the neutron service entity:
  #openstack service create --name neutron \
--description "OpenStack Networking" network
  Create the Networking service API endpoints:

#openstack endpoint create --region RegionOne \
network public http://controller:9696
  #openstack endpoint create --region RegionOne \
 network internal http://controller:9696
  #openstack endpoint create --region RegionOne \
network admin http://controller:9696
  Set up self-service (VXLAN) networking:
  #yum install openstack-neutron openstack-neutron-ml2 \
openstack-neutron-linuxbridge ebtables
  #vi /etc/neutron/neutron.conf

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

   
 

 

Configure the ML2 plug-in:
  #vi /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]

type_drivers = flat,vlan,vxlan

tenant_network_types = vxlan

mechanism_drivers = linuxbridge,l2population

extension_drivers = port_security

[ml2_type_flat]

flat_networks = provider

[ml2_type_vxlan]

vni_ranges = 1:1000

[securitygroup]

enable_ipset = true
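The `vni_ranges` value above is a min:max pair of VXLAN network identifiers; splitting it shows the range of tenant networks that can be allocated:

```shell
vni_ranges="1:1000"                 # same value as in ml2_conf.ini above
first="${vni_ranges%%:*}"
last="${vni_ranges##*:}"
echo "VNIs $first through $last ($((last - first + 1)) tenant networks)"
```

With the value above this prints `VNIs 1 through 1000 (1000 tenant networks)`; widen the range if you expect more tenant networks than that.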

  

Configure the Linux bridge agent:

  #vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[DEFAULT]

[agent]

[linux_bridge]

physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME    (the name of the second NIC)

[securitygroup]

enable_security_group = true

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

[vxlan]

enable_vxlan = true

local_ip = 192.168.1.146    (this node's overlay network IP address)

l2_population = true

Configure the layer-3 agent:
  #vi /etc/neutron/l3_agent.ini

[DEFAULT]
  interface_driver = linuxbridge

Configure the DHCP agent:
  #vi /etc/neutron/dhcp_agent.ini

[DEFAULT]

interface_driver = linuxbridge

dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

enable_isolated_metadata = true

Configure the metadata agent (replace METADATA_SECRET below with a secret of your choosing):
 #vi /etc/neutron/metadata_agent.ini

[DEFAULT]
  nova_metadata_ip = controller
  metadata_proxy_shared_secret = METADATA_SECRET

Configure the Compute service (on the controller) to use the Networking service

#vi /etc/nova/nova.conf

[neutron]
      url = http://controller:9696
      auth_url = http://controller:35357
      auth_type = password
      project_domain_name = default
      user_domain_name = default
      region_name = RegionOne
      project_name = service
      username = neutron
      password = NEUTRON_PASS    (the neutron user's password)
      service_metadata_proxy = True
      metadata_proxy_shared_secret = METADATA_SECRET

Create the plug-in symlink:
   #ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    Populate the database:

# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \

--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

Restart the Compute API service, then enable and start the Networking services:
   #systemctl restart openstack-nova-api.service
   #systemctl enable neutron-server.service \

neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
 neutron-metadata-agent.service
   #systemctl start neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service

Enable the layer-3 service at boot and start it:
    # systemctl enable neutron-l3-agent.service
   #systemctl start neutron-l3-agent.service

5.2 Install and configure the compute node

#yum install openstack-neutron-linuxbridge ebtables ipset
   #vi  /etc/neutron/neutron.conf

[DEFAULT]

transport_url = rabbit://openstack:RABBIT_PASS@controller

auth_strategy = keystone

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = NEUTRON_PASS

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

Configure the VXLAN agent:
  #vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
  physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME    (the name of the second NIC)
  [vxlan]
  enable_vxlan = True
  local_ip = OVERLAY_INTERFACE_IP_ADDRESS    (this node's overlay network IP address)
  l2_population = True
  [securitygroup]
  enable_security_group = True
  firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

#vi /etc/nova/nova.conf

[neutron]
      url = http://controller:9696
      auth_url = http://controller:35357
      auth_type = password
      project_domain_name = default
      user_domain_name = default
      region_name = RegionOne
      project_name = service
      username = neutron
      password = NEUTRON_PASS    (the neutron user's password)

Restart the Compute service, then enable and start the Linux bridge agent:
  #systemctl restart openstack-nova-compute.service
  #systemctl enable neutron-linuxbridge-agent.service
  #systemctl start neutron-linuxbridge-agent.service

5.3 Verify

Load the environment variables:
  #. admin-openrc

#openstack extension list --network

#openstack network agent list

Part 6: Dashboard

6.1 Configure

#yum install openstack-dashboard
  #vi /etc/openstack-dashboard/local_settings

 OPENSTACK_HOST = "controller"
     ALLOWED_HOSTS = ['one.example.com', 'two.example.com']    (list the hostnames allowed to access the dashboard, or use ['*'] to allow all)

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
   'BACKEND':  'django.core.cache.backends.memcached.MemcachedCache',
      'LOCATION': 'controller:11211',
    }
   }
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
      OPENSTACK_API_VERSIONS = {
        "identity": 3,
        "image": 2,
        "volume": 2,
        }
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_NEUTRON_NETWORK = {

'enable_router': False,

'enable_quotas': False,

'enable_distributed_router': False,

'enable_ha_router': False,

'enable_lb': False,

'enable_firewall': False,

'enable_vpn': False,

'enable_fip_topology_check': False,

}

TIME_ZONE = "TIME_ZONE"    (replace TIME_ZONE with a tz database identifier, e.g. "Asia/Shanghai")

Restart the services:
  #systemctl restart httpd.service memcached.service

6.2 Log in

Browse to http://<controller IP>/dashboard/auth/login

Domain: default

User name: admin or demo

Password: the one you set

Figure 6.1 The login page
