Win10 + VirtualBox + OpenStack Mitaka
Installing VirtualBox itself needs no walkthrough: download it from the official site (https://www.virtualbox.org/wiki/Downloads), or fetch an older build from (https://www.virtualbox.org/wiki/Download_Old_Builds).
Next, set up the VirtualBox network.

Note the IP address field here: delete its contents entirely, switch to an English input method, and type the address again.
Next, configure the Host-Only adapter.


The following confirms that DHCP is not enabled.

Next, install Ubuntu.
Click New to create a VM, choose Linux, and select Ubuntu (64-bit) as the version.
The installation itself is not shown step by step, but configure the network adapters as shown below.

Adapter 2 is configured as follows.

Next, add storage: select the previously downloaded ubuntu-14.04.5-server-amd64.iso image (download: http://mirrors.aliyun.com/ubuntu-releases/14.04/ubuntu-14.04.5-server-amd64.iso).

After clicking "OK", start the VM to begin the installation.
Language: English (Enter)
Ubuntu: Install Ubuntu Server (Enter)
Keep pressing Enter until:

Since NAT is used to reach the outside network, select eth0 here. After pressing Enter, choose "Cancel"; ignore the resulting warning and click "Continue". You will then be prompted to configure the network. Choose manual configuration and press Enter:
IP address: 10.0.3.10
Netmask: 255.255.255.0
Gateway: 10.0.3.1
Name server addresses: 114.114.114.114
Hostname: controller
Domain name: leave empty, just press Enter
Full name for the new user: openstack
Username for your account: openstack
Choose a password for the new user: 123456
Re-enter password to verify: 123456
Use weak password? Choose "Yes" and press Enter.
Encrypt your home directory? Choose "No" and press Enter.
Next, confirm the time zone is Shanghai. If it is, choose "Yes" to continue; if not, choose "No" and pick Shanghai from the list.

In Partition disks, choose "Guided - use entire disk", press Enter twice, and when the prompt below appears, choose "Yes" and press Enter.

Configure the package manager: leave the HTTP proxy empty, choose Continue, and press Enter.

For the two Configuring apt steps, just press Enter to dismiss them.
Configuring tasksel: choose "No automatic updates", press Enter, then select OpenSSH server for installation.



The installation is complete and the system reboots automatically. After the reboot, shut down and clone the VM:

Choose "Full clone".

Next, configure the system environment. Select the newly created VM and start it. A reference copy of the network configuration file is at https://github.com/JiYou/openstack-m/blob/master/os/interfaces; now view and edit the interfaces file:
openstack@controller:~$ cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
address 10.0.3.10
netmask 255.255.255.0
network 10.0.3.0
broadcast 10.0.3.255
gateway 10.0.3.1
# dns-* options are implemented by the resolvconf package, if installed
dns-nameservers 114.114.114.114
auto eth1
iface eth1 inet static
address 192.168.56.10
netmask 255.255.255.0
gateway 192.168.56.1
dns-nameservers 114.114.114.114
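The network and broadcast lines in this file are derived from the address and netmask. As a quick sanity check, here is a small POSIX-sh sketch (illustration only, nothing OpenStack-specific) that computes both values:

```shell
# Compute network and broadcast addresses for 10.0.3.10/255.255.255.0
ip=10.0.3.10 mask=255.255.255.0
old_ifs=$IFS; IFS=.
set -- $ip;   i1=$1 i2=$2 i3=$3 i4=$4
set -- $mask; m1=$1 m2=$2 m3=$3 m4=$4
IFS=$old_ifs
# network = address AND mask; broadcast = address OR inverted mask
network="$((i1 & m1)).$((i2 & m2)).$((i3 & m3)).$((i4 & m4))"
broadcast="$((i1 | (255 - m1))).$((i2 | (255 - m2))).$((i3 | (255 - m3))).$((i4 | (255 - m4)))"
echo "network   $network"    # network   10.0.3.0
echo "broadcast $broadcast"  # broadcast 10.0.3.255
```

The printed values match the network/broadcast lines above, confirming the /24 layout is self-consistent.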
Reboot for the settings to take effect, then test the connection with Xshell, PuTTY, or another remote management tool; here I use Git Bash.
xueji@xueji MINGW64 ~
$ ssh openstack@192.168.56.10
The authenticity of host '192.168.56.10 (192.168.56.10)' can't be established.
ECDSA key fingerprint is SHA256:DvbqAHwl6bcmX3FcvaJZ1REpRR8Oup89ST+a8WFBY7Y.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.56.10' (ECDSA) to the list of known hosts.
openstack@192.168.56.10's password:
Welcome to Ubuntu 14.04 LTS (GNU/Linux x86_64)

 * Documentation:  https://help.ubuntu.com/

  System load: 0.11    IP address for eth0: 10.0.3.10
  Usage of /:  0.6%    IP address for eth1: 192.168.56.10

New release '16.04.5 LTS' available.
Run 'do-release-upgrade' to upgrade to it.
openstack@controller:~$ ifconfig
Login succeeded.
Next, prepare the OpenStack packages.
openstack@controller:~$ sudo -s
[sudo] password for openstack:
root@controller:~# apt-get update
root@controller:~# apt-get install -y software-properties-common
root@controller:~# add-apt-repository cloud-archive:mitaka
Ubuntu Cloud Archive for OpenStack Mitaka
More info: https://wiki.ubuntu.com/ServerTeam/CloudArchive
Press [ENTER] to continue or ctrl-c to cancel adding it
# press Enter
Reading package lists...
Building dependency tree...
Reading state information...
The following NEW packages will be installed:
ubuntu-cloud-keyring
Need to get 5,086 B of archives.
After this operation, 34.8 kB of additional disk space will be used.
Get: http://us.archive.ubuntu.com/ubuntu/ trusty/universe ubuntu-cloud-keyring all 2012.08.14 [5,086 B]
Selecting previously unselected package ubuntu-cloud-keyring.
Unpacking ubuntu-cloud-keyring (2012.08.14) ...
Setting up ubuntu-cloud-keyring (2012.08.14) ...
Importing ubuntu-cloud.archive.canonical.com keyring
OK
Processing ubuntu-cloud.archive.canonical.com removal keyring
gpg: /etc/apt/trustdb.gpg: trustdb created
OK
root@controller:~# apt-get update && apt-get dist-upgrade
root@controller:~# apt-get install -y python-openstackclient
Install NTP and MySQL
root@controller:~# hostname -I
10.0.3.10 192.168.56.10
root@controller:~# tail -n 2 /etc/hosts
10.0.3.10 controller
192.168.56.10 controller
root@controller:~# vim /etc/chrony/chrony.conf
# Comment out the following four lines, then add "server controller iburst" below them
#server 0.debian.pool.ntp.org offline minpoll 8
#server 1.debian.pool.ntp.org offline minpoll 8
#server 2.debian.pool.ntp.org offline minpoll 8
#server 3.debian.pool.ntp.org offline minpoll 8
server controller iburst
root@controller:~# chronyc sources
Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^? controller                   10y    +0ns[  +0ns] +/-   0ns
Install MySQL (MariaDB)
root@controller:~# apt-get install -y mariadb-server python-pymysql
Enter 123456 in the MySQL database password prompt that pops up.
root@controller:~# cd /etc/mysql/
root@controller:/etc/mysql# ls
conf.d debian.cnf debian-start my.cnf
root@controller:/etc/mysql# cp my.cnf{,.bak}
root@controller:/etc/mysql# vim my.cnf
[mysqld]
# add the following lines under this section
default-storage-engine = innodb
innodb_file_per_table
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
bind-address = 0.0.0.0   # original value: 127.0.0.1
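The `cp my.cnf{,.bak}` backup step above relies on shell brace expansion, which turns it into `cp my.cnf my.cnf.bak`. A minimal sketch of the idiom in a scratch directory (the expansion is written out explicitly so it also works in plain sh):

```shell
# Demonstrate the file{,.bak} backup idiom on a throwaway config
workdir=$(mktemp -d)
printf '[mysqld]\n' > "$workdir/my.cnf"
# In bash, "cp my.cnf{,.bak}" brace-expands to exactly this command:
cp "$workdir/my.cnf" "$workdir/my.cnf.bak"
ls "$workdir"
```

Keeping a `.bak` copy before every config edit makes it trivial to diff or roll back a broken change.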
Restart MySQL
root@controller:/etc/mysql# service mariadb restart
mariadb: unrecognized service
root@controller:/etc/mysql# service mysql restart
* Stopping MariaDB database server mysqld [ OK ]
* Starting MariaDB database server mysqld [ OK ]
* Checking for corrupt, not cleanly closed and upgrade needing tables.
Secure the installation
root@controller:/etc/mysql# mysql_secure_installation
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

Enter current password for root (enter for none):
OK, successfully used password, moving on...

Change the root password? [Y/n] n
 ... skipping.

Remove anonymous users? [Y/n] n
 ... skipping.

Disallow root login remotely? [Y/n] n
 ... skipping.

Remove test database and access to it? [Y/n] n
 ... skipping.

Reload privilege tables now? [Y/n] y
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.  Thanks for using MariaDB!
Test the connection
root@controller:/etc/mysql# mysql -uroot -p123456
Welcome to the MariaDB monitor. Commands end with ; or \g.
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
+--------------------+
3 rows in set (0.00 sec)

MariaDB [(none)]> \q
Bye
root@controller:/etc/mysql# mysql -uroot -p123456 -h10.0.3.10
Welcome to the MariaDB monitor. Commands end with ; or \g.
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
+--------------------+
3 rows in set (0.00 sec)

MariaDB [(none)]> \q
Bye
root@controller:/etc/mysql# mysql -uroot -p123456 -h192.168.56.10
Welcome to the MariaDB monitor. Commands end with ; or \g.
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
+--------------------+
3 rows in set (0.00 sec)

MariaDB [(none)]> \q
Bye
root@controller:/etc/mysql# mysql -uroot -p123456 -h127.0.0.1
Welcome to the MariaDB monitor. Commands end with ; or \g.
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
+--------------------+
3 rows in set (0.00 sec)

MariaDB [(none)]> \q
Bye
Install MongoDB
root@controller:~# apt-get install -y mongodb-server mongodb-clients python-pymongo
root@controller:~# cp /etc/mongodb.conf{,.bak}
root@controller:~# vim /etc/mongodb.conf
bind_ip = 0.0.0.0        # original value: 127.0.0.1
smallfiles = true        # add this line
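Hand-editing with vim works fine; the same two changes can also be scripted. A hedged sketch using sed on a throwaway copy (the real target is /etc/mongodb.conf):

```shell
# Practice the edit on a temp file instead of the live config
conf=$(mktemp)
printf 'bind_ip = 127.0.0.1\n' > "$conf"
# Flip the bind address in place, keeping a .bak backup of the original
sed -i.bak 's/^bind_ip = 127.0.0.1$/bind_ip = 0.0.0.0/' "$conf"
# Append the smallfiles setting
echo 'smallfiles = true' >> "$conf"
cat "$conf"
```

The `-i.bak` flag of GNU sed edits the file in place while leaving the untouched original next to it, which mirrors the `cp file{,.bak}` habit used throughout this guide.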
root@controller:~# service mongodb stop
mongodb stop/waiting
root@controller:~# ls /var/lib/mongodb/journal/
# if this directory contains files whose names start with prealloc, delete them all
root@controller:~# service mongodb start
mongodb start/running, process
Install RabbitMQ
root@controller:~# apt-get install -y rabbitmq-server
Add the openstack user (password 123456, matching the other services)
root@controller:~# rabbitmqctl add_user openstack 123456
Creating user "openstack" ...
Grant the "openstack" user configure, write, and read permissions
root@controller:~# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
Install Memcached
root@controller:~# apt-get install -y memcached python-memcache
root@controller:~# cp /etc/memcached.conf{,.bak}
root@controller:~# vim /etc/memcached.conf
-l 0.0.0.0               # original value: 127.0.0.1
Restart Memcached
root@controller:~# service memcached restart
Restarting memcached: memcached.
root@controller:~# service memcached status
* memcached is running
root@controller:~# ps aux | grep memcached
memcache ... Sl ... /usr/bin/memcached -m 64 -p 11211 -u memcache -l 0.0.0.0
root     ... S+ ... grep --color=auto memcached

Install Keystone
root@controller:~# mysql -uroot -p123456
Welcome to the MariaDB monitor. Commands end with ; or \g.
MariaDB [(none)]> create database keystone;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> grant all privileges on keystone.* to 'keystone'@'localhost' identified by '123456';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> grant all privileges on keystone.* to 'keystone'@'%' identified by '123456';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> \q
Bye
root@controller:~# mysql -ukeystone -p123456 -h 127.0.0.1
Welcome to the MariaDB monitor. Commands end with ; or \g.
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| keystone |
+--------------------+
2 rows in set (0.00 sec)

MariaDB [(none)]> \q
Bye
root@controller:~# mysql -ukeystone -p123456 -h 10.0.3.10
Welcome to the MariaDB monitor. Commands end with ; or \g.
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| keystone |
+--------------------+
2 rows in set (0.00 sec)

MariaDB [(none)]> \q
Bye
root@controller:~# mysql -ukeystone -p123456 -h 192.168.56.10
Welcome to the MariaDB monitor. Commands end with ; or \g.
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| keystone |
+--------------------+
2 rows in set (0.00 sec)

MariaDB [(none)]> \q
Bye
# all connections work
Next, install the Keystone packages
root@controller:~# echo "manual" > /etc/init/keystone.override
root@controller:~# apt-get install keystone apache2 libapache2-mod-wsgi
Configure keystone.conf
root@controller:~# cp /etc/keystone/keystone.conf{,.bak}
root@controller:~# vim /etc/keystone/keystone.conf
[DEFAULT]
admin_token =            # set this to a random token value
[database]
connection = mysql+pymysql://keystone:123456@controller/keystone
[token]
provider = fernet
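The connection value follows SQLAlchemy's dialect+driver://user:password@host/dbname form. A small parameter-expansion sketch (illustration only; the variable names are made up for the demo) that pulls one apart:

```shell
# Split a SQLAlchemy-style connection URL into its parts
url='mysql+pymysql://keystone:123456@controller/keystone'
driver=${url%%://*}                  # mysql+pymysql
rest=${url#*://}                     # keystone:123456@controller/keystone
user=${rest%%:*}                     # keystone
pass=${rest#*:}; pass=${pass%%@*}    # 123456
host=${rest#*@}; host=${host%%/*}    # controller
db=${rest##*/}                       # keystone
echo "$driver $user $pass $host $db"
```

Reading the URL this way makes it obvious which MySQL grant each service needs: the user and password here must match the `identified by` clause used when the database was created.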
# Populate the Identity service database
root@controller:~# su -s /bin/sh -c "keystone-manage db_sync" keystone
Initialize the Fernet keys
root@controller:~# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
INFO keystone.token.providers.fernet.utils [-] [fernet_tokens] key_repository does not appear to exist; attempting to create it
INFO keystone.token.providers.fernet.utils [-] Created a new key: /etc/keystone/fernet-keys/0
INFO keystone.token.providers.fernet.utils [-] Starting key rotation with key files: ['/etc/keystone/fernet-keys/0']
INFO keystone.token.providers.fernet.utils [-] Promoted key to be the primary
INFO keystone.token.providers.fernet.utils [-] Created a new key: /etc/keystone/fernet-keys/
root@controller:~# echo $?
Configure the Apache HTTP server
root@controller:~# cp /etc/apache2/apache2.conf{,.bak}
root@controller:~# vim /etc/apache2/apache2.conf
root@controller:~# grep 'ServerName' /etc/apache2/apache2.conf
ServerName controller    # add this line at the end of the file
Next, create the wsgi-keystone.conf file
root@controller:~# vim /etc/apache2/sites-available/wsgi-keystone.conf
Listen 5000
Listen 35357

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/apache2/keystone.log
    CustomLog /var/log/apache2/keystone_access.log combined
    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/apache2/keystone.log
    CustomLog /var/log/apache2/keystone_access.log combined
    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>
Enable the Identity service virtual hosts
root@controller:~# ln -s /etc/apache2/sites-available/wsgi-keystone.conf /etc/apache2/sites-enabled
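The ln -s line follows Apache's sites-available/sites-enabled convention: the config lives in sites-available, and a symlink in sites-enabled activates it (a2ensite does the same thing). A sketch of the mechanism in a scratch directory:

```shell
# Recreate the sites-available/sites-enabled layout in a temp dir
root=$(mktemp -d)
mkdir "$root/sites-available" "$root/sites-enabled"
printf 'Listen 5000\n' > "$root/sites-available/wsgi-keystone.conf"
# Enabling the site is just creating the symlink
ln -s "$root/sites-available/wsgi-keystone.conf" "$root/sites-enabled/wsgi-keystone.conf"
readlink "$root/sites-enabled/wsgi-keystone.conf"
```

Disabling a site later is just removing the symlink; the real config file in sites-available stays untouched.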
Restart Apache
root@controller:~# service apache2 restart
* Restarting web server apache2 [ OK ]
root@controller:~# rm -rf /var/lib/keystone/keystone.db
root@controller:~# lsof -i:5000
COMMAND PID USER     FD  TYPE DEVICE SIZE/OFF NODE NAME
apache2 ... root      6u IPv6    ...      0t0  TCP *:5000 (LISTEN)
apache2 ... www-data  6u IPv6    ...      0t0  TCP *:5000 (LISTEN)
apache2 ... www-data  6u IPv6    ...      0t0  TCP *:5000 (LISTEN)
root@controller:~# lsof -i:35357
COMMAND PID USER     FD  TYPE DEVICE SIZE/OFF NODE NAME
apache2 ... root      8u IPv6    ...      0t0  TCP *:35357 (LISTEN)
apache2 ... www-data  8u IPv6    ...      0t0  TCP *:35357 (LISTEN)
apache2 ... www-data  8u IPv6    ...      0t0  TCP *:35357 (LISTEN)

Install python-openstackclient
root@controller:~# apt-get install -y python-openstackclient
Set up the rootrc environment
root@controller:~# vim rootrc
root@controller:~# cat rootrc
export OS_TOKEN=                         # the admin_token value from keystone.conf
export OS_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export PS1="rootrc@\u@\h:\w\$"
# load the rootrc environment
root@controller:~# source rootrc
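source rootrc runs the file in the current shell, so the exported variables (and the new PS1 prompt) persist for the session instead of vanishing with a subshell. A self-contained sketch of the pattern using a throwaway rc file (the DEMO_* names are made up for this illustration):

```shell
# Write a tiny rc file, then load it into the current shell
rc=$(mktemp)
cat > "$rc" <<'EOF'
export DEMO_OS_URL=http://controller:35357/v3
export DEMO_API_VERSION=3
EOF
# "." is the POSIX spelling of "source": the file executes in the current shell
. "$rc"
echo "$DEMO_OS_URL $DEMO_API_VERSION"
```

This is why the prompt changes to rootrc@... after sourcing: the PS1 assignment in the rc file takes effect in the very shell you are typing in.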
Register services with Keystone

Note: port 35357 is normally used for admin access, while port 5000 is the one exposed to external users.
Create the service entity and API endpoints
adminrc@root@controller:~$source rootrc
rootrc@root@controller:~$openstack service create --name keystone --description "OpenStack Identify" identity
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Identify |
| enabled | True |
| id | 7052e2715c874ae18dc520ec21026a34 |
| name | keystone |
| type | identity |
+-------------+----------------------------------+
rootrc@root@controller:~$openstack endpoint create --region RegionOne identity internal http://controller:5000/v3
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | ac731860b374450484034b024e643004 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 7052e2715c874ae18dc520ec21026a34 |
| service_name | keystone |
| service_type | identity |
| url | http://controller:5000/v3 |
+--------------+----------------------------------+
rootrc@root@controller:~$openstack endpoint create --region RegionOne identity public http://controller:5000/v3
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | d1f7296477a748ef82ad4970580d50b2 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 7052e2715c874ae18dc520ec21026a34 |
| service_name | keystone |
| service_type | identity |
| url | http://controller:5000/v3 |
+--------------+----------------------------------+
rootrc@root@controller:~$openstack endpoint create --region RegionOne identity admin http://controller:35357/v3
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | df4eb1f2b08f474fa7b83ef979ebd0fb |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 7052e2715c874ae18dc520ec21026a34 |
| service_name | keystone |
| service_type | identity |
| url | http://controller:35357/v3 |
+--------------+----------------------------------+

Next, create the domain, project, user, and role
rootrc@root@controller:~$openstack domain create --description "Default Domain" default
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Default Domain |
| enabled | True |
| id | 1495769d2bbb44d192eee4c9b2f91ca3 |
| name | default |
+-------------+----------------------------------+
rootrc@root@controller:~$openstack project create --domain default --description "Admin Project" admin
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Admin Project |
| domain_id | 1495769d2bbb44d192eee4c9b2f91ca3 |
| enabled | True |
| id | 29577090a0e8466ab49cc30a4305f5f8 |
| is_domain | False |
| name | admin |
| parent_id | 1495769d2bbb44d192eee4c9b2f91ca3 |
+-------------+----------------------------------+
rootrc@root@controller:~$openstack user create --domain default --password admin admin
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | 1495769d2bbb44d192eee4c9b2f91ca3 |
| enabled | True |
| id | 653177098fac40a28734093706299e66 |
| name | admin |
+-----------+----------------------------------+
rootrc@root@controller:~$openstack role create admin
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | None |
| id | 6abd897a6f134b8ea391377d1617a2f8 |
| name | admin |
+-----------+----------------------------------+
rootrc@root@controller:~$openstack role add --project admin --user admin admin
rootrc@root@controller:~$ # no output here means it worked


Create the service project
rootrc@root@controller:~$openstack project create --domain default --description "Service Project" service
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Service Project |
| domain_id | 1495769d2bbb44d192eee4c9b2f91ca3 |
| enabled | True |
| id | 006a1ed36a0e4cbd8947d853b79d522c |
| is_domain | False |
| name | service |
| parent_id | 1495769d2bbb44d192eee4c9b2f91ca3 |
+-------------+----------------------------------+
rootrc@root@controller:~$openstack project create --domain default --description "Demo Project" demo
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Demo Project |
| domain_id | 1495769d2bbb44d192eee4c9b2f91ca3 |
| enabled | True |
| id | ffc560f6a2604c3896df922115c6fc2a |
| is_domain | False |
| name | demo |
| parent_id | 1495769d2bbb44d192eee4c9b2f91ca3 |
+-------------+----------------------------------+
rootrc@root@controller:~$openstack user create --domain default --password demo demo
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | 1495769d2bbb44d192eee4c9b2f91ca3 |
| enabled | True |
| id | c4de9fac882740838aa26e9119b30cb9 |
| name | demo |
+-----------+----------------------------------+
rootrc@root@controller:~$openstack role create user
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | None |
| id | e69817f50d6448fe888a64e51e025351 |
| name | user |
+-----------+----------------------------------+
rootrc@root@controller:~$openstack role add --project demo --user demo user
rootrc@root@controller:~$echo $?
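echo $? is the quickest success check used throughout this guide: $? holds the exit status of the last command, and 0 means success. A tiny illustration:

```shell
true                        # a command that always succeeds
ok=$?                       # 0 means success
sh -c 'exit 3' || bad=$?    # a failing command; "||" captures its status safely
echo "ok=$ok bad=$bad"      # ok=0 bad=3
```

Because $? is overwritten by every command, check it (or save it into a variable) immediately after the command you care about.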
Verify adminrc
rootrc@root@controller:~$vim adminrc
rootrc@root@controller:~$cat adminrc
unset OS_TOKEN
unset OS_URL
unset OS_IDENTITY_API_VERSION
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export PS1="adminrc@\u@\h:\w\$"
Load the adminrc environment and try to obtain a Keystone token
rootrc@root@controller:~$source adminrc
adminrc@root@controller:~$openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires | --14T21::.000000Z |
| id | gAAAAABcPPIQK270ipb9EgRW7feWYLunIVPaX9cTjhvgvTvMmpG8j8K_AkwPv5UL4WUFFzfDnO30A7WflnaOyufilAi7DCmbQ2YLlsGuAzgbCRYooV5pIJTkuqbhmRJDmFX068zliOri_rXL2CsTq9um3UtCPnOj7-7LxmXcFm5LwsP6OyzY4Ts |
| project_id | 29577090a0e8466ab49cc30a4305f5f8 |
| user_id | 653177098fac40a28734093706299e66 |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
adminrc@root@controller:~$date
Tue Jan :: CST
Verify demorc
adminrc@root@controller:~$vim demorc
adminrc@root@controller:~$cat demorc
unset OS_TOKEN
unset OS_URL
unset OS_IDENTITY_API_VERSION
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export PS1="demorc@\u@\h:\w\$"
Get a token as the demo user
adminrc@root@controller:~$source demorc
demorc@root@controller:~$openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires | --14T21::.000000Z |
| id | gAAAAABcPPPSLXi6E581bb8P0MpmHOLg-p0_vt9YLNWXn6feHLF6QONWq3Ny8JT4ceOvkKiv5TltLA4WRyn6XghcvZn-X0tuhOl07Eh6KXxGiGtEwgZyPFO-AFhykXims1FH0Tz4lp-fI_ExelOAcT50OFeKC3bB5vlGlYgR0pmdiVj8L73Boiw |
| project_id | ffc560f6a2604c3896df922115c6fc2a |
| user_id | c4de9fac882740838aa26e9119b30cb9 |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
demorc@root@controller:~$date
Tue Jan :: CST


Install the Glance service
demorc@root@controller:~$mysql -uroot -p123456
Welcome to the MariaDB monitor. Commands end with ; or \g.
MariaDB [(none)]> create database glance;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> grant all privileges on glance.* to 'glance'@'localhost' identified by '123456';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> grant all privileges on glance.* to 'glance'@'%' identified by '123456';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> \q
Bye
demorc@root@controller:~$source adminrc
adminrc@root@controller:~$
adminrc@root@controller:~$openstack service create --name glance --description "OpenStack Image" image
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Image |
| enabled | True |
| id | 24eba17c530946fea53413104b8d2035 |
| name | glance |
| type | image |
+-------------+----------------------------------+
adminrc@root@controller:~$ps -aux | grep -v "grep" | grep keystone
keystone ... Sl ... (wsgi:keystone-pu -k start    # 5 public workers
keystone ... Sl ... (wsgi:keystone-ad -k start    # 5 admin workers
adminrc@root@controller:~$lsof -i:5000
COMMAND PID USER     FD  TYPE DEVICE SIZE/OFF NODE NAME
apache2 ... root      6u IPv6    ...      0t0  TCP *:5000 (LISTEN)
apache2 ... www-data  6u IPv6    ...      0t0  TCP *:5000 (LISTEN)
apache2 ... www-data  6u IPv6    ...      0t0  TCP *:5000 (LISTEN)
adminrc@root@controller:~$lsof -i:35357
COMMAND PID USER     FD  TYPE DEVICE SIZE/OFF NODE NAME
apache2 ... root      8u IPv6    ...      0t0  TCP *:35357 (LISTEN)
apache2 ... www-data  8u IPv6    ...      0t0  TCP *:35357 (LISTEN)
apache2 ... www-data  8u IPv6    ...      0t0  TCP *:35357 (LISTEN)
adminrc@root@controller:~$tail /var/log/keystone/keystone-wsgi-admin.log
adminrc@root@controller:~$openstack endpoint create --region RegionOne image internal http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 83d13b44fbae4abbb89b7f1a9f1519d6 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 24eba17c530946fea53413104b8d2035 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
adminrc@root@controller:~$openstack endpoint create --region RegionOne image admin http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | c9708f196a6946f987652cb40b9a8aea |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 24eba17c530946fea53413104b8d2035 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+

adminrc@root@controller:~$openstack user create --domain default --password glance glance
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | 1495769d2bbb44d192eee4c9b2f91ca3 |
| enabled | True |
| id | b9c7a987bc494e72899d6ffa7c68c3d0 |
| name | glance |
+-----------+----------------------------------+
adminrc@root@controller:~$openstack role add --project service --user glance admin
adminrc@root@controller:~$sudo -s
root@controller:~# apt-get install -y glance
root@controller:~# echo $?
Configure glance-api.conf
root@controller:~# cp /etc/glance/glance-api.conf{,.bak}
root@controller:~# vim /etc/glance/glance-api.conf
......
connection = mysql+pymysql://glance:123456@controller/glance
......
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance
[paste_deploy]
flavor = keystone
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
Configure glance-registry.conf
root@controller:~# cp /etc/glance/glance-registry.conf{,.bak}
root@controller:~# vim /etc/glance/glance-registry.conf
.......
connection = mysql+pymysql://glance:123456@localhost/glance
.......
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance
........
[paste_deploy]
flavor = keystone
Populate the Image service database
root@controller:~# su -s /bin/sh -c "glance-manage db_sync" glance
............
INFO migrate.versioning.api [-] done
Restart the services once configuration is done
root@controller:~# service glance-registry restart
glance-registry stop/waiting
glance-registry start/running, process
root@controller:~# service glance-api restart
glance-api stop/waiting
glance-api start/running, process
Source the admin credentials to gain access to admin-only commands
root@controller:~# source adminrc
adminrc@root@controller:~$wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
adminrc@root@controller:~$ls -al cirros-0.3.4-x86_64-disk.img
-rw-r--r-- root root May cirros-0.3.4-x86_64-disk.img
adminrc@root@controller:~$file cirros-0.3.4-x86_64-disk.img
cirros-0.3.4-x86_64-disk.img: QEMU QCOW Image (v2), bytes
adminrc@root@controller:~$openstack image create "cirrors" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | ee1eca47dc88f4879d8a229cc70a07c6 |
| container_format | bare |
| created_at | --14T22::08Z |
| disk_format | qcow2 |
| file | /v2/images/39d73bcf-e60b-4caf--cca17de00d7e/file |
| id | 39d73bcf-e60b-4caf--cca17de00d7e |
| min_disk | |
| min_ram | |
| name | cirrors |
| owner | 29577090a0e8466ab49cc30a4305f5f8 |
| protected | False |
| schema | /v2/schemas/image |
| size | |
| status | active |
| tags | |
| updated_at | --14T22::08Z |
| virtual_size | None |
| visibility | public |
+------------------+------------------------------------------------------+
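Glance reports an md5 `checksum` field for every uploaded image, and it is worth comparing it against the local file before trusting the upload. A sketch, using a stand-in file since the real image lives on the controller:

```shell
# Hash a local file the same way Glance does (md5) for comparison.
IMG=/tmp/disk.img.demo            # stand-in for the real cirros image file
printf 'fake image data' > "$IMG"
local_sum=$(md5sum "$IMG" | awk '{print $1}')
echo "local md5: $local_sum"
# compare local_sum against the 'checksum' field from `openstack image show cirrors`
```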
List the images
adminrc@root@controller:~$openstack image list
+--------------------------------------+---------+--------+
| ID | Name | Status |
+--------------------------------------+---------+--------+
| 39d73bcf-e60b-4caf--cca17de00d7e | cirrors | active |
+--------------------------------------+---------+--------+
You can also look in Glance's images directory on the machine directly
adminrc@root@controller:~$ls /var/lib/glance/images/
39d73bcf-e60b-4caf--cca17de00d7e
Problems encountered
The error message:
adminrc@root@controller:~$openstack image create "cirrors" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public
Service Unavailable: The server is currently unavailable. Please try again at a later time. (HTTP 503)
adminrc@root@controller:~$cd /var/log/glance/
adminrc@root@controller:/var/log/glance$ls
glance-api.log glance-registry.log
adminrc@root@controller:/var/log/glance$tail glance-api.log
-- ::06.887 INFO glance.common.wsgi [-] Started child
-- ::06.889 INFO eventlet.wsgi.server [-] () wsgi starting up on http://0.0.0.0:9292
-- ::59.019 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}
-- ::59.071 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}
-- ::59.071 CRITICAL keystonemiddleware.auth_token [-] Unable to validate token: Identity server rejected authorization necessary to fetch token data
-- ::59.078 INFO eventlet.wsgi.server [-] 10.0.3.10 - - [/Jan/ ::] "GET /v2/schemas/image HTTP/1.1" 0.170589
-- ::01.259 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}
-- ::01.301 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}
-- ::01.302 CRITICAL keystonemiddleware.auth_token [-] Unable to validate token: Identity server rejected authorization necessary to fetch token data
-- ::01.306 INFO eventlet.wsgi.server [-] 10.0.3.10 - - [/Jan/ ::] "GET /v2/schemas/image HTTP/1.1" 0.089388
adminrc@root@controller:/var/log/glance$grep -rHn "ERROR"
adminrc@root@controller:/var/log/glance$grep -rHn "error"
glance-api.log::-- ::59.019 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}
glance-api.log::-- ::59.071 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}
glance-api.log::-- ::01.259 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}
glance-api.log::-- ::01.301 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}
adminrc@root@controller:~$openstack image create "cirrors" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public
Service Unavailable: The server is currently unavailable. Please try again at a later time. (HTTP 503)
adminrc@root@controller:~$tail /var/log/keystone/keystone-wsgi-admin.log
-- ::32.353 INFO keystone.token.providers.fernet.utils [req-749b2de5-d2be-47e8--083c54fe488d - - - - -] Loaded encryption keys (max_active_keys=) from: /etc/keystone/fernet-keys/
-- ::32.358 INFO keystone.common.wsgi [req-62e3bb30-ef7b-476a-8f49-dc062c1a9452 - - - - -] POST http://controller:35357/v3/auth/tokens
-- ::32.552 INFO keystone.token.providers.fernet.utils [req-62e3bb30-ef7b-476a-8f49-dc062c1a9452 - - - - -] Loaded encryption keys (max_active_keys=) from: /etc/keystone/fernet-keys/
-- ::32.561 INFO keystone.token.providers.fernet.utils [req-2540636c-0a56--adbc-deeaf0063210 - - - - -] Loaded encryption keys (max_active_keys=) from: /etc/keystone/fernet-keys/
-- ::32.682 INFO keystone.common.wsgi [req-2540636c-0a56--adbc-deeaf0063210 653177098fac40a28734093706299e66 29577090a0e8466ab49cc30a4305f5f8 - 1495769d2bbb44d192eee4c9b2f91ca3 1495769d2bbb44d192eee4c9b2f91ca3] GET http://controller:35357/v3/services/image
-- ::32.686 WARNING keystone.common.wsgi [req-2540636c-0a56--adbc-deeaf0063210 653177098fac40a28734093706299e66 29577090a0e8466ab49cc30a4305f5f8 - 1495769d2bbb44d192eee4c9b2f91ca3 1495769d2bbb44d192eee4c9b2f91ca3] Could not find service: image
-- ::32.691 INFO keystone.token.providers.fernet.utils [req-c4a9af14-d206--a693-23055fcb16e3 - - - - -] Loaded encryption keys (max_active_keys=) from: /etc/keystone/fernet-keys/
-- ::32.807 INFO keystone.common.wsgi [req-c4a9af14-d206--a693-23055fcb16e3 653177098fac40a28734093706299e66 29577090a0e8466ab49cc30a4305f5f8 - 1495769d2bbb44d192eee4c9b2f91ca3 1495769d2bbb44d192eee4c9b2f91ca3] GET http://controller:35357/v3/services?name=image
-- ::32.816 INFO keystone.token.providers.fernet.utils [req-cc99a9ba-db21--9c32-4eb39b931efa - - - - -] Loaded encryption keys (max_active_keys=) from: /etc/keystone/fernet-keys/
-- ::32.939 INFO keystone.common.wsgi [req-cc99a9ba-db21--9c32-4eb39b931efa 653177098fac40a28734093706299e66 29577090a0e8466ab49cc30a4305f5f8 - 1495769d2bbb44d192eee4c9b2f91ca3 1495769d2bbb44d192eee4c9b2f91ca3] GET http://controller:35357/v3/services?type=image
The solution
In the glance-api.conf and glance-registry.conf files:
[keystone_authtoken]
username = glance
password = 123456
This got mixed up with the glance database password; the value should be glance,
because the user was created earlier with: openstack user create --domain default --password glance glance
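A sketch of the fix: rewrite the bad password with sed rather than editing by hand. The demo file below stands in for /etc/glance/glance-api.conf; run the same sed against glance-registry.conf as well.

```shell
# Replace whatever password is set with the keystone user's password.
F=/tmp/glance-api.conf.demo       # stand-in for the real config file
printf '[keystone_authtoken]\nusername = glance\npassword = 123456\n' > "$F"
sed -i 's/^password = .*/password = glance/' "$F"
grep '^password' "$F"
```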
Install Nova
MariaDB [(none)]> create database nova_api;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> create database nova;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> grant all privileges on nova_api.* to 'nova'@'localhost' identified by '123456';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> grant all privileges on nova_api.* to 'nova'@'%' identified by '123456';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> grant all privileges on nova.* to 'nova'@'%' identified by '123456';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> grant all privileges on nova.* to 'nova'@'localhost' identified by '123456';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> \q
Bye
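The create/grant pattern above is identical for every service database, so a small helper can emit the SQL instead of typing it each time. A sketch (`make_grants` is a hypothetical helper, not part of any tool); pipe the output into `mysql -u root -p` yourself.

```shell
# Emit the standard create + grant statements for one service database.
make_grants() {  # usage: make_grants <db> <user> <password>
    cat <<EOF
CREATE DATABASE IF NOT EXISTS $1;
GRANT ALL PRIVILEGES ON $1.* TO '$2'@'localhost' IDENTIFIED BY '$3';
GRANT ALL PRIVILEGES ON $1.* TO '$2'@'%' IDENTIFIED BY '$3';
EOF
}
sql=$(make_grants nova_api nova 123456; make_grants nova nova 123456)
printf '%s\n' "$sql"
```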
Create the nova user
adminrc@root@controller:~$openstack user create --domain default --password nova nova
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | 1495769d2bbb44d192eee4c9b2f91ca3 |
| enabled | True |
| id | e4fc73ea1f6d47269ae4ab95ff999326 |
| name | nova |
+-----------+----------------------------------+
Add the admin role to the nova user
adminrc@root@controller:~$openstack role add --project service --user nova admin
Create the nova service entity
adminrc@root@controller:~$openstack service create --name nova --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Compute |
| enabled | True |
| id | 872de5b67b1547adb4826ca1f7ef96b3 |
| name | nova |
| type | compute |
+-------------+----------------------------------+
Create the Compute service API endpoints
adminrc@root@controller:~$openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
| enabled | True |
| id | 8e42256f67e446cc88568903286ed462 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 872de5b67b1547adb4826ca1f7ef96b3 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1/%(tenant_id)s |
+--------------+-------------------------------------------+
adminrc@root@controller:~$openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
| enabled | True |
| id | b07f3be5fff4444db57323bb04376d33 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 872de5b67b1547adb4826ca1f7ef96b3 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1/%(tenant_id)s |
+--------------+-------------------------------------------+
adminrc@root@controller:~$openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
| enabled | True |
| id | 91dc56e437e640c397696318ee1dcc21 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 872de5b67b1547adb4826ca1f7ef96b3 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1/%(tenant_id)s |
+--------------+-------------------------------------------+
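The three endpoint-create calls differ only in the interface name, so they collapse into a loop. A sketch that dry-runs by default (commands are echoed via `RUN=echo`); set `RUN=` to empty to execute for real with the admin credentials sourced.

```shell
# Create the public/internal/admin endpoints for the compute service in one loop.
RUN=${RUN:-echo}   # dry run by default; RUN= executes for real
endpoint_cmds=$(for iface in public internal admin; do
    $RUN openstack endpoint create --region RegionOne \
        compute "$iface" 'http://controller:8774/v2.1/%(tenant_id)s'
done)
printf '%s\n' "$endpoint_cmds"
```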
Install the Nova packages
adminrc@root@controller:~$apt-get install -y nova-api nova-conductor nova-consoleauth nova-novncproxy nova-scheduler
Configure nova.conf
adminrc@root@controller:~$cp /etc/nova/nova.conf{,.bak}
adminrc@root@controller:~$vim /etc/nova/nova.conf
[DEFAULT]
........
rpc_backend=rabbit
auth_strategy=keystone
my_ip=10.0.3.10
use_neutron=True
firewall_driver=nova.virt.firewall.NoopFirewallDriver
[database]
connection=mysql+pymysql://nova:123456@controller/nova
[api_database]
connection=mysql+pymysql://nova:123456@controller/nova_api
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password =
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[vnc]
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 0.0.0.0
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
Sync the databases
adminrc@root@controller:~$su -s /bin/sh -c "nova-manage api_db sync" nova
Option "logdir" from group "DEFAULT" is deprecated. Use option "log-dir" from group "DEFAULT".
Option "verbose" from group "DEFAULT" is deprecated for removal. Its value may be silently ignored in the future.
...........
-- ::43.731 INFO migrate.versioning.api [-] done
adminrc@root@controller:~$echo $?
adminrc@root@controller:~$su -s /bin/sh -c "nova-manage db sync" nova
.......
-- ::19.955 INFO migrate.versioning.api [-] done
adminrc@root@controller:~$echo $?
Restart the services
adminrc@root@controller:~$service nova-api restart
nova-api stop/waiting
nova-api start/running, process
adminrc@root@controller:~$service nova-consoleauth restart
nova-consoleauth stop/waiting
nova-consoleauth start/running, process
adminrc@root@controller:~$service nova-scheduler restart
nova-scheduler stop/waiting
nova-scheduler start/running, process
adminrc@root@controller:~$service nova-conductor restart
nova-conductor stop/waiting
nova-conductor start/running, process
adminrc@root@controller:~$service nova-novncproxy restart
nova-novncproxy stop/waiting
nova-novncproxy start/running, process
Check that the services came up
adminrc@root@controller:/var/log/nova$openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| Id | Binary | Host | Zone | Status | State | Updated At |
+----+------------------+------------+----------+---------+-------+----------------------------+
| | nova-consoleauth | controller | internal | enabled | up | --14T23::50.000000 |
| | nova-scheduler | controller | internal | enabled | up | --14T23::46.000000 |
| | nova-conductor | controller | internal | enabled | up | --14T23::49.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+
Install nova-compute; since this is a single-node deployment, nova-compute also goes on the controller node
adminrc@root@controller:~$apt-get install nova-compute
Reconfigure nova.conf
adminrc@root@controller:~$cp /etc/nova/nova.conf{,.back}
adminrc@root@controller:~$vim /etc/nova/nova.conf # leave the other settings unchanged
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://192.168.56.10:6080/vnc_auto.html
Determine whether the compute node supports hardware acceleration for virtual machines
adminrc@root@controller:~$egrep -c '(vmx|svm)' /proc/cpuinfo
0    # returns 0, so hardware acceleration is not supported
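The count from that egrep drives the virt_type choice: zero means no VT-x/AMD-V is exposed to the guest, so nova-compute must fall back to qemu. A sketch against a sample cpuinfo file; use /proc/cpuinfo on the real node.

```shell
# Decide kvm vs qemu from the number of vmx/svm flags in cpuinfo.
CPUINFO=/tmp/cpuinfo.sample       # stand-in for /proc/cpuinfo
printf 'flags\t\t: fpu vme de pse\n' > "$CPUINFO"   # no vmx/svm here
count=$(egrep -c '(vmx|svm)' "$CPUINFO" || true)
if [ "$count" -ge 1 ]; then virt_type=kvm; else virt_type=qemu; fi
echo "virt_type=$virt_type"
```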
The nova-compute.conf file needs to be changed
adminrc@root@controller:~$cp /etc/nova/nova-compute.conf{,.bak}
adminrc@root@controller:~$vim /etc/nova/nova-compute.conf
[libvirt]
virt_type=qemu # the original value was kvm
Restart the Compute service
adminrc@root@controller:~$service nova-compute restart
nova-compute stop/waiting
nova-compute start/running, process
adminrc@root@controller:~$openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| Id | Binary | Host | Zone | Status | State | Updated At |
+----+------------------+------------+----------+---------+-------+----------------------------+
| | nova-consoleauth | controller | internal | enabled | up | --15T00::51.000000 |
| | nova-scheduler | controller | internal | enabled | up | --15T00::57.000000 |
| | nova-conductor | controller | internal | enabled | up | --15T00::50.000000 |
| | nova-compute | controller | nova | enabled | up | --15T00::54.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+
To check the nova-api service:
adminrc@root@controller:~$service nova-api status
nova-api start/running, process
Install the Neutron networking service
MariaDB [(none)]> create database neutron;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> grant all privileges on neutron.* to 'neutron'@'localhost' identified by '123456';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> grant all privileges on neutron.* to 'neutron'@'%' identified by '123456';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> \q
Create the neutron user
adminrc@root@controller:~$openstack user create --domain default --password neutron neutron
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | 1495769d2bbb44d192eee4c9b2f91ca3 |
| enabled | True |
| id | 081dc309806c45198a3bd6c39bf9947f |
| name | neutron |
+-----------+----------------------------------+
adminrc@root@controller:~$openstack role add --project service --user neutron admin
adminrc@root@controller:~$
Create the neutron service entity
adminrc@root@controller:~$openstack service create --name neutron --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id | c661b602f11d45cfb068027c77fd519e |
| name | neutron |
| type | network |
+-------------+----------------------------------+
Create the Neutron service endpoints
adminrc@root@controller:~$openstack endpoint create --region RegionOne network public http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 0192ba47a7b348ec88bb5f71c82f8f4c |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | c661b602f11d45cfb068027c77fd519e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
adminrc@root@controller:~$openstack endpoint create --region RegionOne network internal http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | bdf4b9663ccb4ef695cde0638231943a |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | c661b602f11d45cfb068027c77fd519e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
adminrc@root@controller:~$openstack endpoint create --region RegionOne network admin http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | ffc7a793985e494fa839fd76ea5bdcef |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | c661b602f11d45cfb068027c77fd519e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
Configure the networking options. There are two choices:
1. Provider (public) networks
2. Self-service (private) networks
For provider networks:
First, install the components
adminrc@root@controller:~$apt-get install -y neutron-server neutron-plugin-ml2 neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent
adminrc@root@controller:~$cp /etc/neutron/neutron.conf{,.bak}
adminrc@root@controller:~$vim /etc/neutron/neutron.conf
# items that need to be changed
[database]
connection = mysql+pymysql://neutron:123456@controller/neutron
[DEFAULT]
rpc_backend = rabbit
core_plugin = ml2
service_plugins =
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
Configure the ML2 plug-in
adminrc@root@controller:~$cp /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
adminrc@root@controller:~$vim /etc/neutron/plugins/ml2/ml2_conf.ini
# items that need to be changed
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[securitygroup]
enable_ipset = True
Configure linuxbridge_agent.ini
adminrc@root@controller:~$cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
adminrc@root@controller:~$vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
[vxlan]
enable_vxlan = False
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Configure dhcp_agent.ini
adminrc@root@controller:~$cp /etc/neutron/dhcp_agent.ini{,.bak}
adminrc@root@controller:~$vim /etc/neutron/dhcp_agent.ini
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
Configure the metadata agent
adminrc@root@controller:~$cp /etc/neutron/metadata_agent.ini{,.bak}
adminrc@root@controller:~$vim /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET
Configure the compute node to use the Networking service
adminrc@root@controller:~$vim /etc/nova/nova.conf
[neutron]  # append this section at the end
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET
Sync the database
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Restart the Compute API service and the Networking services
adminrc@root@controller:~$service nova-api restart
nova-api stop/waiting
nova-api start/running, process
adminrc@root@controller:~$service neutron-server restart
neutron-server stop/waiting
neutron-server start/running, process
adminrc@root@controller:~$service neutron-linuxbridge-agent restart
neutron-linuxbridge-agent stop/waiting
neutron-linuxbridge-agent start/running, process
adminrc@root@controller:~$service neutron-dhcp-agent restart
neutron-dhcp-agent stop/waiting
neutron-dhcp-agent start/running, process
adminrc@root@controller:~$service neutron-metadata-agent restart
neutron-metadata-agent stop/waiting
neutron-metadata-agent start/running, process
Restart neutron-l3-agent
adminrc@root@controller:~$service neutron-l3-agent restart
neutron-l3-agent stop/waiting
neutron-l3-agent start/running, process
Restart the compute-side services
adminrc@root@controller:~$service nova-compute restart
nova-compute stop/waiting
nova-compute start/running, process
adminrc@root@controller:~$service neutron-linuxbridge-agent restart
neutron-linuxbridge-agent stop/waiting
neutron-linuxbridge-agent start/running, process
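Restarting each service one by one gets repetitive; a loop does the same job. A sketch that dry-runs by default (commands are echoed); set `RUN=` to empty to execute for real as root.

```shell
# Restart the compute and networking services in one pass.
RUN=${RUN:-echo}   # dry run by default; RUN= executes for real
restart_cmds=$(for svc in nova-api nova-compute neutron-server \
        neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent; do
    $RUN service "$svc" restart
done)
printf '%s\n' "$restart_cmds"
```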
Check whether any networks have been created
adminrc@root@controller:~$openstack network list
The output is empty because no networks have been created yet
Verify that neutron-server started properly
adminrc@root@controller:~$neutron ext-list
+---------------------------+-----------------------------------------------+
| alias | name |
+---------------------------+-----------------------------------------------+
| default-subnetpools | Default Subnetpools |
| availability_zone | Availability Zone |
| network_availability_zone | Network Availability Zone |
| auto-allocated-topology | Auto Allocated Topology Services |
| binding | Port Binding |
| agent | agent |
| subnet_allocation | Subnet Allocation |
| dhcp_agent_scheduler | DHCP Agent Scheduler |
| tag | Tag support |
| external-net | Neutron external network |
| net-mtu | Network MTU |
| network-ip-availability | Network IP Availability |
| quotas | Quota management support |
| provider | Provider Network |
| multi-provider | Multi Provider Network |
| address-scope | Address scope |
| timestamp_core | Time Stamp Fields addition for core resources |
| extra_dhcp_opt | Neutron Extra DHCP opts |
| security-group | security-group |
| rbac-policies | RBAC Policies |
| standard-attr-description | standard-attr-description |
| port-security | Port Security |
| allowed-address-pairs | Allowed Address Pairs |
+---------------------------+-----------------------------------------------+
Verify the agents
adminrc@root@controller:~$neutron agent-list
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| id | agent_type | host | availability_zone | alive | admin_state_up | binary |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| 0cafd3ff-6da0--a6dd-9a60136af93a | DHCP agent | controller | nova | :-) | True | neutron-dhcp-agent |
| 53fce606-311d--8af0-efd6f9087e34 | Open vSwitch agent | controller | | :-) | True | neutron-openvswitch-agent |
| b5dffa68-a505-448f-8fa6-7d8bb16eb07a | Linux bridge agent | controller | | :-) | True | neutron-linuxbridge-agent |
| dc161e12-8b23-4f49--b7d68cfe2197 | Metadata agent | controller | | :-) | True | neutron-metadata-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
Create an instance
First, a virtual network needs to be created
Create a provider network
adminrc@root@controller:~$neutron net-create --shared --provider:physical_network provider --provider:network_type flat provider
Invalid input for operation: network_type value 'flat' not supported.
Neutron server returns request_ids: ['req-e9d3cb26-4156-4eb1-bc9e-9528dbbd1dc9']
The error message suggests checking the ml2_conf.ini file
[ml2] type_drivers = flat,vlan # confirm this line includes flat
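A quick pre-restart check can catch this class of error (a sketch: the sample file stands in for /etc/neutron/plugins/ml2/ml2_conf.ini).

```shell
# Confirm 'flat' is really listed in [ml2] type_drivers before restarting.
INI=/tmp/ml2_conf.ini.sample      # stand-in for the real ml2_conf.ini
printf '[ml2]\ntype_drivers = flat,vlan\n' > "$INI"
if grep -Eq '^type_drivers *=.*flat' "$INI"; then
    flat_ok=yes; echo "flat driver enabled"
else
    flat_ok=no;  echo "flat driver missing - edit [ml2] type_drivers"
fi
```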
Restart the service and run the network create again
adminrc@root@controller:~$service neutron-server restart
neutron-server stop/waiting
neutron-server start/running, process
adminrc@root@controller:~$neutron net-create --shared --provider:physical_network provider --provider:network_type flat provider
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| availability_zone_hints | |
| availability_zones | |
| created_at | --15T12:: |
| description | |
| id | ab73ff8f-2d19--811c-85c068290eeb |
| ipv4_address_scope | |
| ipv6_address_scope | |
| mtu | |
| name | provider |
| port_security_enabled | True |
| provider:network_type | flat |
| provider:physical_network | provider |
| provider:segmentation_id | |
| router:external | False |
| shared | True |
| status | ACTIVE |
| subnets | |
| tags | |
| tenant_id | 29577090a0e8466ab49cc30a4305f5f8 |
| updated_at | --15T12:: |
+---------------------------+--------------------------------------+
Next, create a subnet
adminrc@root@controller:~$neutron subnet-create --name provider --allocation-pool start=10.0.3.50,end=10.0.3.253 --dns-nameserver 114.114.114.114 --gateway 10.0.3.1 provider 10.0.3.0/24
Created a new subnet:
+-------------------+---------------------------------------------+
| Field | Value |
+-------------------+---------------------------------------------+
| allocation_pools | {"start": "10.0.3.50", "end": "10.0.3.253"} |
| cidr              | 10.0.3.0/24                                 |
| created_at | --15T12:: |
| description | |
| dns_nameservers | 114.114.114.114 |
| enable_dhcp | True |
| gateway_ip | 10.0.3.1 |
| host_routes | |
| id | 48faef6d-ee9d-4b46-a56d-3c196a766224 |
| ip_version        | 4                                           |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | provider |
| network_id | ab73ff8f-2d19--811c-85c068290eeb |
| subnetpool_id | |
| tenant_id | 29577090a0e8466ab49cc30a4305f5f8 |
| updated_at | --15T12:: |
+-------------------+---------------------------------------------+
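A quick sanity check on the allocation pool: starting at .50 keeps the gateway (.1), the controller (.10) and the rest of the low addresses out of DHCP's hands, which is the whole point of specifying a pool.

```shell
# Size of the DHCP allocation pool 10.0.3.50 - 10.0.3.253 (last octets only).
start=50; end=253
pool_size=$(( end - start + 1 ))
echo "pool size: $pool_size addresses"
```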
Next, create a flavor (instance type)
adminrc@root@controller:~$openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
+----------------------------+---------+
| Field | Value |
+----------------------------+---------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | |
| disk | |
| id | |
| name | m1.nano |
| os-flavor-access:is_public | True |
| ram | |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | |
+----------------------------+---------+
Generate a key pair
adminrc@root@controller:~$pwd
/home/openstack
adminrc@root@controller:~$ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
8a:e5:a2:f3:f4:1e::1a:c1:8d::d1:fd:fa:4b: root@controller
The key's randomart image is:
+--[ RSA ]----+
| |
| . . |
| . . . |
| . o . . |
| + = S . . E|
| B o . . . |
| = * . . |
| .o = o o |
| .oo.o o. |
+-----------------+
adminrc@root@controller:~$ls -al /root/.ssh/id_rsa.pub
-rw-r--r-- root root Jan : /root/.ssh/id_rsa.pub
Add the key pair
adminrc@root@controller:~$openstack keypair create --public-key /root/.ssh/id_rsa.pub rootkey
+-------------+-------------------------------------------------+
| Field | Value |
+-------------+-------------------------------------------------+
| fingerprint | 8a:e5:a2:f3:f4:1e::1a:c1:8d::d1:fd:fa:4b: |
| name | rootkey |
| user_id | 653177098fac40a28734093706299e66 |
+-------------+-------------------------------------------------+
Verify the key pair
adminrc@root@controller:~$openstack keypair list
+---------+-------------------------------------------------+
| Name | Fingerprint |
+---------+-------------------------------------------------+
| rootkey | 8a:e5:a2:f3:f4:1e::1a:c1:8d::d1:fd:fa:4b: |
+---------+-------------------------------------------------+
Add security group rules
adminrc@root@controller:~$openstack security group rule create --proto icmp default
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| id | a4c8ad46-42eb--b09f-af5dcfef2ad1 |
| ip_protocol | icmp |
| ip_range              | 0.0.0.0/0                            |
| parent_group_id | 968f5f33-c569-46b4--8a3f614ae670 |
| port_range | |
| remote_security_group | |
+-----------------------+--------------------------------------+
adminrc@root@controller:~$openstack security group rule create --proto tcp --dst-port 22 default
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| id | 8ed34a22----94ec284e4764 |
| ip_protocol | tcp |
| ip_range              | 0.0.0.0/0                            |
| parent_group_id | 968f5f33-c569-46b4--8a3f614ae670 |
| port_range            | 22:22                                |
| remote_security_group | |
+-----------------------+--------------------------------------+
Start creating the instance
# list the available flavors
adminrc@root@controller:~$openstack flavor list
+----+-----------+-------+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+-----------+-------+------+-----------+-------+-----------+
| | m1.nano | | | | | True |
| | m1.tiny | | | | | True |
| | m1.small | | | | | True |
| | m1.medium | | | | | True |
| | m1.large | | | | | True |
| | m1.xlarge | | | | | True |
+----+-----------+-------+------+-----------+-------+-----------+
# list the available images
adminrc@root@controller:~$openstack image list
+--------------------------------------+---------+--------+
| ID | Name | Status |
+--------------------------------------+---------+--------+
| 39d73bcf-e60b-4caf--cca17de00d7e | cirrors | active |
+--------------------------------------+---------+--------+
# list the available networks
adminrc@root@controller:~$openstack network list
+--------------------------------------+----------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+----------+--------------------------------------+
| ab73ff8f-2d19--811c-85c068290eeb | provider | 48faef6d-ee9d-4b46-a56d-3c196a766224 |
+--------------------------------------+----------+--------------------------------------+
# list the available security groups
adminrc@root@controller:~$openstack security group list
+--------------------------------------+---------+------------------------+----------------------------------+
| ID | Name | Description | Project |
+--------------------------------------+---------+------------------------+----------------------------------+
| 968f5f33-c569-46b4--8a3f614ae670 | default | Default security group | 29577090a0e8466ab49cc30a4305f5f8 |
+--------------------------------------+---------+------------------------+----------------------------------+
# create the instance
adminrc@root@controller:~$openstack server create --flavor m1.nano --image cirros --nic net-id=ab73ff8f-2d19--811c-85c068290eeb --security-group default --key-name rootkey test-instance
No image with a name or ID of 'cirros' exists.
# OK, something went wrong again
# Listing the images again, the problem becomes clear: I typed cirros, but the image's actual name is cirrors.
adminrc@root@controller:~$openstack image list
+--------------------------------------+---------+--------+
| ID | Name | Status |
+--------------------------------------+---------+--------+
| 39d73bcf-e60b-4caf--cca17de00d7e | cirrors | active |
+--------------------------------------+---------+--------+
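The failure above came down to a near-miss name: the image had been registered as `cirrors`, not `cirros`. A tiny helper (hypothetical, not part of any OpenStack tooling) can flag such typos against the names a listing command returns:

```python
import difflib

def suggest_name(requested, available, cutoff=0.8):
    """Return the closest registered name to the one requested, if any."""
    matches = difflib.get_close_matches(requested, available, n=1, cutoff=cutoff)
    return matches[0] if matches else None

# Image names as reported by `openstack image list` in this deployment.
images = ["cirrors"]
print(suggest_name("cirros", images))  # prints "cirrors" - the misspelled image
```

Feeding it the `Name` column of the listing would have pointed straight at the misregistered image instead of a bare "No image" error.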
adminrc@root@controller:~$openstack server create --flavor m1.nano --image cirrors --nic net-id=ab73ff8f-2d19--811c-85c068290eeb --security-group default --key-name rootkey test-instance
+--------------------------------------+------------------------------------------------+
| Field | Value |
+--------------------------------------+------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | instance- |
| OS-EXT-STS:power_state | |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | WeVy7yd6BXcc |
| config_drive | |
| created | --15T13::19Z |
| flavor | m1.nano () |
| hostId | |
| id | 9eb49f96-7d68--bb37-7583e457edc6 |
| image | cirrors (39d73bcf-e60b-4caf--cca17de00d7e) |
| key_name | rootkey |
| name | test-instance |
| os-extended-volumes:volumes_attached | [] |
| progress | |
| project_id | 29577090a0e8466ab49cc30a4305f5f8 |
| properties | |
| security_groups | [{u'name': u'default'}] |
| status | BUILD |
| updated | --15T13::20Z |
| user_id | 653177098fac40a28734093706299e66 |
+--------------------------------------+------------------------------------------------+
Created successfully.
List the instances:
adminrc@root@controller:~$openstack server list
+--------------------------------------+---------------+--------+--------------------+
| ID | Name | Status | Networks |
+--------------------------------------+---------------+--------+--------------------+
| 9eb49f96-7d68--bb37-7583e457edc6 | test-instance | ACTIVE | provider=10.0.3.51 |
+--------------------------------------+---------------+--------+--------------------+
adminrc@root@controller:~$nova image-list
+--------------------------------------+---------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+---------+--------+--------+
| 39d73bcf-e60b-4caf--cca17de00d7e | cirrors | ACTIVE | |
+--------------------------------------+---------+--------+--------+
adminrc@root@controller:~$glance image-list
+--------------------------------------+---------+
| ID | Name |
+--------------------------------------+---------+
| 39d73bcf-e60b-4caf--cca17de00d7e | cirrors |
+--------------------------------------+---------+
adminrc@root@controller:~$nova list
+--------------------------------------+---------------+--------+------------+-------------+--------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+---------------+--------+------------+-------------+--------------------+
| 9eb49f96-7d68--bb37-7583e457edc6 | test-instance | ACTIVE | - | Running | provider=10.0.3.51 |
+--------------------------------------+---------------+--------+------------+-------------+--------------------+
The nova-style equivalent for booting the instance:
adminrc@root@controller:~$nova boot --flavor m1.nano --image cirrors --nic net-id=ab73ff8f-2d19--811c-85c068290eeb --security-group default --key-name rootkey test-instance
To debug, rerun the command with --debug:
adminrc@root@controller:~$openstack --debug server create --flavor m1.nano --image cirrors --nic net-id=ab73ff8f-2d19--811c-85c068290eeb --security-group default --key-name rootkey test-instance
Access the instance through the virtual console:
adminrc@root@controller:~$openstack console url show test-instance
+-------+------------------------------------------------------------------------------------+
| Field | Value |
+-------+------------------------------------------------------------------------------------+
| type | novnc |
| url | http://192.168.56.10:6080/vnc_auto.html?token=ce586e5f-ceb1-4f7d-b039-0e44ae273686 |
+-------+------------------------------------------------------------------------------------+

The console banner makes it clear enough:
Username: cirros
Password: cubswin:)
Use sudo to switch to the root user.
Next, take a look:

Test network connectivity

Next, create a second instance.
adminrc@root@controller:~$openstack server create --flavor m1.nano --image cirrors --nic net-id=ab73ff8f-2d19--811c-85c068290eeb --security-group default --key-name rootkey test-instance
+--------------------------------------+------------------------------------------------+
| Field | Value |
+--------------------------------------+------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | instance- |
| OS-EXT-STS:power_state | |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | QrFxY7UnvuJV |
| config_drive | |
| created | --15T14::15Z |
| flavor | m1.nano () |
| hostId | |
| id | 203a1f48-1f98-44ca-a3fa-883a9cea514a |
| image | cirrors (39d73bcf-e60b-4caf--cca17de00d7e) |
| key_name | rootkey |
| name | test-instance |
| os-extended-volumes:volumes_attached | [] |
| progress | |
| project_id | 29577090a0e8466ab49cc30a4305f5f8 |
| properties | |
| security_groups | [{u'name': u'default'}] |
| status | BUILD |
| updated | --15T14::15Z |
| user_id | 653177098fac40a28734093706299e66 |
+--------------------------------------+------------------------------------------------+
Check:
adminrc@root@controller:~$nova list
+--------------------------------------+---------------+--------+------------+-------------+--------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+---------------+--------+------------+-------------+--------------------+
| 203a1f48-1f98-44ca-a3fa-883a9cea514a | test-instance | ACTIVE | - | Running | provider=10.0.3.52 |
| 9eb49f96-7d68--bb37-7583e457edc6 | test-instance | ACTIVE | - | Running | provider=10.0.3.51 |
+--------------------------------------+---------------+--------+------------+-------------+--------------------+
Two virtual instances have now been created, and both are in the running state.
Let's exercise instance 2 from the command line.
adminrc@root@controller:~$ping -c 2 10.0.3.52
PING 10.0.3.52 (10.0.3.52) 56(84) bytes of data.
64 bytes from 10.0.3.52: icmp_seq=1 ttl=64 time=28.5 ms
64 bytes from 10.0.3.52: icmp_seq=2 ttl=64 time=0.477 ms

--- 10.0.3.52 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.477/14.505/28.534/14.029 ms
adminrc@root@controller:~$nova list
+--------------------------------------+---------------+--------+------------+-------------+--------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+---------------+--------+------------+-------------+--------------------+
| 203a1f48-1f98-44ca-a3fa-883a9cea514a | test-instance | ACTIVE | - | Running | provider=10.0.3.52 |
| 9eb49f96-7d68--bb37-7583e457edc6 | test-instance | ACTIVE | - | Running | provider=10.0.3.51 |
+--------------------------------------+---------------+--------+------------+-------------+--------------------+
Check it with openstack console url show
adminrc@root@controller:~$openstack console url show test-instance
More than one server exists with the name 'test-instance'.
# There are two servers with this name now, so refer to the one we want by ID
adminrc@root@controller:~$openstack console url show 203a1f48-1f98-44ca-a3fa-883a9cea514a
+-------+------------------------------------------------------------------------------------+
| Field | Value |
+-------+------------------------------------------------------------------------------------+
| type | novnc |
| url | http://192.168.56.10:6080/vnc_auto.html?token=42c43635-884c-482e-ac08-d1e6c6d2789b |
+-------+------------------------------------------------------------------------------------+
# Note: for some reason ssh does not work as expected here. With the security group rules configured, ssh cirros@10.0.3.52 should log straight in, but instead it prompts for a password. This step remains an open issue for now...
Oh... at the moment the only known way is to get the username and password from the console banner.

Test from the command line:
adminrc@root@controller:~$ssh cirros@10.0.3.52
cirros@10.0.3.52's password:    # cubswin:)
$ ifconfig
eth0      Link encap:Ethernet  HWaddr FA:16:3E:07:21:DE
          inet addr:10.0.3.52  Bcast:10.0.3.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe07:21de/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU: Metric:
          RX packets: errors: dropped: overruns: frame:
          TX packets: errors: dropped: overruns: carrier:
          collisions: txqueuelen:
          RX bytes: (17.4 KiB)  TX bytes: (16.8 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU: Metric:
          RX packets: errors: dropped: overruns: frame:
          TX packets: errors: dropped: overruns: carrier:
          collisions: txqueuelen:
          RX bytes: (0.0 B)  TX bytes: (0.0 B)

$ ping -c 2 10.0.3.1
PING 10.0.3.1 (10.0.3.1): 56 data bytes
64 bytes from 10.0.3.1: seq=0 ttl=64 time=45.026 ms
64 bytes from 10.0.3.1: seq=1 ttl=64 time=1.050 ms

--- 10.0.3.1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 1.050/23.038/45.026 ms
$ ping -c 2 www.qq.com
PING www.qq.com (61.129.7.47): 56 data bytes
64 bytes from 61.129.7.47: seq=0 ttl= time=5.527 ms
64 bytes from 61.129.7.47: seq=1 ttl= time=5.363 ms

--- www.qq.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 5.363/5.445/5.527 ms
Test connectivity between the two instances:
$ sudo -s
$ hostname
cirros
$ ping -c 2 10.0.3.51
PING 10.0.3.51 (10.0.3.51): 56 data bytes
64 bytes from 10.0.3.51: seq=0 ttl=64 time=28.903 ms
64 bytes from 10.0.3.51: seq=1 ttl=64 time=1.205 ms

--- 10.0.3.51 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 1.205/15.054/28.903 ms
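Both instances received addresses from the provider subnet's allocation pool (10.0.3.50–10.0.3.254 inside 10.0.3.0/24, as configured earlier). As a quick sanity-check sketch, Python's standard `ipaddress` module can confirm the assigned addresses really fall in that pool:

```python
import ipaddress

subnet = ipaddress.ip_network("10.0.3.0/24")
pool = (ipaddress.ip_address("10.0.3.50"), ipaddress.ip_address("10.0.3.254"))

# The addresses Neutron handed out to the two test instances.
for addr in ("10.0.3.51", "10.0.3.52"):
    ip = ipaddress.ip_address(addr)
    # Membership in the subnet and in the allocation range.
    assert ip in subnet and pool[0] <= ip <= pool[1]
    print(addr, "is inside the allocation pool")
```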
On to the self-service (private) network service.


Install the components
root@controller:~# apt-get install -y neutron-server neutron-plugin-ml2 neutron-linuxbridge-agent neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent
Since this builds on top of the existing provider-network setup, some configuration files need to be changed.
Verify the settings in the neutron.conf file
root@controller:~# ls /etc/neutron/neutron.*
neutron.conf neutron.conf.bak
root@controller:~# vim default
root@controller:~# cat default
core_plugin = ml2    # note: this must start at column one on the first line, with no leading blank lines
service_plugins = router
allow_overlapping_ips = True
rpc_backend = rabbit
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
root@controller:~# grep "`cat default`" /etc/neutron/neutron.conf
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
rpc_backend = rabbit
root@controller:~# grep "^connection" /etc/neutron/neutron.conf
connection = mysql+pymysql://neutron:123456@controller/neutron
root@controller:~# grep "core_plugin" /etc/neutron/neutron.conf
core_plugin = ml2
root@controller:~# grep "service_plugins" /etc/neutron/neutron.conf
service_plugins =
root@controller:~# sed -i "s/service_plugins\=/service_plugins\ =\ router/g" /etc/neutron/neutron.conf
root@controller:~# grep "service_plugins" /etc/neutron/neutron.conf
service_plugins = router
root@controller:~# grep "allow_overlapping_ips" /etc/neutron/neutron.conf
#allow_overlapping_ips = false
root@controller:~# sed -i "s/\#allow_overlapping_ips\ =\ false/allow_overlapping_ips\ =\ True/g" /etc/neutron/neutron.conf
root@controller:~# grep "allow_overlapping_ips" /etc/neutron/neutron.conf
allow_overlapping_ips = True
root@controller:~# grep "rpc_backend = rabbit" /etc/neutron/neutron.conf
rpc_backend = rabbit
root@controller:~# grep "rabbit_host = controller" /etc/neutron/neutron.conf
rabbit_host = controller
root@controller:~# grep "rabbit_userid = openstack" /etc/neutron/neutron.conf
rabbit_userid = openstack
root@controller:~# grep "rabbit_password = 123456" /etc/neutron/neutron.conf
rabbit_password = 123456
root@controller:~# cat keystone_authtoken
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
root@controller:~# grep "`cat keystone_authtoken`" /etc/neutron/neutron.conf
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
root@controller:~# grep "`cat oslo_messaging_rabbit`" /etc/neutron/neutron.conf
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = 123456
root@controller:~# vim nova
root@controller:~# cat nova
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
root@controller:~# grep "`cat nova`" /etc/neutron/neutron.conf
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
The same checks can also be done in one pass:
root@controller:~# vim neutron
root@controller:~# cat neutron
^\[database\]
connection = mysql+pymysql://neutron:123456@controller/neutron
^\[DEFAULT\]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
rpc_backend = rabbit
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
^\[oslo_messaging_rabbit\]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = 123456
^\[keystone_authtoken\]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
^\[nova\]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
root@controller:~# grep "`cat neutron`" /etc/neutron/neutron.conf
[DEFAULT]
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
rpc_backend = rabbit
[database]
connection = mysql+pymysql://neutron:123456@controller/neutron
[keystone_authtoken]
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
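The sed one-liners used above (uncomment an option, set its value) generalize into a small helper. A sketch of the same idea in Python; `set_option` is a hypothetical name, not an OpenStack tool:

```python
import re

def set_option(text, key, value):
    """Uncomment and set `key = value` in a config snippet; append if absent."""
    pattern = re.compile(r"(?m)^#?\s*" + re.escape(key) + r"\s*=.*$")
    if pattern.search(text):
        return pattern.sub(f"{key} = {value}", text, count=1)
    return text + f"\n{key} = {value}\n"

# Mirrors the two edits made with sed above.
conf = "#allow_overlapping_ips = false\nservice_plugins =\n"
conf = set_option(conf, "allow_overlapping_ips", "True")
conf = set_option(conf, "service_plugins", "router")
print(conf)
```

Unlike a bare `sed -i`, this is idempotent and appends the option if it is missing entirely; sections are ignored here, which is fine for a scratch check but not for a real oslo.config file.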
Verify ml2_conf.ini
root@controller:~# cat ml2
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
flat_networks = provider
vni_ranges = 1:1000
enable_ipset = True
# Add the options above to /etc/neutron/plugins/ml2/ml2_conf.ini (under their respective sections), or locate each one, uncomment it, and set it to the value shown
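Under [ml2_type_vxlan], vni_ranges bounds how many tenant VXLAN networks can ever be allocated (the install guide's usual value is 1:1000; the digits were lost in this transcript). A quick sketch of how many segments such a range provides:

```python
def vni_count(vni_ranges: str) -> int:
    """Count usable VXLAN IDs in a Neutron-style 'start:end[,start:end]' value."""
    total = 0
    for chunk in vni_ranges.split(","):
        start, end = (int(part) for part in chunk.split(":"))
        total += end - start + 1  # ranges are inclusive on both ends
    return total

print(vni_count("1:1000"))      # 1000 tenant networks available
print(vni_count("1:10,20:29"))  # 20
```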
Next, configure linuxbridge_agent.ini
root@controller:~# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.public_net}
root@controller:~# vim linuxbridge
root@controller:~# cat linuxbridge
# Set the following options in linuxbridge_agent.ini as shown; add any that are missing
[linux_bridge]
physical_interface_mappings = provider:eth0
[vxlan]
enable_vxlan = True
local_ip = 10.0.3.10
l2_population = True
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Configure the layer-3 agent
root@controller:~# cat l3_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
external_network_bridge =
root@controller:~# vim /etc/neutron/l3_agent.ini
# Set the corresponding options in /etc/neutron/l3_agent.ini as shown above
Configure the DHCP agent
root@controller:~# cp /etc/neutron/dhcp_agent.ini{,.back}
root@controller:~# vim /etc/neutron/dhcp_agent.ini
root@controller:~# cat dhcp_agent
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
# Fill in the options in dhcp_agent.ini with the values shown above
Configure the metadata agent
root@controller:~# cat metadata_agent
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET
root@controller:~# grep "`cat metadata_agent`" /etc/neutron/metadata_agent.ini
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET
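METADATA_SECRET above is a placeholder; replace it with a real shared secret in production. Its role: the metadata proxy signs each request's instance ID with an HMAC-SHA256 so nova-api can verify the request really came through the proxy. Conceptually (a sketch, not the exact nova/neutron code; the UUID is hypothetical):

```python
import hashlib
import hmac

def sign_instance_id(shared_secret: str, instance_id: str) -> str:
    """HMAC-SHA256 signature the metadata proxy attaches to a request."""
    return hmac.new(shared_secret.encode(), instance_id.encode(),
                    hashlib.sha256).hexdigest()

# Hypothetical instance UUID, purely for illustration.
sig = sign_instance_id("METADATA_SECRET", "00000000-0000-0000-0000-000000000000")
print(len(sig))  # 64 hex characters
```

This is why the same secret must appear in both /etc/neutron/metadata_agent.ini and the [neutron] section of /etc/nova/nova.conf: with mismatched secrets, the signatures differ and metadata requests are rejected.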
Configure the Compute service to use the Networking service
root@controller:~# cp /etc/nova/nova.conf{,.public_net}
root@controller:~# vim nova
root@controller:~# cat nova
^\[neutron\]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET
root@controller:~# grep "`cat nova`" /etc/nova/nova.conf
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET
Finish the installation by syncing the database
root@controller:~# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
No handlers could be found for logger "oslo_config.cfg"
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
Running upgrade for neutron ...
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
OK
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
Running upgrade for neutron-fwaas ...
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
OK
root@controller:~# echo $?
Restart the Compute services
root@controller:~# ls /etc/init.d/ | grep nova
nova-api
nova-compute
nova-conductor
nova-consoleauth
nova-novncproxy
nova-scheduler
root@controller:~# ls /etc/init.d/ | grep nova | xargs -i service {} restart
nova-api stop/waiting
nova-api start/running, process
nova-compute stop/waiting
nova-compute start/running, process
nova-conductor stop/waiting
nova-conductor start/running, process
nova-consoleauth stop/waiting
nova-consoleauth start/running, process
nova-novncproxy stop/waiting
nova-novncproxy start/running, process
nova-scheduler stop/waiting
nova-scheduler start/running, process
Restart the Networking services
root@controller:~# ls /etc/init.d/ | grep neutron
neutron-dhcp-agent
neutron-l3-agent
neutron-linuxbridge-agent
neutron-linuxbridge-cleanup
neutron-metadata-agent
neutron-openvswitch-agent
neutron-ovs-cleanup
neutron-server
root@controller:~# ls /etc/init.d/ | grep neutron | xargs -i service {} restart
neutron-dhcp-agent stop/waiting
neutron-dhcp-agent start/running, process
neutron-l3-agent stop/waiting
neutron-l3-agent start/running, process
neutron-linuxbridge-agent stop/waiting
neutron-linuxbridge-agent start/running, process
stop: Unknown instance:
start: Job failed to start
neutron-metadata-agent stop/waiting
neutron-metadata-agent start/running, process
neutron-openvswitch-agent stop/waiting
neutron-openvswitch-agent start/running, process
neutron-ovs-cleanup stop/waiting
neutron-ovs-cleanup start/running
neutron-server stop/waiting
neutron-server start/running, process
Verify
root@controller:~# source adminrc
adminrc@root@controller:~$neutron ext-list
+---------------------------+-----------------------------------------------+
| alias | name |
+---------------------------+-----------------------------------------------+
| default-subnetpools | Default Subnetpools |
| network-ip-availability | Network IP Availability |
| network_availability_zone | Network Availability Zone |
| auto-allocated-topology | Auto Allocated Topology Services |
| ext-gw-mode | Neutron L3 Configurable external gateway mode |
| binding | Port Binding |
| agent | agent |
| subnet_allocation | Subnet Allocation |
| l3_agent_scheduler | L3 Agent Scheduler |
| tag | Tag support |
| external-net | Neutron external network |
| net-mtu | Network MTU |
| availability_zone | Availability Zone |
| quotas | Quota management support |
| l3-ha | HA Router extension |
| provider | Provider Network |
| multi-provider | Multi Provider Network |
| address-scope | Address scope |
| extraroute | Neutron Extra Route |
| timestamp_core | Time Stamp Fields addition for core resources |
| router | Neutron L3 Router |
| extra_dhcp_opt | Neutron Extra DHCP opts |
| security-group | security-group |
| dhcp_agent_scheduler | DHCP Agent Scheduler |
| router_availability_zone | Router Availability Zone |
| rbac-policies | RBAC Policies |
| standard-attr-description | standard-attr-description |
| port-security | Port Security |
| allowed-address-pairs | Allowed Address Pairs |
| dvr | Distributed Virtual Router |
+---------------------------+-----------------------------------------------+
adminrc@root@controller:~$neutron agent-list
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| id | agent_type | host | availability_zone | alive | admin_state_up | binary |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| 0cafd3ff-6da0--a6dd-9a60136af93a | DHCP agent | controller | nova | :-) | True | neutron-dhcp-agent |
| 53fce606-311d--8af0-efd6f9087e34 | Open vSwitch agent | controller | | :-) | True | neutron-openvswitch-agent |
| 7afb1ed4---b1f8-4e0c6f06fe71 | L3 agent | controller | nova | :-) | True | neutron-l3-agent |
| b5dffa68-a505-448f-8fa6-7d8bb16eb07a | Linux bridge agent | controller | | :-) | True | neutron-linuxbridge-agent |
| dc161e12-8b23-4f49--b7d68cfe2197 | Metadata agent | controller | | :-) | True | neutron-metadata-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
adminrc@root@controller:~$
Now create the virtual networks. The provider network must exist first, and the steps to create it are the same as in the provider-network setup earlier. Since the VM was not restored from a snapshot, the provider network created back then still exists; for convenience, first delete that network and the two instances.
# Delete the instances
adminrc@root@controller:~$openstack server list
+--------------------------------------+---------------+--------+--------------------+
| ID | Name | Status | Networks |
+--------------------------------------+---------------+--------+--------------------+
| 203a1f48-1f98-44ca-a3fa-883a9cea514a | test-instance | ACTIVE | provider=10.0.3.52 |
| 9eb49f96-7d68--bb37-7583e457edc6 | test-instance | ACTIVE | provider=10.0.3.51 |
+--------------------------------------+---------------+--------+--------------------+
adminrc@root@controller:~$openstack server delete 203a1f48-1f98-44ca-a3fa-883a9cea514a
adminrc@root@controller:~$echo $?
adminrc@root@controller:~$openstack server delete 9eb49f96-7d68--bb37-7583e457edc6
adminrc@root@controller:~$echo $?
# Delete the virtual network
adminrc@root@controller:~$neutron net-list
+--------------------------------------+----------+--------------------------------------------------+
| id | name | subnets |
+--------------------------------------+----------+--------------------------------------------------+
| ab73ff8f-2d19--811c-85c068290eeb | provider | 48faef6d-ee9d-4b46-a56d-3c196a766224 10.0.3.0/ |
+--------------------------------------+----------+--------------------------------------------------+
adminrc@root@controller:~$neutron net-delete ab73ff8f-2d19--811c-85c068290eeb
Deleted network: ab73ff8f-2d19--811c-85c068290eeb
adminrc@root@controller:~$neutron net-list
adminrc@root@controller:~$neutron subnet-list
Create the provider network
adminrc@root@controller:~$neutron net-create --shared --provider:physical_network provider --provider:network_type flat provider
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| availability_zone_hints | |
| availability_zones | |
| created_at | --16T00:: |
| description | |
| id | a600cdf0-352a-4c85-b90a-eba0ee4282fd |
| ipv4_address_scope | |
| ipv6_address_scope | |
| mtu | |
| name | provider |
| port_security_enabled | True |
| provider:network_type | flat |
| provider:physical_network | provider |
| provider:segmentation_id | |
| router:external | False |
| shared | True |
| status | ACTIVE |
| subnets | |
| tags | |
| tenant_id | 29577090a0e8466ab49cc30a4305f5f8 |
| updated_at | --16T00:: |
+---------------------------+--------------------------------------+
Create the subnet
adminrc@root@controller:~$neutron subnet-create --name provider --allocation-pool start=10.0.3.50,end=10.0.3.254 --dns-nameserver 114.114.114.114 --gateway 10.0.3.1 provider 10.0.3.0/24
Created a new subnet:
+-------------------+---------------------------------------------+
| Field | Value |
+-------------------+---------------------------------------------+
| allocation_pools | {"start": "10.0.3.50", "end": "10.0.3.254"} |
| cidr | 10.0.3.0/ |
| created_at | --16T00:: |
| description | |
| dns_nameservers | 114.114.114.114 |
| enable_dhcp | True |
| gateway_ip | 10.0.3.1 |
| host_routes | |
| id | b19d9f26-e32e-4bb8-a53e-55eb1154cefe |
| ip_version | |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | provider |
| network_id | a600cdf0-352a-4c85-b90a-eba0ee4282fd |
| subnetpool_id | |
| tenant_id | 29577090a0e8466ab49cc30a4305f5f8 |
| updated_at | --16T00:: |
+-------------------+---------------------------------------------+
Next, create the self-service (private) network. Here I ran into a small error.
adminrc@root@controller:~$source demorc
demorc@root@controller:~$neutron net-create selfservice
Unable to create the network. No tenant network is available for allocation.
Neutron server returns request_ids: ['req-c2deaa15-c2eb-48b7-9510-644b3ae4f686']
# Troubleshooting
demorc@root@controller:~$ neutron net-list
+--------------------------------------+----------+--------------------------------------------------+
| id | name | subnets |
+--------------------------------------+----------+--------------------------------------------------+
| a600cdf0-352a-4c85-b90a-eba0ee4282fd | provider | b19d9f26-e32e-4bb8-a53e-55eb1154cefe 10.0.3.0/ |
+--------------------------------------+----------+--------------------------------------------------+
demorc@root@controller:~$neutron subnet-list
+--------------------------------------+----------+-------------+---------------------------------------------+
| id | name | cidr | allocation_pools |
+--------------------------------------+----------+-------------+---------------------------------------------+
| b19d9f26-e32e-4bb8-a53e-55eb1154cefe | provider | 10.0.3.0/ | {"start": "10.0.3.50", "end": "10.0.3.254"} |
+--------------------------------------+----------+-------------+---------------------------------------------+
demorc@root@controller:~$tail /var/log/neutron/neutron-server.log
-- ::14.834 ERROR neutron.api.v2.resource File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/managers.py", line , in create_network_segments
-- ::14.834 ERROR neutron.api.v2.resource segment = self._allocate_tenant_net_segment(session)
-- ::14.834 ERROR neutron.api.v2.resource File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/managers.py", line , in _allocate_tenant_net_segment
-- ::14.834 ERROR neutron.api.v2.resource raise exc.NoNetworkAvailable()
-- ::14.834 ERROR neutron.api.v2.resource NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.
-- ::14.834 ERROR neutron.api.v2.resource
-- ::14.846 INFO neutron.wsgi [req-c2deaa15-c2eb-48b7--644b3ae4f686 c4de9fac882740838aa26e9119b30cb9 ffc560f6a2604c3896df922115c6fc2a - - -] 10.0.3.10 - - [/Jan/ ::] "POST /v2.0/networks.json HTTP/1.1" 0.565548
-- ::32.517 INFO neutron.wsgi [req-d15a0c85----6580c476d12a c4de9fac882740838aa26e9119b30cb9 ffc560f6a2604c3896df922115c6fc2a - - -] 10.0.3.10 - - [/Jan/ ::] "GET /v2.0/networks.json HTTP/1.1" 0.559720
-- ::32.636 INFO neutron.wsgi [req-6d8fe235-340d-4fe5-897c-f8eee16e3b5e c4de9fac882740838aa26e9119b30cb9 ffc560f6a2604c3896df922115c6fc2a - - -] 10.0.3.10 - - [/Jan/ ::] "GET /v2.0/subnets.json?fields=id&fields=cidr&id=b19d9f26-e32e-4bb8-a53e-55eb1154cefe HTTP/1.1" 0.115075
-- ::19.646 INFO neutron.wsgi [req-891d5624-a86e--a81d-641e5cfc0043 c4de9fac882740838aa26e9119b30cb9 ffc560f6a2604c3896df922115c6fc2a - - -] 10.0.3.10 - - [/Jan/ ::] "GET /v2.0/subnets.json HTTP/1.1" 0.436610
demorc@root@controller:~$
demorc@root@controller:~$vim /etc/neutron/plugins/ml2/ml2_conf.ini
# Make sure vni_ranges = 1:1000 sits under the [ml2_type_vxlan] section, not under any other section
[ml2_type_vxlan]
vni_ranges = 1:1000
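The root cause: an option only takes effect in the section it belongs to. A minimal illustration with Python's configparser (Neutron itself uses oslo.config, but the sectioning behavior is the same):

```python
import configparser

# Same option, once under the correct section and once under the wrong one.
right = "[ml2_type_vxlan]\nvni_ranges = 1:1000\n"
wrong = "[ml2]\nvni_ranges = 1:1000\n\n[ml2_type_vxlan]\n"

results = {}
for label, text in (("right", right), ("wrong", wrong)):
    cp = configparser.ConfigParser()
    cp.read_string(text)
    # The vxlan type driver only looks in [ml2_type_vxlan].
    results[label] = cp.has_option("ml2_type_vxlan", "vni_ranges")

print(results)  # {'right': True, 'wrong': False}
```

With the value under the wrong header, the vxlan type driver sees an empty VNI pool and raises NoNetworkAvailable, exactly as in the log above.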
Restart the nova and neutron services, then create the network again
demorc@root@controller:~$grep -rHn "vni_ranges" /etc/neutron/
/etc/neutron/plugins/ml2/ml2_conf.ini::vni_ranges = :
/etc/neutron/plugins/ml2/ml2_conf.ini::#vni_ranges =
/etc/neutron/plugins/ml2/ml2_conf.ini.bak::#vni_ranges =
/etc/neutron/plugins/ml2/ml2_conf.ini.bak::#vni_ranges =
demorc@root@controller:~$vim /etc/neutron/plugins/ml2/ml2_conf.ini
demorc@root@controller:~$vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
demorc@root@controller:~$vim /etc/neutron/plugins/ml2/ml2_conf.ini
demorc@root@controller:~$ls /etc/init.d/ | grep nova | xargs -i service {} restart
nova-api stop/waiting
nova-api start/running, process
nova-compute stop/waiting
nova-compute start/running, process
nova-conductor stop/waiting
nova-conductor start/running, process
nova-consoleauth stop/waiting
nova-consoleauth start/running, process
nova-novncproxy stop/waiting
nova-novncproxy start/running, process
nova-scheduler stop/waiting
nova-scheduler start/running, process
demorc@root@controller:~$ls /etc/init.d/ | grep neutron | xargs -i service {} restart
neutron-dhcp-agent stop/waiting
neutron-dhcp-agent start/running, process
neutron-l3-agent stop/waiting
neutron-l3-agent start/running, process
neutron-linuxbridge-agent stop/waiting
neutron-linuxbridge-agent start/running, process
stop: Unknown instance:
start: Job failed to start
neutron-metadata-agent stop/waiting
neutron-metadata-agent start/running, process
neutron-openvswitch-agent stop/waiting
neutron-openvswitch-agent start/running, process
neutron-ovs-cleanup stop/waiting
neutron-ovs-cleanup start/running
neutron-server stop/waiting
neutron-server start/running, process
demorc@root@controller:~$neutron net-list
+--------------------------------------+----------+--------------------------------------------------+
| id | name | subnets |
+--------------------------------------+----------+--------------------------------------------------+
| b7369bde-908a-4dc4-b4af-a4bc5e1a2b8e | provider | 68f14924-15c4-4b0d-bcfc-011fd5a6de12 10.0.3.0/ |
+--------------------------------------+----------+--------------------------------------------------+
demorc@root@controller:~$neutron subnet-list
+--------------------------------------+----------+-------------+---------------------------------------------+
| id | name | cidr | allocation_pools |
+--------------------------------------+----------+-------------+---------------------------------------------+
| 68f14924-15c4-4b0d-bcfc-011fd5a6de12 | provider | 10.0.3.0/ | {"start": "10.0.3.50", "end": "10.0.3.254"} |
+--------------------------------------+----------+-------------+---------------------------------------------+
demorc@root@controller:~$neutron net-create selfservice
Created a new network:
+-------------------------+--------------------------------------+
| Field | Value |
+-------------------------+--------------------------------------+
| admin_state_up | True |
| availability_zone_hints | |
| availability_zones | |
| created_at | --16T01:: |
| description | |
| id | 66eb76af-e111-4cae-adc6-2df95ad29faf |
| ipv4_address_scope | |
| ipv6_address_scope | |
| mtu | |
| name | selfservice |
| port_security_enabled | True |
| router:external | False |
| shared | False |
| status | ACTIVE |
| subnets | |
| tags | |
| tenant_id | ffc560f6a2604c3896df922115c6fc2a |
| updated_at | --16T01:: |
+-------------------------+--------------------------------------+
Create a subnet
demorc@root@controller:~$neutron subnet-create --name selfservice --dns-nameserver 114.114.114.114 --gateway 192.168.56.1 selfservice 192.168.56.0/24
Created a new subnet:
+-------------------+----------------------------------------------------+
| Field | Value |
+-------------------+----------------------------------------------------+
| allocation_pools | {"start": "192.168.56.2", "end": "192.168.56.254"} |
| cidr | 192.168.56.0/24 |
| created_at | --16T01:: |
| description | |
| dns_nameservers | 114.114.114.114 |
| enable_dhcp | True |
| gateway_ip | 192.168.56.1 |
| host_routes | |
| id | 9c8f506c-46bd-44d8-a8a5-e160bf2ddf93 |
| ip_version | |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | selfservice |
| network_id | 66eb76af-e111-4cae-adc6-2df95ad29faf |
| subnetpool_id | |
| tenant_id | ffc560f6a2604c3896df922115c6fc2a |
| updated_at | --16T01:: |
+-------------------+----------------------------------------------------+
Create the second subnet
demorc@root@controller:~$neutron subnet-create --name selfservice --dns-nameserver 114.114.114.114 --gateway 172.16.1.1 selfservice 172.16.1.0/24
Created a new subnet:
+-------------------+------------------------------------------------+
| Field | Value |
+-------------------+------------------------------------------------+
| allocation_pools | {"start": "172.16.1.2", "end": "172.16.1.254"} |
| cidr | 172.16.1.0/24 |
| created_at | --16T01:: |
| description | |
| dns_nameservers | 114.114.114.114 |
| enable_dhcp | True |
| gateway_ip | 172.16.1.1 |
| host_routes | |
| id | ec079b98-a585-40c0-9b4c-340c943642eb |
| ip_version | |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | selfservice |
| network_id | 66eb76af-e111-4cae-adc6-2df95ad29faf |
| subnetpool_id | |
| tenant_id | ffc560f6a2604c3896df922115c6fc2a |
| updated_at | --16T01:: |
+-------------------+------------------------------------------------+
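The allocation_pools in the two tables above were derived automatically: by default Neutron allocates every usable host address in the CIDR except the gateway. A minimal sketch of that derivation with Python's ipaddress module (an illustration only, not Neutron's implementation):

```python
import ipaddress

def default_allocation_pool(cidr, gateway):
    """Illustrative sketch: all usable hosts in the CIDR minus the gateway,
    expressed as (start, end) ranges like Neutron's allocation_pools."""
    net = ipaddress.ip_network(cidr)
    gw = ipaddress.ip_address(gateway)
    hosts = [h for h in net.hosts() if h != gw]
    # Collapse consecutive addresses into (start, end) ranges
    pools, start, prev = [], hosts[0], hosts[0]
    for h in hosts[1:]:
        if int(h) != int(prev) + 1:
            pools.append((str(start), str(prev)))
            start = h
        prev = h
    pools.append((str(start), str(prev)))
    return pools

print(default_allocation_pool("192.168.56.0/24", "192.168.56.1"))
# [('192.168.56.2', '192.168.56.254')]
print(default_allocation_pool("172.16.1.0/24", "172.16.1.1"))
# [('172.16.1.2', '172.16.1.254')]
```

A gateway in the middle of the range would split the pool in two, which is also what Neutron does.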
Create a router
demorc@root@controller:~$source adminrc
adminrc@root@controller:~$neutron net-update provider --router:external
Updated network: provider
adminrc@root@controller:~$source demorc
demorc@root@controller:~$neutron router-create router
Created a new router:
+-------------------------+--------------------------------------+
| Field | Value |
+-------------------------+--------------------------------------+
| admin_state_up | True |
| availability_zone_hints | |
| availability_zones | |
| description | |
| external_gateway_info | |
| id | 8770421b-2f3b-4d33-9acf-562b36b5b31b |
| name | router |
| routes | |
| status | ACTIVE |
| tenant_id | ffc560f6a2604c3896df922115c6fc2a |
+-------------------------+--------------------------------------+
demorc@root@controller:~$neutron router-list
+--------------------------------------+--------+-----------------------+
| id | name | external_gateway_info |
+--------------------------------------+--------+-----------------------+
| 8770421b-2f3b-4d33-9acf-562b36b5b31b | router | null |
+--------------------------------------+--------+-----------------------+
Add a private subnet interface to the router
demorc@root@controller:~$neutron router-interface-add router selfservice
Multiple subnet matches found for name 'selfservice', use an ID to be more specific.
demorc@root@controller:~$neutron subnet-list
+--------------------------------------+-------------+-----------------+----------------------------------------------------+
| id | name | cidr | allocation_pools |
+--------------------------------------+-------------+-----------------+----------------------------------------------------+
| 68f14924-15c4-4b0d-bcfc-011fd5a6de12 | provider | 10.0.3.0/24 | {"start": "10.0.3.50", "end": "10.0.3.254"} |
| 9c8f506c-46bd-44d8-a8a5-e160bf2ddf93 | selfservice | 192.168.56.0/24 | {"start": "192.168.56.2", "end": "192.168.56.254"} |
| ec079b98-a585-40c0-9b4c-340c943642eb | selfservice | 172.16.1.0/24 | {"start": "172.16.1.2", "end": "172.16.1.254"} |
+--------------------------------------+-------------+-----------------+----------------------------------------------------+
demorc@root@controller:~$neutron router-interface-add router 9c8f506c-46bd-44d8-a8a5-e160bf2ddf93
Added interface 329ffea0-b8f2--a6b7-19556a312b75 to router router.
Set a gateway on the provider (public) network for the router
demorc@root@controller:~$neutron router-gateway-set router provider
Set gateway for router router
Verification
List the network namespaces
demorc@root@controller:~$source adminrc
adminrc@root@controller:~$ip netns
qrouter-8770421b-2f3b-4d33-9acf-562b36b5b31b
qdhcp-66eb76af-e111-4cae-adc6-2df95ad29faf
qdhcp-b7369bde-908a-4dc4-b4af-a4bc5e1a2b8e
List the ports on the router to determine the gateway IP on the provider network
adminrc@root@controller:~$neutron router-port-list router
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
| id | name | mac_address | fixed_ips |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
| 329ffea0-b8f2--a6b7-19556a312b75 | | fa::3e::8e:3c | {"subnet_id": "9c8f506c-46bd-44d8-a8a5-e160bf2ddf93", "ip_address": "192.168.56.1"} |
| a0b37442-a41b--b492-59f05637b371 | | fa::3e:::fd | {"subnet_id": "68f14924-15c4-4b0d-bcfc-011fd5a6de12", "ip_address": "10.0.3.51"} |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
Ping test
adminrc@root@controller:~$ping -c 2 192.168.56.1
PING 192.168.56.1 (192.168.56.1) () bytes of data.
bytes from 192.168.56.1: icmp_seq= ttl= time=0.221 ms
bytes from 192.168.56.1: icmp_seq= ttl= time=0.237 ms
--- 192.168.56.1 ping statistics ---
packets transmitted, received, % packet loss, time 999ms
rtt min/avg/max/mdev = 0.221/0.229/0.237/0.008 ms
# A note: two subnets were created above, 192.168.56.0/24 and 172.16.1.0/24. When adding the private subnet interface to the router I used the 192.168.56.0/24 subnet, so only 192.168.56.1 answers ping here; 172.16.1.1 does not.
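The reachability point can be checked with a quick membership test: only addresses on the subnet actually attached to the router are on-link from the qrouter namespace. A small illustration with Python's ipaddress module:

```python
import ipaddress

# The subnet that was attached to the router (9c8f506c-...):
attached = ipaddress.ip_network("192.168.56.0/24")

for gw in ("192.168.56.1", "172.16.1.1"):
    onlink = ipaddress.ip_address(gw) in attached
    print(gw, "on the attached subnet:", onlink)
# 192.168.56.1 on the attached subnet: True
# 172.16.1.1 on the attached subnet: False
```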
Launch an instance
# Since the environment was built for the provider network, first delete the m1.nano flavor created earlier and recreate it (another flavor size might also work; I did not try)
adminrc@root@controller:~$openstack flavor list
+----+-----------+-------+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+-----------+-------+------+-----------+-------+-----------+
| | m1.nano | | | | | True |
| | m1.tiny | | | | | True |
| | m1.small | | | | | True |
| | m1.medium | | | | | True |
| | m1.large | | | | | True |
| | m1.xlarge | | | | | True |
+----+-----------+-------+------+-----------+-------+-----------+
adminrc@root@controller:~$openstack flavor delete m1.nano
adminrc@root@controller:~$openstack flavor list
+----+-----------+-------+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+-----------+-------+------+-----------+-------+-----------+
| | m1.tiny | | | | | True |
| | m1.small | | | | | True |
| | m1.medium | | | | | True |
| | m1.large | | | | | True |
| | m1.xlarge | | | | | True |
+----+-----------+-------+------+-----------+-------+-----------+
adminrc@root@controller:~$openstack flavor create --id --vcpus --ram --disk m1.nano
+----------------------------+---------+
| Field | Value |
+----------------------------+---------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | |
| disk | |
| id | |
| name | m1.nano |
| os-flavor-access:is_public | True |
| ram | |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | |
+----------------------------+---------+
Generate a key pair
adminrc@root@controller:~$ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
/root/.ssh/id_rsa already exists.
Overwrite (y/n)? y
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
:be::f6:be:9b::9b:db::e1:ee:1a:fb::b1 root@controller
The key's randomart image is:
+--[ RSA ]----+
| |
| . |
| o . |
| o . .|
| S + ..|
| + o ... |
| . . ..+. |
| .oE+. |
| oOB*o |
+-----------------+
adminrc@root@controller:~$source demorc
demorc@root@controller:~$openstack keypair create --public-key /root/.ssh/id_rsa.pub mykey
+-------------+-------------------------------------------------+
| Field | Value |
+-------------+-------------------------------------------------+
| fingerprint | :be::f6:be:9b::9b:db::e1:ee:1a:fb::b1 |
| name | mykey |
| user_id | c4de9fac882740838aa26e9119b30cb9 |
+-------------+-------------------------------------------------+
demorc@root@controller:~$openstack keypair list
+-------+-------------------------------------------------+
| Name | Fingerprint |
+-------+-------------------------------------------------+
| mykey | :be::f6:be:9b::9b:db::e1:ee:1a:fb::b1 |
+-------+-------------------------------------------------+
Add security group rules
# Permit ICMP (ping)
demorc@root@controller:~$openstack security group rule create --proto icmp default
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| id | b76e25be-c17e-48b3-8bbd-8505c3637900 |
| ip_protocol | icmp |
| ip_range | 0.0.0.0/0 |
| parent_group_id | 82cd1a2f-5eaa--a6d4-480daf27cf3d |
| port_range | |
| remote_security_group | |
+-----------------------+--------------------------------------+
# Permit SSH access
demorc@root@controller:~$openstack security group rule create --proto tcp --dst-port default
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| id | 32096d51-9e2a-45f2-a65a-27ef3c1bb2b5 |
| ip_protocol | tcp |
| ip_range | 0.0.0.0/0 |
| parent_group_id | 82cd1a2f-5eaa--a6d4-480daf27cf3d |
| port_range | : |
| remote_security_group | |
+-----------------------+--------------------------------------+
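Conceptually, the two rules admit ICMP from anywhere and TCP to the SSH port (assuming the elided --dst-port above was 22, as the comment implies). A hypothetical sketch of how such ingress rules are evaluated; the names are illustrative, not Neutron code:

```python
# Hypothetical sketch of ingress rule matching; not Neutron's implementation.
# Both rules use remote prefix 0.0.0.0/0, so the source check is omitted here.
RULES = [
    {"proto": "icmp", "ports": None},      # whole protocol allowed
    {"proto": "tcp", "ports": (22, 22)},   # assumed SSH port range
]

def allowed(proto, dst_port=None):
    """Return True if some rule admits this ingress packet."""
    for r in RULES:
        if r["proto"] != proto:
            continue
        if r["ports"] is None:  # no port range: the whole protocol matches
            return True
        lo, hi = r["ports"]
        if dst_port is not None and lo <= dst_port <= hi:
            return True
    return False

print(allowed("icmp"))      # True  -> ping reaches the instance
print(allowed("tcp", 22))   # True  -> ssh reaches the instance
print(allowed("tcp", 80))   # False -> everything else is dropped
```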
Create the instance
demorc@root@controller:~$openstack flavor list
+----+-----------+-------+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+-----------+-------+------+-----------+-------+-----------+
| | m1.nano | | | | | True |
| | m1.tiny | | | | | True |
| | m1.small | | | | | True |
| | m1.medium | | | | | True |
| | m1.large | | | | | True |
| | m1.xlarge | | | | | True |
+----+-----------+-------+------+-----------+-------+-----------+
demorc@root@controller:~$openstack image list
+--------------------------------------+---------+--------+
| ID | Name | Status |
+--------------------------------------+---------+--------+
| 39d73bcf-e60b-4caf--cca17de00d7e | cirrors | active |
+--------------------------------------+---------+--------+
demorc@root@controller:~$openstack network list
+--------------------------------------+-------------+----------------------------------------------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+-------------+----------------------------------------------------------------------------+
| 66eb76af-e111-4cae-adc6-2df95ad29faf | selfservice | 9c8f506c-46bd-44d8-a8a5-e160bf2ddf93, ec079b98-a585-40c0-9b4c-340c943642eb |
| b7369bde-908a-4dc4-b4af-a4bc5e1a2b8e | provider | 68f14924-15c4-4b0d-bcfc-011fd5a6de12 |
+--------------------------------------+-------------+----------------------------------------------------------------------------+
demorc@root@controller:~$openstack security group list
+--------------------------------------+---------+------------------------+----------------------------------+
| ID | Name | Description | Project |
+--------------------------------------+---------+------------------------+----------------------------------+
| 82cd1a2f-5eaa--a6d4-480daf27cf3d | default | Default security group | ffc560f6a2604c3896df922115c6fc2a |
+--------------------------------------+---------+------------------------+----------------------------------+
# Make sure all of the items above are available
# The flavor used is m1.nano
# net-id is the ID of the selfservice network
demorc@root@controller:~$openstack server create --flavor m1.nano --image cirrors --nic net-id=66eb76af-e111-4cae-adc6-2df95ad29faf --security-group default --key-name mykey selfservice-instance
+--------------------------------------+------------------------------------------------+
| Field | Value |
+--------------------------------------+------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-STS:power_state | |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | uFD7TkvHjsax |
| config_drive | |
| created | --16T02::45Z |
| flavor | m1.nano () |
| hostId | |
| id | 4c954e71-8e73-49e1-a67f-20c007d582d3 |
| image | cirrors (39d73bcf-e60b-4caf--cca17de00d7e) |
| key_name | mykey |
| name | selfservice-instance |
| os-extended-volumes:volumes_attached | [] |
| progress | |
| project_id | ffc560f6a2604c3896df922115c6fc2a |
| properties | |
| security_groups | [{u'name': u'default'}] |
| status | BUILD |
| updated | --16T02::46Z |
| user_id | c4de9fac882740838aa26e9119b30cb9 |
+--------------------------------------+------------------------------------------------+
Check the instance status
demorc@root@controller:~$openstack server list
+--------------------------------------+----------------------+--------+--------------------------+
| ID | Name | Status | Networks |
+--------------------------------------+----------------------+--------+--------------------------+
| 4c954e71-8e73-49e1-a67f-20c007d582d3 | selfservice-instance | ACTIVE | selfservice=192.168.56.3 |
+--------------------------------------+----------------------+--------+--------------------------+
Check with nova list
demorc@root@controller:~$nova list
+--------------------------------------+----------------------+--------+------------+-------------+--------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+----------------------+--------+------------+-------------+--------------------------+
| 4c954e71-8e73-49e1-a67f-20c007d582d3 | selfservice-instance | ACTIVE | - | Running | selfservice=192.168.56.3 |
+--------------------------------------+----------------------+--------+------------+-------------+--------------------------+
Stop, start, and delete the instance
demorc@root@controller:~$openstack server list
+--------------------------------------+----------------------+---------+--------------------------+
| ID | Name | Status | Networks |
+--------------------------------------+----------------------+---------+--------------------------+
| 4c954e71-8e73-49e1-a67f-20c007d582d3 | selfservice-instance | SHUTOFF | selfservice=192.168.56.3 |
+--------------------------------------+----------------------+---------+--------------------------+
demorc@root@controller:~$openstack server list
+--------------------------------------+----------------------+--------+--------------------------+
| ID | Name | Status | Networks |
+--------------------------------------+----------------------+--------+--------------------------+
| 4c954e71-8e73-49e1-a67f-20c007d582d3 | selfservice-instance | ACTIVE | selfservice=192.168.56.3 |
+--------------------------------------+----------------------+--------+--------------------------+
demorc@root@controller:~$openstack server stop 4c954e71-8e73-49e1-a67f-20c007d582d3
demorc@root@controller:~$openstack server delete 4c954e71-8e73-49e1-a67f-20c007d582d3
Access the instance through the virtual console
demorc@root@controller:~$openstack console url show selfservice-instance
+-------+------------------------------------------------------------------------------------+
| Field | Value |
+-------+------------------------------------------------------------------------------------+
| type | novnc |
| url | http://192.168.56.10:6080/vnc_auto.html?token=82177d68-c9fb-4c3c-85d6-6d42db50c864 |
+-------+------------------------------------------------------------------------------------+
Paste the URL above into a browser to open the console.
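The noVNC URL carries a short-lived console auth token as a query parameter; a quick sketch of picking it apart with Python's urllib (the URL is the one from the table above):

```python
from urllib.parse import urlparse, parse_qs

url = ("http://192.168.56.10:6080/vnc_auto.html"
       "?token=82177d68-c9fb-4c3c-85d6-6d42db50c864")

parts = urlparse(url)
token = parse_qs(parts.query)["token"][0]
print(parts.netloc)  # 192.168.56.10:6080 -- the nova-novncproxy endpoint
print(token)         # the short-lived console auth token
```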


Since this is a single-node install, pinging the instance requires entering the router's network namespace:
demorc@root@controller:~$ip netns
qrouter-8770421b-2f3b-4d33-9acf-562b36b5b31b # copy this line
qdhcp-66eb76af-e111-4cae-adc6-2df95ad29faf
qdhcp-b7369bde-908a-4dc4-b4af-a4bc5e1a2b8e
demorc@root@controller:~$ip netns exec qrouter-8770421b-2f3b-4d33-9acf-562b36b5b31b ip a | grep "inet"
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
inet 192.168.56.1/24 brd 192.168.56.255 scope global qr-329ffea0-b8
inet6 fe80::f816:3eff:fe36:8e3c/64 scope link
inet 10.0.3.51/24 brd 10.0.3.255 scope global qg-a0b37442-a4
inet6 fe80::f816:3eff:fe02:33fd/64 scope link
demorc@root@controller:~$ip netns exec qrouter-8770421b-2f3b-4d33-9acf-562b36b5b31b ping 192.168.56.3
PING 192.168.56.3 (192.168.56.3) () bytes of data.
bytes from 192.168.56.3: icmp_seq= ttl= time=8.95 ms
bytes from 192.168.56.3: icmp_seq= ttl= time=0.610 ms
bytes from 192.168.56.3: icmp_seq= ttl= time=0.331 ms
bytes from 192.168.56.3: icmp_seq= ttl= time=0.344 ms
^C
--- 192.168.56.3 ping statistics ---
packets transmitted, received, % packet loss, time 3000ms
rtt min/avg/max/mdev = 0.331/2.560/8.955/3.693 ms
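The fe80:: addresses shown by `ip a` inside the qrouter namespace are EUI-64 link-local addresses derived from each port's MAC (Neutron MACs begin with fa:16:3e). A sketch of the derivation; the example MAC is inferred back from the fe80 address above, so treat it as illustrative:

```python
def eui64_link_local(mac):
    """Derive the IPv6 link-local address from a MAC (RFC 4291 modified
    EUI-64): insert ff:fe in the middle and flip the universal/local bit."""
    b = [int(x, 16) for x in mac.split(":")]
    b[0] ^= 0x02  # flip the U/L bit of the first octet
    iid = bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])
    groups = [f"{iid[i] << 8 | iid[i + 1]:x}" for i in range(0, 8, 2)]
    return "fe80::" + ":".join(groups)

# MAC inferred from the qr- interface's fe80 address above (illustrative)
print(eui64_link_local("fa:16:3e:36:8e:3c"))
# fe80::f816:3eff:fe36:8e3c
```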
Create a floating IP for remote access
demorc@root@controller:~$source adminrc
adminrc@root@controller:~$openstack ip floating create provider
+-------------+--------------------------------------+
| Field | Value |
+-------------+--------------------------------------+
| fixed_ip | None |
| id | 00315ef2--42ae-825b-0f94ed098de8 |
| instance_id | None |
| ip | 10.0.3.52 |
| pool | provider |
+-------------+--------------------------------------+
Assign a floating IP to the instance
List the floating IPs
adminrc@root@controller:~$openstack ip floating list
+--------------------------------------+---------------------+------------------+------+
| ID | Floating IP Address | Fixed IP Address | Port |
+--------------------------------------+---------------------+------------------+------+
| 00315ef2--42ae-825b-0f94ed098de8 | 10.0.3.52 | None | None |
+--------------------------------------+---------------------+------------------+------+
Associate the floating IP with the instance
adminrc@root@controller:~$openstack ip floating add 10.0.3.52 4c954e71-8e73-49e1-a67f-20c007d582d3
Unable to associate floating IP 10.0.3.52 to fixed IP 192.168.56.3 for instance 4c954e71-8e73-49e1-a67f-20c007d582d3. Error: Bad floatingip request: Port 454451d2-6c5d-411c-8ad0-d6f5908259a6 is associated with a different tenant than Floating IP 00315ef2--42ae-825b-0f94ed098de8 and therefore cannot be bound..
Neutron server returns request_ids: ['req-58f751d8-ab56-41d3-bb99-de2307ed9c67'] (HTTP ) (Request-ID: req-330493bd-f040-4b24-a08b-8384b162ea60)
# Cause of the error: a floating IP created under adminrc cannot be bound to an instance owned by the demo tenant
# Fix: delete the floating IP and recreate it as the demo user (demorc)
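The failure above is an ownership check: Neutron refuses to bind a floating IP to a port owned by a different tenant. A hypothetical sketch of that check (field names are illustrative, not Neutron code):

```python
# Hypothetical sketch of the tenant check applied on association.
def associate(floating_ip, port):
    if floating_ip["tenant_id"] != port["tenant_id"]:
        raise ValueError(
            "Port %s is associated with a different tenant than "
            "Floating IP %s and therefore cannot be bound."
            % (port["id"], floating_ip["id"]))
    floating_ip["port_id"] = port["id"]
    return floating_ip

fip = {"id": "fip-1", "tenant_id": "admin-tenant", "port_id": None}
port = {"id": "port-1", "tenant_id": "demo-tenant"}
try:
    associate(fip, port)
except ValueError as e:
    print("rejected:", e)  # same class of error as the CLI output above
```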
adminrc@root@controller:~$ openstack ip floating list
+--------------------------------------+---------------------+------------------+------+
| ID | Floating IP Address | Fixed IP Address | Port |
+--------------------------------------+---------------------+------------------+------+
| 00315ef2--42ae-825b-0f94ed098de8 | 10.0.3.52 | None | None |
+--------------------------------------+---------------------+------------------+------+
adminrc@root@controller:~$openstack ip floating delete 00315ef2--42ae-825b-0f94ed098de8
adminrc@root@controller:~$openstack ip floating list
adminrc@root@controller:~$source demorc
demorc@root@controller:~$openstack ip floating create provider
+-------------+--------------------------------------+
| Field | Value |
+-------------+--------------------------------------+
| fixed_ip | None |
| id | 72d37905-4e1d-45a4-a010-a041968a0220 |
| instance_id | None |
| ip | 10.0.3.53 |
| pool | provider |
+-------------+--------------------------------------+
demorc@root@controller:~$openstack ip floating add 10.0.3.53 selfservice-instance
demorc@root@controller:~$openstack server list
+--------------------------------------+----------------------+--------+-------------------------------------+
| ID | Name | Status | Networks |
+--------------------------------------+----------------------+--------+-------------------------------------+
| 4c954e71-8e73-49e1-a67f-20c007d582d3 | selfservice-instance | ACTIVE | selfservice=192.168.56.3, 10.0.3.53 |
+--------------------------------------+----------------------+--------+-------------------------------------+
Test the floating IP
demorc@root@controller:~$ping -c 2 10.0.3.53
PING 10.0.3.53 (10.0.3.53) () bytes of data.
bytes from 10.0.3.53: icmp_seq= ttl= time=3.40 ms
bytes from 10.0.3.53: icmp_seq= ttl= time=0.415 ms
--- 10.0.3.53 ping statistics ---
packets transmitted, received, % packet loss, time 1001ms
rtt min/avg/max/mdev = 0.415/1.912/3.409/1.497 ms
demorc@root@controller:~$su -
root@controller:~# ssh cirros@10.0.3.53
The authenticity of host '10.0.3.53 (10.0.3.53)' can't be established.
RSA key fingerprint is e2::a9:e6:::a9:db::cb::5c::9a:4e:c7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.3.53' (RSA) to the list of known hosts.
$ ifconfig
eth0 Link encap:Ethernet HWaddr FA::3E::6D:
inet addr:192.168.56.3 Bcast:192.168.56.255 Mask:255.255.255.0
inet6 addr: fe80::f816:3eff:fe30:6d63/ Scope:Link
UP BROADCAST RUNNING MULTICAST MTU: Metric:
RX packets: errors: dropped: overruns: frame:
TX packets: errors: dropped: overruns: carrier:
collisions: txqueuelen:
RX bytes: (15.0 KiB) TX bytes: (14.6 KiB)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU: Metric:
RX packets: errors: dropped: overruns: frame:
TX packets: errors: dropped: overruns: carrier:
collisions: txqueuelen:
RX bytes: (0.0 B) TX bytes: (0.0 B)
$ ping -c 2 www.qq.com
PING www.qq.com (61.129.7.47): data bytes
bytes from 61.129.7.47: seq= ttl= time=7.461 ms
bytes from 61.129.7.47: seq= ttl= time=6.463 ms
--- www.qq.com ping statistics ---
packets transmitted, packets received, % packet loss
round-trip min/avg/max = 6.463/6.962/7.461 ms
$ exit
Connection to 10.0.3.53 closed.
The point of floating IPs: an instance on a self-service (private) network has no address that is reachable from outside. Binding a floating IP maps a public provider-network address 1:1 onto the instance's fixed IP, so the instance can be reached from the public network and its traffic to the outside appears with that public address.
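Under the hood the association is a 1:1 NAT that the L3 agent implements with iptables rules in the qrouter namespace; a toy sketch of the mapping (addresses are the ones from the transcript):

```python
# Illustrative 1:1 NAT table; the real mapping is iptables DNAT/SNAT rules
# installed by the L3 agent inside the qrouter- namespace.
nat = {"10.0.3.53": "192.168.56.3"}  # floating -> fixed

def dnat(dst):
    """Inbound: rewrite the floating address to the instance's fixed IP."""
    return nat.get(dst, dst)

def snat(src):
    """Outbound: rewrite the fixed IP back to its floating address."""
    rev = {fixed: floating for floating, fixed in nat.items()}
    return rev.get(src, src)

print(dnat("10.0.3.53"))     # 192.168.56.3
print(snat("192.168.56.3"))  # 10.0.3.53
```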
demorc@root@controller:~$nova list
+--------------------------------------+----------------------+--------+------------+-------------+-------------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+----------------------+--------+------------+-------------+-------------------------------------+
| 4c954e71-8e73-49e1-a67f-20c007d582d3 | selfservice-instance | ACTIVE | - | Running | selfservice=192.168.56.3, 10.0.3.53 |
+--------------------------------------+----------------------+--------+------------+-------------+-------------------------------------+
demorc@root@controller:~$ping -c 2 10.0.3.53
PING 10.0.3.53 (10.0.3.53) () bytes of data.
bytes from 10.0.3.53: icmp_seq= ttl= time=3.31 ms
bytes from 10.0.3.53: icmp_seq= ttl= time=0.550 ms
--- 10.0.3.53 ping statistics ---
packets transmitted, received, % packet loss, time 1001ms
rtt min/avg/max/mdev = 0.550/1.934/3.319/1.385 ms
demorc@root@controller:~$ssh -i /root/.ssh/id_rsa cirros@10.0.3.53
$ ifconfig
eth0 Link encap:Ethernet HWaddr FA::3E::6D:
inet addr:192.168.56.3 Bcast:192.168.56.255 Mask:255.255.255.0
inet6 addr: fe80::f816:3eff:fe30:6d63/ Scope:Link
UP BROADCAST RUNNING MULTICAST MTU: Metric:
RX packets: errors: dropped: overruns: frame:
TX packets: errors: dropped: overruns: carrier:
collisions: txqueuelen:
RX bytes: (29.8 KiB) TX bytes: (26.4 KiB)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU: Metric:
RX packets: errors: dropped: overruns: frame:
TX packets: errors: dropped: overruns: carrier:
collisions: txqueuelen:
RX bytes: (0.0 B) TX bytes: (0.0 B)
$ exit
Connection to 10.0.3.53 closed.
Install the OpenStack dashboard
root@controller:~# apt-get install -y openstack-dashboard
Configure the dashboard
root@controller:~# cp /etc/openstack-dashboard/local_settings.py{,.bak}
root@controller:~# vim /etc/openstack-dashboard/local_settings.py
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "_member_"
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = '*'
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '10.0.3.10:11211',
}
}
OPENSTACK_API_VERSIONS = {
"data-processing": 1.1,
"identity": 3,
"volume": 2,
"compute": 2,
}
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'default'
OPENSTACK_NEUTRON_NETWORK = {
'enable_router': True,
'enable_quotas': True,
'enable_ipv6': True,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_lb': True,
'enable_firewall': True,
'enable_vpn': True,
'enable_fip_topology_check': True,
'default_ipv4_subnet_pool_label': None,
'default_ipv6_subnet_pool_label': None,
'profile_support': None,
'supported_provider_types': ['*'],
'supported_vnic_types': ['*'],
}
TIME_ZONE = "Asia/Shanghai"
Reload apache2
root@controller:~# service apache2 reload
* Reloading web server apache2 *
root@controller:~# echo $?
Test in a browser
# If you forget the admin password, check this file
openstack@controller:~$ cat adminrc
unset OS_TOKEN
unset OS_URL
unset OS_IDENTITY_API_VERSION
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export PS1="adminrc@\u@\h:\w\$"


Verify with the demo user

View the network topology as the demo user

View the related information




View the router information


View the related information as admin

Install Cinder
First, add a new disk to the VM; the steps are not shown here, just accept the defaults.
Prepare the Cinder installation environment
root@controller:~# mysql -uroot -p123456
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is
Server version: 5.5.-MariaDB-1ubuntu0.14.04. (Ubuntu)
Copyright (c) Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> create database cinder;
Query OK, row affected (0.00 sec)
MariaDB [(none)]> grant all privileges on cinder.* to 'cinder'@'localhost' identified by '123456';
Query OK, rows affected (0.00 sec)
MariaDB [(none)]> grant all privileges on cinder.* to 'cinder'@'%' identified by '123456';
Query OK, rows affected (0.00 sec)
MariaDB [(none)]> \q
Bye
Switch to the adminrc environment
# Create a cinder user
root@controller:~# source adminrc
adminrc@root@controller:~$openstack user create --domain default --password cinder cinder
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | 1495769d2bbb44d192eee4c9b2f91ca3 |
| enabled | True |
| id | 74153e9abf694f2f9ecd2203b71e2529 |
| name | cinder |
+-----------+----------------------------------+
Add the admin role to the cinder user
adminrc@root@controller:~$openstack role add --project service --user cinder admin
Create the cinder and cinderv2 service entities
adminrc@root@controller:~$openstack service create --name cinder --description "OpenStack Block Storage" volume
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | 3f13455162a145e28096ce110be1213e |
| name | cinder |
| type | volume |
+-------------+----------------------------------+
adminrc@root@controller:~$openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | 9fefead9767048e1b632bb7026c55380 |
| name | cinderv2 |
| type | volumev2 |
+-------------+----------------------------------+
Create the Block Storage service API endpoints
adminrc@root@controller:~$openstack endpoint create --region RegionOne volume public http://controller:8776/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | d45e4cd8fb7945968d5e644a74dc62e3 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 3f13455162a145e28096ce110be1213e |
| service_name | cinder |
| service_type | volume |
| url | http://controller:8776/v1/%(tenant_id)s |
+--------------+-----------------------------------------+
adminrc@root@controller:~$openstack endpoint create --region RegionOne volume internal http://controller:8776/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | fcf99a2a72c94d81b472f4c75ea952c8 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 3f13455162a145e28096ce110be1213e |
| service_name | cinder |
| service_type | volume |
| url | http://controller:8776/v1/%(tenant_id)s |
+--------------+-----------------------------------------+
adminrc@root@controller:~$openstack endpoint create --region RegionOne volume admin http://controller:8776/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | e611a9caabf640dfbcd93b7b750180da |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 3f13455162a145e28096ce110be1213e |
| service_name | cinder |
| service_type | volume |
| url | http://controller:8776/v1/%(tenant_id)s |
+--------------+-----------------------------------------+
adminrc@root@controller:~$openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | ecd1248c63844473aba74c6af3554a00 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 9fefead9767048e1b632bb7026c55380 |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
adminrc@root@controller:~$openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | 862a463ef202433e95e2e1c80030af59 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 9fefead9767048e1b632bb7026c55380 |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
adminrc@root@controller:~$openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | 89fcc47679e94213a0ec2d8eabed95db |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 9fefead9767048e1b632bb7026c55380 |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
Install and configure the components
adminrc@root@controller:~$apt-get install -y cinder-api cinder-scheduler
Configure cinder
adminrc@root@controller:~$cp /etc/cinder/cinder.conf{,.bak}
adminrc@root@controller:~$vim /etc/cinder/cinder.conf
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
auth_strategy = keystone
rpc_backend = rabbit
my_ip = 10.0.3.10
[database]
connection = mysql+pymysql://cinder:123456@controller/cinder
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password =
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
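The `connection` option in `[database]` is a standard SQLAlchemy URL: driver `mysql+pymysql`, user `cinder`, the password chosen when the `cinder` database user was created, host `controller`, and database `cinder`. A quick sanity check of such a URL, as a sketch using only the Python standard library:

```python
from urllib.parse import urlsplit

conn = "mysql+pymysql://cinder:123456@controller/cinder"
parts = urlsplit(conn)

print(parts.scheme)    # mysql+pymysql  (dialect+driver)
print(parts.username)  # cinder
print(parts.hostname)  # controller
print(parts.path)      # /cinder  (strip the leading slash for the database name)
```

If any of these fields come out wrong, `cinder-manage db sync` will fail to reach the database.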
After confirming the configuration, populate the Block Storage database
adminrc@root@controller:~$su -s /bin/bash -c "cinder-manage db sync" cinder
Option "verbose" from group "DEFAULT" is deprecated for removal. Its value may be silently ignored in the future.
WARNING py.warnings [-] /usr/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
INFO migrate.versioning.api [-] ...
.........
INFO migrate.versioning.api [-] done
Configure Compute to use Block Storage
adminrc@root@controller:~$cp /etc/nova/nova.conf{,.private}
adminrc@root@controller:~$vim /etc/nova/nova.conf
# append at the end of the file
[cinder]
os_region_name = RegionOne
# after saving and exiting, restart the nova-api and cinder services
adminrc@root@controller:~$service nova-api restart
nova-api stop/waiting
nova-api start/running
adminrc@root@controller:~$service cinder-      # pressing Tab here lists the matching services
cinder-api         cinder-scheduler
adminrc@root@controller:~$ls /etc/init.d/ | grep cinder
cinder-api
cinder-scheduler
adminrc@root@controller:~$ls /etc/init.d/ | grep cinder | xargs -i service {} restart
cinder-api stop/waiting
cinder-api start/running
cinder-scheduler stop/waiting
cinder-scheduler start/running
Install lvm2
adminrc@root@controller:~$apt-get install -y lvm2
Create the LVM physical volume and volume group
adminrc@root@controller:~$pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created
adminrc@root@controller:~$vgcreate cinder-volumes /dev/sdb
Volume group "cinder-volumes" successfully created
Configure the LVM device filter
adminrc@root@controller:~$cp /etc/lvm/lvm.conf{,.bak}
adminrc@root@controller:~$vim /etc/lvm/lvm.conf
filter = [ "a/sdb/", "r/.*/"]    # replace the original filter line with this value
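LVM evaluates `filter` patterns in order and the first match wins: `a/sdb/` accepts any device path containing `sdb`, and `r/.*/` rejects everything else, so only the Cinder volume disk is scanned. A rough Python model of that first-match-wins behaviour (not LVM's actual code, just the matching logic):

```python
import re

# Mimic LVM's filter: ("a", pattern) accepts, ("r", pattern) rejects;
# the first pattern that matches a device path decides its fate.
FILTER = [("a", r"sdb"), ("r", r".*")]

def lvm_accepts(device):
    for action, pattern in FILTER:
        if re.search(pattern, device):
            return action == "a"
    return True  # LVM accepts devices that no pattern matched

print(lvm_accepts("/dev/sdb"))  # True  - the Cinder volume disk
print(lvm_accepts("/dev/sda"))  # False - everything else is rejected
```

Without the trailing `r/.*/`, LVM would also scan the LVM logical volumes Cinder creates on `/dev/sdb`, which slows things down and can confuse the tools.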
Install the cinder-volume package
adminrc@root@controller:~$apt-get install cinder-volume
Configure cinder.conf
adminrc@root@controller:~$cat /etc/cinder/cinder.conf
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
auth_strategy = keystone
rpc_backend = rabbit
my_ip = 10.0.3.10
enabled_backends = lvm
glance_api_servers = http://controller:9292
[database]
connection = mysql+pymysql://cinder:123456@controller/cinder
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password =
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
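`enabled_backends = lvm` tells cinder-volume to look for a config section with that same name and start one backend process per listed name; the `[lvm]` section supplies the driver, volume group, and iSCSI settings for that backend. A small sketch of the lookup using Python's configparser (the config text is an abbreviated copy of the file above):

```python
import configparser

conf_text = """
[DEFAULT]
enabled_backends = lvm

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm
"""

cfg = configparser.ConfigParser()
cfg.read_string(conf_text)

# Each comma-separated name in enabled_backends maps to a section of
# the same name holding that backend's settings.
for backend in cfg["DEFAULT"]["enabled_backends"].split(","):
    section = cfg[backend.strip()]
    print(backend.strip(), "->", section["volume_driver"])
```

Several backends (for example `lvm` and a second `nfs`) could be listed here, each with its own section; this guide configures only one.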
Restart the services
adminrc@root@controller:~$service tgt restart
tgt stop/waiting
tgt start/running
adminrc@root@controller:~$service cinder-volume restart
cinder-volume stop/waiting
cinder-volume start/running
Verify
adminrc@root@controller:~$cinder service-list
+------------------+----------------+------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+----------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller | nova | enabled | up | --17T03::00.000000 | - |
| cinder-volume | controller | nova | enabled | down | --17T03::52.000000 | - |
| cinder-volume | controller@lvm | nova | enabled | up | --17T03::01.000000 | - |
+------------------+----------------+------+---------+-------+----------------------------+-----------------+
# The bare `controller` entry shows down because it is most likely stale: cinder-volume first registered under that host name before `enabled_backends = lvm` was set; the live backend now reports as `controller@lvm`.
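The `up`/`down` state in `cinder service-list` is derived from each service's heartbeat: if `updated_at` is older than the `service_down_time` threshold (60 seconds by default), the service is reported down. A rough sketch of that check (the timestamps below are made-up examples):

```python
from datetime import datetime, timedelta

SERVICE_DOWN_TIME = timedelta(seconds=60)  # cinder's default threshold

def service_state(updated_at, now):
    """Report 'up' if the last heartbeat is within the threshold."""
    return "up" if now - updated_at <= SERVICE_DOWN_TIME else "down"

now = datetime(2017, 1, 17, 3, 30, 0)
print(service_state(now - timedelta(seconds=10), now))  # up
print(service_state(now - timedelta(minutes=30), now))  # down
```

A stale registration never heartbeats again, so its `updated_at` keeps receding into the past and it stays `down` forever unless removed.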
Switch to the demo user
adminrc@root@controller:~$source demorc
demorc@root@controller:~$openstack volume create --size 1 volume1
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | --17T04::56.366573 |
| description | None |
| encrypted | False |
| id | 240ee7be-49bb-48bc-8bb3-1c44196b5ad9 |
| multiattach | False |
| name | volume1 |
| properties | |
| replication_status | disabled |
| size | |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| type | None |
| updated_at | None |
| user_id | c4de9fac882740838aa26e9119b30cb9 |
+---------------------+--------------------------------------+
demorc@root@controller:~$openstack volume list
+--------------------------------------+--------------+-----------+------+-------------+
| ID | Display Name | Status | Size | Attached to |
+--------------------------------------+--------------+-----------+------+-------------+
| 240ee7be-49bb-48bc-8bb3-1c44196b5ad9 | volume1 | available | | |
+--------------------------------------+--------------+-----------+------+-------------+
Attach the volume to an instance
demorc@root@controller:~$nova list
+--------------------------------------+----------------------+---------+------------+-------------+-------------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+----------------------+---------+------------+-------------+-------------------------------------+
| 4c954e71-8e73-49e1-a67f-20c007d582d3 | selfservice-instance | SHUTOFF | - | Shutdown | selfservice=192.168.56.3, 10.0.3.53 |
+--------------------------------------+----------------------+---------+------------+-------------+-------------------------------------+
demorc@root@controller:~$nova start 4c954e71-8e73-49e1-a67f-20c007d582d3
Request to start server 4c954e71-8e73-49e1-a67f-20c007d582d3 has been accepted.
demorc@root@controller:~$nova list
+--------------------------------------+----------------------+--------+------------+-------------+-------------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+----------------------+--------+------------+-------------+-------------------------------------+
| 4c954e71-8e73-49e1-a67f-20c007d582d3 | selfservice-instance | ACTIVE | - | Running | selfservice=192.168.56.3, 10.0.3.53 |
+--------------------------------------+----------------------+--------+------------+-------------+-------------------------------------+
demorc@root@controller:~$ping -c 2 10.0.3.53
PING 10.0.3.53 (10.0.3.53) 56(84) bytes of data.
64 bytes from 10.0.3.53: icmp_seq=1 ttl=63 time=9.45 ms
64 bytes from 10.0.3.53: icmp_seq=2 ttl=63 time=0.548 ms
demorc@openstack@controller:~$nova list
+--------------------------------------+----------------------+--------+------------+-------------+-------------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+----------------------+--------+------------+-------------+-------------------------------------+
| 4c954e71-8e73-49e1-a67f-20c007d582d3 | selfservice-instance | ACTIVE | - | Running | selfservice=192.168.56.3, 10.0.3.53 |
+--------------------------------------+----------------------+--------+------------+-------------+-------------------------------------+
demorc@openstack@controller:~$openstack volume list
+--------------------------------------+--------------+-----------+------+-------------+
| ID | Display Name | Status | Size | Attached to |
+--------------------------------------+--------------+-----------+------+-------------+
| 240ee7be-49bb-48bc-8bb3-1c44196b5ad9 | volume1 | available | | |
+--------------------------------------+--------------+-----------+------+-------------+
# copy the instance ID and the ID of volume1
demorc@root@controller:~$openstack server add volume 4c954e71-8e73-49e1-a67f-20c007d582d3 240ee7be-49bb-48bc-8bb3-1c44196b5ad9
Check volume1's status again: it is now in use
demorc@root@controller:~$openstack volume list
+--------------------------------------+--------------+--------+------+-----------------------------------------------+
| ID | Display Name | Status | Size | Attached to |
+--------------------------------------+--------------+--------+------+-----------------------------------------------+
| 240ee7be-49bb-48bc-8bb3-1c44196b5ad9 | volume1 | in-use | | Attached to selfservice-instance on /dev/vdb |
+--------------------------------------+--------------+--------+------+-----------------------------------------------+
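The attach is asynchronous: `openstack server add volume` returns immediately, and the volume only moves from `available` to `in-use` once Nova finishes the attach. When scripting this, it is worth polling the status with a bound on the number of attempts; a sketch against a stand-in `get_status` function (a real script would call `openstack volume show` or the SDK instead):

```python
# Stand-in for querying the volume's status; here a canned sequence
# simulates the transition a real attach goes through.
statuses = iter(["attaching", "attaching", "in-use"])
def get_status(volume_id):
    return next(statuses)

def wait_for(volume_id, wanted, max_polls=10):
    """Poll until the volume reaches the wanted status or we give up."""
    for _ in range(max_polls):
        if get_status(volume_id) == wanted:
            return True
    return False

ok = wait_for("240ee7be-49bb-48bc-8bb3-1c44196b5ad9", "in-use")
print(ok)  # True
```

A production version would also sleep between polls and treat an `error` status as a terminal failure rather than polling on.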
Format and mount the newly created disk
demorc@root@controller:~$ssh cirros@10.0.3.53
$ sudo -s
$ fdisk -l

Disk /dev/vda: ... MB
/dev/vda1   *   ...   Linux

Disk /dev/vdb: ... MB

Disk /dev/vdb doesn't contain a valid partition table
$ mkfs.ext4 /dev/sdb    # typo: wrong device; inside the guest the attached volume is /dev/vdb
$ mkfs.ext4 /dev/vdb
mke2fs 1.42.x
Filesystem label=
OS type: Linux
Allocating group tables: done
Writing inode tables: done
Creating journal: done
Writing superblocks and filesystem accounting information: done
$ mount /dev/vdb /mnt
$ ls /mnt/
lost+found
$ touch /mnt/test
$ ls /mnt/
lost+found test
$ exit
$ exit
Connection to 10.0.3.53 closed.
demorc@root@controller:~$exit
exit
(For learning purposes only. If anything here infringes your rights, please leave a comment and I will remove the relevant content immediately.)
朴素贝叶斯主要用于文本分类.文本分类常见三大算法:KNN.朴素贝叶斯.支持向量机SVM. 一.贝叶斯定理 贝叶斯公式思想:利用已知值来估计未知概率.已知某条件概率,如何得到两个事件交换后的概率,也就是 ...