OpenStack Train Deployment


(A lot of introductory description is omitted here.)

OpenStack environment preparation

OpenStack host configuration list

The official minimum requirements are:
Controller node: 1 CPU, 4 GB RAM, and 5 GB storage
Compute node: 1 CPU, 2 GB RAM, and 10 GB storage
I prepared four servers in total: two as compute nodes, one as the control node, and one with a larger disk for Ceph.

Host OS     Hostname  Role          Specs  NIC mode              IP
centos 7.8  compute1  compute node  2c4g   NAT + host-only mode  192.168.232.201 + 192.168.100.101
centos 7.8  control1  control node  4c4g   NAT + host-only mode  192.168.232.203 + 192.168.100.103
centos 7.8  compute2  compute node  4c4g   NAT + host-only mode  192.168.232.202 + 192.168.100.102
centos 7.8  ceph      ceph          1c1g   NAT only              192.168.232.200

VM host-only and NAT network mode configuration

OpenStack host environment configuration

Disable firewalld and SELinux on all four servers

systemctl disable --now firewalld
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
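
Later steps refer to the nodes by hostname (control, compute1, compute2), so name resolution must work on every node. A minimal sketch of the /etc/hosts entries, assuming the management IPs from the table above and the hostname "control" used in the command prompts below:

cat >> /etc/hosts <<'EOF'
192.168.100.103  control
192.168.100.101  compute1
192.168.100.102  compute2
EOF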

Configure the time synchronization source on the hosts, using the control node (192.168.232.203) as the NTP server for the other three.

#### On the 192.168.232.203 server

vi /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server ntp.aliyun.com iburst
server ntp1.aliyun.com iburst

# Record the rate at which the system clock gains/losses time.
driftfile /var/lib/chrony/drift

# Allow the system clock to be stepped in the first three updates
# if its offset is larger than 1 second.
makestep 1.0 3

# Enable kernel synchronization of the real-time clock (RTC).
rtcsync

# Enable hardware timestamping on all interfaces that support it.
#hwtimestamp *

# Increase the minimum number of selectable sources required to adjust
# the system clock.
#minsources 2

# Allow NTP client access from local network.
allow 192.168.0.0/16

# Serve time even if not synchronized to a time source.
#local stratum 10

# Specify file containing keys for NTP authentication.
#keyfile /etc/chrony.keys

# Specify directory for log files.
logdir /var/log/chrony

# Select which information is logged.
#log measurements statistics tracking

######## Restart chronyd on the control node
[root@control ~]# systemctl restart chronyd
[root@control ~]# systemctl enable chronyd

Configure chronyd on the other three machines

Replace the original server lines with a single one pointing at the control node
server 192.168.100.103 iburst

######## Restart chronyd on the remaining three servers
systemctl restart chronyd
systemctl enable chronyd
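
To confirm that the clients are actually tracking the control node, you can check the chrony source list (chronyc ships with the chrony package); the control node's line should be marked with ^*:

chronyc sources -v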

Install the OpenStack Train packages (all nodes)

### Install the following packages on all nodes
[root@control ~]# yum -y install net-tools bash-completion vim gcc gcc-c++ make pcre pcre-devel expat-devel cmake bzip2 

net-tools        provides the ifconfig command
bash-completion  shell tab completion
pcre/pcre-devel  regular expression library and headers
expat-devel      Apache dependency; C library for parsing XML documents

[root@control ~]# yum -y install centos-release-openstack-train
[root@control ~]# yum -y install python-openstackclient openstack-selinux 
#openstack-utils

# Note: you may need to run this command several times until it succeeds
# centos-release-openstack-train ensures the OpenStack packages installed/updated are the Train release
# python-openstackclient is the OpenStack Python client; most OpenStack APIs are written in Python, and Python is also needed for the database connections
# openstack-selinux provides the core SELinux policies for OpenStack
# openstack-utils provides miscellaneous OpenStack utilities

Install the SQL database packages (control node)

Install and configure the components. Install the packages:

[root@control ~]# yum install -y mariadb mariadb-server python2-PyMySQL

Create and edit the /etc/my.cnf.d/openstack.cnf file (back up any existing configuration under /etc/my.cnf.d/ if needed) and complete the following: create a [mysqld] section, set the bind-address key to the control node's management IP address so other nodes can reach the database over the management network, and set the remaining keys to enable useful options and the UTF-8 character set:

[mysqld]
bind-address = 192.168.100.103
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

To finish the installation, start the database service and configure it to start at boot:

systemctl enable mariadb.service
systemctl start mariadb.service
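
As a quick sanity check that MariaDB is listening on the management address configured above (a sketch; run on the control node):

ss -tnlp | grep 3306
# should show mysqld bound to 192.168.100.103:3306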

Initialize the database

mysql_secure_installation
Press Enter (no current root password)
Y   ## set the root password, Haier@2021 here
Y
n
Y
Y

############ The full session looks like this
[root@control ~]# mysql_secure_installation
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
 ... Success!

By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] y
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] n
 ... skipping.

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!

Install and configure the message queue (control node)

OpenStack uses a message queue to coordinate operations and status information among services. The message queue service typically runs on the controller node.
OpenStack supports several message queue services, including RabbitMQ, Qpid, and ZeroMQ. However, most distributions that package OpenStack support one particular message queue service.

Install and configure RabbitMQ

Install RabbitMQ

[root@control ~]# yum install -y rabbitmq-server

Start the message queue service and configure it to start at boot

[root@control ~]# systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service

Add the openstack user and set its password to openstack@123

[root@control ~]# rabbitmqctl add_user openstack openstack@123
Creating user "openstack" ...

Grant the openstack user configure, write, and read access:

[root@control ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/"

Check the MQ configuration

[root@control ~]# rabbitmqctl list_users
Listing users
openstack       []
guest   [administrator]

### give the openstack user the administrator tag
[root@control ~]# rabbitmqctl set_user_tags openstack administrator
Setting tags for user "openstack" to [administrator]
[root@control ~]# rabbitmqctl list_users
Listing users
openstack       [administrator]
guest   [administrator]

List the available plugins

rabbitmq-plugins list

Enable the RabbitMQ management web UI

rabbitmq-plugins enable rabbitmq_management
## the web UI is then reachable at http://localhost:15672
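
To confirm the web UI and the openstack account work, you can query the management API (a sketch, assuming the management plugin enabled above and the password set earlier):

curl -u openstack:openstack@123 http://localhost:15672/api/whoami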

Configure caching (control node)

The Identity service's authentication mechanism uses Memcached to cache tokens. The memcached service typically runs on the controller node.
For production deployments, we recommend enabling a combination of firewalling, authentication, and encryption to secure it.

Install and configure the Memcached cache

Install memcached and python-memcached

[root@control ~]# yum -y install memcached python-memcached

Edit the /etc/sysconfig/memcached file and configure the service to use the controller node's management IP address,
so that other nodes can reach it over the management network.
MAXCONN is the maximum number of connections and can be raised (e.g. to 204800) as needed; CACHESIZE is the cache memory in MB (e.g. 2048) and is not a hard upper limit either.

cat  /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="1024"
OPTIONS="-l 127.0.0.1,::1,control"

Start the Memcached service and configure it to start at boot:

# systemctl enable memcached.service
# systemctl start memcached.service
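
A quick way to confirm memcached is answering on port 11211 (a sketch; assumes nc from the nmap-ncat package is available):

echo stats | nc control 11211 | head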

etcd is not installed for now

OpenStack Train component installation

An OpenStack system consists of several key services that are installed separately. Depending on your cloud's needs these services work together and include compute, identity, networking, image, block storage, object storage, telemetry, orchestration, and database services. You can install any of these projects separately and configure them standalone or as connected entities.

keystone (control node)

Keystone overview

The OpenStack Identity service provides a single point of integration for managing authentication, authorization, and the service catalog; the project name of this service is keystone.
Keystone is usually the first service a user interacts with. Once authenticated, an end user can use their identity to access the other OpenStack services.
Likewise, the other OpenStack services use keystone to verify that users are who they claim to be and to discover where the other services are located in the deployment.

Configure the keystone database

Use the database client to connect to the database server as root:

$ mysql -u root -pHaier@2021

Create the keystone database:

MariaDB [(none)]> CREATE DATABASE keystone;

Grant appropriate access to the keystone database:

MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost'  IDENTIFIED BY 'Keystone123';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%'  IDENTIFIED BY 'Keystone123';
flush privileges;

Exit the database client.

Install and initialize keystone

Install the packages:

yum -y  install openstack-keystone httpd mod_wsgi

Edit /etc/keystone/keystone.conf.
In the [database] section, configure database access:

[database]
connection = mysql+pymysql://keystone:Keystone123@control/keystone
############## Note: control here is a hostname and must resolve and be reachable; you can use 127.0.0.1 instead if it does not.

In the [token] section, configure the Fernet token provider

[token]
provider = fernet

Populate the Identity service database

su -s /bin/sh -c "keystone-manage db_sync" keystone

Initialize the Fernet key repositories:
The --keystone-user and --keystone-group flags specify the operating system user/group that will run keystone. They are provided so keystone can run under a different OS user/group; in the example below we use the keystone user and group.

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
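
As a quick sanity check, the two key repositories created above should now exist and be owned by the keystone user:

ls -l /etc/keystone/fernet-keys/ /etc/keystone/credential-keys/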

Bootstrap the Identity service. Note: Admin_123 is the password I set for the admin user, and control is my hostname (it must be pingable).

keystone-manage bootstrap --bootstrap-password Admin_123  --bootstrap-admin-url http://control:5000/v3/  --bootstrap-internal-url http://control:5000/v3/ --bootstrap-public-url http://control:5000/v3/ --bootstrap-region-id RegionOne 

Configure the Apache HTTP server

vim /etc/httpd/conf/httpd.conf and set the ServerName option to reference the controller node:

ServerName control:80

Create a symlink

ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

Start httpd and enable it at boot

systemctl enable httpd.service
systemctl start httpd.service
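
To confirm keystone is answering behind Apache, you can request the version document (a sketch; it should return a small JSON description of the v3 API):

curl -i http://control:5000/v3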

Configure the administrative account by setting the appropriate environment variables:

[root@control ~]# cat admin.sh
#!/bin/bash
export OS_USERNAME=admin
export OS_PASSWORD=Admin_123
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://control:5000/v3
export OS_IDENTITY_API_VERSION=3
[root@control ~]# source admin.sh

View the keystone endpoint information

[root@control ~]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+-------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                     |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-------------------------+
| 1dca34fefc9a4dafb05c7a564311e0e1 | RegionOne | keystone     | identity     | True    | internal  | http://control:5000/v3/ |
| 60ee25033bfd408faaa863cb225db543 | RegionOne | keystone     | identity     | True    | admin     | http://control:5000/v3/ |
| c1c00ce5f1d6431fb2d04549492597a8 | RegionOne | keystone     | identity     | True    | public    | http://control:5000/v3/ |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-------------------------+

Create domains, projects, users, and roles

The keystone service provides authentication for every OpenStack service. Authentication uses a combination of domains, projects, users, and roles.

Although a "default" domain already exists from the keystone-manage bootstrap step in this guide, the formal way to create a new domain is:

openstack domain create --description "An Example Domain" example

Create the service project:

openstack project create --domain default --description "Service Project" service
Create an unprivileged project and user for regular (non-admin) use: the myproject project and the myuser user
openstack project create --domain default --description "Demo Project" myproject

Create the myuser user and set its password to myuser

[root@control ~]# openstack user create --domain default --password-prompt myuser
User Password: myuser
Repeat User Password: myuser
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 402c85d84c1e4b3a986acc6b75e4988f |
| name                | myuser                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

Create the myrole role:

openstack role create myrole

Add the myrole role to the myproject project and the myuser user:

openstack role add --project myproject --user myuser myrole
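
To double-check the assignment, you can list role assignments for the new user (a sketch using the admin credentials):

openstack role assignment list --user myuser --project myproject --names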

Verify keystone

Unset the temporary OS_AUTH_URL and OS_PASSWORD environment variables:

unset OS_AUTH_URL OS_PASSWORD

As the admin user, request an authentication token:

openstack --os-auth-url http://control:5000/v3 --os-project-domain-name Default --os-user-domain-name Default  --os-project-name admin --os-username admin token issue
Password:Admin_123
Password:Admin_123
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2022-01-27T07:02:48+0000                                                                                                                                                                |
| id         | gAAAAABh8jWIv-CLSdA1G1mUZsRu3Kc4_E7OAsm4It7aCSPf_hchAn0P_pnfSnZkjLOLlyZ-7jUGuFWE2OJRT3-nFOBP_ix5AtdHSBmm2F8kUV-jNVR2Gt__TStt3CIPFUTdJFKaqx6VEV4YyjWBYH9f3ZAzdlzKMrIbjFwBLeN2BssVRYegPew |
| project_id | 6c09f662f3984f6fbe8dba3f0bb0db18                                                                                                                                                        |
| user_id    | a9cff32d7aad4dc8bd6f138f4749df92                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

As myuser, request an authentication token:

[root@control ~]# openstack --os-auth-url http://control:5000/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name myproject --os-username myuser token issue
Password:myuser
Password:myuser
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2022-01-27T07:06:25+0000                                                                                                                                                                |
| id         | gAAAAABh8jZhZ2UKwXkaMsTt2YKWgYaTpU-Of6_EvF8WB0hpg26rtF1nfdRaveM7zAQUeQcDaRR7yM8kt39SinlDc2nvcSeNDFE7PXpwIAGPWdevZ-a1Ak3NLCsizj--HX6s519-FX-MRJVSUcoYomXEkxCPKS7Xkz4FLou7GcsLawwHBnaev5A |
| project_id | 8e9eb6a829724683b2e12f20665f726e                                                                                                                                                        |
| user_id    | 402c85d84c1e4b3a986acc6b75e4988f                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

Create OpenStack client environment scripts

Environment variables for the admin user

[root@control ~]# cat adminenv.sh
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=Admin_123
export OS_AUTH_URL=http://control:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
[root@control ~]# source adminenv.sh
[root@control ~]# openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2022-01-27T07:22:50+0000                                                                                                                                                                |
| id         | gAAAAABh8jo6toJioDiZvO6zVJqQKC-vUg_d6Xpz4Se3hsknCVmkD3iz8k1n9rztjCN0KHF1qvLyyogrqP5K_P95wXS4fbRqwnnM1gtQwY0VaXJomBLGQzf0zvVDv4rsNtOurLFE1ynUh4z3htvhvR0IxluJtwQHgHPe0yS9rM2yR_RSOtJ8SFI |
| project_id | 6c09f662f3984f6fbe8dba3f0bb0db18                                                                                                                                                        |
| user_id    | a9cff32d7aad4dc8bd6f138f4749df92                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

Environment variables for the myuser user

cat myuserenv.sh
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=myuser
export OS_AUTH_URL=http://control:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
[root@control ~]# source myuserenv.sh
[root@control ~]# openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2022-01-27T07:27:15+0000                                                                                                                                                                |
| id         | gAAAAABh8jtD-F2oUoCJK8q4QiUTvWbfZPKWuxwZai7-Pbn8OfhLFTWB0YcxyAfwfY-LCnE4q7ue_ko0je6fcHlWQ3h1CpAGoa-6qZmFKrEl7jcHN18Hd78aoueumpQHEralXkexZz-rNeXxFZWEl6yJX2k8YggkLG7DeYwm-nj3HQ9ykcwPOG0 |
| project_id | 8e9eb6a829724683b2e12f20665f726e                                                                                                                                                        |
| user_id    | 402c85d84c1e4b3a986acc6b75e4988f                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

Glance (control node)

Glance overview

Glance provides the image service for OpenStack; the image service is central to Infrastructure-as-a-Service (IaaS).
It accepts API requests for disk or server images, along with metadata definitions, from end users or the OpenStack compute components. It also supports storing disk or server images on several repository types, including OpenStack Object Storage.

Pre-install environment configuration

Use the database client to connect to the database server as root:

mysql -pHaier@2021

Create the glance database and grant appropriate access to it:

MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'Glance_123';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%'  IDENTIFIED BY 'Glance_123';

Create the glance user:

(Afterwards we will add the admin role to the glance user in the service project.)

[root@control ~]# openstack user create --domain default --password-prompt glance
User Password:Glance_123
Repeat User Password:Glance_123
You are not authorized to perform the requested action: identity:create_user. (HTTP 403) (Request-ID: req-6aa3aa74-239f-4a2b-91d4-ab02e4d9d49d)
# Creating the glance user failed here
The reason is that we had switched to the myuser credentials earlier, which lack the needed privileges. Run source adminenv.sh to switch back to the admin user and its environment variables, then continue.

After running source adminenv.sh:

openstack user create --domain default --password-prompt glance

User Password:Glance_123
Repeat User Password:Glance_123
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 695dbd1431fd4907bcda53116f4a35ec |
| name                | glance                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

Add the admin role to the glance user in the service project:

openstack role add --project service --user glance admin

Create the glance service entity:

[root@control ~]# openstack service create --name glance --description "OpenStack Image" image
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Image                  |
| enabled     | True                             |
| id          | a596ef95820b4c869ffeae438139055f |
| name        | glance                           |
| type        | image                            |
+-------------+----------------------------------+

Create the glance internal, public, and admin endpoints

[root@control ~]# openstack endpoint create --region RegionOne image public http://control:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | d54ed49d885a4f1b865033c37e1035d2 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | a596ef95820b4c869ffeae438139055f |
| service_name | glance                           |
| service_type | image                            |
| url          | http://control:9292              |
+--------------+----------------------------------+

[root@control ~]# openstack endpoint create --region RegionOne image internal http://control:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 08513aff3a8a4394b7e92472c1d4c7bf |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | a596ef95820b4c869ffeae438139055f |
| service_name | glance                           |
| service_type | image                            |
| url          | http://control:9292              |
+--------------+----------------------------------+

[root@control ~]# openstack endpoint create --region RegionOne image admin http://control:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 02d69de04c44427ab31c94fca081a705 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | a596ef95820b4c869ffeae438139055f |
| service_name | glance                           |
| service_type | image                            |
| url          | http://control:9292              |
+--------------+----------------------------------+

View the registered endpoints

[root@control ~]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+-------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                     |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-------------------------+
| 02d69de04c44427ab31c94fca081a705 | RegionOne | glance       | image        | True    | admin     | http://control:9292     |
| 08513aff3a8a4394b7e92472c1d4c7bf | RegionOne | glance       | image        | True    | internal  | http://control:9292     |
| 1dca34fefc9a4dafb05c7a564311e0e1 | RegionOne | keystone     | identity     | True    | internal  | http://control:5000/v3/ |
| 60ee25033bfd408faaa863cb225db543 | RegionOne | keystone     | identity     | True    | admin     | http://control:5000/v3/ |
| c1c00ce5f1d6431fb2d04549492597a8 | RegionOne | keystone     | identity     | True    | public    | http://control:5000/v3/ |
| d54ed49d885a4f1b865033c37e1035d2 | RegionOne | glance       | image        | True    | public    | http://control:9292     |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-------------------------+

Install and configure the glance service

Install the packages:

yum -y install openstack-glance

Note: OpenStack configuration files must not contain Chinese characters, not even in comments.
vim /etc/glance/glance-api.conf and, in the [database] section, configure database access:

[database]
#...
connection = mysql+pymysql://glance:Glance_123@control/glance

Replace GLANCE_DBPASS with the password you chose for the Image service database (Glance_123 here). In the [keystone_authtoken] and [paste_deploy] sections, configure Identity service access:

[keystone_authtoken]
#...
www_authenticate_uri  = http://control:5000
auth_url = http://control:5000
memcached_servers = control:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = Glance_123

### set the authentication flavor
[paste_deploy]
#...
flavor = keystone

In the [glance_store] section, configure the local file system store and the image file location:

[glance_store]
#...
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

Populate the Image service database:

su -s /bin/sh -c "glance-manage db_sync" glance

When the very last line of the output reads "Database is synced successfully", the database has been populated correctly.

Note: ignore any deprecation messages in this output.

To finish the installation, start the Image service and configure it to start at boot

systemctl enable openstack-glance-api.service
systemctl start openstack-glance-api.service

Verify the glance service

Download a test image

wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img

Upload the image

glance image-create --name "cirros4" --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility public

+------------------+----------------------------------------------------------------------------------+
| Property         | Value                                                                            |
+------------------+----------------------------------------------------------------------------------+
| checksum         | 443b7623e27ecf03dc9e01ee93f67afe                                                 |
| container_format | bare                                                                             |
| created_at       | 2022-02-10T01:54:17Z                                                             |
| disk_format      | qcow2                                                                            |
| id               | a89556a0-3922-4f0e-85af-891ac18625ce                                             |
| min_disk         | 0                                                                                |
| min_ram          | 0                                                                                |
| name             | cirros4                                                                          |
| os_hash_algo     | sha512                                                                           |
| os_hash_value    | 6513f21e44aa3da349f248188a44bc304a3653a04122d8fb4535423c8e1d14cd6a153f735bb0982e |
|                  | 2161b5b5186106570c17a9e58b64dd39390617cd5a350f78                                 |
| os_hidden        | False                                                                            |
| owner            | 6c09f662f3984f6fbe8dba3f0bb0db18                                                 |
| protected        | False                                                                            |
| size             | 12716032                                                                         |
| status           | active                                                                           |
| tags             | []                                                                               |
| updated_at       | 2022-02-10T01:54:17Z                                                             |
| virtual_size     | Not available                                                                    |
| visibility       | public                                                                           |
+------------------+----------------------------------------------------------------------------------+
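
The same upload can also be done with the unified openstack client instead of the legacy glance CLI; a roughly equivalent sketch:

openstack image create "cirros4" --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --public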

Verify the image

[root@control ~]# glance image-list
+--------------------------------------+---------+
| ID                                   | Name    |
+--------------------------------------+---------+
| a89556a0-3922-4f0e-85af-891ac18625ce | cirros4 |
+--------------------------------------+---------+

[root@control ~]# openstack image list
+--------------------------------------+---------+--------+
| ID                                   | Name    | Status |
+--------------------------------------+---------+--------+
| a89556a0-3922-4f0e-85af-891ac18625ce | cirros4 | active |
+--------------------------------------+---------+--------+

The "Invalid OpenStack Identity credentials." error

If you get "Invalid OpenStack Identity credentials" when verifying the upload with glance image-create --name "cirros" --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility public,

the cause is usually hostname resolution; add the host-to-IP mapping in /etc/hosts.

Placement (control node)

Placement overview

The Placement service tracks resource usage (compute nodes, storage pools, network pools, and so on), allows defining custom resources, and provides resource allocation services. Before the Stein release, Placement was part of the Nova component. It should be installed before Nova.

Placement pre-install environment configuration

Create the Placement database

mysql -pHaier@2021

MariaDB [(none)]> CREATE DATABASE placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost'  IDENTIFIED BY 'Placement_123';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'Placement_123';
 MariaDB [(none)]> flush privileges;

Create the placement user

[root@control ~]# openstack user create --domain default --password-prompt placement   ## set the password to Placement_123
User Password:Placement_123
Repeat User Password:Placement_123
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 3e14c1d7f54a41daab4fab35779df2b7 |
| name                | placement                        |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

Add the placement user to the service project with the admin role:

openstack role add --project service --user placement admin

Register the Placement API in the service catalog:

 openstack service create --name placement  --description "Placement API" placement
 +-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Placement API                    |
| enabled     | True                             |
| id          | 8d206c2d0e1a4d849683c8c5b4461047 |
| name        | placement                        |
| type        | placement                        |
+-------------+----------------------------------+

Create the Placement internal, public, and admin endpoints


openstack endpoint create --region RegionOne placement public http://control:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 5ebc3a3809b44de1aa4afc4ee17e97a2 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 8d206c2d0e1a4d849683c8c5b4461047 |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://control:8778              |
+--------------+----------------------------------+

openstack endpoint create --region RegionOne placement internal http://control:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 8c6afc4aa9114c10854fc1b8f264718f |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 8d206c2d0e1a4d849683c8c5b4461047 |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://control:8778              |
+--------------+----------------------------------+

openstack endpoint create --region RegionOne placement admin http://control:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 99a24cb5e1614c87a342ee06f2b42ba8 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 8d206c2d0e1a4d849683c8c5b4461047 |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://control:8778              |
+--------------+----------------------------------+

Install and configure Placement

Install the packages:

yum install -y openstack-placement-api

vim /etc/placement/placement.conf and add the following in the [placement_database] section


[placement_database]
# ...
connection = mysql+pymysql://placement:Placement_123@control/placement

In the [api] and [keystone_authtoken] sections, configure Identity service access:

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_url = http://control:5000/v3
memcached_servers = control:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = Placement_123

Populate the placement database:

su -s /bin/sh -c "placement-manage db sync" placement

Ignore this output:
/usr/lib/python2.7/site-packages/pymysql/cursors.py:170: Warning: (1280, u"Name 'alembic_version_pkc' ignored for PRIMARY key.")
  result = self._query(query)

A note on a Train-specific detail

[root@control ~]# httpd -v
Server version: Apache/2.4.6 (CentOS)
Server built:   Jan 25 2022 14:08:43

httpd -v shows that my Apache version is 2.4.6. This change is not in the Train install guide; I found it in the Ocata guide:
https://docs.openstack.org/ocata/install-guide-rdo/nova-controller-install.html

Due to a packaging bug, you must enable access to the Placement API by adding the configuration below to /etc/httpd/conf.d/00-nova-placement-api.conf. In the Train release the file is /etc/httpd/conf.d/00-placement-api.conf; append the same block there.

#.....
<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>

Restart the httpd service

systemctl restart httpd
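
To confirm the Placement API is reachable through Apache, you can request its root document (a sketch; it should return a small JSON list of supported API versions, or an authentication error if the root is protected in your build):

curl http://control:8778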

Verify Placement

[root@control ~]# placement-status upgrade check
+----------------------------------+
| Upgrade Check Results            |
+----------------------------------+
| Check: Missing Root Provider IDs |
| Result: Success                  |
| Details: None                    |
+----------------------------------+
| Check: Incomplete Consumers      |
| Result: Success                  |
| Details: None                    |
+----------------------------------+

nova

Nova overview

Nova (the OpenStack Compute Service) is the core OpenStack service. It manages the compute resources of the cloud environment and the lifecycle of virtual machines.

Install and configure nova on the control node

Pre-install environment configuration

Create the nova_api, nova, and nova_cell0 databases and grant the appropriate access:


mysql  -pHaier@2021

MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost'  IDENTIFIED BY 'Nova_123';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%'  IDENTIFIED BY 'Nova_123';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost'  IDENTIFIED BY 'Nova_123';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%'  IDENTIFIED BY 'Nova_123';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'Nova_123';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'Nova_123';

Create the nova user:

[root@control ~]# openstack user create --domain default --password-prompt nova   ## set the password to Nova_123
User Password:Nova_123
Repeat User Password:Nova_123
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | b8f0052a286a415e9ed57fbcf177610d |
| name                | nova                             |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

Add the nova user to the service project with the admin role:

openstack role add --project service --user nova admin

Register the nova API in the service catalog:

[root@control ~]# openstack service create --name nova --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Compute                |
| enabled     | True                             |
| id          | 87433503d5014fb0853f80eb333477c7 |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+

Create the nova internal, public, and admin endpoints


[root@control ~]# openstack endpoint create --region RegionOne compute public http://control:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | e885ebf165c54d06aafa46e9dd764172 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 87433503d5014fb0853f80eb333477c7 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://control:8774/v2.1         |
+--------------+----------------------------------+
[root@control ~]# openstack endpoint create --region RegionOne compute internal http://control:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | dc5b5c79864041c481d4895a4d8896d1 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 87433503d5014fb0853f80eb333477c7 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://control:8774/v2.1         |
+--------------+----------------------------------+
[root@control ~]# openstack endpoint create --region RegionOne compute admin http://control:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | b26f43a01d96416aabe7e93a0c1f2f61 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 87433503d5014fb0853f80eb333477c7 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://control:8774/v2.1         |
+--------------+----------------------------------+
Install and configure nova

Install the packages (openstack-nova-conductor mediates database access for the compute nodes, openstack-nova-novncproxy provides console access to instances, openstack-nova-scheduler schedules instances onto compute nodes):

yum install -y openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler

vim /etc/nova/nova.conf
In the [DEFAULT] section, enable only the compute and metadata APIs:

[DEFAULT]
enabled_apis = osapi_compute,metadata

In the [api_database] and [database] sections, configure database access:

[api_database]
connection = mysql+pymysql://nova:Nova_123@control/nova_api
[database]
connection = mysql+pymysql://nova:Nova_123@control/nova

In the [DEFAULT] section, configure RabbitMQ message queue access (the password here must match the one set earlier with rabbitmqctl add_user):

[DEFAULT]
transport_url = rabbit://openstack:openstack_mq_123@control:5672/

In the [api] and [keystone_authtoken] sections, configure Identity service access:

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://control:5000/
auth_url = http://control:5000/
memcached_servers = control:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = Nova_123

In the [DEFAULT] section, configure the my_ip option to use the management interface IP address of the controller node:

[DEFAULT]
my_ip = 192.168.100.103

In the [DEFAULT] section, enable support for the Networking service:

[DEFAULT]
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

In the [vnc] section of /etc/nova/nova.conf, configure the VNC proxy to use the management interface IP address of the controller node:

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

In the [glance] section, configure the location of the Image service API:

[glance]
api_servers = http://control:9292

In the [oslo_concurrency] section, configure the lock path:

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

In the [placement] section, configure access to the Placement service:

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://control:5000/v3
username = placement
password = Placement_123

Populate the nova databases

su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova
Ignore any known-bug messages in the output.
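
The upstream install guide also verifies that cell0 and cell1 registered correctly; the check is:

su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova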

Verify the data synchronization

 mysql -pHaier@2021
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| glance             |
| information_schema |
| keystone           |
| mysql              |
| nova               |
| nova_api           |
| nova_cell0         |
| performance_schema |
| placement          |
+--------------------+
9 rows in set (0.001 sec)

# check whether tables exist in each of the three databases to confirm the sync worked
MariaDB [(none)]>  use nova; show tables;      
MariaDB [(none)]>  use nova_api; show tables;
MariaDB [(none)]>  use nova_cell0; show tables;

Start the services

systemctl enable openstack-nova-api.service  openstack-nova-scheduler.service  openstack-nova-conductor.service  openstack-nova-novncproxy.service
systemctl start  openstack-nova-api.service  openstack-nova-scheduler.service  openstack-nova-conductor.service  openstack-nova-novncproxy.service

Install and configure nova on the compute nodes

Install the packages:

yum -y  install openstack-nova-compute

vim /etc/nova/nova.conf

In the [DEFAULT] section, enable only the compute and metadata APIs

[DEFAULT]
enabled_apis = osapi_compute,metadata

In the [DEFAULT] section, configure RabbitMQ message queue access:

[DEFAULT]
transport_url = rabbit://openstack:openstack_mq_123@control

In the [api] and [keystone_authtoken] sections, configure Identity service access:

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://control:5000/
auth_url = http://control:5000/
memcached_servers = control:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = Nova_123

In the [DEFAULT] section, configure the my_ip option:

[DEFAULT]
my_ip = 192.168.100.101

Replace it with the IP address of the management network interface on the compute node

In the [DEFAULT] section, enable support for the Networking service:

[DEFAULT]
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

In the [vnc] section, enable and configure remote console access:

[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://control:6080/vnc_auto.html

The server component listens on all IP addresses, while the proxy component only listens on the management interface IP address of the compute node

In the [glance] section, configure the location of the Image service API:

[glance]
api_servers = http://control:9292

In the [oslo_concurrency] section, configure the lock path:

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

In the [placement] section, configure the Placement API:

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://control:5000/v3
username = placement
password = Placement_123

Check whether the CPU supports hardware virtualization

egrep -c '(vmx|svm)' /proc/cpuinfo

If the command above returns 0 (no hardware virtualization support), add the following:

[libvirt]
virt_type = qemu

Start the Compute service and its dependencies and configure them to start automatically at boot:

systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service

Verification on the control node

### load the admin user environment variables
source adminenv.sh
### confirm the compute host is present in the database
[root@control ~]# openstack compute service list --service nova-compute
+----+--------------+----------+------+---------+-------+----------------------------+
| ID | Binary       | Host     | Zone | Status  | State | Updated At                 |
+----+--------------+----------+------+---------+-------+----------------------------+
|  6 | nova-compute | compute1 | nova | enabled | up    | 2022-02-10T15:17:56.000000 |
+----+--------------+----------+------+---------+-------+----------------------------+

Discover compute hosts, i.e. manually add the new compute node to the OpenStack cluster (this must be done for every newly added compute node)

su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

Whenever you add a new compute node, run nova-manage cell_v2 discover_hosts on the controller node to register it. Alternatively, set an appropriate interval in /etc/nova/nova.conf:

[scheduler]
discover_hosts_in_cells_interval = 300

Restart the nova services on the control node
systemctl restart openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

neutron

Neutron overview

Neutron is the OpenStack networking component. Its design goal is "Networking as a Service". To achieve this it follows the principle of SDN-based network virtualization and makes extensive use of the networking technologies available on Linux. With it, network administrators and cloud operators can define virtual network devices programmatically and dynamically. The SDN component of OpenStack networking was originally called Quantum, but it was renamed Neutron because of a trademark conflict.

Install and configure neutron on the control node

Create the neutron database and grant the appropriate access

[root@control ~]# mysql  -pHaier@2021
MariaDB [(none)]> CREATE DATABASE neutron;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost'   IDENTIFIED BY 'Neutron_123';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%'  IDENTIFIED BY 'Neutron_123';

Load the admin user environment variables

source adminenv.sh

Create the neutron user with the password Neutron_123

openstack user create --domain default --password-prompt neutron
User Password:Neutron_123
Repeat User Password:Neutron_123
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 3f2d8117dee640f19dc4c26e1b78df43 |
| name                | neutron                          |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

Add the neutron user to the service project with the admin role:

openstack role add --project service --user neutron admin

Register the neutron API in the service catalog:

openstack service create --name neutron --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Networking             |
| enabled     | True                             |
| id          | 68cd149f0cc14094998dc3c5876822aa |
| name        | neutron                          |
| type        | network                          |
+-------------+----------------------------------+

Create the neutron internal, public, and admin endpoints

$ openstack endpoint create --region RegionOne  network public http://control:9696

$ openstack endpoint create --region RegionOne   network internal http://control:9696

$ openstack endpoint create --region RegionOne network admin http://control:9696

Install the packages

yum install -y openstack-neutron openstack-neutron-ml2  openstack-neutron-linuxbridge ebtables

Configure the /etc/neutron/neutron.conf file

[database]
connection = mysql+pymysql://neutron:Neutron_123@control/neutron

In the [DEFAULT] section, enable the Modular Layer 2 (ML2) plug-in and disable additional plug-ins

[DEFAULT]
core_plugin = ml2
service_plugins =
Configure the neutron server component

The neutron server component configuration includes the database, authentication mechanism, message queue, topology change notifications, and plug-ins.

In the [DEFAULT] section, configure RabbitMQ message queue access:

[DEFAULT]
transport_url = rabbit://openstack:openstack_mq_123@control

Replace the password with the one you chose for the openstack account in RabbitMQ; it must match the password set earlier with rabbitmqctl add_user.

In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:

[DEFAULT]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://control:5000
auth_url = http://control:5000
memcached_servers = control:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = Neutron_123

In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of network topology changes:

[DEFAULT]
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[nova]
auth_url = http://control:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = Nova_123

In the [oslo_concurrency] section, configure the lock path:

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Configure the Modular Layer 2 (ML2) plug-in

Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and complete the following

In the [ml2] section, enable flat and VLAN networks:

[ml2]
type_drivers = flat,vlan

In the [ml2] section, disable self-service networks:

[ml2]
tenant_network_types = 

In the [ml2] section, enable the Linux bridge mechanism:

[ml2]
mechanism_drivers = linuxbridge

In the [ml2] section, enable the port security extension driver:

[ml2]
extension_drivers = port_security

In the [ml2_type_flat] section, configure the provider virtual network as a flat network; extnet is a name of your choosing:

[ml2_type_flat]
flat_networks = extnet

In the [securitygroup] section, enable ipset to improve the efficiency of security group rules:

[securitygroup]
enable_ipset = true
Configure the Linux bridge agent

The Linux bridge agent builds the layer-2 (bridging and switching) virtual network infrastructure for instances and handles security groups.
Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following

In the [linux_bridge] section, map the provider virtual network to the provider physical network interface:

[linux_bridge]
physical_interface_mappings = extnet:ens33

In the [vxlan] section, disable VXLAN overlay networks:

[vxlan]
enable_vxlan = false

In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

vim /etc/sysctl.conf and make sure your Linux kernel supports bridge filtering:

net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

To enable bridge filtering support, the br_netfilter kernel module usually needs to be loaded.
modprobe br_netfilter
sysctl -p
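
modprobe only loads the module for the current boot; to make it persistent across reboots, one option (an assumption, not from the original guide) is a modules-load.d drop-in on each node that needs it:

echo br_netfilter > /etc/modules-load.d/br_netfilter.conf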

Configure the DHCP agent

Edit the /etc/neutron/dhcp_agent.ini file and complete the following:

In the [DEFAULT] section, configure the Linux bridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata so that instances on provider networks can reach metadata over the network:

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
Configure the metadata agent

Edit /etc/neutron/metadata_agent.ini.
In the [DEFAULT] section, configure the metadata host and the shared secret:

[DEFAULT]
nova_metadata_host = control
metadata_proxy_shared_secret = xier123
Configure nova on the control node

Edit the /etc/nova/nova.conf file and do the following.
In the [neutron] section, configure the access parameters, enable the metadata proxy, and configure the shared secret:

[neutron]
auth_url = http://control:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = Neutron_123
service_metadata_proxy = true
metadata_proxy_shared_secret = xier123

Create a symlink

 ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Populate the database

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf   --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

Restart the Compute API service:

systemctl restart openstack-nova-api.service

Start the networking services and configure them to start at boot.

systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
Verify on the control node
openstack network agent list

Configure neutron on the compute nodes

Install the packages

yum install openstack-neutron-linuxbridge ebtables ipset -y

Configure /etc/neutron/neutron.conf

[DEFAULT]
transport_url = rabbit://openstack:openstack_mq_123@control

[DEFAULT]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://control:5000
auth_url = http://control:5000
memcached_servers = control:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = Neutron_123

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

Configure /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = extnet:ens33
[vxlan]
enable_vxlan = false

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Configure the kernel parameters: vim /etc/sysctl.conf

net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

sysctl -p

Configure /etc/nova/nova.conf

[neutron]
auth_url = http://control:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = Neutron_123

Restart the services and enable them at boot

systemctl restart openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent.service; systemctl start neutron-linuxbridge-agent.service

Verify from the control node with this command

[root@control neutron]# openstack network agent list
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host     | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
| 8d21ab0a-f1e1-4dcd-850b-eb9d57b7cbf5 | Linux bridge agent | compute1 | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 9ac8dd3e-69b6-4e02-8633-c485f51a46f9 | Linux bridge agent | control  | None              | :-)   | UP    | neutron-linuxbridge-agent |
| b135d9c2-aef9-429a-bea7-d5167d3ed6a7 | DHCP agent         | control  | nova              | :-)   | UP    | neutron-dhcp-agent        |
| ed855e3b-c8a6-4735-9b29-369d9f73126f | Metadata agent     | control  | None              | :-)   | UP    | neutron-metadata-agent    |
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+

Create an instance to test OpenStack

openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano

If you do not have a key pair yet, create one

ssh-keygen -q -N ""  # just press Enter at every prompt

Upload the public key to OpenStack so it can be injected into instances

openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey

Verify that the key pair was added:

openstack keypair list

Add security group rules

openstack security group rule create --proto icmp default

openstack security group rule create --proto tcp --dst-port 22 default
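
To confirm the two rules were added to the default security group, you can list them (a sketch):

openstack security group rule list default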

Create the network

openstack network create  --share --external --provider-physical-network extnet --provider-network-type flat flat-extnet

openstack subnet create --network flat-extnet --allocation-pool start=192.168.100.5,end=192.168.100.99 --dns-nameserver 114.114.114.114 --gateway 192.168.100.254 --subnet-range 192.168.100.0/24 flat-subnet

Check the image name

[root@control .ssh]# openstack image list
+--------------------------------------+---------+--------+
| ID                                   | Name    | Status |
+--------------------------------------+---------+--------+
| a89556a0-3922-4f0e-85af-891ac18625ce | cirros4 | active |
+--------------------------------------+---------+--------+

Check the network name and ID

[root@control .ssh]# openstack network list
+--------------------------------------+-------------+--------------------------------------+
| ID                                   | Name        | Subnets                              |
+--------------------------------------+-------------+--------------------------------------+
| 4bd4ab74-2a54-457d-8642-c6f9d15808fb | flat-extnet | d1ab62be-fe61-4397-aa9e-1b40d98f5c04 |
+--------------------------------------+-------------+--------------------------------------+

Create the instance

openstack server create --flavor m1.nano --image cirros4 --nic net-id=4bd4ab74-2a54-457d-8642-c6f9d15808fb --security-group default --key-name mykey vm1

List the created instance with the following command:

[root@control .ssh]# openstack server list
+--------------------------------------+------+--------+----------------------------+---------+---------+
| ID                                   | Name | Status | Networks                   | Image   | Flavor  |
+--------------------------------------+------+--------+----------------------------+---------+---------+
| 9ba9d8e7-1ecd-42ab-84a6-93a84b328456 | vm1  | ACTIVE | flat-extnet=192.168.100.80 | cirros4 | m1.nano |
+--------------------------------------+------+--------+----------------------------+---------+---------+

Fixing an instance that fails to boot (run this on the compute node)

virsh capabilities   # lists the machine types the host supports

vim /etc/nova/nova.conf
[libvirt]
hw_machine_type = x86_64=pc-i440fx-rhel7.2.0  # change the machine type
cpu_mode = host-passthrough      # pass the host CPU straight through; not recommended, it improves performance but breaks isolation between VMs

Restart the nova services:
systemctl restart openstack-nova-*
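
After the restart it is worth confirming that nova-compute came back up. A quick check from the control node, assuming the admin credentials are sourced:

openstack compute service list   # nova-compute on this host should show State up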

Create a second instance, vm2, with the following command:

openstack server create --flavor m1.nano --image cirros4 --nic net-id=4bd4ab74-2a54-457d-8642-c6f9d15808fb --security-group default --key-name mykey vm2

Get the console URL of vm2 with:

[root@control .ssh]# openstack console url show vm2
+-------+----------------------------------------------------------------------------------------+
| Field | Value                                                                                  |
+-------+----------------------------------------------------------------------------------------+
| type  | novnc                                                                                  |
| url   | http://control:6080/vnc_auto.html?path=%3Ftoken%3D9a7908c7-2651-47c9-88f4-2153332772e7 |
+-------+----------------------------------------------------------------------------------------+

Open that URL in a browser and log in to the instance with the password shown in the prompt. The instance can reach the .103 host, so from .103 you can also SSH directly to 192.168.100.83, since the key pair was already injected into the instance earlier.
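
For reference, the SSH check from the .103 host might look like this; the cirros login name comes from the CirrOS image used here and the IP is the one assigned to this instance:

ssh cirros@192.168.100.83   # key-based login, no password prompt if the keypair was injected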

Dashboard installation

Install the service on the control node

yum install openstack-dashboard -y

Configure the dashboard

Edit /etc/openstack-dashboard/local_settings and complete the following settings:

OPENSTACK_HOST = "control"  ### point the dashboard at the control node
ALLOWED_HOSTS = ['*']     ### allow any host to access the dashboard

Configure the memcached session storage service:

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'control:11211',
    }
}

Configure the API versions

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}

Configure the default domain and role

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

OPENSTACK_NEUTRON_NETWORK = {
    ...
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}

Configure the time zone

TIME_ZONE = "Asia/Shanghai"

Configure the dashboard URL path

WEBROOT = "/dashboard"

Configure openstack-dashboard.conf

vim  /etc/httpd/conf.d/openstack-dashboard.conf  
WSGIApplicationGroup %{GLOBAL}

Restart the services

systemctl restart httpd.service memcached.service

The dashboard is now reachable at http://192.168.100.103/dashboard. Log in with the domain default (set earlier), user admin, password Admin_123.

High-availability ideas and approach

1. Adding compute nodes: there are plenty of write-ups online and it looks fairly straightforward.
2. Expanding the control plane to three nodes is more involved; roughly it requires:
   three MariaDB masters with master/slave replication
   high availability in front of the services via keepalived + HAProxy/LVS/nginx or similar
   making the RabbitMQ message queue highly available (not figured out yet)
3. Build the ceph server.
4. Add ceph to the system as the disk backend for instances.

Issue log:

Password issues when setting the rabbitmq, nova, and other service passwords during the OpenStack install

The passwords set for OpenStack's service accounts are read back from the various configuration files.
OpenStack imposes no complexity requirements, so a password without special characters works fine;
if a password must contain special characters, only the following are supported: & = $ - _ . + ! * ( )

Because I had originally set the MQ password to openstack@123, nova later failed to connect and synchronize data. You can either escape the character or change the MQ password; escaping is worth trying first.
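
Since transport_url is parsed as a URL, an alternative to changing the password is to percent-encode the special character. A minimal sketch assuming the original openstack@123 password, with '@' encoded as %40 so it is not mistaken for the user/host separator:

[DEFAULT]
transport_url = rabbit://openstack:openstack%40123@control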

Record of the vm1 creation failure

Use virsh capabilities to see which machine types the host supports, then set a supported type in nova.conf on the compute node.

[root@compute1 ~]# virsh capabilities
.......
      <wordsize>32</wordsize>
      <emulator>/usr/libexec/qemu-kvm</emulator>
      <machine maxCpus='240'>pc-i440fx-rhel7.6.0</machine>
      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240'>pc</machine>
      <machine maxCpus='240'>pc-i440fx-rhel7.0.0</machine>
      <machine maxCpus='384'>pc-q35-rhel7.6.0</machine>
      <machine canonical='pc-q35-rhel7.6.0' maxCpus='384'>q35</machine>
      <machine maxCpus='240'>rhel6.3.0</machine>
      <machine maxCpus='240'>rhel6.4.0</machine>
      <machine maxCpus='240'>rhel6.0.0</machine>
      <machine maxCpus='240'>pc-i440fx-rhel7.5.0</machine>
      <machine maxCpus='240'>pc-i440fx-rhel7.1.0</machine>
      <machine maxCpus='240'>pc-i440fx-rhel7.2.0</machine>
      <machine maxCpus='255'>pc-q35-rhel7.3.0</machine>
      <machine maxCpus='240'>rhel6.5.0</machine>
      <machine maxCpus='384'>pc-q35-rhel7.4.0</machine>
      <machine maxCpus='240'>rhel6.6.0</machine>
      <machine maxCpus='240'>rhel6.1.0</machine>
      <machine maxCpus='240'>rhel6.2.0</machine>
      <machine maxCpus='240'>pc-i440fx-rhel7.3.0</machine>
      <machine maxCpus='240'>pc-i440fx-rhel7.4.0</machine>
      <machine maxCpus='384'>pc-q35-rhel7.5.0</machine>
      <domain type='qemu'/>
      <domain type='kvm'>
        <emulator>/usr/libexec/qemu-kvm</emulator>
      </domain>
    </arch>
    <features>
      <cpuselection/>
      <deviceboot/>
      <disksnapshot default='on' toggle='no'/>
      <acpi default='on' toggle='yes'/>
      <apic default='on' toggle='no'/>
      <pae/>
      <nonpae/>
    </features>
  </guest>

  <guest>
    <os_type>hvm</os_type>
    <arch name='x86_64'>
      <wordsize>64</wordsize>
      <emulator>/usr/libexec/qemu-kvm</emulator>
      <machine maxCpus='240'>pc-i440fx-rhel7.6.0</machine>
      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240'>pc</machine>
      <machine maxCpus='240'>pc-i440fx-rhel7.0.0</machine>
      <machine maxCpus='384'>pc-q35-rhel7.6.0</machine>
      <machine canonical='pc-q35-rhel7.6.0' maxCpus='384'>q35</machine>
      <machine maxCpus='240'>rhel6.3.0</machine>
      <machine maxCpus='240'>rhel6.4.0</machine>
      <machine maxCpus='240'>rhel6.0.0</machine>
      <machine maxCpus='240'>pc-i440fx-rhel7.5.0</machine>
      <machine maxCpus='240'>pc-i440fx-rhel7.1.0</machine>
      <machine maxCpus='240'>pc-i440fx-rhel7.2.0</machine>
      <machine maxCpus='255'>pc-q35-rhel7.3.0</machine>
      <machine maxCpus='240'>rhel6.5.0</machine>
      <machine maxCpus='384'>pc-q35-rhel7.4.0</machine>
      <machine maxCpus='240'>rhel6.6.0</machine>
      <machine maxCpus='240'>rhel6.1.0</machine>
      <machine maxCpus='240'>rhel6.2.0</machine>
      <machine maxCpus='240'>pc-i440fx-rhel7.3.0</machine>
      <machine maxCpus='240'>pc-i440fx-rhel7.4.0</machine>
      <machine maxCpus='384'>pc-q35-rhel7.5.0</machine>
      <domain type='qemu'/>
      <domain type='kvm'>
        <emulator>/usr/libexec/qemu-kvm</emulator>
      </domain>
    </arch>
    <features>
      <cpuselection/>
      <deviceboot/>
      <disksnapshot default='on' toggle='no'/>
      <acpi default='on' toggle='yes'/>
      <apic default='on' toggle='no'/>
    </features>
  </guest>

</capabilities>

The <machine> entries above list all of the machine types the hypervisor supports (maxCpus is just the CPU limit for each type).
Configure one of them via vim /etc/nova/nova.conf:
[libvirt]
hw_machine_type = x86_64=pc-i440fx-rhel7.2.0  # change the machine type

After changing this, restart the nova services on the compute node and instances can be created normally again.

The Train neutron.conf is missing the [nova] section; refer to the Ocata sample:

https://docs.openstack.org/ocata/config-reference/networking/samples/neutron.conf.html

The Train ml2_conf.ini is missing the [ml2] section; refer to the Ocata sample:

https://docs.openstack.org/ocata/config-reference/networking/samples/ml2_conf.ini.html

The Train linuxbridge_agent.ini file is incomplete; refer to the Ocata sample:

https://docs.openstack.org/ocata/config-reference/networking/samples/linuxbridge_agent.ini.html


Reference
https://blog.csdn.net/weixin_50344843/article/details/113198834