Problem description
I am trying to deploy the all-in-one configuration using kolla-ansible with Ceph enabled:
enable_ceph: "yes"
#enable_ceph_mds: "no"
enable_ceph_rgw: "yes"
#enable_ceph_nfs: "no"
enable_ceph_dashboard: "{{ enable_ceph | bool }}"
#enable_chrony: "yes"
enable_cinder: "yes"
enable_cinder_backup: "yes"
glance_backend_ceph: "yes"
gnocchi_backend_storage: "{{ 'ceph' if enable_ceph|bool else 'file' }}"
cinder_backend_ceph: "{{ enable_ceph }}"
cinder_backup_driver: "ceph"
nova_backend_ceph: "{{ enable_ceph }}"
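(For reference, the deployment itself is run with the usual kolla-ansible all-in-one commands; the inventory path below is an assumption:)
kolla-ansible -i ./all-in-one bootstrap-servers
kolla-ansible -i ./all-in-one prechecks
kolla-ansible -i ./all-in-one deploy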
My setup is a VirtualBox VM running Ubuntu 18.04.4 Desktop, with 2 CPU cores, a single 30 GB disk, 2 GB of RAM, and an msdos partition table.
ansible version==2.9.7
kolla-ansible version==9.1.0
In order to install a Ceph OSD using kolla-ansible, I read that a partition should be given the name KOLLA_CEPH_OSD_BOOTSTRAP_BS.
Hence, I created a 20 GB root partition (/dev/sda1), an extended partition (/dev/sda2) for the remaining 20 GB, and then two 10 GB logical partitions (/dev/sda5 and /dev/sda6) for the OSDs. But with an msdos partition table there is no way to assign names to partitions.
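For example, parted's name command, the usual way to set a partition label, only works on GPT (and a few other) disk label types, so an attempt like the following fails on an msdos disk (partition number 5 here refers to /dev/sda5 above):
parted /dev/sda name 5 KOLLA_CEPH_OSD_BOOTSTRAP_BS   # fails: msdos disk labels do not support partition names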
So my questions are:
- How do I label the partitions on an msdos partition table so that kolla-ansible can recognise that /dev/sda5 and /dev/sda6 are intended for the Ceph OSDs?
- Is a separate storage drive for the Ceph OSDs mandatory, as opposed to using the drive that holds the operating system (I know keeping everything on a single disk is not recommended)?
- How do I have to lay out the space on a single-drive HD in order to install a Ceph OSD with kolla-ansible?
P.S.: I also tried to install Ceph using kolla-ansible on an OpenStack VM (4 CPU cores, 80 GB of disk space on a single drive, since I didn't install Cinder in my OpenStack infra) with an Ubuntu 18.04.4 cloud image, which uses a GPT partition table and supports naming partitions. The partitions were as follows:
/dev/vda1 for root partition
/dev/vda2 for ceph OSD
/dev/vda3 for ceph OSD
But the drawback was that kolla-ansible wiped the complete disk, and the installation failed.
Any help is highly appreciated. Thanks a lot in advance.
Recommended answer
I have also installed a kolla-ansible single-node all-in-one with Ceph as the storage backend, so I had the same problem.
Yes, the bluestore installation of Ceph doesn't work with a single partition. I also tried different ways of labeling, but for me it only worked with a whole disk instead of a partition. So for your virtual setup, create a whole new disk, for example /dev/vdb.
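If the VM runs in VirtualBox, the extra disk can be created and attached from the host, roughly like this (a sketch; the VM name kolla-aio, the controller name SATA and the image file name are assumptions about your environment):
# create a new 20 GB disk image (size is given in MB)
VBoxManage createmedium disk --filename ceph-osd.vdi --size 20480
# attach it to the powered-off VM as a second disk
VBoxManage storageattach "kolla-aio" --storagectl "SATA" --port 1 --device 0 --type hdd --medium ceph-osd.vdi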
For labeling, I used the following bash script:
#!/bin/bash
DEV="/dev/vdb"
(
echo g # create GPT partition table
echo n # new partition
echo # partition number (automatic)
echo # start sector (automatic)
echo +10G # end sector (use 10G size)
echo w # write changes
) | fdisk $DEV
parted $DEV -- name 1 KOLLA_CEPH_OSD_BOOTSTRAP_BS
Be aware that DEV at the beginning is set correctly for your setup. The script creates a new partition table and one 10 GB partition on the new disk. The kolla-ansible deploy run registers the label and then wipes the whole disk, so the size value doesn't really matter; it is only for the temporary partition on the disk.
A single disk is enough for the Ceph OSD in kolla-ansible; you don't need a second OSD. For this, add the following config file to your kolla-ansible setup at /etc/kolla/config/ceph.conf (assuming the default kolla installation path), with the content:
[global]
osd pool default size = 1
osd pool default min size = 1
This makes sure that kolla-ansible only requires one OSD. If your kolla directory with the globals.yml is not under /etc/kolla/, you have to adjust the path of the config file accordingly.
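Once the deployment has finished, the single-OSD cluster and the pool replication settings can be checked from the monitor container (ceph_mon is the container name kolla-ansible normally uses; treat it as an assumption):
docker exec ceph_mon ceph -s
docker exec ceph_mon ceph osd pool ls detail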
The solution for a setup with one single disk and multiple partitions is to switch the storage type of the Ceph storage in the kolla-ansible setup from bluestore to the older filestore OSD type. This also requires different partition labels, as described here: https://docs.openstack.org/kolla-ansible/rocky/reference/ceph-guide.html#using-an-external-journal-drive. With filestore you need one partition with the label KOLLA_CEPH_OSD_BOOTSTRAP_FOO and a small journal partition with the label KOLLA_CEPH_OSD_BOOTSTRAP_FOO_J (the FOO in the name really is required; a labeling sketch follows the inventory snippet below). To switch your kolla installation to the filestore OSD type, edit the [storage] section of the all-in-one file and add ceph_osd_store_type=filestore next to the host, as follows, to override the default bluestore:
[storage]
localhost ansible_connection=local ceph_osd_store_type=filestore
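As referenced above, the two filestore partitions could be prepared roughly like this (a sketch assuming a fresh GPT disk /dev/vdb and arbitrarily chosen sizes; adjust the device and sizes to your disk):
DEV="/dev/vdb"
parted -s $DEV mklabel gpt
# data partition for the OSD
parted -s $DEV mkpart primary 1MiB 15GiB
parted -s $DEV name 1 KOLLA_CEPH_OSD_BOOTSTRAP_FOO
# small journal partition
parted -s $DEV mkpart primary 15GiB 20GiB
parted -s $DEV name 2 KOLLA_CEPH_OSD_BOOTSTRAP_FOO_J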
The above method has been tested with ansible==2.9.7 and kolla-ansible==9.1.0 on the OpenStack Train release and prior releases.