We are going to learn how to install Red Hat Ceph Storage 3.3 on a Red Hat Enterprise Linux 7.7 system, configuring a cluster, and also the open-source Ceph Mimic release, which we will later upgrade to Nautilus.
We will go through the complete procedure for configuring a Ceph cluster, as well as some basic commands for object and block storage.
What is Ceph?
Ceph is open-source software for storing data as objects.
If that sounds like gibberish to you, it is what many cloud platforms are using today. The best-known example is Amazon AWS with its S3 service, but you can also find it behind services such as Google Drive.
To explain object storage as simply as possible: when we upload a document to Google Drive, it is stored as an object that carries a series of metadata such as the date, the size, the location (path), the owner, who it is shared with, when it expires, the version, and so on.
If we upload a new version of the same file, the original is not overwritten; instead we end up with two different versions that share the same name. That way of treating files is the essence of object storage, and each version takes up its own storage space.
If we did the same thing with operating system commands, that is, copied a file to a path where another file with the same name already exists, the original would simply be overwritten without keeping any other version. This type of storage is known as «block storage». As we can see, it is very different from object storage.
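To make the object model a little more concrete, once a cluster is up and running we can store and retrieve data as named objects with the rados command-line tool (the pool name below is just an illustrative example):
rados -p mypool put informe.pdf ./informe.pdf      # store the local file as the object "informe.pdf"
rados -p mypool ls                                 # list the objects stored in the pool
rados -p mypool get informe.pdf /tmp/informe.pdf   # read the object back into a local file
Keeping several versions under the same name, as described above, is handled at the S3-compatible gateway layer; the raw rados commands simply address each object by its name.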
Advantages of Ceph
- It provides unlimited storage with replication, even across geographic zones.
- Block storage (a disk is presented to the client, but with the Ceph architecture behind the device).
- Storage through the Ceph Filesystem (similar to NFS, but with the Ceph architecture behind it).
- Object storage, compatible with the Amazon S3 API.
- It is an ideal storage backend for Big Data systems.
- It is a storage system with a fault-tolerant architecture.
Ceph concepts to know beforehand
Before starting the Ceph installation, it is worth being clear about a few concepts so that we can better understand what we are doing.
- Monitor nodes: the monitoring service for all the nodes that form part of the Ceph cluster. It runs the ceph-mon daemon. Every node connected to the cluster receives a copy of its configuration, known as the cluster map. Without this service it would not be possible to guarantee high availability, which is why it is important to keep a backup of the cluster map.
- OSD nodes (Object Storage Device): the service that provides the storage space for all the objects. In this guide we will use the LVM-based layout (we will see it later). The daemon responsible is called ceph-osd.
- MDS nodes (MetaData Server): the service that stores the metadata used by CephFS. The daemon is called ceph-mds.
- Object Gateway node: ceph-radosgw is the service that provides distributed object storage and lets us interact with it to store data through an API (RADOS), compatible with S3 and Swift.
- MGR: ceph-mgr is a daemon that runs on the monitoring servers (MON) to provide additional information about the state of the cluster. Installing it is not mandatory, but it is highly recommended. For example, we can enable the Grafana or Prometheus modules. There are many of them; we can list them with ceph mgr module ls and enable the one we want with ceph mgr module enable prometheus (assuming the module we want to enable is the Prometheus one).
[[email protected] ~]# ceph mgr services
{
"dashboard": "https://10.0.1.212:8443/",
"prometheus": "http://10.0.1.212:9283/"
}
[[email protected] ~]#
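As a quick reference, this is roughly how the MGR modules mentioned above are listed and enabled (the dashboard and prometheus modules are just examples of commonly used ones):
ceph mgr module ls                   # list enabled and available modules
ceph mgr module enable dashboard     # enable the web dashboard
ceph mgr module enable prometheus    # expose metrics, on port 9283 by default
ceph mgr services                    # show the URLs published by the enabled modules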
Architecture

Storage types we can use with Ceph:

Different ways of installing Ceph on Linux
As I mentioned earlier, we will use two Red Hat Enterprise Linux 7 nodes and version 3.3 of Ceph as distributed by Red Hat.
The two nodes that will form the cluster have the following names and IPs:
10.0.1.228 ceph1
10.0.1.132 ceph2
We can download the Ceph ISO free of charge from this link. Of course, if we want support from Red Hat, we will have to pay for it.
MANUAL installation of Ceph
Prerequisites
Opening the communication ports
By default, the Ceph monitors listen on TCP port 6789, and the OSD and other daemons use the port range 6800-7300. We will also have to open the port used by Apache if we use object storage.
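With firewalld, opening those ports on both nodes would look roughly like this (assuming the default public zone is in use):
firewall-cmd --zone=public --permanent --add-port=6789/tcp        # ceph-mon
firewall-cmd --zone=public --permanent --add-port=6800-7300/tcp   # OSD, MGR and MDS daemons
firewall-cmd --reload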
Creating the yum repositories
You can install Ceph either with Ansible or manually from the ISO we downloaded earlier. I prefer to do it manually, so the first thing I am going to do is create the local yum repositories (I have not paid for support for this demo) so that I can install the product.
createrepo -g /Ceph3.3/MON/repodata/repomd.xml /Ceph3.3/MON
createrepo -g /Ceph3.3/OSD/repodata/repomd.xml /Ceph3.3/OSD
createrepo -g /Ceph3.3/Tools/repodata/repomd.xml /Ceph3.3/Tools
[[email protected] Ceph3.3]# cat /etc/yum.repos.d/ceph3.3mon.repo
[Ceph3.3mon]
name=RedHat Ceph Monitor 3.3
baseurl="file:///Ceph3.3/MON"
enabled=1
gpgcheck=0
[[email protected] Ceph3.3]#
[[email protected] yum.repos.d]# cat /etc/yum.repos.d/ceph3.3osd.repo
[Ceph3.3osd]
name=RedHat Ceph OSD 3.3
baseurl="file:///Ceph3.3/OSD"
enabled=1
gpgcheck=0
[[email protected] yum.repos.d]#
[[email protected] ~]# cat /etc/yum.repos.d/ceph3.3tools.repo
[Ceph3.3tools]
name=RedHat Ceph Tools 3.3
baseurl="file:///Ceph3.3/Tools"
enabled=1
gpgcheck=0
[[email protected] ~]#
[[email protected] ~]# yum repolist
Loaded plugins: amazon-id, rhui-lb
Ceph3.3osd | 3.6 kB 00:00:00
Ceph3.3tools | 3.6 kB 00:00:00
(1/4): Ceph3.3osd/group_gz | 1.2 kB 00:00:00
(2/4): Ceph3.3osd/primary_db | 27 kB 00:00:00
(3/4): Ceph3.3tools/group_gz | 1.2 kB 00:00:00
(4/4): Ceph3.3tools/primary_db | 38 kB 00:00:00
repo id repo name status
Ceph3.3mon RedHat Ceph Monitor 3.3 37
Ceph3.3osd RedHat Ceph OSD 3.3 27
Ceph3.3tools RedHat Ceph Tools 3.3 48
rhui-REGION-client-config-server-7/x86_64 Red Hat Update Infrastructure 2.0 Client Configuration Server 7 4
rhui-REGION-rhel-server-releases/7Server/x86_64 Red Hat Enterprise Linux Server 7 (RPMs) 26,742
rhui-REGION-rhel-server-rh-common/7Server/x86_64 Red Hat Enterprise Linux Server 7 RH Common (RPMs) 239
repolist: 27,097
[[email protected] ~]#
I carry out this operation on both nodes of the cluster.
Installing the Ceph RPM packages
Once the repositories have been created, I can run yum to install all the Ceph modules and a few requirements:
yum install ceph-mon ceph-mgr ceph-osd ceph-common ceph-radosgw ceph-mds httpd ntp mod_fcgid python-boto python-distribute
NOTE: Beforehand I had to manually download, from the RedHat 7 «extras» repository, a couple of packages that were needed for the previous command to run correctly: python-itsdangerous-0.23-1.el7.noarch.rpm and python-werkzeug-0.9.1-1.el7.noarch.rpm
We will also download the ceph-deploy package separately, from the Red Hat repository or from the official documentation https://docs.ceph.com/en/octopus/install/ceph-deploy/quick-start-preflight/#rhel-centos.

rpm -i ceph-deploy python-itsdangerous-0.23-1.el7.noarch.rpm python-werkzeug-0.9.1-1.el7.noarch.rpm
Starting the NTP service
First of all, we enable the NTP service on both nodes.
[[email protected] ~]# systemctl enable ntpd
[[email protected] ~]# systemctl start ntpd
[[email protected] ~]# systemctl enable ntpd
[[email protected] ~]# systemctl start ntpd
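To confirm that both nodes are really synchronizing their clocks, a quick check like this helps:
ntpq -p        # the peer marked with "*" is the currently selected time source
timedatectl    # should report "NTP synchronized: yes"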
Learn how to configure an NTP server on Linux.
Configuring the Ceph cluster
We now have the software installed. The next step is to configure it so that it works.
We start by creating the cluster configuration file (ceph-mon)
The first basic configuration step is to assign a unique ID to the cluster and to specify the hostnames of the nodes that will form it.
[[email protected] ~]# uuidgen
6bbb8f28-46f4-4faa-8b36-bd598df8b57a
[[email protected] ~]#
[[email protected] ~]# cat /etc/ceph/ceph.conf
[global]
fsid = 6bbb8f28-46f4-4faa-8b36-bd598df8b57a
mon_initial_members = ceph1,ceph2
[[email protected] ~]#
[[email protected] ~]# scp -p ceph.conf ceph2:$PWD
ceph.conf 100% 87 0.1KB/s 00:00
[[email protected] ~]#
The other node must hold exactly the same data, so we can simply copy the file over as it is.
We generate the security keys needed to access the ceph-mon service:
[[email protected] ceph]# ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
creating /tmp/ceph.mon.keyring
[[email protected] ceph]#
[[email protected] ceph]# ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
creating /etc/ceph/ceph.client.admin.keyring
[[email protected] ceph]#
[[email protected] ceph]# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
importing contents of /etc/ceph/ceph.client.admin.keyring into /tmp/ceph.mon.keyring
[[email protected] ceph]#
We copy the keys to the second node of the cluster:
[[email protected] ceph]# scp -p ceph.client.admin.keyring ceph2:$PWD
ceph.client.admin.keyring 100% 137 0.1KB/s 00:00
[[email protected] ceph]#
[[email protected] ceph]# scp -p /tmp/ceph.mon.keyring ceph2:/tmp
ceph.mon.keyring 100% 214 0.2KB/s 00:00
[[email protected] ceph]#
We now compile, or generate, the first version of the Ceph cluster; in other words, we build the initial monitor map (ceph-mon), as mentioned in the basic concepts:
[[email protected] ceph]# monmaptool --create --add ceph1 10.0.1.228 --add ceph2 10.0.1.132 --fsid 6bbb8f28-46f4-4faa-8b36-bd598df8b57a /tmp/monmap
monmaptool: monmap file /tmp/monmap
monmaptool: set fsid to 6bbb8f28-46f4-4faa-8b36-bd598df8b57a
monmaptool: writing epoch 0 to /tmp/monmap (2 monitors)
[[email protected] ceph]#
[[email protected]]# scp /tmp/monmap ceph2:/tmp
monmap 100% 288 0.3KB/s 00:00
[[email protected]]#
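Before using the map, we can optionally print it back to verify that both monitors and the fsid were recorded as expected:
monmaptool --print /tmp/monmap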
Before starting the service, we must create the directory structure where it will store its data:
[[email protected]]# mkdir /var/lib/ceph/mon/ceph-ceph1
[[email protected]]# mkdir /var/lib/ceph/mon/ceph-ceph2
[[email protected] ~]# ceph-mon --mkfs -i ceph1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
[[email protected] ~]# ll /var/lib/ceph/mon/ceph-ceph1/
total 8
-rw------- 1 root root 77 Jan 2 15:08 keyring
-rw-r--r-- 1 root root 8 Jan 2 15:08 kv_backend
drwxr-xr-x 2 root root 106 Jan 2 15:08 store.db
[[email protected] ~]#
[[email protected] ~]# ceph-mon --mkfs -i ceph2 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
At this point we can add the two monitor nodes, with their respective names and IPs, to the cluster configuration file:
[[email protected] ~]# cat /etc/ceph/ceph.conf
[global]
fsid = 6bbb8f28-46f4-4faa-8b36-bd598df8b57a
mon_initial_members = ceph1,ceph2
mon_host = 10.0.1.228,10.0.1.132
public_network = 10.0.1.0/24
auth_cluster_required = none
auth_service_required = none
auth_client_required = none
osd_journal_size = 1024
osd_pool_default_size = 3
osd_pool_default_min_size = 1
[[email protected] ~]#
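If we want to double-check that a given option is being picked up from this file, ceph-conf can query it directly (a quick verification aside, not part of the original run):
ceph-conf --cluster ceph --lookup public_network
ceph-conf --cluster ceph --lookup osd_pool_default_size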
We will run Ceph as the operating system user ceph, so we change the ownership of the files we have created so far:
[[email protected] ~]# chown -R ceph:ceph /var/lib/ceph/mon
[[email protected] ~]# chown -R ceph:ceph /var/log/ceph
[[email protected] ~]# chown -R ceph:ceph /var/run/ceph
[[email protected] ~]# chown ceph:ceph /etc/ceph/ceph.client.admin.keyring
[[email protected] ~]# chown ceph:ceph /etc/ceph/ceph.conf
[[email protected] ~]# chown ceph:ceph /etc/ceph/rbdmap
[[email protected] ~]# echo "CLUSTER=ceph" >> /etc/sysconfig/ceph
[[email protected] ~]# scp -p /etc/ceph/ceph.conf ceph2:/etc/ceph
[[email protected] ~]# chown -R ceph:ceph /var/lib/ceph/mon
[[email protected] ~]# chown -R ceph:ceph /var/log/ceph
[[email protected] ~]# chown -R ceph:ceph /var/run/ceph
[[email protected] ~]# chown ceph:ceph /etc/ceph/ceph.client.admin.keyring
[[email protected] ~]# chown ceph:ceph /etc/ceph/ceph.conf
[[email protected] ~]# chown ceph:ceph /etc/ceph/rbdmap
[[email protected] ~]# echo "CLUSTER=ceph" >> /etc/sysconfig/ceph
Finally, we start the ceph-mon service:
[[email protected] ~]# systemctl enable ceph-mon.target
[[email protected] ~]# systemctl enable ceph-mon@ceph1
[[email protected]]# systemctl restart ceph-mon@ceph1
[[email protected] ~]# systemctl enable ceph-mon.target
[[email protected] ~]# systemctl enable ceph-mon@ceph2
[[email protected]]# systemctl restart ceph-mon@ceph2
[[email protected] ~]# systemctl status ceph-mon@ceph1
? [email protected] - Ceph cluster monitor daemon
Loaded: loaded (/usr/lib/systemd/system/[email protected]; disabled; vendor preset: disabled)
Active: active (running) since Thu 2020-01-02 15:25:05 EST; 7s ago
Main PID: 2626 (ceph-mon)
CGroup: /system.slice/system-ceph\x2dmon.slice/[email protected]
+-2626 /usr/bin/ceph-mon -f --cluster ceph --id ceph1 --setuser ceph --setgroup ceph
Jan 02 15:25:05 ip-10-0-1-228.eu-west-1.compute.internal systemd[1]: Stopped Ceph cluster monitor daemon.
Jan 02 15:25:05 ip-10-0-1-228.eu-west-1.compute.internal systemd[1]: Started Ceph cluster monitor daemon.
[[email protected] ~]#
[[email protected] ~]# systemctl status ceph-mon@ceph2
? [email protected] - Ceph cluster monitor daemon
Loaded: loaded (/usr/lib/systemd/system/[email protected]; disabled; vendor preset: disabled)
Active: active (running) since Thu 2020-01-02 15:25:48 EST; 5s ago
Main PID: 2312 (ceph-mon)
CGroup: /system.slice/system-ceph\x2dmon.slice/[email protected]
+-2312 /usr/bin/ceph-mon -f --cluster ceph --id ceph2 --setuser ceph --setgroup ceph
Jan 02 15:25:48 ip-10-0-1-132.eu-west-1.compute.internal systemd[1]: Started Ceph cluster monitor daemon.
[[email protected] ~]#
And we check that the cluster status is correct, although there are still a few services left to configure:
[[email protected] ~]# ceph -s
cluster:
id: 6bbb8f28-46f4-4faa-8b36-bd598df8b57a
health: HEALTH_OK
services:
mon: 2 daemons, quorum ceph2,ceph1
mgr: no daemons active
osd: 0 osds: 0 up, 0 in
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0B
usage: 0B used, 0B / 0B avail
pgs:
[[email protected] ~]#
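To look specifically at the monitor quorum rather than the whole cluster, these commands give more detail:
ceph mon stat                             # epoch, monitor list and current quorum members
ceph quorum_status --format json-pretty   # detailed quorum and monmap information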
Manual configuration of the OSD service (Object Storage Devices)
We start by generating security keys, just as we did for the previous service:
[[email protected] ~]# ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
creating /var/lib/ceph/bootstrap-osd/ceph.keyring
[[email protected] ~]#
[[email protected] ~]# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
importing contents of /etc/ceph/ceph.client.admin.keyring into /tmp/ceph.mon.keyring
[[email protected] ~]#
[[email protected] ~]# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
importing contents of /var/lib/ceph/bootstrap-osd/ceph.keyring into /tmp/ceph.mon.keyring
[[email protected] ~]#
[[email protected] ~]# scp /var/lib/ceph/bootstrap-osd/ceph.keyring ceph2:/var/lib/ceph/bootstrap-osd/
ceph.keyring 100% 129 0.1KB/s 00:00
[[email protected] ~]#
[[email protected] ~]# chown -R ceph:ceph /var/lib/ceph/bootstrap-osd
[[email protected] ~]# chown -R ceph:ceph /var/lib/ceph/bootstrap-osd
We create an LVM structure for the object storage:
[[email protected] ~]# ceph-volume lvm create --data /dev/xvdb
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new f9188759-8b0e-42d8-b2de-e0ae490c94ac
Running command: vgcreate --force --yes ceph-8a8d0b9e-6db7-4d6c-863e-6f2a86ed75e9 /dev/xvdb
stdout: Physical volume "/dev/xvdb" successfully created.
stdout: Volume group "ceph-8a8d0b9e-6db7-4d6c-863e-6f2a86ed75e9" successfully created
Running command: lvcreate --yes -l 100%FREE -n osd-block-f9188759-8b0e-42d8-b2de-e0ae490c94ac ceph-8a8d0b9e-6db7-4d6c-863e-6f2a86ed75e9
stdout: Logical volume "osd-block-f9188759-8b0e-42d8-b2de-e0ae490c94ac" created.
Running command: /bin/ceph-authtool --gen-print-key
Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Running command: restorecon /var/lib/ceph/osd/ceph-1
Running command: chown -h ceph:ceph /dev/ceph-8a8d0b9e-6db7-4d6c-863e-6f2a86ed75e9/osd-block-f9188759-8b0e-42d8-b2de-e0ae490c94ac
Running command: chown -R ceph:ceph /dev/dm-0
Running command: ln -s /dev/ceph-8a8d0b9e-6db7-4d6c-863e-6f2a86ed75e9/osd-block-f9188759-8b0e-42d8-b2de-e0ae490c94ac /var/lib/ceph/osd/ceph-1/block
Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
stderr: got monmap epoch 1
Running command: ceph-authtool /var/lib/ceph/osd/ceph-1/keyring --create-keyring --name osd.1 --add-key AQC+2w5ekX0tBhAAkFoi48j6zap8Hi+O7m2Fcg==
stdout: creating /var/lib/ceph/osd/ceph-1/keyring
added entity osd.1 auth auth(auid = 18446744073709551615 key=AQC+2w5ekX0tBhAAkFoi48j6zap8Hi+O7m2Fcg== with 0 caps)
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid f9188759-8b0e-42d8-b2de-e0ae490c94ac --setuser ceph --setgroup ceph
--> ceph-volume lvm prepare successful for: /dev/xvdb
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-8a8d0b9e-6db7-4d6c-863e-6f2a86ed75e9/osd-block-f9188759-8b0e-42d8-b2de-e0ae490c94ac --path /var/lib/ceph/osd/ceph-1
Running command: ln -snf /dev/ceph-8a8d0b9e-6db7-4d6c-863e-6f2a86ed75e9/osd-block-f9188759-8b0e-42d8-b2de-e0ae490c94ac /var/lib/ceph/osd/ceph-1/block
Running command: chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Running command: chown -R ceph:ceph /dev/dm-0
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Running command: systemctl enable [email protected]
stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/[email protected] to /usr/lib/systemd/system/[email protected]
Running command: systemctl enable --runtime [email protected]
stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/[email protected] to /usr/lib/systemd/system/[email protected]
Running command: systemctl start [email protected]
--> ceph-volume lvm activate successful for osd ID: 1
--> ceph-volume lvm create successful for: /dev/xvdb
[[email protected] ~]#
[[email protected] ~]# vgs
VG #PV #LV #SN Attr VSize VFree
ceph-8a8d0b9e-6db7-4d6c-863e-6f2a86ed75e9 1 1 0 wz--n- <3.00g 0
[[email protected] ~]#
[[email protected] ~]# ceph-volume lvm create --data /dev/xvdb
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 9988a3dc-6f46-4c67-9314-92c5bccc5208
Running command: vgcreate --force --yes ceph-a7c61b7f-5f8c-4882-b090-cb21ed52a459 /dev/xvdb
stdout: Physical volume "/dev/xvdb" successfully created.
stdout: Volume group "ceph-a7c61b7f-5f8c-4882-b090-cb21ed52a459" successfully created
Running command: lvcreate --yes -l 100%FREE -n osd-block-9988a3dc-6f46-4c67-9314-92c5bccc5208 ceph-a7c61b7f-5f8c-4882-b090-cb21ed52a459
stdout: Wiping xfs signature on /dev/ceph-a7c61b7f-5f8c-4882-b090-cb21ed52a459/osd-block-9988a3dc-6f46-4c67-9314-92c5bccc5208.
stdout: Logical volume "osd-block-9988a3dc-6f46-4c67-9314-92c5bccc5208" created.
Running command: /bin/ceph-authtool --gen-print-key
Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Running command: restorecon /var/lib/ceph/osd/ceph-2
Running command: chown -h ceph:ceph /dev/ceph-a7c61b7f-5f8c-4882-b090-cb21ed52a459/osd-block-9988a3dc-6f46-4c67-9314-92c5bccc5208
Running command: chown -R ceph:ceph /dev/dm-0
Running command: ln -s /dev/ceph-a7c61b7f-5f8c-4882-b090-cb21ed52a459/osd-block-9988a3dc-6f46-4c67-9314-92c5bccc5208 /var/lib/ceph/osd/ceph-2/block
Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
stderr: got monmap epoch 1
Running command: ceph-authtool /var/lib/ceph/osd/ceph-2/keyring --create-keyring --name osd.2 --add-key AQBB3A5eaq+NBBAAFnCQz6ftyeTi/4wBEGEnSg==
stdout: creating /var/lib/ceph/osd/ceph-2/keyring
added entity osd.2 auth auth(auid = 18446744073709551615 key=AQBB3A5eaq+NBBAAFnCQz6ftyeTi/4wBEGEnSg== with 0 caps)
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 9988a3dc-6f46-4c67-9314-92c5bccc5208 --setuser ceph --setgroup ceph
--> ceph-volume lvm prepare successful for: /dev/xvdb
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-a7c61b7f-5f8c-4882-b090-cb21ed52a459/osd-block-9988a3dc-6f46-4c67-9314-92c5bccc5208 --path /var/lib/ceph/osd/ceph-2
Running command: ln -snf /dev/ceph-a7c61b7f-5f8c-4882-b090-cb21ed52a459/osd-block-9988a3dc-6f46-4c67-9314-92c5bccc5208 /var/lib/ceph/osd/ceph-2/block
Running command: chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Running command: chown -R ceph:ceph /dev/dm-0
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Running command: systemctl enable [email protected]
stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/[email protected] to /usr/lib/systemd/system/[email protected]
Running command: systemctl enable --runtime [email protected]
stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/[email protected] to /usr/lib/systemd/system/[email protected]
Running command: systemctl start [email protected]
--> ceph-volume lvm activate successful for osd ID: 2
--> ceph-volume lvm create successful for: /dev/xvdb
[[email protected] ~]#
[[email protected] ~]# vgs
VG #PV #LV #SN Attr VSize VFree
ceph-a7c61b7f-5f8c-4882-b090-cb21ed52a459 1 1 0 wz--n- <3.00g 0
[[email protected] ~]#
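To review what ceph-volume created on each node, and to confirm that both OSDs have joined the cluster, we can run:
ceph-volume lvm list   # on each node: OSD id, fsid and backing logical volume
ceph osd tree          # from any node: the OSDs should appear as up and in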
We create the keys and data directories for the MGR daemons, prepare the OSD data directories, and start the OSD service:
[[email protected] ~]# ceph auth get-or-create mgr.ceph1 mon 'allow profile mgr' osd 'allow *' mds 'allow *'
[mgr.ceph1]
key = AQAIVg5eSFOoChAAb9feCbW+fwUczzGeE5XGBg==
[[email protected] ~]#
[[email protected] ~]# ceph auth get-or-create mgr.ceph2 mon 'allow profile mgr' osd 'allow *' mds 'allow *'
[mgr.ceph2]
key = AQBsVg5eaEt4GBAAJXIo6RjDn6StRkNvjVQmFg==
[[email protected] ~]#
[[email protected] ~]#
[[email protected] ~]# mkdir -p /var/lib/ceph/mgr/ceph-ceph1
[[email protected] ~]# mkdir -p /var/lib/ceph/mgr/ceph-ceph2
[[email protected] ~]# cat /var/lib/ceph/mgr/ceph-ceph1/keyring
[mgr.ceph1]
key = AQAIVg5eSFOoChAAb9feCbW+fwUczzGeE5XGBg==
[[email protected] ~]# chown -R ceph:ceph /var/lib/ceph/mgr
[[email protected] ~]# cat /var/lib/ceph/mgr/ceph-ceph2/keyring
[mgr.ceph2]
key = AQBsVg5eaEt4GBAAJXIo6RjDn6StRkNvjVQmFg==
[[email protected] ~]#
[[email protected] ~]# mkdir /var/lib/ceph/osd/ceph-1
[[email protected] ~]# mkdir /var/lib/ceph/osd/ceph-2
[[email protected] ~]# chown -R ceph:ceph /var/lib/ceph*
[[email protected] ~]# chown -R ceph:ceph /var/lib/ceph*
[[email protected] ~]# systemctl restart ceph-osd@1
[[email protected] ~]# systemctl status ceph-osd@1
? [email protected] - Ceph object storage daemon osd.1
Loaded: loaded (/usr/lib/systemd/system/[email protected]; enabled; vendor preset: disabled)
Active: active (running) since Fri 2020-01-03 02:12:06 EST; 9s ago
Process: 4316 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
Main PID: 4321 (ceph-osd)
CGroup: /system.slice/system-ceph\x2dosd.slice/[email protected]
+-4321 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph
Jan 03 02:12:06 ip-10-0-1-228.eu-west-1.compute.internal systemd[1]: Stopped Ceph object storage daemon osd.1.
Jan 03 02:12:06 ip-10-0-1-228.eu-west-1.compute.internal systemd[1]: Starting Ceph object storage daemon osd.1...
Jan 03 02:12:06 ip-10-0-1-228.eu-west-1.compute.internal systemd[1]: Started Ceph object storage daemon osd.1.
Jan 03 02:12:06 ip-10-0-1-228.eu-west-1.compute.internal ceph-osd[4321]: 2020-01-03 02:12:06.197168 7fc6681e3d80 -1 Public network was set, but cluster network was not set
Jan 03 02:12:06 ip-10-0-1-228.eu-west-1.compute.internal ceph-osd[4321]: 2020-01-03 02:12:06.197172 7fc6681e3d80 -1 Using public network also for cluster network
Jan 03 02:12:06 ip-10-0-1-228.eu-west-1.compute.internal ceph-osd[4321]: starting osd.1 at - osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Jan 03 02:12:06 ip-10-0-1-228.eu-west-1.compute.internal ceph-osd[4321]: 2020-01-03 02:12:06.309694 7fc6681e3d80 -1 osd.1 39 log_to_monitors {default=true}
[[email protected] ~]#
[[email protected] ~]# systemctl restart ceph-osd@2
[[email protected] ~]# systemctl status ceph-osd@2
? [email protected] - Ceph object storage daemon osd.2
Loaded: loaded (/usr/lib/systemd/system/[email protected]; enabled-runtime; vendor preset: disabled)
Active: active (running) since Fri 2020-01-03 02:13:02 EST; 4s ago
Process: 4754 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
Main PID: 4759 (ceph-osd)
CGroup: /system.slice/system-ceph\x2dosd.slice/[email protected]
+-4759 /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup ceph
Jan 03 02:13:02 ip-10-0-1-132.eu-west-1.compute.internal systemd[1]: Stopped Ceph object storage daemon osd.2.
Jan 03 02:13:02 ip-10-0-1-132.eu-west-1.compute.internal systemd[1]: Starting Ceph object storage daemon osd.2...
Jan 03 02:13:02 ip-10-0-1-132.eu-west-1.compute.internal systemd[1]: Started Ceph object storage daemon osd.2.
Jan 03 02:13:02 ip-10-0-1-132.eu-west-1.compute.internal ceph-osd[4759]: 2020-01-03 02:13:02.293503 7ff953e92d80 -1 Public network was set, but cluster network was not set
Jan 03 02:13:02 ip-10-0-1-132.eu-west-1.compute.internal ceph-osd[4759]: 2020-01-03 02:13:02.293510 7ff953e92d80 -1 Using public network also for cluster network
Jan 03 02:13:02 ip-10-0-1-132.eu-west-1.compute.internal ceph-osd[4759]: starting osd.2 at - osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Jan 03 02:13:02 ip-10-0-1-132.eu-west-1.compute.internal ceph-osd[4759]: 2020-01-03 02:13:02.418944 7ff953e92d80 -1 osd.2 41 log_to_monitors {default=true}
[[email protected] ~]#
If we check the cluster status again, we can see that the OSD service now appears:
[[email protected] ~]# ceph -s
cluster:
id: 6bbb8f28-46f4-4faa-8b36-bd598df8b57a
health: HEALTH_WARN
no active mgr
services:
mon: 2 daemons, quorum ceph2,ceph1
mgr: no daemons active
osd: 3 osds: 2 up, 2 in
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0B
usage: 0B used, 0B / 0B avail
pgs:
[[email protected] ~]#
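The MGR keyrings and directories were prepared earlier, but the daemons have not been started yet, which is why ceph -s still reports «no active mgr». A minimal sketch of starting them with the standard systemd units (this step is not shown in the original output):
systemctl enable ceph-mgr@ceph1 && systemctl start ceph-mgr@ceph1   # on ceph1
systemctl enable ceph-mgr@ceph2 && systemctl start ceph-mgr@ceph2   # on ceph2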
Manual configuration of the MDS service (the metadata)
This is the easiest service of all to configure:
[[email protected] ~]# mkdir -p /var/lib/ceph/mds/ceph-ceph1
[[email protected] ~]# ceph-authtool --create-keyring /var/lib/ceph/mds/ceph-ceph1/keyring --gen-key -n mds.ceph1
creating /var/lib/ceph/mds/ceph-ceph1/keyring
[[email protected] ~]#
[[email protected] ~]# ceph auth add mds.ceph1 osd 'allow rwx' mds 'allow' mon 'allow profile mds' -i /var/lib/ceph/mds/ceph-ceph1/keyring
added key for mds.ceph1
[[email protected] ~]#
[[email protected] ~]# chown -R ceph:ceph /var/lib/ceph/mds/
[[email protected] ~]# systemctl restart ceph-mds@ceph1
[[email protected] ~]# systemctl status ceph-mds@ceph1
? [email protected] - Ceph metadata server daemon
Loaded: loaded (/usr/lib/systemd/system/[email protected]; disabled; vendor preset: disabled)
Active: active (running) since Fri 2020-01-03 02:25:33 EST; 5s ago
Main PID: 5111 (ceph-mds)
CGroup: /system.slice/system-ceph\x2dmds.slice/[email protected]
+-5111 /usr/bin/ceph-mds -f --cluster ceph --id ceph1 --setuser ceph --setgroup ceph
Jan 03 02:25:33 ip-10-0-1-228.eu-west-1.compute.internal systemd[1]: Started Ceph metadata server daemon.
Jan 03 02:25:33 ip-10-0-1-228.eu-west-1.compute.internal ceph-mds[5111]: starting mds.ceph1 at -
[[email protected] ~]#
[[email protected] ~]# mkdir -p /var/lib/ceph/mds/ceph-ceph2
[[email protected] ~]# ceph-authtool --create-keyring /var/lib/ceph/mds/ceph-ceph2/keyring --gen-key -n mds.ceph2
creating /var/lib/ceph/mds/ceph-ceph2/keyring
[[email protected] ~]#
[[email protected] ~]# ceph auth add mds.ceph2 osd 'allow rwx' mds 'allow' mon 'allow profile mds' -i /var/lib/ceph/mds/ceph-ceph2/keyring
added key for mds.ceph2
[[email protected] ~]#
[[email protected] ~]# chown -R ceph:ceph /var/lib/ceph/mds/
[[email protected] ~]# systemctl restart ceph-mds@ceph2
[[email protected] ~]# systemctl status ceph-mds@ceph2
? [email protected] - Ceph metadata server daemon
Loaded: loaded (/usr/lib/systemd/system/[email protected]; disabled; vendor preset: disabled)
Active: active (running) since Fri 2020-01-03 02:26:16 EST; 6s ago
Main PID: 5371 (ceph-mds)
CGroup: /system.slice/system-ceph\x2dmds.slice/[email protected]
+-5371 /usr/bin/ceph-mds -f --cluster ceph --id ceph2 --setuser ceph --setgroup ceph
Jan 03 02:26:16 ip-10-0-1-132.eu-west-1.compute.internal systemd[1]: Started Ceph metadata server daemon.
Jan 03 02:26:16 ip-10-0-1-132.eu-west-1.compute.internal ceph-mds[5371]: starting mds.ceph2 at -
[[email protected] ~]#
Once it has started correctly, we add it to the cluster configuration file:
[[email protected] ~]# cat /etc/ceph/ceph.conf
[global]
fsid = 6bbb8f28-46f4-4faa-8b36-bd598df8b57a
mon_initial_members = ceph1,ceph2
mon_host = 10.0.1.228,10.0.1.132
public_network = 10.0.1.0/24
auth_cluster_required = none
auth_service_required = none
auth_client_required = none
osd_journal_size = 1024
osd_pool_default_size = 3
osd_pool_default_min_size = 1
[mds.ceph1]
host = ceph1
[mds.ceph2]
host = ceph2
[[email protected] ~]#
[[email protected] ~]# systemctl restart [email protected]
[[email protected] ~]# systemctl restart [email protected]
[[email protected] ~]# systemctl restart [email protected]
The same configuration goes on node ceph2.
We check the status of the MDS service:
[[email protected] ~]# ceph mds stat
, 2 up:standby
[[email protected] ~]#
[[email protected] ~]# ceph -s
cluster:
id: 6bbb8f28-46f4-4faa-8b36-bd598df8b57a
health: HEALTH_OK
services:
mon: 2 daemons, quorum ceph2,ceph1
mgr: ceph1(active), standbys: ceph2
osd: 3 osds: 2 up, 2 in
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0B
usage: 2.00GiB used, 3.99GiB / 5.99GiB avail
pgs:
[[email protected] ~]#
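Both MDS daemons appear as standby because no CephFS filesystem has been created yet. A minimal sketch of creating one (the pool names and PG counts are illustrative):
ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 64
ceph fs new cephfs cephfs_metadata cephfs_data
ceph mds stat   # one MDS should now become active and the other remain on standby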
AUTOMATED installation of Ceph with CEPH-DEPLOY
NOTE: In recent community releases (Octopus and later, including Pacific), ceph-deploy is deprecated and cephadm is used instead.
In the prerequisites we had already installed the ceph-deploy package, but we did not use it during the manual installation of Ceph.
ceph-deploy automates the configuration of all the nodes in the Ceph cluster. Let's see how it works:
Defining the nodes that will form part of the cluster
We run the ceph-deploy command as the root user, as follows:
[[email protected] ~]# ceph-deploy new ceph1 ceph2
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.36): /bin/ceph-deploy new ceph1 ceph2
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] func : <function new at 0x7fc8c22ca230>
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x17cecf8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] ssh_copykey : True
[ceph_deploy.cli][INFO ] mon : ['ceph1', 'ceph2']
[ceph_deploy.cli][INFO ] public_network : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] cluster_network : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] fsid : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[ceph1][DEBUG ] connected to host: ceph1
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph1][DEBUG ] find the location of an executable
[ceph1][INFO ] Running command: /usr/sbin/ip link show
[ceph1][INFO ] Running command: /usr/sbin/ip addr show
[ceph1][DEBUG ] IP addresses found: [u'10.0.1.137', u'10.0.1.228']
[ceph_deploy.new][DEBUG ] Resolving host ceph1
[ceph_deploy.new][DEBUG ] Monitor ceph1 at 10.0.1.228
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[ceph2][DEBUG ] connected to host: ceph1
[ceph2][INFO ] Running command: ssh -CT -o BatchMode=yes ceph2
[ceph2][DEBUG ] connected to host: ceph2
[ceph2][DEBUG ] detect platform information from remote host
[ceph2][DEBUG ] detect machine type
[ceph2][DEBUG ] find the location of an executable
[ceph2][INFO ] Running command: /usr/sbin/ip link show
[ceph2][INFO ] Running command: /usr/sbin/ip addr show
[ceph2][DEBUG ] IP addresses found: [u'10.0.1.132', u'10.0.1.58']
[ceph_deploy.new][DEBUG ] Resolving host ceph2
[ceph_deploy.new][DEBUG ] Monitor ceph2 at 10.0.1.132
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph1', 'ceph2']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['10.0.1.228', '10.0.1.132']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
[[email protected] ~]#
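Since each node has more than one IP address (as the output above shows), it can be useful to pin the public network when creating the cluster definition, for example (optional, not used in this run):
ceph-deploy new --public-network 10.0.1.0/24 ceph1 ceph2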
We copy the Ceph binaries to all the nodes of the cluster
[[email protected] ~]# ceph-deploy install ceph1 ceph2
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.36): /bin/ceph-deploy install ceph1 ceph2
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] testing : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1d06290>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] dev_commit : None
[ceph_deploy.cli][INFO ] install_mds : False
[ceph_deploy.cli][INFO ] stable : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] adjust_repos : True
[ceph_deploy.cli][INFO ] func : <function install at 0x7f6c1956b500>
[ceph_deploy.cli][INFO ] install_all : False
[ceph_deploy.cli][INFO ] repo : False
[ceph_deploy.cli][INFO ] host : ['ceph1', 'ceph2']
[ceph_deploy.cli][INFO ] install_rgw : False
[ceph_deploy.cli][INFO ] install_tests : False
[ceph_deploy.cli][INFO ] repo_url : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] install_osd : False
[ceph_deploy.cli][INFO ] version_kind : stable
[ceph_deploy.cli][INFO ] install_common : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] dev : master
[ceph_deploy.cli][INFO ] nogpgcheck : False
[ceph_deploy.cli][INFO ] local_mirror : None
[ceph_deploy.cli][INFO ] release : None
[ceph_deploy.cli][INFO ] install_mon : False
[ceph_deploy.cli][INFO ] gpg_url : None
[ceph_deploy.install][DEBUG ] Installing stable version jewel on cluster ceph hosts ceph1 ceph2
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph1 ...
[ceph1][DEBUG ] connected to host: ceph1
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: Red Hat Enterprise Linux Server 7.2 Maipo
[ceph1][INFO ] installing Ceph on ceph1
[ceph1][INFO ] Running command: yum clean all
[ceph1][DEBUG ] Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
[ceph1][DEBUG ] Cleaning repos: Ceph3.3mon Ceph3.3osd Ceph3.3tools
[ceph1][DEBUG ] : rhui-REGION-client-config-server-7
[ceph1][DEBUG ] : rhui-REGION-rhel-server-releases
[ceph1][DEBUG ] : rhui-REGION-rhel-server-rh-common
[ceph1][DEBUG ] Cleaning up everything
[ceph1][INFO ] Running command: yum -y install ceph-osd ceph-mds ceph-mon ceph-radosgw
[ceph1][DEBUG ] Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
[ceph1][DEBUG ] Package 2:ceph-osd-12.2.12-45.el7cp.x86_64 already installed and latest version
[ceph1][DEBUG ] Package 2:ceph-mds-12.2.12-45.el7cp.x86_64 already installed and latest version
[ceph1][DEBUG ] Package 2:ceph-mon-12.2.12-45.el7cp.x86_64 already installed and latest version
[ceph1][DEBUG ] Package 2:ceph-radosgw-12.2.12-45.el7cp.x86_64 already installed and latest version
[ceph1][DEBUG ] Nothing to do
[ceph1][INFO ] Running command: ceph --version
[ceph1][DEBUG ] ceph version 12.2.12-45.el7cp (60e2063ab367d6d71e55ea3b3671055c4a8cde2f) luminous (stable)
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph2 ...
[ceph2][DEBUG ] connected to host: ceph2
[ceph2][DEBUG ] detect platform information from remote host
[ceph2][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: Red Hat Enterprise Linux Server 7.2 Maipo
[ceph2][INFO ] installing Ceph on ceph2
[ceph2][INFO ] Running command: yum clean all
[ceph2][DEBUG ] Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
[ceph2][DEBUG ] Cleaning repos: Ceph3.3mon Ceph3.3osd Ceph3.3tools
[ceph2][DEBUG ] : rhui-REGION-client-config-server-7
[ceph2][DEBUG ] : rhui-REGION-rhel-server-releases
[ceph2][DEBUG ] : rhui-REGION-rhel-server-rh-common
[ceph2][DEBUG ] Cleaning up everything
[ceph2][INFO ] Running command: yum -y install ceph-osd ceph-mds ceph-mon ceph-radosgw
[ceph2][DEBUG ] Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
[ceph2][DEBUG ] Package 2:ceph-osd-12.2.12-45.el7cp.x86_64 already installed and latest version
[ceph2][DEBUG ] Package 2:ceph-mds-12.2.12-45.el7cp.x86_64 already installed and latest version
[ceph2][DEBUG ] Package 2:ceph-mon-12.2.12-45.el7cp.x86_64 already installed and latest version
[ceph2][DEBUG ] Package 2:ceph-radosgw-12.2.12-45.el7cp.x86_64 already installed and latest version
[ceph2][DEBUG ] Nothing to do
[ceph2][INFO ] Running command: ceph --version
[ceph2][DEBUG ] ceph version 12.2.12-45.el7cp (60e2063ab367d6d71e55ea3b3671055c4a8cde2f) luminous (stable)
[[email protected] ~]#
Configuring the monitoring service (MON)
[[email protected] ~]# ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.36): /bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create-initial
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x26b6fc8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mon at 0x26a9140>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] keyrings : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph1 ceph2
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph1 ...
[ceph1][DEBUG ] connected to host: ceph1
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph1][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: Red Hat Enterprise Linux Server 7.2 Maipo
[ceph1][DEBUG ] determining if provided host has same hostname in remote
[ceph1][DEBUG ] get remote short hostname
[ceph1][DEBUG ] deploying mon to ceph1
[ceph1][DEBUG ] get remote short hostname
[ceph1][DEBUG ] remote hostname: ceph1
[ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph1][DEBUG ] create the mon path if it does not exist
[ceph1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph1/done
[ceph1][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-ceph1/done
[ceph1][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-ceph1.mon.keyring
[ceph1][DEBUG ] create the monitor keyring file
[ceph1][INFO ] Running command: ceph-mon --cluster ceph --mkfs -i ceph1 --keyring /var/lib/ceph/tmp/ceph-ceph1.mon.keyring --setuser 167 --setgroup 167
[ceph1][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-ceph1.mon.keyring
[ceph1][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph1][DEBUG ] create the init path if it does not exist
[ceph1][INFO ] Running command: systemctl enable ceph.target
[ceph1][INFO ] Running command: systemctl enable [email protected]
[ceph1][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/[email protected] to /usr/lib/systemd/system/[email protected]
[ceph1][INFO ] Running command: systemctl start [email protected]
[ceph1][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph1.asok mon_status
[ceph1][DEBUG ] ********************************************************************************
[ceph1][DEBUG ] status for monitor: mon.ceph1
[ceph1][DEBUG ] {
[ceph1][DEBUG ] "election_epoch": 0,
[ceph1][DEBUG ] "extra_probe_peers": [
[ceph1][DEBUG ] "10.0.1.132:6789/0"
[ceph1][DEBUG ] ],
[ceph1][DEBUG ] "feature_map": {
[ceph1][DEBUG ] "mon": {
[ceph1][DEBUG ] "group": {
[ceph1][DEBUG ] "features": "0x3ffddff8eeacfffb",
[ceph1][DEBUG ] "num": 1,
[ceph1][DEBUG ] "release": "luminous"
[ceph1][DEBUG ] }
[ceph1][DEBUG ] }
[ceph1][DEBUG ] },
[ceph1][DEBUG ] "features": {
[ceph1][DEBUG ] "quorum_con": "0",
[ceph1][DEBUG ] "quorum_mon": [],
[ceph1][DEBUG ] "required_con": "0",
[ceph1][DEBUG ] "required_mon": []
[ceph1][DEBUG ] },
[ceph1][DEBUG ] "monmap": {
[ceph1][DEBUG ] "created": "2020-01-09 09:51:23.076706",
[ceph1][DEBUG ] "epoch": 0,
[ceph1][DEBUG ] "features": {
[ceph1][DEBUG ] "optional": [],
[ceph1][DEBUG ] "persistent": []
[ceph1][DEBUG ] },
[ceph1][DEBUG ] "fsid": "307b5bff-33ea-453b-8be7-4519bbd9e8d7",
[ceph1][DEBUG ] "modified": "2020-01-09 09:51:23.076706",
[ceph1][DEBUG ] "mons": [
[ceph1][DEBUG ] {
[ceph1][DEBUG ] "addr": "10.0.1.228:6789/0",
[ceph1][DEBUG ] "name": "ceph1",
[ceph1][DEBUG ] "public_addr": "10.0.1.228:6789/0",
[ceph1][DEBUG ] "rank": 0
[ceph1][DEBUG ] },
[ceph1][DEBUG ] {
[ceph1][DEBUG ] "addr": "0.0.0.0:0/1",
[ceph1][DEBUG ] "name": "ceph2",
[ceph1][DEBUG ] "public_addr": "0.0.0.0:0/1",
[ceph1][DEBUG ] "rank": 1
[ceph1][DEBUG ] }
[ceph1][DEBUG ] ]
[ceph1][DEBUG ] },
[ceph1][DEBUG ] "name": "ceph1",
[ceph1][DEBUG ] "outside_quorum": [
[ceph1][DEBUG ] "ceph1"
[ceph1][DEBUG ] ],
[ceph1][DEBUG ] "quorum": [],
[ceph1][DEBUG ] "rank": 0,
[ceph1][DEBUG ] "state": "probing",
[ceph1][DEBUG ] "sync_provider": []
[ceph1][DEBUG ] }
[ceph1][DEBUG ] ********************************************************************************
[ceph1][INFO ] monitor: mon.ceph1 is running
[ceph1][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph1.asok mon_status
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph2 ...
[ceph2][DEBUG ] connected to host: ceph2
[ceph2][DEBUG ] detect platform information from remote host
[ceph2][DEBUG ] detect machine type
[ceph2][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: Red Hat Enterprise Linux Server 7.2 Maipo
[ceph2][DEBUG ] determining if provided host has same hostname in remote
[ceph2][DEBUG ] get remote short hostname
[ceph2][DEBUG ] deploying mon to ceph2
[ceph2][DEBUG ] get remote short hostname
[ceph2][DEBUG ] remote hostname: ceph2
[ceph2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph2][DEBUG ] create the mon path if it does not exist
[ceph2][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph2/done
[ceph2][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-ceph2/done
[ceph2][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-ceph2.mon.keyring
[ceph2][DEBUG ] create the monitor keyring file
[ceph2][INFO ] Running command: ceph-mon --cluster ceph --mkfs -i ceph2 --keyring /var/lib/ceph/tmp/ceph-ceph2.mon.keyring --setuser 167 --setgroup 167
[ceph2][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-ceph2.mon.keyring
[ceph2][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph2][DEBUG ] create the init path if it does not exist
[ceph2][INFO ] Running command: systemctl enable ceph.target
[ceph2][INFO ] Running command: systemctl enable [email protected]
[ceph2][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/[email protected] to /usr/lib/systemd/system/[email protected]
[ceph2][INFO ] Running command: systemctl start [email protected]
[ceph2][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph2.asok mon_status
[ceph2][DEBUG ] ********************************************************************************
[ceph2][DEBUG ] status for monitor: mon.ceph2
[ceph2][DEBUG ] {
[ceph2][DEBUG ] "election_epoch": 0,
[ceph2][DEBUG ] "extra_probe_peers": [
[ceph2][DEBUG ] "10.0.1.228:6789/0"
[ceph2][DEBUG ] ],
[ceph2][DEBUG ] "feature_map": {
[ceph2][DEBUG ] "mon": {
[ceph2][DEBUG ] "group": {
[ceph2][DEBUG ] "features": "0x3ffddff8eeacfffb",
[ceph2][DEBUG ] "num": 1,
[ceph2][DEBUG ] "release": "luminous"
[ceph2][DEBUG ] }
[ceph2][DEBUG ] }
[ceph2][DEBUG ] },
[ceph2][DEBUG ] "features": {
[ceph2][DEBUG ] "quorum_con": "0",
[ceph2][DEBUG ] "quorum_mon": [],
[ceph2][DEBUG ] "required_con": "0",
[ceph2][DEBUG ] "required_mon": []
[ceph2][DEBUG ] },
[ceph2][DEBUG ] "monmap": {
[ceph2][DEBUG ] "created": "2020-01-09 09:51:26.601370",
[ceph2][DEBUG ] "epoch": 0,
[ceph2][DEBUG ] "features": {
[ceph2][DEBUG ] "optional": [],
[ceph2][DEBUG ] "persistent": []
[ceph2][DEBUG ] },
[ceph2][DEBUG ] "fsid": "307b5bff-33ea-453b-8be7-4519bbd9e8d7",
[ceph2][DEBUG ] "modified": "2020-01-09 09:51:26.601370",
[ceph2][DEBUG ] "mons": [
[ceph2][DEBUG ] {
[ceph2][DEBUG ] "addr": "10.0.1.132:6789/0",
[ceph2][DEBUG ] "name": "ceph2",
[ceph2][DEBUG ] "public_addr": "10.0.1.132:6789/0",
[ceph2][DEBUG ] "rank": 0
[ceph2][DEBUG ] },
[ceph2][DEBUG ] {
[ceph2][DEBUG ] "addr": "10.0.1.228:6789/0",
[ceph2][DEBUG ] "name": "ceph1",
[ceph2][DEBUG ] "public_addr": "10.0.1.228:6789/0",
[ceph2][DEBUG ] "rank": 1
[ceph2][DEBUG ] }
[ceph2][DEBUG ] ]
[ceph2][DEBUG ] },
[ceph2][DEBUG ] "name": "ceph2",
[ceph2][DEBUG ] "outside_quorum": [
[ceph2][DEBUG ] "ceph2"
[ceph2][DEBUG ] ],
[ceph2][DEBUG ] "quorum": [],
[ceph2][DEBUG ] "rank": 0,
[ceph2][DEBUG ] "state": "probing",
[ceph2][DEBUG ] "sync_provider": []
[ceph2][DEBUG ] }
[ceph2][DEBUG ] ********************************************************************************
[ceph2][INFO ] monitor: mon.ceph2 is running
[ceph2][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph2.asok mon_status
[ceph_deploy.mon][INFO ] processing monitor mon.ceph1
[ceph1][DEBUG ] connected to host: ceph1
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph1][DEBUG ] find the location of an executable
[ceph1][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph1.asok mon_status
[ceph_deploy.mon][INFO ] mon.ceph1 monitor has reached quorum!
[ceph_deploy.mon][INFO ] processing monitor mon.ceph2
[ceph2][DEBUG ] connected to host: ceph2
[ceph2][DEBUG ] detect platform information from remote host
[ceph2][DEBUG ] detect machine type
[ceph2][DEBUG ] find the location of an executable
[ceph2][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph2.asok mon_status
[ceph_deploy.mon][INFO ] mon.ceph2 monitor has reached quorum!
[ceph_deploy.mon][INFO ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO ] Running gatherkeys...
[ceph_deploy.gatherkeys][INFO ] Storing keys in temp directory /tmp/tmpjDmruJ
[ceph1][DEBUG ] connected to host: ceph1
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph1][DEBUG ] get remote short hostname
[ceph1][DEBUG ] fetch remote file
[ceph1][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.ceph1.asok mon_status
[ceph1][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph1/keyring auth get client.admin
[ceph1][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph1/keyring auth get-or-create client.admin osd allow * mds allow * mon allow *
[ceph1][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph1/keyring auth get client.bootstrap-mds
[ceph1][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph1/keyring auth get-or-create client.bootstrap-mds mon allow profile bootstrap-mds
[ceph1][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph1/keyring auth get client.bootstrap-osd
[ceph1][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph1/keyring auth get-or-create client.bootstrap-osd mon allow profile bootstrap-osd
[ceph1][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph1/keyring auth get client.bootstrap-rgw
[ceph1][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph1/keyring auth get-or-create client.bootstrap-rgw mon allow profile bootstrap-rgw
[ceph_deploy.gatherkeys][INFO ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO ] Destroy temp directory /tmp/tmpjDmruJ
[[email protected] ~]#
We distribute the cluster configuration to all the nodes
[[email protected] ~]# ceph-deploy --overwrite-conf admin ceph1 ceph2
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.36): /bin/ceph-deploy admin ceph1 ceph2
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x15f2638>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] client : ['ceph1', 'ceph2']
[ceph_deploy.cli][INFO ] func : <function admin at 0x7f04a4fc3f50>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph1
[ceph1][DEBUG ] connected to host: ceph1
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph2
[ceph2][DEBUG ] connected to host: ceph2
[ceph2][DEBUG ] detect platform information from remote host
[ceph2][DEBUG ] detect machine type
[ceph2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[[email protected] ~]#
Configuring the OSD service (Object Storage Devices)
[[email protected] ~]# ceph-deploy osd create ceph1:xvdb ceph2:xvdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.36): /bin/ceph-deploy osd create ceph1:xvdb ceph2:xvdb
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] disk : [('ceph1', '/dev/xvdb', None), ('ceph2', '/dev/xvdb', None)]
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] bluestore : None
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1cb1248>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] func : <function osd at 0x1c9d9b0>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] zap_disk : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph1:/dev/xvdb: ceph2:/dev/xvdb:
[ceph1][DEBUG ] connected to host: ceph1
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Red Hat Enterprise Linux Server 7.2 Maipo
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph1
[ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph1][WARNIN] osd keyring does not exist yet, creating one
[ceph1][DEBUG ] create a keyring file
[ceph_deploy.osd][DEBUG ] Preparing host ceph1 disk /dev/xvdb journal None activate True
[ceph1][DEBUG ] find the location of an executable
[ceph1][INFO ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/xvdb
[ceph1][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph1][WARNIN] set_type: Will colocate block with data on /dev/xvdb
[ceph1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup bluestore_block_size
[ceph1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup bluestore_block_db_size
[ceph1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup bluestore_block_size
[ceph1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup bluestore_block_wal_size
[ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[ceph1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[ceph1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[ceph1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph1][WARNIN] set_data_partition: Creating osd partition on /dev/xvdb
[ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph1][WARNIN] ptype_tobe_for_name: name = data
[ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph1][WARNIN] create_partition: Creating data partition num 1 size 100 on /dev/xvdb
[ceph1][WARNIN] command_check_call: Running command: /sbin/sgdisk --new=1:0:+100M --change-name=1:ceph data --partition-guid=1:a497f4f8-3051-4e95-a990-af50a7b61a52 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/xvdb
[ceph1][DEBUG ] Creating new GPT entries.
[ceph1][DEBUG ] The operation has completed successfully.
[ceph1][WARNIN] update_partition: Calling partprobe on created device /dev/xvdb
[ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph1][WARNIN] command: Running command: /usr/bin/flock -s /dev/xvdb /sbin/partprobe /dev/xvdb
[ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb1 uuid path is /sys/dev/block/202:17/dm/uuid
[ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph1][WARNIN] ptype_tobe_for_name: name = block
[ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph1][WARNIN] create_partition: Creating block partition num 2 size 0 on /dev/xvdb
[ceph1][WARNIN] command_check_call: Running command: /sbin/sgdisk --largest-new=2 --change-name=2:ceph block --partition-guid=2:9a0afea6-a54d-46c7-bdf2-0d05bd9c5d6b --typecode=2:cafecafe-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/xvdb
[ceph1][DEBUG ] The operation has completed successfully.
[ceph1][WARNIN] update_partition: Calling partprobe on created device /dev/xvdb
[ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph1][WARNIN] command: Running command: /usr/bin/flock -s /dev/xvdb /sbin/partprobe /dev/xvdb
[ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb2 uuid path is /sys/dev/block/202:18/dm/uuid
[ceph1][WARNIN] prepare_device: Block is GPT partition /dev/disk/by-partuuid/9a0afea6-a54d-46c7-bdf2-0d05bd9c5d6b
[ceph1][WARNIN] command_check_call: Running command: /sbin/sgdisk --typecode=2:cafecafe-9b03-4f30-b4c6-b4b80ceff106 -- /dev/xvdb
[ceph1][DEBUG ] The operation has completed successfully.
[ceph1][WARNIN] update_partition: Calling partprobe on prepared device /dev/xvdb
[ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph1][WARNIN] command: Running command: /usr/bin/flock -s /dev/xvdb /sbin/partprobe /dev/xvdb
[ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph1][WARNIN] prepare_device: Block is GPT partition /dev/disk/by-partuuid/9a0afea6-a54d-46c7-bdf2-0d05bd9c5d6b
[ceph1][WARNIN] populate_data_path_device: Creating xfs fs on /dev/xvdb1
[ceph1][WARNIN] command_check_call: Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/xvdb1
[ceph1][DEBUG ] meta-data=/dev/xvdb1 isize=2048 agcount=4, agsize=6400 blks
[ceph1][DEBUG ] = sectsz=512 attr=2, projid32bit=1
[ceph1][DEBUG ] = crc=0 finobt=0
[ceph1][DEBUG ] data = bsize=4096 blocks=25600, imaxpct=25
[ceph1][DEBUG ] = sunit=0 swidth=0 blks
[ceph1][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0 ftype=0
[ceph1][DEBUG ] log =internal log bsize=4096 blocks=864, version=2
[ceph1][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
[ceph1][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
[ceph1][WARNIN] mount: Mounting /dev/xvdb1 on /var/lib/ceph/tmp/mnt.jzhHrf with options noatime,inode64
[ceph1][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/xvdb1 /var/lib/ceph/tmp/mnt.jzhHrf
[ceph1][WARNIN] command: Running command: /sbin/restorecon /var/lib/ceph/tmp/mnt.jzhHrf
[ceph1][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.jzhHrf
[ceph1][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.jzhHrf/ceph_fsid.13998.tmp
[ceph1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.jzhHrf/ceph_fsid.13998.tmp
[ceph1][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.jzhHrf/fsid.13998.tmp
[ceph1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.jzhHrf/fsid.13998.tmp
[ceph1][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.jzhHrf/magic.13998.tmp
[ceph1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.jzhHrf/magic.13998.tmp
[ceph1][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.jzhHrf/block_uuid.13998.tmp
[ceph1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.jzhHrf/block_uuid.13998.tmp
[ceph1][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.jzhHrf/block -> /dev/disk/by-partuuid/9a0afea6-a54d-46c7-bdf2-0d05bd9c5d6b
[ceph1][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.jzhHrf/type.13998.tmp
[ceph1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.jzhHrf/type.13998.tmp
[ceph1][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.jzhHrf
[ceph1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.jzhHrf
[ceph1][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.jzhHrf
[ceph1][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.jzhHrf
[ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph1][WARNIN] command_check_call: Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/xvdb
[ceph1][DEBUG ] Warning: The kernel is still using the old partition table.
[ceph1][DEBUG ] The new table will be used at the next reboot.
[ceph1][DEBUG ] The operation has completed successfully.
[ceph1][WARNIN] update_partition: Calling partprobe on prepared device /dev/xvdb
[ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph1][WARNIN] command: Running command: /usr/bin/flock -s /dev/xvdb /sbin/partprobe /dev/xvdb
[ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match xvdb1
[ceph1][INFO ] Running command: systemctl enable ceph.target
[ceph1][INFO ] checking OSD status...
[ceph1][DEBUG ] find the location of an executable
[ceph1][INFO ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph1][WARNIN] there is 1 OSD down
[ceph1][WARNIN] there is 1 OSD out
[ceph_deploy.osd][DEBUG ] Host ceph1 is now ready for osd use.
[ceph2][DEBUG ] connected to host: ceph2
[ceph2][DEBUG ] detect platform information from remote host
[ceph2][DEBUG ] detect machine type
[ceph2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Red Hat Enterprise Linux Server 7.2 Maipo
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph2
[ceph2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph2][WARNIN] osd keyring does not exist yet, creating one
[ceph2][DEBUG ] create a keyring file
[ceph_deploy.osd][DEBUG ] Preparing host ceph2 disk /dev/xvdb journal None activate True
[ceph2][DEBUG ] find the location of an executable
[ceph2][INFO ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/xvdb
[ceph2][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph2][WARNIN] set_type: Will colocate block with data on /dev/xvdb
[ceph2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup bluestore_block_size
[ceph2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup bluestore_block_db_size
[ceph2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup bluestore_block_size
[ceph2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup bluestore_block_wal_size
[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[ceph2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[ceph2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[ceph2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph2][WARNIN] set_data_partition: Creating osd partition on /dev/xvdb
[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph2][WARNIN] ptype_tobe_for_name: name = data
[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph2][WARNIN] create_partition: Creating data partition num 1 size 100 on /dev/xvdb
[ceph2][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --new=1:0:+100M --change-name=1:ceph data --partition-guid=1:17869701-95a4-43e0-97b9-d31eae1b09f2 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/xvdb
[ceph2][DEBUG ] Creating new GPT entries.
[ceph2][DEBUG ] The operation has completed successfully.
[ceph2][WARNIN] update_partition: Calling partprobe on created device /dev/xvdb
[ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph2][WARNIN] command: Running command: /usr/bin/flock -s /dev/xvdb /usr/sbin/partprobe /dev/xvdb
[ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb1 uuid path is /sys/dev/block/202:17/dm/uuid
[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph2][WARNIN] ptype_tobe_for_name: name = block
[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph2][WARNIN] create_partition: Creating block partition num 2 size 0 on /dev/xvdb
[ceph2][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --largest-new=2 --change-name=2:ceph block --partition-guid=2:ec77b847-c605-40bb-b650-099bbea001f5 --typecode=2:cafecafe-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/xvdb
[ceph2][DEBUG ] The operation has completed successfully.
[ceph2][WARNIN] update_partition: Calling partprobe on created device /dev/xvdb
[ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph2][WARNIN] command: Running command: /usr/bin/flock -s /dev/xvdb /usr/sbin/partprobe /dev/xvdb
[ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb2 uuid path is /sys/dev/block/202:18/dm/uuid
[ceph2][WARNIN] prepare_device: Block is GPT partition /dev/disk/by-partuuid/ec77b847-c605-40bb-b650-099bbea001f5
[ceph2][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:cafecafe-9b03-4f30-b4c6-b4b80ceff106 -- /dev/xvdb
[ceph2][DEBUG ] The operation has completed successfully.
[ceph2][WARNIN] update_partition: Calling partprobe on prepared device /dev/xvdb
[ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph2][WARNIN] command: Running command: /usr/bin/flock -s /dev/xvdb /usr/sbin/partprobe /dev/xvdb
[ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph2][WARNIN] prepare_device: Block is GPT partition /dev/disk/by-partuuid/ec77b847-c605-40bb-b650-099bbea001f5
[ceph2][WARNIN] populate_data_path_device: Creating xfs fs on /dev/xvdb1
[ceph2][WARNIN] command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -- /dev/xvdb1
[ceph2][DEBUG ] meta-data=/dev/xvdb1 isize=2048 agcount=4, agsize=6400 blks
[ceph2][DEBUG ] = sectsz=512 attr=2, projid32bit=1
[ceph2][DEBUG ] = crc=0 finobt=0
[ceph2][DEBUG ] data = bsize=4096 blocks=25600, imaxpct=25
[ceph2][DEBUG ] = sunit=0 swidth=0 blks
[ceph2][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0 ftype=0
[ceph2][DEBUG ] log =internal log bsize=4096 blocks=864, version=2
[ceph2][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
[ceph2][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
[ceph2][WARNIN] mount: Mounting /dev/xvdb1 on /var/lib/ceph/tmp/mnt.yAInIq with options noatime,inode64
[ceph2][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/xvdb1 /var/lib/ceph/tmp/mnt.yAInIq
[ceph2][WARNIN] command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.yAInIq
[ceph2][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.yAInIq
[ceph2][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.yAInIq/ceph_fsid.13071.tmp
[ceph2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.yAInIq/ceph_fsid.13071.tmp
[ceph2][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.yAInIq/fsid.13071.tmp
[ceph2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.yAInIq/fsid.13071.tmp
[ceph2][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.yAInIq/magic.13071.tmp
[ceph2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.yAInIq/magic.13071.tmp
[ceph2][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.yAInIq/block_uuid.13071.tmp
[ceph2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.yAInIq/block_uuid.13071.tmp
[ceph2][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.yAInIq/block -> /dev/disk/by-partuuid/ec77b847-c605-40bb-b650-099bbea001f5
[ceph2][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.yAInIq/type.13071.tmp
[ceph2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.yAInIq/type.13071.tmp
[ceph2][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.yAInIq
[ceph2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.yAInIq
[ceph2][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.yAInIq
[ceph2][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.yAInIq
[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph2][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/xvdb
[ceph2][DEBUG ] Warning: The kernel is still using the old partition table.
[ceph2][DEBUG ] The new table will be used at the next reboot.
[ceph2][DEBUG ] The operation has completed successfully.
[ceph2][WARNIN] update_partition: Calling partprobe on prepared device /dev/xvdb
[ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph2][WARNIN] command: Running command: /usr/bin/flock -s /dev/xvdb /usr/sbin/partprobe /dev/xvdb
[ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match xvdb1
[ceph2][INFO ] Running command: systemctl enable ceph.target
[ceph2][INFO ] checking OSD status...
[ceph2][DEBUG ] find the location of an executable
[ceph2][INFO ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph2][WARNIN] there is 1 OSD down
[ceph2][WARNIN] there is 1 OSD out
[ceph_deploy.osd][DEBUG ] Host ceph2 is now ready for osd use.
[[email protected] ~]#
We check that we can create a new data pool and assign quotas to it:
Data Pool
[[email protected] ~]# ceph osd pool create my-userfiles 64
pool 'my-userfiles' created
[[email protected] ~]#
[[email protected] ~]# ceph osd pool set my-userfiles size 2
set pool 1 size to 2
[[email protected] ~]#
[[email protected] ~]# ceph osd pool set my-userfiles min_size 1
set pool 1 min_size to 1
[[email protected] ~]#
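The two commands above only adjust the replication level (size/min_size). If we also want to enforce real quotas on the pool, as mentioned earlier, they are set with ceph osd pool set-quota. A minimal sketch, with example limits chosen arbitrarily:
# Cap the pool at 10000 objects and 10 GiB of data (example values)
ceph osd pool set-quota my-userfiles max_objects 10000
ceph osd pool set-quota my-userfiles max_bytes 10737418240
# Review the quotas currently applied to the pool
ceph osd pool get-quota my-userfiles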
We review the cluster configuration:
[[email protected] ~]# cat /etc/ceph/ceph.conf
[global]
fsid = 307b5bff-33ea-453b-8be7-4519bbd9e8d7
mon_initial_members = ceph1, ceph2
mon_host = 10.0.1.228,10.0.1.132
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
[[email protected] ~]# ceph -s
cluster:
id: 307b5bff-33ea-453b-8be7-4519bbd9e8d7
health: HEALTH_WARN
no active mgr
services:
mon: 2 daemons, quorum ceph2,ceph1
mgr: no daemons active
osd: 2 osds: 2 up, 2 in
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0B
usage: 0B used, 0B / 0B avail
pgs:
[[email protected] ~]#
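The HEALTH_WARN «no active mgr» simply means that no manager daemon has been deployed yet. Depending on the ceph-deploy version available (newer releases include the mgr subcommand, which we will use later in this guide), something along these lines should clear the warning:
# Deploy a manager daemon on each monitor node (requires a ceph-deploy release with the mgr subcommand)
cd /etc/ceph
ceph-deploy mgr create ceph1 ceph2
ceph -s   # the mgr line should now show an active daemon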
Configuring RADOS Gateway (distributed object storage accessed through an API)
[[email protected] ~]# ceph-authtool --create-keyring /etc/ceph/ceph.client.radosgw.keyring
creating /etc/ceph/ceph.client.radosgw.keyring
[[email protected] ~]# ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.gateway --gen-key
[[email protected] ~]# ceph-authtool -n client.radosgw.gateway --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring
[[email protected] ~]# ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.gateway -i /etc/ceph/ceph.client.radosgw.keyring
added key for client.radosgw.gateway
[[email protected] ~]# chown -R ceph:ceph /etc/ceph
[[email protected] ~]# scp /etc/ceph/ceph.client.radosgw.keyring ceph2:/etc/ceph
ceph.client.radosgw.keyring 100% 121 0.1KB/s 00:00
[[email protected] ~]# scp /etc/ceph/ceph.client.radosgw.keyring ceph2:/etc/ceph
[[email protected] ~]# chown -R ceph:ceph /etc/ceph
[[email protected] ~]#
[[email protected] ~]# systemctl restart ceph-radosgw@ceph1.service
[[email protected] ~]# systemctl enable ceph-radosgw.target
[[email protected] ~]# systemctl enable ceph-radosgw@ceph1.service
Created symlink from /etc/systemd/system/ceph-radosgw.target.wants/ceph-radosgw@ceph1.service to /usr/lib/systemd/system/ceph-radosgw@.service.
[[email protected] ~]#
[[email protected] ~]# cat /etc/ceph/ceph.conf
[global]
fsid = 307b5bff-33ea-453b-8be7-4519bbd9e8d7
mon_initial_members = ceph1, ceph2
mon_host = 10.0.1.228,10.0.1.132
auth_cluster_required = none
auth_service_required = none
auth_client_required = none
[client.radosgw.gateway]
host = {hostname}
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw socket path = ""
log file = /var/log/radosgw/client.radosgw.gateway.log
rgw frontends = fastcgi socket_port=9000 socket_host=0.0.0.0
rgw print continue = false
[[email protected] ~]# scp -p /etc/ceph/ceph.conf ceph2:/etc/ceph
[[email protected] ~]# cp -p /etc/ceph/ceph.client.radosgw.keyring /var/lib/ceph/radosgw/ceph-ceph1/keyring
[[email protected] ~]# ls -la /var/lib/ceph/radosgw/ceph-ceph1/keyring
-rw------- 1 ceph ceph 121 Jan 3 03:51 /var/lib/ceph/radosgw/ceph-ceph1/keyring
[[email protected] ~]#
[[email protected] ~]# systemctl restart ceph-radosgw@ceph1.service
[[email protected] ~]# systemctl status ceph-radosgw@ceph1.service
● ceph-radosgw@ceph1.service - Ceph rados gateway
Loaded: loaded (/usr/lib/systemd/system/ceph-radosgw@.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2020-01-03 04:46:52 EST; 3s ago
Main PID: 3792 (radosgw)
CGroup: /system.slice/system-ceph\x2dradosgw.slice/ceph-radosgw@ceph1.service
└─3792 /usr/bin/radosgw -f --cluster ceph --name client.ceph1 --setuser ceph --setgroup ceph
Jan 03 04:46:52 ceph1 systemd[1]: Stopped Ceph rados gateway.
Jan 03 04:46:52 ceph1 systemd[1]: Started Ceph rados gateway.
[[email protected] ~]#
[[email protected] ~]# systemctl restart [email protected]
We create a pool for object storage and verify that we can access its listing with rados:
[[email protected] ceph]# ceph osd pool create pool-test 100 100
pool 'pool-test' created
[[email protected] ceph]#
[[email protected] ceph]# rados lspools
.rgw.root
default.rgw.control
default.rgw.meta
default.rgw.log
pool-test
[[email protected] ceph]#
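As an extra check that is not part of the original transcript, we can store and retrieve an arbitrary object in the new pool directly with the rados client:
echo "hello ceph" > /tmp/hello.txt
rados -p pool-test put hello-object /tmp/hello.txt   # upload the file as an object
rados -p pool-test ls                                # the object should be listed
rados -p pool-test get hello-object /tmp/hello.out   # download it back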
One more test is creating a test user with radosgw-admin:
[[email protected] ~]# radosgw-admin user create --uid="testuser" --display-name="First User"
2020-01-03 06:06:41.834257 7f4e05f0ae00 0 WARNING: detected a version of libcurl which contains a bug in curl_multi_wait(). enabling a workaround that may degrade performance slightly.
{
"user_id": "testuser",
"display_name": "First User",
"email": "",
"suspended": 0,
"max_buckets": 1000,
"auid": 0,
"subusers": [],
"keys": [
{
"user": "testuser",
"access_key": "Q08H44HC01E5ZN0PUNAG",
"secret_key": "fuHhEDFsecGP2ksatPH8UnZSag0CCfN1ZhiBCE3a"
}
],
"swift_keys": [],
"caps": [],
"op_mask": "read, write, delete",
"default_placement": "",
"placement_tags": [],
"bucket_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"user_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"temp_url_keys": [],
"type": "rgw"
}
[[email protected] ~]#
[[email protected] ~]# radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full
2020-01-03 06:07:18.550398 7fbe8070be00 0 WARNING: detected a version of libcurl which contains a bug in curl_multi_wait(). enabling a workaround that may degrade performance slightly.
{
"user_id": "testuser",
"display_name": "First User",
"email": "",
"suspended": 0,
"max_buckets": 1000,
"auid": 0,
"subusers": [
{
"id": "testuser:swift",
"permissions": "full-control"
}
],
"keys": [
{
"user": "testuser",
"access_key": "Q08H44HC01E5ZN0PUNAG",
"secret_key": "fuHhEDFsecGP2ksatPH8UnZSag0CCfN1ZhiBCE3a"
}
],
"swift_keys": [
{
"user": "testuser:swift",
"secret_key": "iQHrJeorO8xOcBuY1vZJOwqYM6I4CHP5mKmRqpld"
}
],
"caps": [],
"op_mask": "read, write, delete",
"default_placement": "",
"placement_tags": [],
"bucket_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"user_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"temp_url_keys": [],
"type": "rgw"
}
[[email protected] ~]#
We generate the secret key for the Swift subuser:
[[email protected] ~]# radosgw-admin key create --subuser=testuser:swift --key-type=swift --gen-secret
2020-01-03 06:07:51.398865 7f3bfdef1e00 0 WARNING: detected a version of libcurl which contains a bug in curl_multi_wait(). enabling a workaround that may degrade performance slightly.
{
"user_id": "testuser",
"display_name": "First User",
"email": "",
"suspended": 0,
"max_buckets": 1000,
"auid": 0,
"subusers": [
{
"id": "testuser:swift",
"permissions": "full-control"
}
],
"keys": [
{
"user": "testuser",
"access_key": "Q08H44HC01E5ZN0PUNAG",
"secret_key": "fuHhEDFsecGP2ksatPH8UnZSag0CCfN1ZhiBCE3a"
}
],
"swift_keys": [
{
"user": "testuser:swift",
"secret_key": "sdl3N1xaWS7S6dCiK0AvhH0U2fwiG7dOTMIn4kuj"
}
],
"caps": [],
"op_mask": "read, write, delete",
"default_placement": "",
"placement_tags": [],
"bucket_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"user_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"temp_url_keys": [],
"type": "rgw"
}
[[email protected] ~]#
Getting the user's information:
[[email protected] ~]# radosgw-admin user info --uid=testuser
2020-01-03 06:08:20.886360 7f037393ee00 0 WARNING: detected a version of libcurl which contains a bug in curl_multi_wait(). enabling a workaround that may degrade performance slightly.
{
"user_id": "testuser",
"display_name": "First User",
"email": "",
"suspended": 0,
"max_buckets": 1000,
"auid": 0,
"subusers": [
{
"id": "testuser:swift",
"permissions": "full-control"
}
],
"keys": [
{
"user": "testuser",
"access_key": "Q08H44HC01E5ZN0PUNAG",
"secret_key": "fuHhEDFsecGP2ksatPH8UnZSag0CCfN1ZhiBCE3a"
}
],
"swift_keys": [
{
"user": "testuser:swift",
"secret_key": "sdl3N1xaWS7S6dCiK0AvhH0U2fwiG7dOTMIn4kuj"
}
],
"caps": [],
"op_mask": "read, write, delete",
"default_placement": "",
"placement_tags": [],
"bucket_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"user_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"temp_url_keys": [],
"type": "rgw"
}
[[email protected] ~]#
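With the Swift subuser and its secret key we could also test access from the command line using python-swiftclient. This is only a sketch: the endpoint and port are assumptions and must match wherever radosgw is actually listening (7480 is the default port of the embedded frontend; in this setup it may be the httpd/fastcgi port instead):
pip install python-swiftclient
swift -A http://ceph1:7480/auth/1.0 -U testuser:swift -K 'sdl3N1xaWS7S6dCiK0AvhH0U2fwiG7dOTMIn4kuj' list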
Deleting a user:
radosgw-admin user rm --uid=testuser
If you like, you can also test the S3 API with some Python code:
[[email protected] ~]# cat s3test.py
import boto
import boto.s3.connection
access_key = '64HHUXBFG9F4X2KAQCVA'
secret_key = 'w3oBij15mi6SKzcMaY2s8FCUV8iANg1om5cYYunU'
conn = boto.connect_s3(
    aws_access_key_id = access_key,
    aws_secret_access_key = secret_key,
    host = 'ceph1',
    port = 80,
    is_secure=False,
    calling_format = boto.s3.connection.OrdinaryCallingFormat(),
)
bucket = conn.create_bucket('my-new-bucket')
for bucket in conn.get_all_buckets():
    print "{name}\t{created}".format(
        name = bucket.name,
        created = bucket.creation_date,
    )
[[email protected] ~]#
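The script uses the old boto 2 library and Python 2 syntax (note the print statement), so running it would need something like the following; the package name may differ depending on the repositories enabled:
yum install -y python-boto   # or: pip install boto
python s3test.py             # should print my-new-bucket and its creation date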
Adding an extra monitor service (MON)
It is highly recommended to add a second cluster monitor service in case something happens to the primary one. We are going to install it on a new server:
- We configure the /etc/hosts file, adding the IP of the new monitor server on every server in the cluster.
10.0.3.247 ceph-mon rgw.ceph-mon.com ceph-mon.com
10.0.3.49 ceph-mon2
10.0.3.95 ceph-osd1
10.0.3.199 ceph-osd2
10.0.3.241 ceph-osd3
- We install dnsmasq
- We create the ceph user
- We add the sudo permissions
- We create the SSH trust relationships with the rest of the cluster nodes
- We configure the yum repositories for Ceph and EPEL and install the required software:
yum install -y ceph ceph-deploy ntp ntpdate ntp-doc yum-plugin-priorities httpd mod_ssl openssl fcgi mod_fcgid python2-pip
pip install s3cmd
- We deploy the new Ceph monitor
[[email protected] ~]# cd /etc/ceph
[[email protected] ~]# ceph-deploy install ceph-mon2
[[email protected] ~]# ceph-deploy admin ceph-mon2
[[email protected] ~]# ceph-deploy mon add ceph-mon2
[[email protected] ~]# ceph-deploy mgr create ceph-mon2
- We deploy the RADOS Gateway service on the new monitor so that we can communicate with the storage system:
[[email protected] ~]# ceph-deploy install --rgw ceph-mon2
[[email protected] ~]# cd /etc/ceph
[[email protected] ceph]# ceph-deploy rgw create ceph-mon2
[[email protected] ~]# systemctl enable ceph-radosgw.target
[[email protected] ~]# systemctl start ceph-radosgw.target
[[email protected] ~]#
[[email protected] ~]# lsof -i:7480
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
radosgw 21290 ceph 40u IPv4 45241 0t0 TCP *:7480 (LISTEN)
[[email protected] ~]#
- We check the cluster status from the new monitor server (we can see that two monitor daemons now appear):
[[email protected] ~]# ceph -s
cluster:
id: 677b62e9-8834-407e-b73c-3f41e97597a8
health: HEALTH_OK
services:
mon: 2 daemons, quorum ceph-mon,ceph-mon2 (age 37s)
mgr: ceph-mon(active, since 79m), standbys: ceph-mon2
osd: 3 osds: 3 up (since 4h), 3 in (since 2d)
rgw: 1 daemon active (ceph-mon)
data:
pools: 7 pools, 148 pgs
objects: 226 objects, 2.0 KiB
usage: 3.0 GiB used, 6.0 GiB / 9 GiB avail
pgs: 148 active+clean
[[email protected] ~]#
- We can see that the new monitor now appears in the ceph.conf file:
[[email protected] ~]# cat /etc/ceph/ceph.conf
[global]
fsid = 677b62e9-8834-407e-b73c-3f41e97597a8
public_network = 10.0.3.0/24
mon_initial_members = ceph-mon,ceph-mon2
mon_host = 10.0.3.247:6789,10.0.3.49:6789
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
[[email protected] ~]#
- We check that we can still see the objects we had previously uploaded to the test bucket:
[[email protected] ~]# s3cmd -c s3test.cfg ls s3://david-bucket/
2020-02-24 09:31 6 s3://david-bucket/david.txt
[[email protected] ~]#
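The s3test.cfg file used above is not shown in the transcript. A minimal s3cmd configuration for this gateway could look like the sketch below; the keys are placeholders and the endpoint must match the host and port where radosgw listens:
[default]
access_key = <ACCESS_KEY>
secret_key = <SECRET_KEY>
host_base = ceph-mon:7480
host_bucket = ceph-mon:7480
use_https = False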
Re-adding the removed monitor
Earlier we removed the ceph-mon2 monitor, but we want to add it back. We will do it manually:
[[email protected] ceph]# ceph mon add ceph-mon2 10.0.3.49:6789
adding mon.ceph-mon2 at [v2:10.0.3.49:3300/0,v1:10.0.3.49:6789/0]
[[email protected] ceph]# ceph-mon -i ceph-mon2 --public-addr 10.0.3.49:6789
We start the service on the ceph-mon2 server and check the cluster status to see that the monitor appears:
[[email protected] ceph]# systemctl enable ceph-mon@ceph-mon2.service
[[email protected] ceph]# systemctl start ceph-mon@ceph-mon2.service
[[email protected] ceph]# ceph -s
cluster:
id: 677b62e9-8834-407e-b73c-3f41e97597a8
health: HEALTH_WARN
25 slow ops, oldest one blocked for 43 sec, mon.ceph-mon has slow ops
services:
mon: 2 daemons, quorum ceph-mon,ceph-mon2 (age 3s)
mgr: ceph-mon(active, since 13m)
osd: 3 osds: 3 up (since 7h), 3 in (since 2d)
rgw: 1 daemon active (ceph-mon)
data:
pools: 7 pools, 148 pgs
objects: 226 objects, 2.0 KiB
usage: 3.0 GiB used, 6.0 GiB / 9 GiB avail
pgs: 148 active+clean
[[email protected] ceph]#
Let's not forget to add the monitor configuration back into the Ceph configuration file:
[[email protected] ceph]# cat /etc/ceph.conf |grep mon |grep -v "#"
mon_initial_members = ceph-mon
mon_host = 10.0.3.247
[[email protected] ceph]#
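In other words, once ceph-mon2 is back, those two lines should list both monitors again, something like:
mon_initial_members = ceph-mon, ceph-mon2
mon_host = 10.0.3.247,10.0.3.49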
Adding complementary monitoring services (MGR)
The MGR services are a set of modules that can be enabled to monitor the state of the Ceph cluster. For example, we can enable the Prometheus module to analyze its performance.
Let's see how it works:
- Listing the modules:
[[email protected] ~]# ceph mgr module ls |more
{
"always_on_modules": [
"balancer",
"crash",
"devicehealth",
"orchestrator",
"pg_autoscaler",
"progress",
"rbd_support",
"status",
"telemetry",
"volumes"
],
"enabled_modules": [
"cephadm",
"dashboard",
"iostat",
"nfs",
"prometheus",
"restful"
],
"disabled_modules": [
{
"name": "alerts",
"can_run": true,
"error_string": "",
"module_options": {
"interval": {
- Enabling a module:
In this case we have enabled the Ceph exporter for Prometheus.
[[email protected] ~]# ceph mgr module enable prometheus
[[email protected] ~]#
- Checking which MGR modules or services we have enabled:
[[email protected] ~]# ceph mgr services
{
"dashboard": "https://10.0.1.212:8443/",
"prometheus": "http://10.0.1.212:9283/"
}
[[email protected] ~]#
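Once the prometheus module is enabled, we can scrape the exporter directly to confirm that it is serving metrics (the IP and port come from the ceph mgr services output above):
curl -s http://10.0.1.212:9283/metrics | head   # first few Ceph metrics in Prometheus format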
Backing up the monitor map and removing one of the cluster's monitors
This is useful when one of the configured monitors has stopped responding. Suppose we have configured three monitor services (A, B and C) but only A is working.
What we have to do is identify the surviving monitor and extract its map, with the service stopped.
[[email protected] ~]# systemctl stop [email protected]
[[email protected] ~]# systemctl stop [email protected]
[[email protected] ~]# ceph-mon -i ceph-mon --extract-monmap /tmp/ceph-mon_map
2020-02-24 11:59:44.510 7f2cc8987040 -1 wrote monmap to /tmp/ceph-mon_map
[[email protected] ~]#
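Before editing the extracted map, it is worth printing it to confirm which monitors it contains:
monmaptool --print /tmp/ceph-mon_map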
Next, we remove the monitors that are not responding:
[[email protected] ~]# monmaptool /tmp/ceph-mon_map --rm ceph-mon2
monmaptool: monmap file /tmp/ceph-mon_map
monmaptool: removing ceph-mon2
monmaptool: writing epoch 2 to /tmp/ceph-mon_map (1 monitors)
[[email protected] ~]#
We inject the corrected map into the surviving monitors:
[[email protected] ~]# ceph-mon -i ceph-mon --inject-monmap /tmp/ceph-mon_map
[[email protected] ~]#
We start the monitor service on the surviving nodes:
[[email protected] ~]# chown -R ceph:ceph /var/lib/ceph
[[email protected] ~]# systemctl start ceph-mon@ceph-mon.service
[[email protected] ~]# systemctl status ceph-mon@ceph-mon.service
● ceph-mon@ceph-mon.service - Ceph cluster monitor daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-mon@.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2020-02-24 12:10:29 UTC; 3s ago
Main PID: 4307 (ceph-mon)
CGroup: /system.slice/system-ceph\x2dmon.slice/ceph-mon@ceph-mon.service
└─4307 /usr/bin/ceph-mon -f --cluster ceph --id ceph-mon --setuser ceph --setgroup ceph
Feb 24 12:10:29 ceph-mon systemd[1]: ceph-mon@ceph-mon.service holdoff time over, scheduling restart.
Feb 24 12:10:29 ceph-mon systemd[1]: Stopped Ceph cluster monitor daemon.
Feb 24 12:10:29 ceph-mon systemd[1]: Started Ceph cluster monitor daemon.
[[email protected] ~]#
We can see that, once again, there is only one monitor in the cluster:
[[email protected] ~]# ceph -s
cluster:
id: 677b62e9-8834-407e-b73c-3f41e97597a8
health: HEALTH_OK
services:
mon: 1 daemons, quorum ceph-mon (age 4s)
mgr: ceph-mon2(active, since 6m), standbys: ceph-mon
osd: 3 osds: 3 up (since 6h), 3 in (since 2d)
rgw: 1 daemon active (ceph-mon)
data:
pools: 7 pools, 148 pgs
objects: 226 objects, 2.0 KiB
usage: 3.0 GiB used, 6.0 GiB / 9 GiB avail
pgs: 148 active+clean
[[email protected] ~]#
[[email protected] ~]# ceph mon stat
e3: 1 mons at {ceph-mon=[v2:10.0.3.247:3300/0,v1:10.0.3.247:6789/0]}, election epoch 61, leader 0 ceph-mon, quorum 0 ceph-mon
[[email protected] ~]#
Installing Ceph with cephadm
Prerequisites

We will therefore follow the recommendations and install a new Ceph cluster with cephadm. To do so, we will follow the official guide: https://docs.ceph.com/en/pacific/cephadm/install/#requirements
- Configuring the /etc/hosts file
# Monitor
10.0.1.182 cephmon3
# OSDs
10.0.1.64 cephosd1
10.0.1.204 cephosd2
10.0.1.182 cephosd3
For this example we will configure only one monitor, although the recommendation is three.
- We install the required packages on the monitor node:
dnf install -y python3 podman lvm2
podman is not needed on the OSD nodes.
- We configure the cephadm repository on the monitor node
Note that we are going to install the «pacific» release: https://docs.ceph.com/en/latest/install/get-packages/
[[email protected] ~]# curl --silent --remote-name --location https://github.com/ceph/ceph/raw/pacific/src/cephadm/cephadm
[[email protected] ~]# chmod u+x cephadm
[[email protected] ~]# ./cephadm add-repo --release pacific
Writing repo to /etc/yum.repos.d/ceph.repo...
Enabling EPEL...
Completed adding repo.
[[email protected] ~]#
[[email protected] ~]# dnf -y install cephadm
Configuring the monitor service
[[email protected] ~]# cephadm bootstrap --mon-ip 10.0.1.182
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
podman (/usr/bin/podman) version 3.3.1 is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: 0e698ff2-5a9a-11ec-a3e0-06ccb6123643
Verifying IP 10.0.1.182 port 3300 ...
Verifying IP 10.0.1.182 port 6789 ...
Mon IP `10.0.1.182` is in CIDR network `10.0.1.0/24`
- internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Pulling container image quay.io/ceph/ceph:v16...
Ceph version: ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable)
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network to 10.0.1.0/24
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Creating mgr...
Verifying port 9283 ...
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr not available, waiting (3/15)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for mgr epoch 5...
mgr epoch 5 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to /etc/ceph/ceph.pub
Adding key to [email protected] authorized_keys...
Adding host cephmon3...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for mgr epoch 9...
mgr epoch 9 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
Ceph Dashboard is now available at:
URL: https://cephmon3:8443/
User: admin
Password: cm3t6xcigr
Enabling client.admin keyring and conf on hosts with "admin" label
You can access the Ceph CLI with:
sudo /usr/sbin/cephadm shell --fsid 0e698ff2-5a9a-11ec-a3e0-06ccb6123643 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
Please consider enabling telemetry to help improve Ceph:
ceph telemetry on
For more information see:
https://docs.ceph.com/docs/pacific/mgr/telemetry/
Bootstrap complete.
[[email protected] ~]#
- Check that all the containers have started correctly:
[[email protected] ~]# podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5127e9c46aa6 quay.io/ceph/ceph-grafana:6.7.4 /bin/bash About an hour ago Up About an hour ago ceph-9ebd1a74-5a9a-11ec-97be-064744202561-grafana-cephmon3
58d95c5dcd51 quay.io/ceph/ceph:v16 -n mgr.cephmon3.v... About an hour ago Up About an hour ago ceph-9ebd1a74-5a9a-11ec-97be-064744202561-mgr-cephmon3-vfgvhw
2d0db9042b01 quay.io/prometheus/node-exporter:v0.18.1 --no-collector.ti... About an hour ago Up About an hour ago ceph-9ebd1a74-5a9a-11ec-97be-064744202561-node-exporter-cephmon3
1bcf12fa7338 quay.io/ceph/ceph@sha256:2f7f0af8663e73a422f797de605e769ae44eb0297f2a79324739404cc1765728 -n client.crash.c... About an hour ago Up About an hour ago ceph-9ebd1a74-5a9a-11ec-97be-064744202561-crash-cephmon3
0bdda17a5841 quay.io/ceph/ceph:v16 -n mon.cephmon3 -... About an hour ago Up About an hour ago ceph-9ebd1a74-5a9a-11ec-97be-064744202561-mon-cephmon3
ee88eb81d9b0 quay.io/prometheus/alertmanager:v0.20.0 --cluster.listen-... 39 minutes ago Up 39 minutes ago ceph-9ebd1a74-5a9a-11ec-97be-064744202561-alertmanager-cephmon3
ceb809e0f901 quay.io/prometheus/prometheus:v2.18.1 --config.file=/et... 39 minutes ago Up 39 minutes ago ceph-9ebd1a74-5a9a-11ec-97be-064744202561-prometheus-cephmon3
[[email protected] ~]#
- Install «ceph-common», which provides the ceph, rbd and mount.ceph commands among others, on every monitor node:
[[email protected] ~]# cephadm install ceph-common
Installing packages ['ceph-common']...
[[email protected] ~]#
For «ceph -s» and the rest of the Ceph commands to work on any monitor node, we have to copy the admin keyring and the Ceph configuration file:
[[email protected] ~]# scp -p /etc/ceph/ceph.conf cephmon4:/etc/ceph
ceph.conf 100% 345 238.8KB/s 00:00
[[email protected] ~]# scp -p /etc/ceph/ceph.client.admin.keyring cephmon4:/etc/ceph
ceph.client.admin.keyring 100% 151 104.4KB/s 00:00
[[email protected] ~]
If we skip this step, any Ceph command run on a monitor node where these files have not been copied will fail with the following error:
[[email protected] ~]# ceph -s
Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',)
[[email protected] ~]#
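If we prefer not to copy these files by hand, cephadm can distribute them for us: the bootstrap output above mentions an admin host label, and labelling the node should have the same effect (a minimal sketch; it assumes the label is called «_admin», as in recent Pacific releases, and that the host has already been added with «ceph orch host add»):
# cephadm will then place ceph.conf and the client.admin keyring on this host
ceph orch host label add cephmon4 _admin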
Configuring the OSD service
- Add the OSD hosts (see the note about the cluster SSH key after the output below):
[[email protected] ~]# ceph orch host add cephosd1 10.0.1.64
Added host 'cephosd1' with addr '10.0.1.64'
[[email protected] ~]# ceph orch host add cephosd2 10.0.1.204
Added host 'cephosd2' with addr '10.0.1.204'
[[email protected] ~]# ceph orch host add cephosd3 10.0.1.182
Added host 'cephosd3' with addr '10.0.1.182'
[[email protected] ~]#
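Note that cephadm manages hosts over SSH using the key generated during bootstrap, so the cluster public key has to be present on each new host before «ceph orch host add» will work. If it is not there yet, it can be copied like this (a sketch based on the official guide; adjust the host names to your environment):
ssh-copy-id -f -i /etc/ceph/ceph.pub root@cephosd1
ssh-copy-id -f -i /etc/ceph/ceph.pub root@cephosd2
ssh-copy-id -f -i /etc/ceph/ceph.pub root@cephosd3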
- Add the storage disks of the OSD nodes to the cluster:
[[email protected] ~]# ceph orch daemon add osd cephosd1:/dev/xvdb
Created osd(s) 0 on host 'cephosd1'
[[email protected] ~]# ceph orch daemon add osd cephosd2:/dev/xvdb
Created osd(s) 1 on host 'cephosd2'
[[email protected] ~]# ceph orch daemon add osd cephosd3:/dev/xvdb
Created osd(s) 0 on host 'cephosd3'
[[email protected] ~]#
Configuring the MGR service
- Deploy a standby MGR (Manager) service:
[[email protected] ~]# ceph orch daemon add mgr --placement=cephmon4
Deployed mgr.cephmon4.lxghai on host 'cephmon4'
[[email protected] ~]#
[[email protected] ~]# ceph -s
cluster:
id: 9ebd1a74-5a9a-11ec-97be-064744202561
health: HEALTH_WARN
1 stray daemon(s) not managed by cephadm
services:
mon: 5 daemons, quorum cephmon3,cephosd3,cephmon4,cephosd1,cephosd2 (age 2h)
mgr: cephmon3.vfgvhw(active, since 2h), standbys: cephmon4.lxghai
osd: 3 osds: 3 up (since 2h), 3 in (since 10d)
rgw: 12 daemons active (4 hosts, 1 zones)
data:
pools: 6 pools, 168 pgs
objects: 234 objects, 41 MiB
usage: 300 MiB used, 15 GiB / 15 GiB avail
pgs: 168 active+clean
[[email protected] ~]#
Connecting to the Ceph cluster administration console
- Finally, we connect to the dashboard (IP and port of the monitor node) and confirm graphically that the cluster is healthy:


Configuring the RADOS Gateway service
[[email protected] ~]# ceph orch host label add cephosd1 rgw
Added label rgw to host cephosd1
[[email protected] ~]# ceph orch host label add cephosd2 rgw
Added label rgw to host cephosd2
[[email protected] ~]# ceph orch host label add cephosd3 rgw
Added label rgw to host cephosd3
[[email protected] ~]#
[[email protected] ~]# ceph orch apply rgw rgwsvc '--placement=label:rgw count-per-host:3' --port=8000
Scheduled rgw.rgwsvc update...
[[email protected] ~]#
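Before looking at the overall cluster status, we can ask the orchestrator which RGW daemons have been scheduled (a quick check; «rgwsvc» is the service name created above):
ceph orch ls rgw
ceph orch ps | grep rgw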
[[email protected] ~]# ceph -s
cluster:
id: 9ebd1a74-5a9a-11ec-97be-064744202561
health: HEALTH_WARN
1 failed cephadm daemon(s)
services:
mon: 5 daemons, quorum cephmon3,cephosd3,cephmon4,cephosd1,cephosd2 (age 9m)
mgr: cephmon3.vfgvhw(active, since 10m), standbys: cephosd3.rdpusq
osd: 3 osds: 3 up (since 9m), 3 in (since 10d)
rgw: 9 daemons active (3 hosts, 1 zones)
data:
pools: 6 pools, 191 pgs
objects: 163 objects, 41 MiB
usage: 176 MiB used, 15 GiB / 15 GiB avail
pgs: 191 active+clean
io:
client: 286 KiB/s rd, 0 B/s wr, 292 op/s rd, 138 op/s wr
progress:
[[email protected] ~]#
Configuring a new monitor with cephadm
If we want to add a new monitor service on a brand-new node, the steps to follow are:
- Install the required packages on the new node:
[[email protected] ~]# dnf install -y python3 podman lvm2
[[email protected] ~]# curl --silent --remote-name --location https://github.com/ceph/ceph/raw/pacific/src/cephadm/cephadm
[[email protected] ~]# chmod u+x cephadm
[[email protected] ~]# ./cephadm add-repo --release pacific
[[email protected] ~]# dnf -y install cephadm
[[email protected] ~]# cephadm install ceph-common
- Add the new node to the /etc/hosts file
- Set up the SSH trust relationship with the new node:
[[email protected] .ssh]# ssh cephmon4 id
uid=0(root) gid=0(root) groups=0(root) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
[[email protected] .ssh]#
[[email protected] .ssh]# ssh cephmon3 id
uid=0(root) gid=0(root) groups=0(root)
[[email protected] .ssh]#
- From the existing monitor node, add the new host:
[[email protected] ~]# ceph orch host add cephmon4 10.0.1.235
Added host 'cephmon4' with addr '10.0.1.235'
[[email protected] ~]#
At this point the monitor service should be deployed automatically; if it is not, we can also do it manually:
[[email protected] ~]# /usr/sbin/cephadm shell
Inferring fsid 9ebd1a74-5a9a-11ec-97be-064744202561
Using recent ceph image quay.io/ceph/ceph@sha256:bb6a71f7f481985f6d3b358e3b9ef64c6755b3db5aa53198e0aac38be5c8ae54
[ceph: [email protected] /]# ceph orch daemon add mon cephmon4:10.0.1.235
Error EINVAL: name mon.cephmon4 already in use
[ceph: [email protected] /]#
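Alternatively, instead of adding monitor daemons one by one, the orchestrator placement for the mon service can be declared explicitly (a sketch; the host names and count are only an example):
# deploy monitors on an explicit list of hosts
ceph orch apply mon --placement="cephmon3,cephmon4"
# or simply ask for a given number of monitors
ceph orch apply mon 3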
Storage in Ceph
Replacing a failed OSD disk
We can see that the OSD service is down because a disk has been lost (osd.0 appears in «down» state):
[[email protected] ~]# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.00870 root default
-3 0.00290 host ceph-osd1
0 ssd 0.00290 osd.0 down 1.00000 1.00000
-5 0.00290 host ceph-osd2
1 ssd 0.00290 osd.1 up 1.00000 1.00000
-7 0.00290 host ceph-osd3
2 ssd 0.00290 osd.2 up 1.00000 1.00000
[[email protected] ~]#
We remove the affected disk:
[[email protected] ~]# ceph osd unset noout
noout is unset
[[email protected] ~]#
[[email protected] ~]# ceph osd crush reweight osd.0 0
reweighted item id 0 name 'osd.0' to 0 in crush map
[[email protected] ~]#
[[email protected] ~]# ceph osd out osd.0 0
osd.0 is already out. osd.0 is already out.
[[email protected] ~]#
[[email protected] ~]# ceph osd crush remove osd.0
removed item id 0 name 'osd.0' from crush map
[[email protected] ~]#
[[email protected] ~]# ceph auth del osd.0
updated
[[email protected] ~]#
[[email protected] ~]# ceph osd rm osd.0
removed osd.0
[[email protected] ~]#
[[email protected] ~]# systemctl stop [email protected]
[[email protected] ~]# systemctl disable [email protected]
[[email protected] ~]# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.00580 root default
-3 0 host ceph-osd1
-5 0.00290 host ceph-osd2
1 ssd 0.00290 osd.1 up 1.00000 1.00000
-7 0.00290 host ceph-osd3
2 ssd 0.00290 osd.2 up 1.00000 1.00000
[[email protected] ~]#
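On recent releases, the three steps above (crush remove, auth del and osd rm) can be combined into a single purge command, as long as the OSD is already down and out (a sketch):
# equivalent to crush remove + auth del + osd rm for osd.0
ceph osd purge osd.0 --yes-i-really-mean-it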
We add the new disk:
[[email protected] ceph]# ceph-deploy --overwrite-conf osd create ceph-osd1 --data /dev/xvdf
[[email protected] ceph]# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.00870 root default
-3 0.00290 host ceph-osd1
0 ssd 0.00290 osd.0 up 1.00000 1.00000
-5 0.00290 host ceph-osd2
1 ssd 0.00290 osd.1 up 1.00000 1.00000
-7 0.00290 host ceph-osd3
2 ssd 0.00290 osd.2 up 1.00000 1.00000
[[email protected] ceph]#
Growing an OSD volume after expanding a LUN
For this test we first identify the disk we want to grow:
[[email protected] ~]# ceph orch device ls
HOST PATH TYPE DEVICE ID SIZE AVAILABLE REJECT REASONS
cephosd1 /dev/xvdf ssd 10.7G Insufficient space (<10 extents) on vgs, LVM detected, locked
cephosd2 /dev/xvdf ssd 10.7G Insufficient space (<10 extents) on vgs, LVM detected, locked
cephosd3 /dev/xvdf ssd 5368M Insufficient space (<10 extents) on vgs, LVM detected, locked
[[email protected] ~]#
In this case we will pick the /dev/xvdf disk on the cephosd3 server, which currently has 5 GB, and grow it to 10 GB.
If we install the «ceph-osd» package we get the ceph-volume utility, which lets us obtain the details of the disk.
[[email protected] ~]# ceph-volume lvm list
====== osd.1 =======
[block] /dev/ceph-70c97694-5801-4d5f-a513-5f3d1d664c14/osd-block-9f25301b-ff73-4a14-86d7-e5a270649d3e
block device /dev/ceph-70c97694-5801-4d5f-a513-5f3d1d664c14/osd-block-9f25301b-ff73-4a14-86d7-e5a270649d3e
block uuid aekzyw-tshc-Mpev-xEzY-pZwi-aH3t-dHsdsl
cephx lockbox secret
cluster fsid 9ebd1a74-5a9a-11ec-97be-064744202561
cluster name ceph
crush device class None
encrypted 0
osd fsid 9f25301b-ff73-4a14-86d7-e5a270649d3e
osd id 1
osdspec affinity all-available-devices
type block
vdo 0
devices /dev/xvdf
[[email protected] ~]#
Now we know that the xvdf disk belongs to OSD 1. This matters because we will have to restart that daemon for the new size to take effect.
The next step is to grow the disk:
[[email protected] ~]# fdisk -l |grep xvdf
Disk /dev/xvdf: 5 GiB, 5368709120 bytes, 10485760 sectors
[[email protected] ~]# rescan-scsi-bus.sh
which: no multipath in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin)
Scanning SCSI subsystem for new devices
Scanning host 0 for SCSI target IDs 0 1 2 3 4 5 6 7, all LUNs
Scanning host 1 for SCSI target IDs 0 1 2 3 4 5 6 7, all LUNs
0 new or changed device(s) found.
0 remapped or resized device(s) found.
0 device(s) removed.
[[email protected] ~]# fdisk -l |grep xvdf
Disk /dev/xvdf: 10 GiB, 10737418240 bytes, 20971520 sectors
[[email protected] ~]#
Since this disk is part of an LVM layout, the PV and the LV also have to be grown (the LV is extended here; see the note on pvresize below):
[[email protected] ~]# lvextend -n /dev/ceph-70c97694-5801-4d5f-a513-5f3d1d664c14/osd-block-9f25301b-ff73-4a14-86d7-e5a270649d3e -l+100%FREE
Size of logical volume ceph-70c97694-5801-4d5f-a513-5f3d1d664c14/osd-block-9f25301b-ff73-4a14-86d7-e5a270649d3e changed from <5.00 GiB (1279 extents) to <10.00 GiB (2559 extents).
Logical volume ceph-70c97694-5801-4d5f-a513-5f3d1d664c14/osd-block-9f25301b-ff73-4a14-86d7-e5a270649d3e successfully resized.
[[email protected] ~]#
The extended logical volume corresponds exactly to the «block device» field reported by «ceph-volume lvm list».
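The transcript above goes straight to lvextend; depending on the environment, the physical volume may also need to be resized first so that the volume group actually sees the new space (a sketch, assuming /dev/xvdf is the PV backing the OSD volume group):
pvresize /dev/xvdf
# check that the PV and VG now report the new size
pvs
vgs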
Finally, we restart the osd.1 service identified earlier:
[[email protected] ~]# ceph df |head -5
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
ssd 25 GiB 15 GiB 10 GiB 10 GiB 41.05
TOTAL 25 GiB 15 GiB 10 GiB 10 GiB 41.05
[[email protected] ~]# ceph orch daemon restart osd.1
Scheduled to restart osd.1 on host 'cephosd3'
[[email protected] ~]# ceph df |head -5
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
ssd 30 GiB 15 GiB 15 GiB 15 GiB 50.84
TOTAL 30 GiB 15 GiB 15 GiB 15 GiB 50.84
[[email protected] ~]#
As we can see, the total raw size has grown from 25 GB to 30 GB.
Checking which disks belong to each OSD
At the time of writing this section the latest Ceph release is «pacific» and, as the official installation page shows, ceph-deploy is deprecated in favour of cephadm. To see which devices back each OSD we use the ceph-volume utility again:
[[email protected] ~]# ceph-volume lvm list
====== osd.0 =======
[block] /dev/ceph-dcbd800c-4ed0-4ce5-9768-a79f637a949a/osd-block-2dc06989-022c-4c16-94b9-fe31c940b50a
block device /dev/ceph-dcbd800c-4ed0-4ce5-9768-a79f637a949a/osd-block-2dc06989-022c-4c16-94b9-fe31c940b50a
block uuid OA7LXG-zpOg-kzKS-tcsh-VZE3-1iUl-dR1DZb
cephx lockbox secret
cluster fsid 244f4818-95f7-42fc-ae33-6c45adb4521f
cluster name ceph
crush device class None
encrypted 0
osd fsid 2dc06989-022c-4c16-94b9-fe31c940b50a
osd id 0
type block
vdo 0
devices /dev/xvdf
[[email protected] ~]#
Block storage (RBD)
Creating a pool for data storage
We create the pool that the filesystem will use:
[[email protected] ~]# ceph osd pool create pool-test 10 10
pool 'pool-test' created
[[email protected] ~]# ceph osd pool application enable pool-test rbd
enabled application 'rbd' on pool 'pool-test'
[[email protected] ~]#
The two 10s we see refer to the PG and PGP parameters (placement groups). Placement groups are how Ceph distributes the data across the different OSD disks.
Placement Groups: You can set the number of placement groups for the pool. A typical configuration targets approximately 100 placement groups per OSD to provide optimal balancing without using up too many computing resources. When setting up multiple pools, be careful to set a reasonable number of placement groups for each pool and for the cluster as a whole. Note that each PG belongs to a specific pool, so when multiple pools use the same OSDs, you must take care that the sum of PG replicas per OSD is in the desired PG per OSD target range.
https://docs.ceph.com/en/latest/rados/operations/pools/
If we wish, we can manually change the number of placement groups that was configured initially:
ceph osd pool set {pool-name} pg_num {pg_num}
However, it is convenient to enable autoscaling when the pool is created (see the example below):
ceph osd pool set {pool-name} pg_autoscale_mode on
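For example, for the pool created above it would look like this (a sketch; the autoscale-status command is available from Nautilus onwards):
ceph osd pool set pool-test pg_autoscale_mode on
# show the current and suggested PG counts per pool
ceph osd pool autoscale-status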
CRUSH MAP: the CRUSH map (Controlled Replication Under Scalable Hashing) is the algorithm in charge of placing data into the different placement groups, assigning each of them to a different OSD daemon.
CRUSH is able to rebalance the data across the active OSDs.
[[email protected] ~]# ceph osd crush tree
ID CLASS WEIGHT TYPE NAME
-1 0.01469 root default
-3 0.00490 host cephosd1
0 ssd 0.00490 osd.0
-5 0.00490 host cephosd2
2 ssd 0.00490 osd.2
-7 0.00490 host cephosd3
1 ssd 0.00490 osd.1
[[email protected] ~]#
Assigning a disk
We create the disk (RBD image) that will be used for block storage (1 MB):
[[email protected] ~]# rbd create disk01 --size 1 -p pool-test
[[email protected] ~]#
[[email protected] ~]# rbd ls -l -p pool-test
NAME SIZE PARENT FMT PROT LOCK
disk01 1 MiB 2
[[email protected] ~]#
[[email protected] ~]# rbd map disk01 -p pool-test
rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable pool-test/disk01 object-map fast-diff deep-flatten".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address
[[email protected] ~]# modprobe rbd
[[email protected] ~]# rbd feature disable pool-test/disk01 object-map fast-diff deep-flatten
[[email protected] ~]# rbd map disk01 -p pool-test
/dev/rbd0
[[email protected] ~]#
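To avoid having to disable features after the fact, the image can be created from the start with only the features the kernel client supports (a sketch; «disk03» is just an example name):
rbd create disk03 --size 100 -p pool-test --image-feature layering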
Creating a filesystem on the presented Ceph disk
We format the filesystem and mount it:
[[email protected] ~]# mkfs.ext4 /dev/rbd0
mke2fs 1.42.9 (28-Dec-2013)
Filesystem too small for a journal
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=4096 blocks, Stripe width=4096 blocks
128 inodes, 1024 blocks
51 blocks (4.98%) reserved for the super user
First data block=1
Maximum filesystem blocks=1048576
1 block group
8192 blocks per group, 8192 fragments per group
128 inodes per group
Allocating group tables: done
Writing inode tables: done
Writing superblocks and filesystem accounting information: done
[[email protected] ~]# mount /dev/rbd0 /rbd-test/
[[email protected] ~]# df -hP /rbd-test/
Filesystem Size Used Avail Use% Mounted on
/dev/rbd0 1003K 21K 911K 3% /rbd-test
[[email protected] ~]#
[[email protected] rbd-test]# echo david > david.txt
[[email protected] rbd-test]# ll
total 13
-rw-r--r-- 1 root root 6 Feb 18 10:03 david.txt
drwx------ 2 root root 12288 Feb 18 10:02 lost+found
[[email protected] rbd-test]#
Growing an RBD filesystem
Next we will grow the previous filesystem to 500 MB with rbd resize:
[[email protected] ~]# rbd ls -l -p pool-test
NAME SIZE PARENT FMT PROT LOCK
disk01 1 MiB 2 excl
disk02 1 GiB 2 excl
[[email protected] ~]#
[[email protected] ~]# rbd map disk01 -p pool-test
/dev/rbd0
[[email protected] ~]# mount /dev/rbd0 /mnt/mycephfs/
[[email protected] ~]# df -hP /mnt/mycephfs/
Filesystem Size Used Avail Use% Mounted on
/dev/rbd0 1003K 23K 909K 3% /mnt/mycephfs
[[email protected] ~]#
[[email protected] ~]# rbd resize --image disk01 --size 500M -p pool-test
Resizing image: 100% complete...done.
[[email protected] ~]#
[[email protected] ~]# resize2fs /dev/rbd0
resize2fs 1.42.9 (28-Dec-2013)
Filesystem at /dev/rbd0 is mounted on /mnt/mycephfs; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 4
The filesystem on /dev/rbd0 is now 512000 blocks long.
[[email protected] ~]# df -hP /mnt/mycephfs/
Filesystem Size Used Avail Use% Mounted on
/dev/rbd0 499M 52K 499M 1% /mnt/mycephfs
[[email protected] ~]#
Shrinking an RBD filesystem
We will shrink the previous 500 MB filesystem down to 200 MB:
[[email protected] ~]# e2fsck -f /dev/rbd0
e2fsck 1.42.9 (28-Dec-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/rbd0: 13/8064 files (0.0% non-contiguous), 1232/512000 blocks
[[email protected] ~]# resize2fs -M /dev/rbd0
resize2fs 1.42.9 (28-Dec-2013)
Resizing the filesystem on /dev/rbd0 to 1308 (1k) blocks.
The filesystem on /dev/rbd0 is now 1308 blocks long.
[[email protected] ~]#
[[email protected] ~]# rbd resize --size 200M disk01 --allow-shrink -p pool-test
Resizing image: 100% complete...done.
[[email protected] ~]#
[[email protected] ~]# resize2fs /dev/rbd0
resize2fs 1.42.9 (28-Dec-2013)
Resizing the filesystem on /dev/rbd0 to 204800 (1k) blocks.
The filesystem on /dev/rbd0 is now 204800 blocks long.
[[email protected] ~]#
[[email protected] ~]# mount /dev/rbd0 /mnt/mycephfs/
[[email protected] ~]# df -hP /mnt/mycephfs/
Filesystem Size Used Avail Use% Mounted on
/dev/rbd0 200M 52K 196M 1% /mnt/mycephfs
[[email protected] ~]#
Renaming an RBD image
If we want to customise the name of the RBD images, for example to relate them to a service or a filesystem, we can do it as follows:
[[email protected] ~]# rbd ls -l -p pool-test
NAME SIZE PARENT FMT PROT LOCK
disk01 200 MiB 2 excl
disk02 1 GiB 2 excl
[[email protected] ~]# rbd mv disk01 rbd_david -p pool-test
[[email protected] ~]# rbd ls -l -p pool-test
NAME SIZE PARENT FMT PROT LOCK
disk02 1 GiB 2 excl
rbd_david 200 MiB 2 excl
[[email protected] ~]#
Configuring the fstab file to mount RBD filesystems at system boot
- Configure the /etc/ceph/rbdmap file with the RBD image we want to map:
[[email protected] ~]# tail -1 /etc/ceph/rbdmap
pool-test/rbd_david
[[email protected] ~]#
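The entry above relies on the default admin credentials; the rbdmap file also accepts the user and keyring explicitly, which is useful when a dedicated client user is used (a sketch of the documented format):
pool-test/rbd_david id=admin,keyring=/etc/ceph/ceph.client.admin.keyring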
- Enable the rbdmap service at system boot:
[[email protected] ~]# systemctl enable rbdmap
Created symlink from /etc/systemd/system/multi-user.target.wants/rbdmap.service to /usr/lib/systemd/system/rbdmap.service.
[[email protected] ~]# systemctl start rbdmap
[[email protected] ~]# systemctl status rbdmap
● rbdmap.service - Map RBD devices
Loaded: loaded (/usr/lib/systemd/system/rbdmap.service; enabled; vendor preset: disabled)
Active: active (exited) since Thu 2020-02-20 08:18:01 UTC; 3s ago
Process: 2607 ExecStart=/usr/bin/rbdmap map (code=exited, status=0/SUCCESS)
Main PID: 2607 (code=exited, status=0/SUCCESS)
Feb 20 08:18:01 ceph-mon systemd[1]: Starting Map RBD devices...
Feb 20 08:18:01 ceph-mon systemd[1]: Started Map RBD devices.
[[email protected] ~]#
- We can see that the RBD device has been created automatically in the operating system:
[[email protected] ~]# ll /dev/rbd/pool-test/
total 0
lrwxrwxrwx 1 root root 10 Feb 20 08:28 rbd_david -> ../../rbd0
[[email protected] ~]#
- Configure the /etc/fstab file as usual and mount the filesystem:
[[email protected] ~]# tail -1 /etc/fstab
/dev/rbd/pool-test/rbd_david /mnt/mycephfs ext4 defaults 0 0
[[email protected] ~]#
[[email protected] ~]# mount /mnt/mycephfs/
[[email protected] ~]# df -hP /mnt/mycephfs/
Filesystem Size Used Avail Use% Mounted on
/dev/rbd0 200M 52K 196M 1% /mnt/mycephfs
[[email protected] ~]#
Ceph Filesystem (CephFS) storage
Enabling the MDS service
We had not enabled it yet, and it is a requirement for storing data with CephFS.
We deploy and enable the MDS service on several nodes for distributed storage. In this case we will use the same nodes where we had deployed the OSD service:
[[email protected] ~]# cd /etc/ceph
[[email protected] ceph]# ceph-deploy mds create ceph-osd1:ceph-mds1 ceph-osd2:ceph-mds2 ceph-osd3:ceph-mds3
[[email protected] ceph]# ceph -s
cluster:
id: 6154923d-93fc-48a6-860d-612c71576d38
health: HEALTH_WARN
application not enabled on 1 pool(s)
services:
mon: 1 daemons, quorum ceph-mon
mgr: ceph-mon(active)
mds: cephfs-1/1/1 up {0=ceph-mds2=up:active}, 1 up:standby
osd: 3 osds: 3 up, 3 in
rgw: 1 daemon active
data:
pools: 7 pools, 62 pgs
objects: 194 objects, 1.0 MiB
usage: 3.0 GiB used, 6.0 GiB / 9 GiB avail
pgs: 62 active+clean
[[email protected] ceph]#
We start the MDS service on each of the nodes where it has been installed:
[[email protected] ~]# systemctl enable [email protected]
[[email protected] ~]# systemctl start [email protected]
[[email protected] ~]# systemctl status [email protected]
● [email protected] - Ceph metadata server daemon
Loaded: loaded (/usr/lib/systemd/system/[email protected]; enabled; vendor preset: disabled)
Active: active (running) since Sun 2020-02-23 17:47:40 UTC; 6min ago
Main PID: 17162 (ceph-mds)
CGroup: /system.slice/system-ceph\x2dmds.slice/[email protected]
└─17162 /usr/bin/ceph-mds -f --cluster ceph --id ceph-mds1 --setuser ceph --setgroup ceph
Feb 23 17:52:57 ceph-osd1 systemd[1]: [/usr/lib/systemd/system/[email protected]:14] Unknown lvalue 'LockPersonality' in section 'Service'
Feb 23 17:52:57 ceph-osd1 systemd[1]: [/usr/lib/systemd/system/[email protected]:15] Unknown lvalue 'MemoryDenyWriteExecute' in section 'Service'
Feb 23 17:52:57 ceph-osd1 systemd[1]: [/usr/lib/systemd/system/[email protected]:18] Unknown lvalue 'ProtectControlGroups' in section 'Service'
Feb 23 17:52:57 ceph-osd1 systemd[1]: [/usr/lib/systemd/system/[email protected]:20] Unknown lvalue 'ProtectKernelModules' in section 'Service'
Feb 23 17:52:57 ceph-osd1 systemd[1]: [/usr/lib/systemd/system/[email protected]:21] Unknown lvalue 'ProtectKernelTunables' in section 'Service'
Feb 23 17:54:14 ceph-osd1 systemd[1]: [/usr/lib/systemd/system/[email protected]:14] Unknown lvalue 'LockPersonality' in section 'Service'
Feb 23 17:54:14 ceph-osd1 systemd[1]: [/usr/lib/systemd/system/[email protected]:15] Unknown lvalue 'MemoryDenyWriteExecute' in section 'Service'
Feb 23 17:54:14 ceph-osd1 systemd[1]: [/usr/lib/systemd/system/[email protected]:18] Unknown lvalue 'ProtectControlGroups' in section 'Service'
Feb 23 17:54:14 ceph-osd1 systemd[1]: [/usr/lib/systemd/system/[email protected]:20] Unknown lvalue 'ProtectKernelModules' in section 'Service'
Feb 23 17:54:14 ceph-osd1 systemd[1]: [/usr/lib/systemd/system/[email protected]:21] Unknown lvalue 'ProtectKernelTunables' in section 'Service'
[[email protected] ~]#
[[email protected] ~]# systemctl enable [email protected]
[[email protected] ~]# systemctl start [email protected]
[[email protected] ~]# systemctl enable [email protected]
[[email protected] ~]# systemctl start [email protected]
[[email protected] ~]#
We should look at the following line to know whether the service is up:
mds: cephfs-1/1/1 up {0=ceph-mds2=up:active}, 1 up:standby
Creating a data storage pool
We create the pools to store the data and the metadata:
[[email protected] ~]# ceph osd pool create cephfs_data 10 10
pool 'cephfs_data' created
[[email protected] ~]#
[[email protected] ~]# ceph osd pool create cephfs_metadata 10 10
pool 'cephfs_metadata' created
[[email protected] ~]#
Creating the Ceph filesystem (CephFS)
- Create the Ceph filesystem:
[[email protected] ~]# ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 7 and data pool 6
[[email protected] ~]#
[[email protected] ~]# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
[[email protected] ~]# ceph mds stat
cephfs-0/0/1 up
[[email protected] ~]#
- Copy the authentication key so that we can mount the filesystems:
[[email protected] ~]# ssh ceph-osd1 'sudo ceph-authtool -p /etc/ceph/ceph.client.admin.keyring' > ceph.key
[[email protected] ~]# chmod 600 ceph.key
[[email protected] ~]#
[[email protected] ~]# cat /etc/ceph/ceph.client.admin.keyring
[client.admin]
key = AQAjoUteTSDRIhAAO5zStGzdqlZgaTWI2eQy0Q==
caps mds = "allow *"
caps mgr = "allow *"
caps mon = "allow *"
caps osd = "allow *"
[[email protected] ~]#
[[email protected] ~]# cat ceph.key
AQAjoUteTSDRIhAAO5zStGzdqlZgaTWI2eQy0Q==
[[email protected] ~]#
[[email protected] ~]# ceph auth get client.admin
exported keyring for client.admin
[client.admin]
key = AQAjoUteTSDRIhAAO5zStGzdqlZgaTWI2eQy0Q==
caps mds = "allow *"
caps mgr = "allow *"
caps mon = "allow *"
caps osd = "allow *"
[[email protected] ~]#
- Mount the filesystem pointing at the server where the cluster monitor service (mon) is running:
[[email protected] ~]# mount -t ceph ceph-mon:6789:/ /mnt/mycephfs -o name=admin,secretfile=ceph.key -vvv
parsing options: rw,name=admin,secretfile=ceph.key
[[email protected] ~]#
[[email protected] ~]# df -hP |grep myceph
10.0.3.154:6789:/ 1.9G 0 1.9G 0% /mnt/mycephfs
[[email protected] ~]#
If the mount point were served by several Ceph monitors, the syntax would be the following (a permanent fstab entry is sketched right after it):
mount -t ceph <monitor1-host-name>:6789,<monitor2-host-name>:6789,<monitor3-host-name>:6789:/ <mount-point> -o name=<user-name>,secretfile=<path>
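To make this CephFS mount survive reboots, an fstab entry can be used as well (a sketch; the secretfile path assumes the ceph.key file created earlier was saved as /root/ceph.key):
ceph-mon:6789:/ /mnt/mycephfs ceph name=admin,secretfile=/root/ceph.key,_netdev,noatime 0 0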
- We write some data to the filesystem to test it:
[[email protected] mycephfs]# echo david > david.txt
[[email protected] mycephfs]# ll
total 1
-rw-r--r-- 1 root root 6 Feb 19 08:29 david.txt
[[email protected] mycephfs]#
- Test CephFS from another client node:
Next, we mount the filesystem from another node so that the same CephFS filesystem is mounted in two places at once (like a traditional NFS):
[[email protected] ~]# mkdir /mnt/mycephfs
[[email protected] ~]# mount -t ceph ceph-mon:6789:/ /mnt/mycephfs -o name=admin,secretfile=ceph.key
[[email protected] ~]# df -hP /mnt/mycephfs/
Filesystem Size Used Avail Use% Mounted on
10.0.3.154:6789:/ 1.9G 0 1.9G 0% /mnt/mycephfs
[[email protected] ~]# ll /mnt/mycephfs/
total 1
-rw-r--r-- 1 root root 6 Feb 19 08:29 david.txt
[[email protected] ~]#
[[email protected] ~]# ll /mnt/mycephfs/
total 1
-rw-r--r-- 1 root root 7 Feb 19 08:38 david2.txt
-rw-r--r-- 1 root root 6 Feb 19 08:29 david.txt
[[email protected] ~]#
From both nodes we can see the two files that were created from different nodes pointing at the same Ceph filesystem:
[[email protected] ~]# ll /mnt/mycephfs/
total 1
-rw-r--r-- 1 root root 7 Feb 19 08:38 david2.txt
-rw-r--r-- 1 root root 6 Feb 19 08:29 david.txt
[[email protected] ~]#
Object storage using the S3 API
This configuration also gave me some trouble and I am still investigating it. Here is how far I have got.
Configuring the internal DNS
The S3 API uses DNS name resolution and ignores the /etc/hosts file. To work around this, we will install dnsmasq:
[[email protected] ~]# yum install -y dnsmasq
[[email protected] ~]# systemctl enable dnsmasq
Created symlink from /etc/systemd/system/multi-user.target.wants/dnsmasq.service to /usr/lib/systemd/system/dnsmasq.service.
[[email protected] ~]# systemctl start dnsmasq
[[email protected] ~]# systemctl status dnsmasq
● dnsmasq.service - DNS caching server.
Loaded: loaded (/usr/lib/systemd/system/dnsmasq.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2020-02-18 08:48:34 UTC; 6s ago
Main PID: 16493 (dnsmasq)
CGroup: /system.slice/dnsmasq.service
└─16493 /usr/sbin/dnsmasq -k
Feb 18 08:48:34 ceph-mon systemd[1]: Started DNS caching server..
Feb 18 08:48:34 ceph-mon dnsmasq[16493]: started, version 2.76 cachesize 150
Feb 18 08:48:34 ceph-mon dnsmasq[16493]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-DNSSEC loop-detect inotify
Feb 18 08:48:34 ceph-mon dnsmasq[16493]: reading /etc/resolv.conf
Feb 18 08:48:34 ceph-mon dnsmasq[16493]: using nameserver 10.0.0.2#53
Feb 18 08:48:34 ceph-mon dnsmasq[16493]: read /etc/hosts - 6 addresses
[[email protected] ~]#
[[email protected] ~]# cat /etc/resolv.conf
; generated by /usr/sbin/dhclient-script
#search eu-west-1.compute.internal
nameserver 10.0.3.154
nameserver 10.0.0.2
[[email protected] ~]#
[[email protected] ~]# nslookup rgw.ceph-mon.com
Server: 10.0.3.154
Address: 10.0.3.154#53
Name: rgw.ceph-mon.com
Address: 10.0.1.253
[[email protected] ~]#
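For DNS-style buckets (bucket.ceph-mon.com) it is convenient to resolve any subdomain of the RGW domain; with dnsmasq this can be done with an address rule (a sketch, using the RGW address returned by the lookup above):
# /etc/dnsmasq.conf
address=/ceph-mon.com/10.0.1.253
# reload dnsmasq to apply the change
systemctl restart dnsmasq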
We prevent dhclient from overwriting the /etc/resolv.conf file when the system reboots:
[[email protected] ~]# cat /etc/dhcp/dhclient-enter-hooks
#!/bin/sh
make_resolv_conf(){
:
}
[[email protected] ~]# chmod u+x /etc/dhcp/dhclient-enter-hooks
Installing Apache
The S3 API needs a URL to interact with, so we will install Apache.
[[email protected] ~]# yum install -y httpd mod_ssl openssl fcgi mod_fcgid
[[email protected] ~]# cat /etc/httpd/conf.d/rgw.conf
<VirtualHost *:80>
ServerName rgw.ceph-mon.com
ServerAdmin [email protected]
ServerAlias *.ceph-mon.com
DocumentRoot /var/www/html
RewriteEngine On
RewriteRule ^/(.*) /s3gw.fcgi?%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
<IfModule mod_fastcgi.c>
<Directory /var/www/html>
Options +ExecCGI
AllowOverride All
SetHandler fastcgi-script
Order allow,deny
Allow from all
AuthBasicAuthoritative Off
</Directory>
</IfModule>
AllowEncodedSlashes On
ErrorLog /var/log/httpd/error.log
CustomLog /var/log/httpd/access.log combined
ServerSignature Off
</VirtualHost>
[[email protected] ~]#
/etc/httpd/conf/httpd.conf
<IfModule !proxy_fcgi_module>
LoadModule proxy_fcgi_module modules/mod_proxy_fcgi.so
</IfModule>
[[email protected] ~]# openssl genrsa -out ca.key 2048
Generating RSA private key, 2048 bit long modulus
........................+++
..................................................................................................+++
e is 65537 (0x10001)
[[email protected] ~]# openssl req -new -key ca.key -out ca.csr
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:
State or Province Name (full name) []:
Locality Name (eg, city) [Default City]:
Organization Name (eg, company) [Default Company Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:
Email Address []:
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
[[email protected] ~]#
[[email protected] ~]# openssl x509 -req -days 365 -in ca.csr -signkey ca.key -out ca.crt
Signature ok
subject=/C=XX/L=Default City/O=Default Company Ltd
Getting Private key
[[email protected] ~]# cp ca.crt /etc/pki/tls/certs
[[email protected] ~]# cp ca.key /etc/pki/tls/private/ca.key
[[email protected] ~]# cp ca.csr /etc/pki/tls/private/ca.csr
[[email protected] ~]#
/etc/httpd/conf.d/ssl.conf
SSLCertificateFile /etc/pki/tls/certs/ca.crt
SSLCertificateKeyFile /etc/pki/tls/private/ca.key
Creating a RADOS user that can access the object storage
[[email protected] ~]# radosgw-admin user create --uid="david" --display-name="David"
{
"user_id": "david",
"display_name": "David",
"email": "",
"suspended": 0,
"max_buckets": 1000,
"auid": 0,
"subusers": [],
"keys": [
{
"user": "david",
"access_key": "OXJF3D8RKL84ITQI7OFO",
"secret_key": "ANRy3jLqdQNrC8lxrgJ8K3xCW59fELjKRGi7OIji"
}
],
"swift_keys": [],
"caps": [],
"op_mask": "read, write, delete",
"default_placement": "",
"placement_tags": [],
"bucket_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"user_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"temp_url_keys": [],
"type": "rgw",
"mfa_ids": []
}
[[email protected] ~]#
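If we ever need to retrieve these credentials again, radosgw-admin can show the user details at any time:
radosgw-admin user info --uid=david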
Installing the S3 API client (s3cmd)
[[email protected] ~]# yum install -y python2-pip
[[email protected] ~]# pip install s3cmd
Collecting s3cmd
Downloading https://files.pythonhosted.org/packages/3a/f5/c70bfb80817c9d81b472e077e390d8c97abe130c9e86b61307a1d275532c/s3cmd-2.0.2.tar.gz (124kB)
100% |¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦| 133kB 6.7MB/s
Collecting python-dateutil (from s3cmd)
Downloading https://files.pythonhosted.org/packages/d4/70/d60450c3dd48ef87586924207ae8907090de0b306af2bce5d134d78615cb/python_dateutil-2.8.1-py2.py3-none-any.whl (227kB)
100% |¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦| 235kB 4.4MB/s
Collecting python-magic (from s3cmd)
Downloading https://files.pythonhosted.org/packages/42/a1/76d30c79992e3750dac6790ce16f056f870d368ba142f83f75f694d93001/python_magic-0.4.15-py2.py3-none-any.whl
Requirement already satisfied (use --upgrade to upgrade): six>=1.5 in /usr/lib/python2.7/site-packages (from python-dateutil->s3cmd)
Installing collected packages: python-dateutil, python-magic, s3cmd
Running setup.py install for s3cmd ... done
Successfully installed python-dateutil-2.8.1 python-magic-0.4.15 s3cmd-2.0.2
You are using pip version 8.1.2, however version 20.0.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
[[email protected] ~]#
[[email protected] ~]# s3cmd --configure -c s3test.cfg
Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.
Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: OXJF3D8RKL84ITQI7OFO
Secret Key: ANRy3jLqdQNrC8lxrgJ8K3xCW59fELjKRGi7OIji
Default Region [US]:
Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: rgw.ceph-mon.com
Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: ceph-mon.com
Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/bin/gpg]:
When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]:
On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name:
New settings:
Access Key: OXJF3D8RKL84ITQI7OFO
Secret Key: ANRy3jLqdQNrC8lxrgJ8K3xCW59fELjKRGi7OIji
Default Region: US
S3 Endpoint: rgw.ceph-mon.com
DNS-style bucket+hostname:port template for accessing a bucket: ceph-mon.com
Encryption password:
Path to GPG program: /bin/gpg
Use HTTPS protocol: True
HTTP Proxy server name:
HTTP Proxy server port: 0
Test access with supplied credentials? [Y/n] n
Save settings? [y/N] y
Configuration saved to 's3test.cfg'
[[email protected] ~]#
It is important to change the host names in the configuration file generated above; in fact, this is why we had to install dnsmasq earlier.
[[email protected] ~]# egrep "ceph-mon|https" s3test.cfg
host_base = ceph-mon.com:7480
host_bucket = ceph-mon.com/%(bucket)
signurl_use_https = False
use_https = False
website_endpoint = http://%(bucket)s.ceph-mon.com
[[email protected] ~]#
We create an S3 bucket and upload a file:
[[email protected] ~]# s3cmd -c s3test.cfg mb --no-check-certificate s3://david-bucket/
Bucket 's3://david-bucket/' created
[[email protected] ~]#
[[email protected] ceph]# s3cmd -c s3test.cfg ls
2020-02-25 07:50 s3://david-bucket
[[email protected] ceph]#
[[email protected] ~]# s3cmd -c s3test.cfg put /tmp/david.txt s3://david-bucket/
upload: '/tmp/david.txt' -> 's3://david-bucket/david.txt' [1 of 1]
6 of 6 100% in 1s 3.53 B/s done
[[email protected] ~]#
[[email protected] ~]# s3cmd -c s3test.cfg ls s3://david-bucket/
2020-02-20 09:51 13 s3://david-bucket/david.txt
[[email protected] ~]#
If we want to delete a bucket:
[[email protected] ceph]# s3cmd -c s3test.cfg rb s3://david-bucket/
Bucket 's3://david-bucket/' removed
[[email protected] ceph]#
Downloading a file or object with s3cmd
[[email protected] ~]# s3cmd -c s3test.cfg get s3://david-bucket/david.txt
download: 's3://david-bucket/david.txt' -> './david.txt' [1 of 1]
6 of 6 100% in 0s 141.01 B/s done
[[email protected] ~]# ll david.txt
-rw-r--r-- 1 root root 6 Feb 20 10:05 david.txt
[[email protected] ~]#
Copying an object from a remote bucket to a local bucket
- Download the file from the remote bucket:
[[email protected] ~]# s3cmd -c s3mon11.cfg get s3://david-bucket/*.txt
download: 's3://david-bucket/david.txt' -> './david.txt' [1 of 1]
6 of 6 100% in 0s 137.01 B/s done
[[email protected] ~]#
The s3mon11.cfg file holds the configuration needed to connect to the remote bucket.
- Create the local bucket:
[[email protected] ~]# s3cmd -c s3mon21.cfg mb s3://david-bucket
Bucket 's3://david-bucket/' created
[[email protected] ~]#
The s3mon21.cfg file holds the configuration needed to connect to the local bucket.
- Upload the file to the local bucket:
[[email protected] ~]# s3cmd -c s3mon21.cfg put david.txt s3://david-bucket
upload: 'david.txt' -> 's3://david-bucket/david.txt' [1 of 1]
6 of 6 100% in 1s 3.58 B/s done
[[email protected] ~]#
We can also upload entire directories with the «--recursive» parameter, or synchronise directories or buckets with sync (s3cmd sync local_directory s3://destination_bucket).
Deleting a file or object with s3cmd
[[email protected] ~]# s3cmd -c s3test.cfg del s3://david-bucket/david.txt
delete: 's3://david-bucket/david.txt'
[[email protected] ~]# s3cmd -c s3test.cfg ls s3://david-bucket/david.txt
[[email protected] ~]#
Getting information about an object (metadata)
[[email protected] ~]# s3cmd -c s3test.cfg info s3://david-bucket/david.txt
s3://david-bucket/david.txt (object):
File size: 6
Last mod: Thu, 20 Feb 2020 12:00:22 GMT
MIME type: text/plain
Storage: STANDARD
MD5 sum: e7ad599887b1baf90b830435dac14ba3
SSE: none
Policy: none
CORS: none
ACL: David: FULL_CONTROL
x-amz-meta-s3cmd-attrs: atime:1582193678/ctime:1582193668/gid:0/gname:root/md5:e7ad599887b1baf90b830435dac14ba3/mode:33188/mtime:1582193668/uid:0/uname:root
[[email protected] ~]#
Configuring the expiration of an object
[[email protected] ~]# s3cmd -c s3test.cfg put /tmp/david.txt s3://david-bucket/ --expiry-date=2020/02/21
upload: '/tmp/david.txt' -> 's3://david-bucket/david.txt' [1 of 1]
6 of 6 100% in 0s 113.70 B/s done
[[email protected] ~]#
[[email protected] ~]# s3cmd -c s3test.cfg put /tmp/david.txt s3://david-bucket/ --expiry-days=1
upload: '/tmp/david.txt' -> 's3://david-bucket/david.txt' [1 of 1]
6 of 6 100% in 0s 114.29 B/s done
[[email protected] ~]#
Configuring expiration policies for the whole S3 bucket (lifecycle)
- Create an XML file with the policies we are interested in:
[[email protected] ~]# cat lifecycle.xml
<LifecycleConfiguration>
<Rule>
<ID>delete-error-logs</ID>
<Prefix>error</Prefix>
<Status>Enabled</Status>
<Expiration>
<Days>7</Days>
</Expiration>
</Rule>
<Rule>
<ID>delete-standard-logs</ID>
<Prefix>logs</Prefix>
<Status>Enabled</Status>
<Expiration>
<Days>1</Days>
</Expiration>
</Rule>
</LifecycleConfiguration>
[[email protected] ~]#
- Apply the policies:
[[email protected] ~]# s3cmd -c s3test.cfg setlifecycle lifecycle.xml s3://david-bucket
s3://david-bucket/: Lifecycle Policy updated
[[email protected] ~]#
- Check that they have been updated:
[[email protected] ~]# s3cmd -c s3test.cfg getlifecycle s3://david-bucket
<?xml version="1.0" ?>
<LifecycleConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Rule>
<ID>delete-error-logs</ID>
<Prefix>error</Prefix>
<Status>Enabled</Status>
<Expiration>
<Days>7</Days>
</Expiration>
</Rule>
<Rule>
<ID>delete-standard-logs</ID>
<Prefix>logs</Prefix>
<Status>Enabled</Status>
<Expiration>
<Days>1</Days>
</Expiration>
</Rule>
</LifecycleConfiguration>
[[email protected] ~]#
Sharing a bucket with another user
To allow another user to read or write to a bucket or an object they do not own, we configure ACL permissions. For example:
s3cmd setacl --acl-grant=read:<canonical-user-id> s3://BUCKETNAME[/OBJECT]
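For instance, to grant read access on the bucket used in this guide and then check the resulting ACL (a sketch; «otheruser» stands for the canonical ID of the other RADOS user):
s3cmd -c s3test.cfg setacl --acl-grant=read:otheruser s3://david-bucket
s3cmd -c s3test.cfg info s3://david-bucket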
Security in Ceph
Security in Ceph is based on keys (the cephx protocol): a client application can only connect to the Ceph service once we have created a Ceph user with its corresponding key.
The standard naming convention for users is client.username. For example, the «admin» user is created in Ceph as client.admin. This user is created automatically when the product is installed and we can obtain its key as follows:
[[email protected] ~]# ceph auth get client.admin
[client.admin]
key = AQAUyrRhjFpBKxAA+J8M21fT7Kt9h4vPgl9cxA==
caps mds = "allow *"
caps mgr = "allow *"
caps mon = "allow *"
caps osd = "allow *"
exported keyring for client.admin
[[email protected] ~]#
We can also read this information from the keyring file, but we must never edit it by hand:
[[email protected] ~]# cat /etc/ceph/ceph.client.admin.keyring
[client.admin]
key = AQAUyrRhjFpBKxAA+J8M21fT7Kt9h4vPgl9cxA==
caps mds = "allow *"
caps mgr = "allow *"
caps mon = "allow *"
caps osd = "allow *"
[[email protected] ~]#
If we wanted to create a different Ceph user with different permissions for another application that needs to connect to Ceph, we would do it like this:
[[email protected] ~]# ceph auth get-or-create client.wordpress mon 'profile rbd' osd 'profile rbd pool=wordpress' mgr 'profile rbd pool=wordpress'
[client.wordpress]
key = AQB+uMBhuEAUOxAAnB5IHvnDb2WyA4CH3Zn2LA==
[[email protected] ~]#
[[email protected] ~]# ceph auth get client.wordpress
[client.wordpress]
key = AQB+uMBhuEAUOxAAnB5IHvnDb2WyA4CH3Zn2LA==
caps mgr = "profile rbd pool=wordpress"
caps mon = "profile rbd"
caps osd = "profile rbd pool=wordpress"
exported keyring for client.wordpress
[[email protected] ~]#
As we can see, this time the «wordpress» Ceph user does not have access to the whole Ceph administration, only to a pool called wordpress.
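A minimal sketch of how a client machine would then use this restricted user (assuming the wordpress pool exists and the keyring is copied to the client):
# export the keyring for the new user
ceph auth get client.wordpress -o /etc/ceph/ceph.client.wordpress.keyring
# the client authenticates as client.wordpress instead of client.admin
rbd --id wordpress -p wordpress ls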
To see the full list of authorisations we use the following command:
[[email protected] ~]# ceph auth list |more
installed auth entries:
osd.0
key: AQC6BLZh7tDuCRAAQeg2gz4TQ9boEHMDrpqFSQ==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.1
key: AQCzBrZhyGD9BxAABv3eBX/euzcFJoeTYbG+ng==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.2
key: AQDuCbZhbPnBARAAMVwiE8c0BmOa4L3KdZFzYw==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
client.admin
key: AQAUyrRhjFpBKxAA+J8M21fT7Kt9h4vPgl9cxA==
caps: [mds] allow *
caps: [mgr] allow *
caps: [mon] allow *
caps: [osd] allow *
client.bootstrap-mds
key: AQAYyrRhIfsPKhAAq8rMKcejLFANvrI0KUZ+7g==
caps: [mon] allow profile bootstrap-mds
If we want to see how an application connects to Ceph, we can visit the article «Instalación de Contenedores con Dockers, Podman y Kubernetes en Linux Centos», where there is an example.
Upgrading the Ceph version
We are going to upgrade from the open-source Ceph Mimic release to Nautilus:
- Check the current Ceph version
[[email protected] ~]# ceph --version
ceph version 13.2.8 (5579a94fafbc1f9cc913a0f5d362953a5d9c3ae0) mimic (stable)
[[email protected] ~]#
- Configure the yum repository containing the new Ceph version on every server in the cluster:
[[email protected] yum.repos.d]# cat ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://download.ceph.com/rpm-nautilus/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://download.ceph.com/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
[[email protected] yum.repos.d]#
- Upgrade Ceph and the operating system:
[[email protected] yum.repos.d]# ceph-deploy install --release luminous ceph-mon ceph-osd1 ceph-osd2 ceph-osd3
[[email protected] yum.repos.d]# yum update -y
- Restart the monitor service
[[email protected] ~]# systemctl status [email protected]
● [email protected] - Ceph cluster monitor daemon
Loaded: loaded (/usr/lib/systemd/system/[email protected]; enabled; vendor preset: disabled)
Active: active (running) since Thu 2020-02-20 10:52:32 UTC; 1s ago
Main PID: 3436 (ceph-mon)
CGroup: /system.slice/system-ceph\x2dmon.slice/[email protected]
└─3436 /usr/bin/ceph-mon -f --cluster ceph --id ceph-mon --setuser ceph --setgroup ceph
Feb 20 10:52:32 ceph-mon systemd[1]: Stopped Ceph cluster monitor daemon.
Feb 20 10:52:32 ceph-mon systemd[1]: Started Ceph cluster monitor daemon.
Feb 20 10:52:32 ceph-mon ceph-mon[3436]: 2020-02-20 10:52:32.761 7f5f2e8ec040 -1 [email protected](electing) e1 failed to get devid for : udev_device_new_from_subsystem_sysname failed on ''
Feb 20 10:52:32 ceph-mon ceph-mon[3436]: 2020-02-20 10:52:32.770 7f5f1516a700 -1 [email protected](electing) e2 failed to get devid for : udev_device_new_from_subsystem_sysname failed on ''
[[email protected] ~]#
- Restart the OSD services
[[email protected] ~]# systemctl restart [email protected]
[[email protected] ~]# systemctl status [email protected]
● [email protected] - Ceph object storage daemon osd.0
Loaded: loaded (/usr/lib/systemd/system/[email protected]; enabled-runtime; vendor preset: disabled)
Active: active (running) since Thu 2020-02-20 10:54:54 UTC; 5s ago
Process: 2500 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
Main PID: 2505 (ceph-osd)
CGroup: /system.slice/system-ceph\x2dosd.slice/[email protected]
└─2505 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph
Feb 20 10:54:56 ceph-osd1 ceph-osd[2505]: 2020-02-20 10:54:56.171 7f696ffe1a80 -1 bluestore(/var/lib/ceph/osd/ceph-0) fsck error: legacy statfs record found, removing
Feb 20 10:54:56 ceph-osd1 ceph-osd[2505]: 2020-02-20 10:54:56.171 7f696ffe1a80 -1 bluestore(/var/lib/ceph/osd/ceph-0) fsck error: missing Pool StatFS record for pool 1
Feb 20 10:54:56 ceph-osd1 ceph-osd[2505]: 2020-02-20 10:54:56.171 7f696ffe1a80 -1 bluestore(/var/lib/ceph/osd/ceph-0) fsck error: missing Pool StatFS record for pool 3
Feb 20 10:54:56 ceph-osd1 ceph-osd[2505]: 2020-02-20 10:54:56.171 7f696ffe1a80 -1 bluestore(/var/lib/ceph/osd/ceph-0) fsck error: missing Pool StatFS record for pool 5
Feb 20 10:54:56 ceph-osd1 ceph-osd[2505]: 2020-02-20 10:54:56.171 7f696ffe1a80 -1 bluestore(/var/lib/ceph/osd/ceph-0) fsck error: missing Pool StatFS record for pool 6
Feb 20 10:54:56 ceph-osd1 ceph-osd[2505]: 2020-02-20 10:54:56.171 7f696ffe1a80 -1 bluestore(/var/lib/ceph/osd/ceph-0) fsck error: missing Pool StatFS record for pool 7
Feb 20 10:54:56 ceph-osd1 ceph-osd[2505]: 2020-02-20 10:54:56.171 7f696ffe1a80 -1 bluestore(/var/lib/ceph/osd/ceph-0) fsck error: missing Pool StatFS record for pool 9
Feb 20 10:54:56 ceph-osd1 ceph-osd[2505]: 2020-02-20 10:54:56.171 7f696ffe1a80 -1 bluestore(/var/lib/ceph/osd/ceph-0) fsck error: missing Pool StatFS record for pool ffffffffffffffff
Feb 20 10:54:57 ceph-osd1 ceph-osd[2505]: 2020-02-20 10:54:57.335 7f696ffe1a80 -1 osd.0 83 log_to_monitors {default=true}
Feb 20 10:54:57 ceph-osd1 ceph-osd[2505]: 2020-02-20 10:54:57.350 7f69625d4700 -1 osd.0 83 set_numa_affinity unable to identify public interface 'eth0' numa node: (2) No such file or directory
[[email protected] ~]#
[[email protected] ~]# systemctl restart [email protected]
[[email protected] ~]# systemctl status [email protected]
● [email protected] - Ceph object storage daemon osd.0
Loaded: loaded (/usr/lib/systemd/system/[email protected]; enabled-runtime; vendor preset: disabled)
Active: active (running) since Thu 2020-02-20 10:55:42 UTC; 8s ago
Process: 2706 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
Main PID: 2711 (ceph-osd)
CGroup: /system.slice/system-ceph\x2dosd.slice/[email protected]
└─2711 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph
Feb 20 10:55:42 ceph-osd1 systemd[1]: Stopped Ceph object storage daemon osd.0.
Feb 20 10:55:42 ceph-osd1 systemd[1]: Starting Ceph object storage daemon osd.0...
Feb 20 10:55:42 ceph-osd1 systemd[1]: Started Ceph object storage daemon osd.0.
Feb 20 10:55:42 ceph-osd1 ceph-osd[2711]: 2020-02-20 10:55:42.816 7f1a19731a80 -1 Falling back to public interface
Feb 20 10:55:43 ceph-osd1 ceph-osd[2711]: 2020-02-20 10:55:43.625 7f1a19731a80 -1 osd.0 87 log_to_monitors {default=true}
Feb 20 10:55:43 ceph-osd1 ceph-osd[2711]: 2020-02-20 10:55:43.636 7f1a0bd24700 -1 osd.0 87 set_numa_affinity unable to identify public interface 'eth0' numa node: (2) No such file or directory
[[email protected] ~]#
[[email protected] ~]# systemctl restart [email protected]
[[email protected] ~]# systemctl status [email protected]
● [email protected] - Ceph object storage daemon osd.1
Loaded: loaded (/usr/lib/systemd/system/[email protected]; enabled-runtime; vendor preset: disabled)
Active: active (running) since Thu 2020-02-20 10:56:19 UTC; 4s ago
Process: 2426 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
Main PID: 2431 (ceph-osd)
CGroup: /system.slice/system-ceph\x2dosd.slice/[email protected]
└─2431 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph
Feb 20 10:56:21 ceph-osd2 ceph-osd[2431]: 2020-02-20 10:56:21.748 7fe3ed4f4a80 -1 bluestore(/var/lib/ceph/osd/ceph-1) fsck error: legacy statfs record found, removing
Feb 20 10:56:21 ceph-osd2 ceph-osd[2431]: 2020-02-20 10:56:21.748 7fe3ed4f4a80 -1 bluestore(/var/lib/ceph/osd/ceph-1) fsck error: missing Pool StatFS record for pool 1
Feb 20 10:56:21 ceph-osd2 ceph-osd[2431]: 2020-02-20 10:56:21.748 7fe3ed4f4a80 -1 bluestore(/var/lib/ceph/osd/ceph-1) fsck error: missing Pool StatFS record for pool 3
Feb 20 10:56:21 ceph-osd2 ceph-osd[2431]: 2020-02-20 10:56:21.748 7fe3ed4f4a80 -1 bluestore(/var/lib/ceph/osd/ceph-1) fsck error: missing Pool StatFS record for pool 5
Feb 20 10:56:21 ceph-osd2 ceph-osd[2431]: 2020-02-20 10:56:21.748 7fe3ed4f4a80 -1 bluestore(/var/lib/ceph/osd/ceph-1) fsck error: missing Pool StatFS record for pool 6
Feb 20 10:56:21 ceph-osd2 ceph-osd[2431]: 2020-02-20 10:56:21.748 7fe3ed4f4a80 -1 bluestore(/var/lib/ceph/osd/ceph-1) fsck error: missing Pool StatFS record for pool 7
Feb 20 10:56:21 ceph-osd2 ceph-osd[2431]: 2020-02-20 10:56:21.748 7fe3ed4f4a80 -1 bluestore(/var/lib/ceph/osd/ceph-1) fsck error: missing Pool StatFS record for pool 9
Feb 20 10:56:21 ceph-osd2 ceph-osd[2431]: 2020-02-20 10:56:21.748 7fe3ed4f4a80 -1 bluestore(/var/lib/ceph/osd/ceph-1) fsck error: missing Pool StatFS record for pool ffffffffffffffff
Feb 20 10:56:22 ceph-osd2 ceph-osd[2431]: 2020-02-20 10:56:22.346 7fe3ed4f4a80 -1 osd.1 91 log_to_monitors {default=true}
Feb 20 10:56:22 ceph-osd2 ceph-osd[2431]: 2020-02-20 10:56:22.356 7fe3dfae7700 -1 osd.1 91 set_numa_affinity unable to identify public interface 'eth0' numa node: (2) No such file or directory
[[email protected] ~]#
[[email protected] ~]# systemctl restart [email protected]
[[email protected] ~]# systemctl status [email protected]
● [email protected] - Ceph object storage daemon osd.2
Loaded: loaded (/usr/lib/systemd/system/[email protected]; enabled-runtime; vendor preset: disabled)
Active: active (running) since Thu 2020-02-20 10:56:52 UTC; 5s ago
Process: 2311 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
Main PID: 2316 (ceph-osd)
CGroup: /system.slice/system-ceph\x2dosd.slice/[email protected]
└─2316 /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup ceph
Feb 20 10:56:54 ceph-osd3 ceph-osd[2316]: 2020-02-20 10:56:54.803 7fd617d0ca80 -1 bluestore(/var/lib/ceph/osd/ceph-2) fsck error: legacy statfs record found, removing
Feb 20 10:56:54 ceph-osd3 ceph-osd[2316]: 2020-02-20 10:56:54.803 7fd617d0ca80 -1 bluestore(/var/lib/ceph/osd/ceph-2) fsck error: missing Pool StatFS record for pool 1
Feb 20 10:56:54 ceph-osd3 ceph-osd[2316]: 2020-02-20 10:56:54.803 7fd617d0ca80 -1 bluestore(/var/lib/ceph/osd/ceph-2) fsck error: missing Pool StatFS record for pool 3
Feb 20 10:56:54 ceph-osd3 ceph-osd[2316]: 2020-02-20 10:56:54.803 7fd617d0ca80 -1 bluestore(/var/lib/ceph/osd/ceph-2) fsck error: missing Pool StatFS record for pool 5
Feb 20 10:56:54 ceph-osd3 ceph-osd[2316]: 2020-02-20 10:56:54.803 7fd617d0ca80 -1 bluestore(/var/lib/ceph/osd/ceph-2) fsck error: missing Pool StatFS record for pool 6
Feb 20 10:56:54 ceph-osd3 ceph-osd[2316]: 2020-02-20 10:56:54.803 7fd617d0ca80 -1 bluestore(/var/lib/ceph/osd/ceph-2) fsck error: missing Pool StatFS record for pool 7
Feb 20 10:56:54 ceph-osd3 ceph-osd[2316]: 2020-02-20 10:56:54.803 7fd617d0ca80 -1 bluestore(/var/lib/ceph/osd/ceph-2) fsck error: missing Pool StatFS record for pool 9
Feb 20 10:56:54 ceph-osd3 ceph-osd[2316]: 2020-02-20 10:56:54.803 7fd617d0ca80 -1 bluestore(/var/lib/ceph/osd/ceph-2) fsck error: missing Pool StatFS record for pool ffffffffffffffff
Feb 20 10:56:55 ceph-osd3 ceph-osd[2316]: 2020-02-20 10:56:55.367 7fd617d0ca80 -1 osd.2 95 log_to_monitors {default=true}
Feb 20 10:56:55 ceph-osd3 ceph-osd[2316]: 2020-02-20 10:56:55.377 7fd60a2ff700 -1 osd.2 95 set_numa_affinity unable to identify public interface 'eth0' numa node: (2) No such file or directory
[root@ceph-osd3 ~]#
- We restart the MDS services
[root@ceph-osd1 ~]# systemctl restart ceph-mds@ceph-mds1.service
[root@ceph-osd1 ~]# systemctl status ceph-mds@ceph-mds1.service
● ceph-mds@ceph-mds1.service - Ceph metadata server daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-mds@.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2020-02-20 10:58:10 UTC; 4s ago
Main PID: 3015 (ceph-mds)
CGroup: /system.slice/system-ceph\x2dmds.slice/ceph-mds@ceph-mds1.service
└─3015 /usr/bin/ceph-mds -f --cluster ceph --id ceph-mds1 --setuser ceph --setgroup ceph
Feb 20 10:58:10 ceph-osd1 systemd[1]: Stopped Ceph metadata server daemon.
Feb 20 10:58:10 ceph-osd1 systemd[1]: Started Ceph metadata server daemon.
Feb 20 10:58:11 ceph-osd1 ceph-mds[3015]: starting mds.ceph-mds1 at
[root@ceph-osd1 ~]#
[root@ceph-osd2 ~]# systemctl restart ceph-mds@ceph-mds2.service
[root@ceph-osd2 ~]# systemctl status ceph-mds@ceph-mds2.service
● ceph-mds@ceph-mds2.service - Ceph metadata server daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-mds@.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2020-02-20 10:58:36 UTC; 5s ago
Main PID: 2588 (ceph-mds)
CGroup: /system.slice/system-ceph\x2dmds.slice/ceph-mds@ceph-mds2.service
└─2588 /usr/bin/ceph-mds -f --cluster ceph --id ceph-mds2 --setuser ceph --setgroup ceph
Feb 20 10:58:36 ceph-osd2 systemd[1]: Stopped Ceph metadata server daemon.
Feb 20 10:58:36 ceph-osd2 systemd[1]: Started Ceph metadata server daemon.
Feb 20 10:58:36 ceph-osd2 ceph-mds[2588]: starting mds.ceph-mds2 at
[root@ceph-osd2 ~]#
[root@ceph-osd3 ~]# systemctl restart ceph-mds@ceph-mds3.service
[root@ceph-osd3 ~]# systemctl status ceph-mds@ceph-mds3.service
● ceph-mds@ceph-mds3.service - Ceph metadata server daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-mds@.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2020-02-20 10:58:57 UTC; 4s ago
Main PID: 2473 (ceph-mds)
CGroup: /system.slice/system-ceph\x2dmds.slice/ceph-mds@ceph-mds3.service
└─2473 /usr/bin/ceph-mds -f --cluster ceph --id ceph-mds3 --setuser ceph --setgroup ceph
Feb 20 10:58:57 ceph-osd3 systemd[1]: Stopped Ceph metadata server daemon.
Feb 20 10:58:57 ceph-osd3 systemd[1]: Started Ceph metadata server daemon.
Feb 20 10:58:58 ceph-osd3 ceph-mds[2473]: starting mds.ceph-mds3 at
[root@ceph-osd3 ~]#
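Note that, instead of restarting every instance one by one, the Ceph packages also install systemd target units that group all the daemons of a given type on a node, so the same rolling restart can be done node by node; a minimal sketch:
# restart every OSD daemon running on this node
systemctl restart ceph-osd.target
# restart every MDS daemon running on this node
systemctl restart ceph-mds.target
# quick check that the units came back up
systemctl list-units 'ceph-osd@*' 'ceph-mds@*'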
- We check the new Ceph version on each node
[[email protected] ~]# ceph --version
ceph version 14.2.7 (3d58626ebeec02d8385a4cefb92c6cbc3a45bfe8) nautilus (stable)
[[email protected] ~]#
[[email protected] ~]# ceph --version
ceph version 14.2.7 (3d58626ebeec02d8385a4cefb92c6cbc3a45bfe8) nautilus (stable)
[[email protected] ~]#
[[email protected] ~]# ceph --version
ceph version 14.2.7 (3d58626ebeec02d8385a4cefb92c6cbc3a45bfe8) nautilus (stable)
[[email protected] ~]#
[[email protected] ~]# ceph --version
ceph version 14.2.7 (3d58626ebeec02d8385a4cefb92c6cbc3a45bfe8) nautilus (stable)
[[email protected] ~]#
When we finish, we should verify that the cluster is working correctly (its health status, that the RBD filesystems are mounted, that we can upload objects, etc.).
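As a quick reference, a minimal verification sketch with standard commands; the pool and bucket names are placeholders, replace them with the ones used in your cluster:
# overall cluster health and per-daemon versions after the upgrade
ceph -s
ceph health detail
ceph versions
# OSD layout and capacity
ceph osd tree
ceph df
# RBD: check that the images are still listed and mapped where expected
rbd ls <pool>
rbd showmapped
# object storage: list the buckets and upload a small test object with s3cmd
s3cmd ls
s3cmd put /etc/hosts s3://<bucket>/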
Backups
Backing up the Ceph cluster
Backup
Basically, we will need to make two backups:
- The cluster monitor maps (mon).
- The OSD service configuration, that is, the CRUSH map.
[[email protected] cephbackup]# ceph mon getmap -o monmap.$(date +%Y%m%d)
got monmap epoch 9
[[email protected] cephbackup]# ceph osd getmap -o osdmap.$(date +%Y%m%d)
got osdmap epoch 767
[[email protected] cephbackup]# ceph osd getcrushmap -o crushmap.$(date +%Y%m%d)
64
[[email protected] cephbackup]# ll
total 16
-rw-r--r-- 1 root root 857 Dec 24 10:37 crushmap.20211224
-rw-r--r-- 1 root root 726 Dec 24 10:37 monmap.20211224
-rw-r--r-- 1 root root 4797 Dec 24 10:37 osdmap.20211224
[[email protected] cephbackup]#
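These three commands are easy to schedule. A minimal sketch of a cron-friendly script, assuming a /cephbackup directory and a 30-day retention (both are arbitrary choices):
#!/bin/bash
# Dump the mon, osd and crush maps with a date suffix and prune old copies.
set -e
BACKUP_DIR=/cephbackup
DATE=$(date +%Y%m%d)
cd "$BACKUP_DIR"
ceph mon getmap -o "monmap.$DATE"
ceph osd getmap -o "osdmap.$DATE"
ceph osd getcrushmap -o "crushmap.$DATE"
# keep only the last 30 days of map dumps
find "$BACKUP_DIR" -maxdepth 1 -name '*map.*' -mtime +30 -delete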
The files we have created are binary. If we want a human-readable view of, for example, the OSD map:
[[email protected] cephbackup]# osdmaptool --print osdmap.20211224
osdmaptool: osdmap file 'osdmap.20211224'
epoch 767
fsid 9ebd1a74-5a9a-11ec-97be-064744202561
created 2021-12-11T15:56:08.705311+0000
modified 2021-12-24T08:56:24.851020+0000
flags sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit
crush_version 64
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85
require_min_compat_client luminous
min_compat_client jewel
require_osd_release pacific
stretch_mode_enabled false
pool 3 'wordpress' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 551 lfor 0/0/95 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
pool 4 'device_health_metrics' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 746 lfor 0/0/104 flags hashpspool stripe_width 0 application mgr_devicehealth
pool 6 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 572 flags hashpspool stripe_width 0 application rgw
pool 7 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 576 flags hashpspool stripe_width 0 application rgw
pool 8 'default.rgw.control' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 577 flags hashpspool stripe_width 0 application rgw
pool 9 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 684 lfor 0/684/682 flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 8 application rgw
max_osd 3
osd.0 up in weight 1 up_from 758 up_thru 762 down_at 750 last_clean_interval [699,743) [v2:10.0.1.64:6800/1501761493,v1:10.0.1.64:6801/1501761493] [v2:10.0.1.64:6802/1501761493,v1:10.0.1.64:6803/1501761493] exists,up e9010ddd-9d52-443c-b59f-294b120ae256
osd.1 up in weight 1 up_from 754 up_thru 760 down_at 753 last_clean_interval [735,743) [v2:10.0.1.182:6800/899678548,v1:10.0.1.182:6801/899678548] [v2:10.0.1.182:6802/899678548,v1:10.0.1.182:6803/899678548] exists,up 9f25301b-ff73-4a14-86d7-e5a270649d3e
osd.2 up in weight 1 up_from 749 up_thru 762 down_at 748 last_clean_interval [724,743) [v2:10.0.1.204:6800/2466819320,v1:10.0.1.204:6801/2466819320] [v2:10.0.1.204:6802/2466819320,v1:10.0.1.204:6803/2466819320] exists,up 778e9710-7341-4e1d-b62e-f66d7bed29c9
blocklist 10.0.1.212:0/1349489096 expires 2021-12-25T07:30:17.016568+0000
blocklist 10.0.1.212:0/2729178252 expires 2021-12-25T07:30:17.016568+0000
blocklist 10.0.1.212:0/4281414249 expires 2021-12-25T07:30:17.016568+0000
blocklist 10.0.1.212:6800/2132398372 expires 2021-12-25T07:30:17.016568+0000
blocklist 10.0.1.212:6801/2132398372 expires 2021-12-25T07:30:17.016568+0000
[[email protected] cephbackup]#
But we have not yet backed up the CRUSH map itself. We will extract it from the OSD map we have just created:
[[email protected] cephbackup]# osdmaptool --export-crush curshmap.new osdmap.20211224
osdmaptool: osdmap file 'osdmap.20211224'
osdmaptool: exported crush map to curshmap.new
[[email protected] cephbackup]#
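The exported CRUSH map is also a binary file. If we want to inspect it (or edit it before re-injecting it), crushtool, which ships with Ceph, can decompile it to plain text and compile it back; a quick sketch, with crushmap.txt as an arbitrary output name:
# decompile the binary CRUSH map into an editable text file
crushtool -d curshmap.new -o crushmap.txt
# after reviewing or editing it, compile it back to binary
crushtool -c crushmap.txt -o curshmap.new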
Restore
Once we have a binary backup of the monitor map and the CRUSH map, if we ever need to restore them, we would do it as follows:
[[email protected] cephbackup]# ceph-mon -i cephmon3 --inject-monmap monmap.20211224
[[email protected] cephbackup]# osdmaptool osdmap.20211224 --import-crush curshmap.new
Before restoring, the service we are going to restore must be stopped.
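As a reference, a minimal sketch of the whole restore sequence, assuming the monitor being restored is cephmon3 and using the files created above. Keep in mind that osdmaptool only edits a local osdmap file; to push a saved CRUSH map back into a running cluster, the usual command is ceph osd setcrushmap:
# stop the monitor before injecting the saved monitor map, then start it again
systemctl stop ceph-mon@cephmon3
ceph-mon -i cephmon3 --inject-monmap monmap.20211224
systemctl start ceph-mon@cephmon3
# re-apply the saved CRUSH map to the running cluster
ceph osd setcrushmap -i curshmap.new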
Backing up the data
For RBD block devices and CephFS we can use our usual backup software, since the operating system simply sees a mount point. Object storage, however, cannot be backed up in the traditional way.
To back up the object storage we have several options:
- Download the content of each bucket to a local directory (we already saw the s3cmd get command…) and run the usual backup on it, as shown in the sketch after this list.
- Download the content of each bucket to a local directory and upload it to another bucket located in a different availability zone (we have already seen this earlier).
- Configure replication per geographic zone.
- Configure disk replication with DRBD (I have not tested it with Ceph yet, but it should work).
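A minimal sketch of the first two options with s3cmd, assuming a hypothetical bucket my-bucket, a replica bucket my-bucket-dr and a local staging directory:
# mirror the bucket to a local directory, which we can then back up with our usual tools
s3cmd sync s3://my-bucket/ /backup/s3/my-bucket/
# push the downloaded content to a second bucket (for example, one in another zone or cluster)
s3cmd sync /backup/s3/my-bucket/ s3://my-bucket-dr/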