Installing Ceph Storage on Red Hat Linux 7 and CentOS 7

We will learn how to install Red Hat Ceph Storage 3.3 on a Red Hat Linux 7.7 system, configuring a cluster, and also the open-source Ceph Mimic release, which we will later upgrade to Nautilus.

We will walk through the complete procedure for configuring a Ceph cluster, along with some basic commands for object and block storage.


What is Ceph?

Ceph is open-source software for storing data as objects.

If that sounds like gibberish, it is what many cloud platforms are using today. The best-known example is Amazon AWS with its S3 service, but you can also find it behind services such as Google Drive.

To explain it as simply as possible: when we upload a document to Google Drive, it is stored as an object that carries a series of metadata, such as the date, the size, the location (path), the owner, who it is shared with, when it expires, the version, and so on.

If we upload a new version of the same file, the original is not overwritten; instead, we end up with two different versions under the same name. That way of handling files is the nature of object storage, and each version of the file consumes storage space.

If we did the same thing with operating system commands, that is, copied a file to a path where another file with the same name already exists, the original file would simply be overwritten and no other version would be kept. This is how conventional block-backed file storage behaves and, as we can see, it differs greatly from object storage.

Advantages of Ceph

  • It provides practically unlimited storage with replication, even across geographic zones.
  • Block storage (a disk is presented to the client, with Ceph's architecture behind the device).
  • Storage through the Ceph Filesystem (similar to NFS, but with Ceph's architecture behind it).
  • Object storage, compatible with the Amazon S3 API.
  • It is an ideal storage backend for Big Data systems.
  • It is a storage system with a fault-tolerant architecture.

Basic Ceph concepts

Before starting the Ceph installation, it is worth being clear about a few concepts so that we better understand what we are doing.

  • Monitor nodes: the monitoring service for all the nodes that form part of the Ceph cluster. It runs the ceph-mon daemon. Every node connected to the cluster receives a copy of its configuration, known as the cluster map.
  • OSD nodes (Object Storage Device): the service that provides the storage space for all the objects. In this guide we will use the LVM-based layout (we will see it later). The daemon responsible is called ceph-osd.
  • MDS nodes (MetaData Server): the service that stores the metadata of the files served through the Ceph Filesystem, along the lines described in the introduction. The daemon is called ceph-mds.
  • Object Gateway Node: ceph-radosgw is the service that exposes the distributed object store (RADOS) through a RESTful gateway so that clients can store data in it. It is compatible with S3 and Swift.
  • MGR: ceph-mgr is a process that runs on the monitor (MON) servers to provide additional information about the state of the cluster. Installing it is not mandatory, but it is highly recommended.

Architecture

Ceph architecture diagram
Source: https://docs.ceph.com/docs/mimic/architecture/

Installing Ceph on Red Hat Linux 7

As I mentioned earlier, we will use two Red Hat Linux 7 nodes and version 3.3 of Ceph as distributed by Red Hat.

The two nodes that will form the cluster have the following names and IPs:

10.0.1.228 ceph1
10.0.1.132 ceph2

We can download the Ceph ISO free of charge from this link. That said, if we want support from Red Hat, we will have to pay for it.

Prerequisites

Opening the communication ports

By default, Ceph uses the port range from 6800 to 7100. We will also have to open whichever port we use for Apache if we use object storage.

Official documentation: https://docs.ceph.com/docs/giant/rados/configuration/network-config-ref/
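On RHEL 7 with firewalld, for example, opening the monitor port (6789) and that range could look roughly like this (the zone and exact range are assumptions; adjust them to your environment):

firewall-cmd --zone=public --permanent --add-port=6789/tcp
firewall-cmd --zone=public --permanent --add-port=6800-7100/tcp
firewall-cmd --reload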

Creating the yum repositories

You can install Ceph either with Ansible or manually from the ISO we downloaded earlier. I prefer to do it manually, so the first thing I am going to do is create the local yum repositories (I have not paid for support for this demo) so that I can install the product.

createrepo -g /Ceph3.3/MON/repodata/repomd.xml /Ceph3.3/MON
createrepo -g /Ceph3.3/OSD/repodata/repomd.xml /Ceph3.3/OSD
createrepo -g /Ceph3.3/Tools/repodata/repomd.xml /Ceph3.3/Tools
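The /Ceph3.3/MON, /Ceph3.3/OSD and /Ceph3.3/Tools directories used above were populated beforehand from the downloaded ISO. Assuming the ISO sits in /tmp (the file name below is only illustrative), the layout could be prepared with something like:

mkdir -p /mnt/cephiso /Ceph3.3
mount -o loop /tmp/rhceph-3.3-rhel-7-x86_64.iso /mnt/cephiso
cp -r /mnt/cephiso/MON /mnt/cephiso/OSD /mnt/cephiso/Tools /Ceph3.3/
umount /mnt/cephiso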


[root@ceph1 Ceph3.3]# cat /etc/yum.repos.d/ceph3.3mon.repo
[Ceph3.3mon]
name=RedHat Ceph Monitor 3.3
baseurl="file:///Ceph3.3/MON"
enabled=1
gpgcheck=0
[root@ceph1 Ceph3.3]#

[root@ceph1 yum.repos.d]# cat /etc/yum.repos.d/ceph3.3osd.repo
[Ceph3.3osd]
name=RedHat Ceph OSD 3.3
baseurl="file:///Ceph3.3/OSD"
enabled=1
gpgcheck=0
[root@ceph1 yum.repos.d]#

[root@ceph1 ~]# cat /etc/yum.repos.d/ceph3.3tools.repo
[Ceph3.3tools]
name=RedHat Ceph Tools 3.3
baseurl="file:///Ceph3.3/Tools"
enabled=1
gpgcheck=0
[root@ceph1 ~]#


[root@ceph1 ~]# yum repolist
Loaded plugins: amazon-id, rhui-lb
Ceph3.3osd                                                                                                                                                                    | 3.6 kB  00:00:00
Ceph3.3tools                                                                                                                                                                  | 3.6 kB  00:00:00
(1/4): Ceph3.3osd/group_gz                                                                                                                                                    | 1.2 kB  00:00:00
(2/4): Ceph3.3osd/primary_db                                                                                                                                                  |  27 kB  00:00:00
(3/4): Ceph3.3tools/group_gz                                                                                                                                                  | 1.2 kB  00:00:00
(4/4): Ceph3.3tools/primary_db                                                                                                                                                |  38 kB  00:00:00
repo id                                                                                 repo name                                                                                              status
Ceph3.3mon                                                                              RedHat Ceph Monitor 3.3                                                                                    37
Ceph3.3osd                                                                              RedHat Ceph OSD 3.3                                                                                        27
Ceph3.3tools                                                                            RedHat Ceph Tools 3.3                                                                                      48
rhui-REGION-client-config-server-7/x86_64                                               Red Hat Update Infrastructure 2.0 Client Configuration Server 7                                             4
rhui-REGION-rhel-server-releases/7Server/x86_64                                         Red Hat Enterprise Linux Server 7 (RPMs)                                                               26,742
rhui-REGION-rhel-server-rh-common/7Server/x86_64                                        Red Hat Enterprise Linux Server 7 RH Common (RPMs)                                                        239
repolist: 27,097
[root@ceph1 ~]#

I perform this operation on both cluster nodes.

Installing the Ceph RPM packages

Once the repositories are created, I can run yum to install all the Ceph modules and a few requirements:

yum install ceph-mon ceph-mgr ceph-osd ceph-common ceph-radosgw ceph-mds httpd ntp mod_fcgid python-boto python-distribute

NOTE: Beforehand, I had to manually download from the Red Hat 7 «extras» repository a couple of packages that were needed for the previous command to run correctly: python-itsdangerous-0.23-1.el7.noarch.rpm and python-werkzeug-0.9.1-1.el7.noarch.rpm
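If the «extras» channel is already enabled on the system, those two dependencies can usually be installed directly with yum instead of downloading the RPMs by hand (the repository id varies between environments, so this is only a sketch):

yum --enablerepo="*extras*" install python-itsdangerous python-werkzeug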

We will also download the ceph-deploy package separately from the Red Hat repository.

Downloading the ceph-deploy package from Red Hat
rpm -i ceph-deploy python-itsdangerous-0.23-1.el7.noarch.rpm python-werkzeug-0.9.1-1.el7.noarch.rpm

Starting the NTP service

First, we enable the NTP service on both nodes.

[root@ceph1 ~]# systemctl enable ntpd
[root@ceph1 ~]# systemctl start ntpd

[root@ceph2 ~]# systemctl enable ntpd
[root@ceph2 ~]# systemctl start ntpd

Learn how to set up an NTP server on Linux.
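To confirm that both nodes are actually synchronising against a time source, a quick check such as the following helps (the peer list will obviously differ in your environment):

ntpq -p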

Manual Ceph Configuration

We now have the software installed. All that remains is to configure it so that it works.

We start by creating the cluster configuration file (ceph-mon)

The first basic configuration step is to assign a unique ID to the cluster and list the hostnames of the nodes that will form it.

[root@ceph1 ~]# uuidgen
6bbb8f28-46f4-4faa-8b36-bd598df8b57a
[root@ceph1 ~]#

[root@ceph1 ~]# cat /etc/ceph/ceph.conf
[global]
fsid = 6bbb8f28-46f4-4faa-8b36-bd598df8b57a
mon_initial_members = ceph1,ceph2
[root@ceph1 ~]#

[root@ceph1 ~]# scp -p ceph.conf ceph2:$PWD
ceph.conf                                                                                                                                                          100%   87     0.1KB/s   00:00
[root@ceph1 ~]#

The other node must hold the same data, so we can simply copy the file over as-is.

We generate the security keys used to access the ceph-mon service:

[root@ceph1 ceph]# ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
creating /tmp/ceph.mon.keyring
[root@ceph1 ceph]#

[root@ceph1 ceph]# ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
creating /etc/ceph/ceph.client.admin.keyring
[root@ceph1 ceph]#

[root@ceph1 ceph]# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
importing contents of /etc/ceph/ceph.client.admin.keyring into /tmp/ceph.mon.keyring
[root@ceph1 ceph]#

We copy the keys to the second node of the cluster:

[root@ceph1 ceph]# scp -p ceph.client.admin.keyring ceph2:$PWD
ceph.client.admin.keyring                                                                                                                                          100%  137     0.1KB/s   00:00
[root@ceph1 ceph]#

[root@ceph1 ceph]# scp -p /tmp/ceph.mon.keyring ceph2:/tmp
ceph.mon.keyring                                                                                                                                                   100%  214     0.2KB/s   00:00
[root@ceph1 ceph]#

We now generate the first version of the Ceph cluster; that is, we create the initial monitor map (ceph-mon), as mentioned in the basic concepts:

[root@ceph1 ceph]# monmaptool --create --add ceph1 10.0.1.228 --add ceph2 10.0.1.132 --fsid 6bbb8f28-46f4-4faa-8b36-bd598df8b57a /tmp/monmap
monmaptool: monmap file /tmp/monmap
monmaptool: set fsid to 6bbb8f28-46f4-4faa-8b36-bd598df8b57a
monmaptool: writing epoch 0 to /tmp/monmap (2 monitors)
[root@ceph1 ceph]#

[root@ceph1]# scp /tmp/monmap ceph2:/tmp
monmap                                                                                                                                                             100%  288     0.3KB/s   00:00
[root@ceph1]#

Before starting the service, we need to provide a directory structure where it can store its data:

[root@ceph1]# mkdir /var/lib/ceph/mon/ceph-ceph1
[root@ceph2]# mkdir /var/lib/ceph/mon/ceph-ceph2

[root@ceph1 ~]# ceph-mon --mkfs -i ceph1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
[root@ceph1 ~]# ll /var/lib/ceph/mon/ceph-ceph1/
total 8
-rw------- 1 root root  77 Jan  2 15:08 keyring
-rw-r--r-- 1 root root   8 Jan  2 15:08 kv_backend
drwxr-xr-x 2 root root 106 Jan  2 15:08 store.db
[root@ceph1 ~]#

[root@ceph2 ~]# ceph-mon --mkfs -i ceph2 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring

At this point, we can add the two monitor nodes, with their respective names and IPs, to our cluster configuration file:

[root@ceph1 ~]# cat /etc/ceph/ceph.conf
[global]
fsid = 6bbb8f28-46f4-4faa-8b36-bd598df8b57a
mon_initial_members = ceph1,ceph2
mon_host = 10.0.1.228,10.0.1.132
public_network = 10.0.1.0/24
auth_cluster_required = none
auth_service_required = none
auth_client_required = none
osd_journal_size = 1024
osd_pool_default_size = 3
osd_pool_default_min_size = 1
[root@ceph1 ~]#

We will run Ceph as the operating system user ceph, so we change the ownership of the files we have created so far:

[root@ceph1 ~]# chown -R ceph:ceph /var/lib/ceph/mon
[root@ceph1 ~]# chown -R ceph:ceph /var/log/ceph
[root@ceph1 ~]# chown -R ceph:ceph /var/run/ceph
[root@ceph1 ~]# chown ceph:ceph /etc/ceph/ceph.client.admin.keyring
[root@ceph1 ~]# chown ceph:ceph /etc/ceph/ceph.conf
[root@ceph1 ~]# chown ceph:ceph /etc/ceph/rbdmap
[root@ceph1 ~]# echo "CLUSTER=ceph" >> /etc/sysconfig/ceph
[root@ceph1 ~]# scp -p /etc/ceph/ceph.conf ceph2:/etc/ceph

[root@ceph2 ~]# chown -R ceph:ceph /var/lib/ceph/mon
[root@ceph2 ~]# chown -R ceph:ceph /var/log/ceph
[root@ceph2 ~]# chown -R ceph:ceph /var/run/ceph
[root@ceph2 ~]# chown ceph:ceph /etc/ceph/ceph.client.admin.keyring
[root@ceph2 ~]# chown ceph:ceph /etc/ceph/ceph.conf
[root@ceph2 ~]# chown ceph:ceph /etc/ceph/rbdmap
[root@ceph2 ~]# echo "CLUSTER=ceph" >> /etc/sysconfig/ceph

Finally, we start the ceph-mon service:

[root@ceph1 ~]# systemctl enable ceph-mon.target
[root@ceph1 ~]# systemctl enable ceph-mon@ceph1
[root@ceph1]# systemctl restart ceph-mon@ceph1

[root@ceph2 ~]# systemctl enable ceph-mon.target
[root@ceph2 ~]# systemctl enable ceph-mon@ceph2
[root@ceph2]# systemctl restart ceph-mon@ceph2

[root@ceph1 ~]# systemctl status ceph-mon@ceph1
● ceph-mon@ceph1.service - Ceph cluster monitor daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-mon@.service; disabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-01-02 15:25:05 EST; 7s ago
 Main PID: 2626 (ceph-mon)
   CGroup: /system.slice/system-ceph\x2dmon.slice/ceph-mon@ceph1.service
           +-2626 /usr/bin/ceph-mon -f --cluster ceph --id ceph1 --setuser ceph --setgroup ceph

Jan 02 15:25:05 ip-10-0-1-228.eu-west-1.compute.internal systemd[1]: Stopped Ceph cluster monitor daemon.
Jan 02 15:25:05 ip-10-0-1-228.eu-west-1.compute.internal systemd[1]: Started Ceph cluster monitor daemon.
[root@ceph1 ~]#

[root@ceph2 ~]# systemctl status ceph-mon@ceph2
● ceph-mon@ceph2.service - Ceph cluster monitor daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-mon@.service; disabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-01-02 15:25:48 EST; 5s ago
 Main PID: 2312 (ceph-mon)
   CGroup: /system.slice/system-ceph\x2dmon.slice/ceph-mon@ceph2.service
           +-2312 /usr/bin/ceph-mon -f --cluster ceph --id ceph2 --setuser ceph --setgroup ceph

Jan 02 15:25:48 ip-10-0-1-132.eu-west-1.compute.internal systemd[1]: Started Ceph cluster monitor daemon.
[root@ceph2 ~]#

And we check that the cluster status is healthy, although there are still a few services left to configure:

[root@ceph1 ~]# ceph -s
  cluster:
    id:     6bbb8f28-46f4-4faa-8b36-bd598df8b57a
    health: HEALTH_OK

  services:
    mon: 2 daemons, quorum ceph2,ceph1
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   0B used, 0B / 0B avail
    pgs:

[root@ceph1 ~]#

Manual configuration of the OSD service (Object Storage Devices)

We start by generating security keys, just as we did for the previous service:

[root@ceph1 ~]# ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
creating /var/lib/ceph/bootstrap-osd/ceph.keyring
[root@ceph1 ~]#

[root@ceph1 ~]# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
importing contents of /etc/ceph/ceph.client.admin.keyring into /tmp/ceph.mon.keyring
[root@ceph1 ~]#

[root@ceph1 ~]# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
importing contents of /var/lib/ceph/bootstrap-osd/ceph.keyring into /tmp/ceph.mon.keyring
[root@ceph1 ~]#

[root@ceph1 ~]# scp /var/lib/ceph/bootstrap-osd/ceph.keyring ceph2:/var/lib/ceph/bootstrap-osd/
ceph.keyring                                                                                                                                                       100%  129     0.1KB/s   00:00
[root@ceph1 ~]#

[root@ceph1 ~]# chown -R ceph:ceph /var/lib/ceph/bootstrap-osd
[root@ceph2 ~]# chown -R ceph:ceph /var/lib/ceph/bootstrap-osd

We create an LVM structure for object storage:

[root@ceph1 ~]# ceph-volume lvm create --data /dev/xvdb
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new f9188759-8b0e-42d8-b2de-e0ae490c94ac
Running command: vgcreate --force --yes ceph-8a8d0b9e-6db7-4d6c-863e-6f2a86ed75e9 /dev/xvdb
 stdout: Physical volume "/dev/xvdb" successfully created.
 stdout: Volume group "ceph-8a8d0b9e-6db7-4d6c-863e-6f2a86ed75e9" successfully created
Running command: lvcreate --yes -l 100%FREE -n osd-block-f9188759-8b0e-42d8-b2de-e0ae490c94ac ceph-8a8d0b9e-6db7-4d6c-863e-6f2a86ed75e9
 stdout: Logical volume "osd-block-f9188759-8b0e-42d8-b2de-e0ae490c94ac" created.
Running command: /bin/ceph-authtool --gen-print-key
Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Running command: restorecon /var/lib/ceph/osd/ceph-1
Running command: chown -h ceph:ceph /dev/ceph-8a8d0b9e-6db7-4d6c-863e-6f2a86ed75e9/osd-block-f9188759-8b0e-42d8-b2de-e0ae490c94ac
Running command: chown -R ceph:ceph /dev/dm-0
Running command: ln -s /dev/ceph-8a8d0b9e-6db7-4d6c-863e-6f2a86ed75e9/osd-block-f9188759-8b0e-42d8-b2de-e0ae490c94ac /var/lib/ceph/osd/ceph-1/block
Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
 stderr: got monmap epoch 1
Running command: ceph-authtool /var/lib/ceph/osd/ceph-1/keyring --create-keyring --name osd.1 --add-key AQC+2w5ekX0tBhAAkFoi48j6zap8Hi+O7m2Fcg==
 stdout: creating /var/lib/ceph/osd/ceph-1/keyring
added entity osd.1 auth auth(auid = 18446744073709551615 key=AQC+2w5ekX0tBhAAkFoi48j6zap8Hi+O7m2Fcg== with 0 caps)
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid f9188759-8b0e-42d8-b2de-e0ae490c94ac --setuser ceph --setgroup ceph
--> ceph-volume lvm prepare successful for: /dev/xvdb
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-8a8d0b9e-6db7-4d6c-863e-6f2a86ed75e9/osd-block-f9188759-8b0e-42d8-b2de-e0ae490c94ac --path /var/lib/ceph/osd/ceph-1
Running command: ln -snf /dev/ceph-8a8d0b9e-6db7-4d6c-863e-6f2a86ed75e9/osd-block-f9188759-8b0e-42d8-b2de-e0ae490c94ac /var/lib/ceph/osd/ceph-1/block
Running command: chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Running command: chown -R ceph:ceph /dev/dm-0
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Running command: systemctl enable ceph-volume@lvm-1-f9188759-8b0e-42d8-b2de-e0ae490c94ac
 stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-1-f9188759-8b0e-42d8-b2de-e0ae490c94ac.service to /usr/lib/systemd/system/ceph-volume@.service
Running command: systemctl enable --runtime ceph-osd@1
 stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@1.service to /usr/lib/systemd/system/ceph-osd@.service
Running command: systemctl start ceph-osd@1
--> ceph-volume lvm activate successful for osd ID: 1
--> ceph-volume lvm create successful for: /dev/xvdb
[root@ceph1 ~]#

[root@ceph1 ~]# vgs
  VG                                        #PV #LV #SN Attr   VSize  VFree
  ceph-8a8d0b9e-6db7-4d6c-863e-6f2a86ed75e9   1   1   0 wz--n- <3.00g    0
[root@ceph1 ~]#

[root@ceph2 ~]# ceph-volume lvm create --data /dev/xvdb
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 9988a3dc-6f46-4c67-9314-92c5bccc5208
Running command: vgcreate --force --yes ceph-a7c61b7f-5f8c-4882-b090-cb21ed52a459 /dev/xvdb
 stdout: Physical volume "/dev/xvdb" successfully created.
 stdout: Volume group "ceph-a7c61b7f-5f8c-4882-b090-cb21ed52a459" successfully created
Running command: lvcreate --yes -l 100%FREE -n osd-block-9988a3dc-6f46-4c67-9314-92c5bccc5208 ceph-a7c61b7f-5f8c-4882-b090-cb21ed52a459
 stdout: Wiping xfs signature on /dev/ceph-a7c61b7f-5f8c-4882-b090-cb21ed52a459/osd-block-9988a3dc-6f46-4c67-9314-92c5bccc5208.
 stdout: Logical volume "osd-block-9988a3dc-6f46-4c67-9314-92c5bccc5208" created.
Running command: /bin/ceph-authtool --gen-print-key
Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Running command: restorecon /var/lib/ceph/osd/ceph-2
Running command: chown -h ceph:ceph /dev/ceph-a7c61b7f-5f8c-4882-b090-cb21ed52a459/osd-block-9988a3dc-6f46-4c67-9314-92c5bccc5208
Running command: chown -R ceph:ceph /dev/dm-0
Running command: ln -s /dev/ceph-a7c61b7f-5f8c-4882-b090-cb21ed52a459/osd-block-9988a3dc-6f46-4c67-9314-92c5bccc5208 /var/lib/ceph/osd/ceph-2/block
Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
 stderr: got monmap epoch 1
Running command: ceph-authtool /var/lib/ceph/osd/ceph-2/keyring --create-keyring --name osd.2 --add-key AQBB3A5eaq+NBBAAFnCQz6ftyeTi/4wBEGEnSg==
 stdout: creating /var/lib/ceph/osd/ceph-2/keyring
added entity osd.2 auth auth(auid = 18446744073709551615 key=AQBB3A5eaq+NBBAAFnCQz6ftyeTi/4wBEGEnSg== with 0 caps)
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 9988a3dc-6f46-4c67-9314-92c5bccc5208 --setuser ceph --setgroup ceph
--> ceph-volume lvm prepare successful for: /dev/xvdb
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-a7c61b7f-5f8c-4882-b090-cb21ed52a459/osd-block-9988a3dc-6f46-4c67-9314-92c5bccc5208 --path /var/lib/ceph/osd/ceph-2
Running command: ln -snf /dev/ceph-a7c61b7f-5f8c-4882-b090-cb21ed52a459/osd-block-9988a3dc-6f46-4c67-9314-92c5bccc5208 /var/lib/ceph/osd/ceph-2/block
Running command: chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Running command: chown -R ceph:ceph /dev/dm-0
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Running command: systemctl enable ceph-volume@lvm-2-9988a3dc-6f46-4c67-9314-92c5bccc5208
 stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-2-9988a3dc-6f46-4c67-9314-92c5bccc5208.service to /usr/lib/systemd/system/ceph-volume@.service
Running command: systemctl enable --runtime ceph-osd@2
 stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@2.service to /usr/lib/systemd/system/ceph-osd@.service
Running command: systemctl start ceph-osd@2
--> ceph-volume lvm activate successful for osd ID: 2
--> ceph-volume lvm create successful for: /dev/xvdb
[root@ceph2 ~]#

[root@ceph2 ~]# vgs
  VG                                        #PV #LV #SN Attr   VSize  VFree
  ceph-a7c61b7f-5f8c-4882-b090-cb21ed52a459   1   1   0 wz--n- <3.00g    0
[root@ceph2 ~]#

We create the manager (MGR) keys and start the OSD service:

[root@ceph1 ~]# ceph auth get-or-create mgr.ceph1 mon 'allow profile mgr' osd 'allow *' mds 'allow *'
[mgr.ceph1]
        key = AQAIVg5eSFOoChAAb9feCbW+fwUczzGeE5XGBg==
[root@ceph1 ~]#

[root@ceph2 ~]# ceph auth get-or-create mgr.ceph2 mon 'allow profile mgr' osd 'allow *' mds 'allow *'
[mgr.ceph2]
        key = AQBsVg5eaEt4GBAAJXIo6RjDn6StRkNvjVQmFg==
[root@ceph2 ~]#
[root@ceph2 ~]#

[root@ceph1 ~]# mkdir -p /var/lib/ceph/mgr/ceph-ceph1
[root@ceph2 ~]# mkdir -p /var/lib/ceph/mgr/ceph-ceph2

[root@ceph1 ~]# cat /var/lib/ceph/mgr/ceph-ceph1/keyring
[mgr.ceph1]
        key = AQAIVg5eSFOoChAAb9feCbW+fwUczzGeE5XGBg==
[root@ceph1 ~]# chown -R ceph:ceph /var/lib/ceph/mgr

[root@ceph2 ~]# cat /var/lib/ceph/mgr/ceph-ceph2/keyring
[mgr.ceph2]
        key = AQBsVg5eaEt4GBAAJXIo6RjDn6StRkNvjVQmFg==
[root@ceph2 ~]#
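The guide does not show the mgr daemons being started explicitly, although ceph -s will later report them as active. Assuming the standard systemd unit names, they can be enabled and started like this (each pair of commands on its own node):

systemctl enable ceph-mgr@ceph1    # on ceph1
systemctl start ceph-mgr@ceph1     # on ceph1
systemctl enable ceph-mgr@ceph2    # on ceph2
systemctl start ceph-mgr@ceph2     # on ceph2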

[root@ceph1 ~]# mkdir /var/lib/ceph/osd/ceph-1
[root@ceph2 ~]# mkdir /var/lib/ceph/osd/ceph-2
[root@ceph1 ~]# chown -R ceph:ceph /var/lib/ceph*
[root@ceph2 ~]# chown -R ceph:ceph /var/lib/ceph*

[root@ceph1 ~]# systemctl restart ceph-osd@1
[root@ceph1 ~]# systemctl status ceph-osd@1
● ceph-osd@1.service - Ceph object storage daemon osd.1
   Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2020-01-03 02:12:06 EST; 9s ago
  Process: 4316 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
 Main PID: 4321 (ceph-osd)
   CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd@1.service
           +-4321 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph

Jan 03 02:12:06 ip-10-0-1-228.eu-west-1.compute.internal systemd[1]: Stopped Ceph object storage daemon osd.1.
Jan 03 02:12:06 ip-10-0-1-228.eu-west-1.compute.internal systemd[1]: Starting Ceph object storage daemon osd.1...
Jan 03 02:12:06 ip-10-0-1-228.eu-west-1.compute.internal systemd[1]: Started Ceph object storage daemon osd.1.
Jan 03 02:12:06 ip-10-0-1-228.eu-west-1.compute.internal ceph-osd[4321]: 2020-01-03 02:12:06.197168 7fc6681e3d80 -1 Public network was set, but cluster network was not set
Jan 03 02:12:06 ip-10-0-1-228.eu-west-1.compute.internal ceph-osd[4321]: 2020-01-03 02:12:06.197172 7fc6681e3d80 -1     Using public network also for cluster network
Jan 03 02:12:06 ip-10-0-1-228.eu-west-1.compute.internal ceph-osd[4321]: starting osd.1 at - osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Jan 03 02:12:06 ip-10-0-1-228.eu-west-1.compute.internal ceph-osd[4321]: 2020-01-03 02:12:06.309694 7fc6681e3d80 -1 osd.1 39 log_to_monitors {default=true}
[root@ceph1 ~]#

[root@ceph2 ~]# systemctl restart ceph-osd@2
[root@ceph2 ~]# systemctl status ceph-osd@2
● ceph-osd@2.service - Ceph object storage daemon osd.2
   Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled-runtime; vendor preset: disabled)
   Active: active (running) since Fri 2020-01-03 02:13:02 EST; 4s ago
  Process: 4754 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
 Main PID: 4759 (ceph-osd)
   CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd@2.service
           +-4759 /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup ceph

Jan 03 02:13:02 ip-10-0-1-132.eu-west-1.compute.internal systemd[1]: Stopped Ceph object storage daemon osd.2.
Jan 03 02:13:02 ip-10-0-1-132.eu-west-1.compute.internal systemd[1]: Starting Ceph object storage daemon osd.2...
Jan 03 02:13:02 ip-10-0-1-132.eu-west-1.compute.internal systemd[1]: Started Ceph object storage daemon osd.2.
Jan 03 02:13:02 ip-10-0-1-132.eu-west-1.compute.internal ceph-osd[4759]: 2020-01-03 02:13:02.293503 7ff953e92d80 -1 Public network was set, but cluster network was not set
Jan 03 02:13:02 ip-10-0-1-132.eu-west-1.compute.internal ceph-osd[4759]: 2020-01-03 02:13:02.293510 7ff953e92d80 -1     Using public network also for cluster network
Jan 03 02:13:02 ip-10-0-1-132.eu-west-1.compute.internal ceph-osd[4759]: starting osd.2 at - osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Jan 03 02:13:02 ip-10-0-1-132.eu-west-1.compute.internal ceph-osd[4759]: 2020-01-03 02:13:02.418944 7ff953e92d80 -1 osd.2 41 log_to_monitors {default=true}
[root@ceph2 ~]#

If we check the cluster status again, we will see that the OSD service now shows up:

[root@ceph1 ~]# ceph -s
  cluster:
    id:     6bbb8f28-46f4-4faa-8b36-bd598df8b57a
    health: HEALTH_WARN
            no active mgr

  services:
    mon: 2 daemons, quorum ceph2,ceph1
    mgr: no daemons active
    osd: 3 osds: 2 up, 2 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   0B used, 0B / 0B avail
    pgs:

[root@ceph1 ~]#
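Optionally, ceph osd tree prints the CRUSH hierarchy with the up/in state of each OSD, which is a handy cross-check at this point:

ceph osd tree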

Manual configuration of the MDS service (metadata)

This is the easiest service of all to configure:

[root@ceph1 ~]# mkdir -p /var/lib/ceph/mds/ceph-ceph1
[root@ceph1 ~]# ceph-authtool --create-keyring /var/lib/ceph/mds/ceph-ceph1/keyring --gen-key -n mds.ceph1
creating /var/lib/ceph/mds/ceph-ceph1/keyring
[root@ceph1 ~]#

[root@ceph1 ~]# ceph auth add mds.ceph1 osd 'allow rwx' mds 'allow' mon 'allow profile mds' -i  /var/lib/ceph/mds/ceph-ceph1/keyring
added key for mds.ceph1
[root@ceph1 ~]#

[root@ceph1 ~]# chown -R ceph:ceph /var/lib/ceph/mds/

[root@ceph1 ~]# systemctl restart ceph-mds@ceph1
[root@ceph1 ~]# systemctl status ceph-mds@ceph1
● ceph-mds@ceph1.service - Ceph metadata server daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-mds@.service; disabled; vendor preset: disabled)
   Active: active (running) since Fri 2020-01-03 02:25:33 EST; 5s ago
 Main PID: 5111 (ceph-mds)
   CGroup: /system.slice/system-ceph\x2dmds.slice/ceph-mds@ceph1.service
           +-5111 /usr/bin/ceph-mds -f --cluster ceph --id ceph1 --setuser ceph --setgroup ceph

Jan 03 02:25:33 ip-10-0-1-228.eu-west-1.compute.internal systemd[1]: Started Ceph metadata server daemon.
Jan 03 02:25:33 ip-10-0-1-228.eu-west-1.compute.internal ceph-mds[5111]: starting mds.ceph1 at -
[root@ceph1 ~]#

[root@ceph2 ~]# mkdir -p /var/lib/ceph/mds/ceph-ceph2
[root@ceph2 ~]# ceph-authtool --create-keyring /var/lib/ceph/mds/ceph-ceph2/keyring --gen-key -n mds.ceph2
creating /var/lib/ceph/mds/ceph-ceph2/keyring
[root@ceph2 ~]#

[root@ceph2 ~]# ceph auth add mds.ceph2 osd 'allow rwx' mds 'allow' mon 'allow profile mds' -i  /var/lib/ceph/mds/ceph-ceph2/keyring
added key for mds.ceph2
[root@ceph2 ~]#

[root@ceph2 ~]# chown -R ceph:ceph /var/lib/ceph/mds/

[root@ceph2 ~]# systemctl restart ceph-mds@ceph2
[root@ceph2 ~]# systemctl status ceph-mds@ceph2
● ceph-mds@ceph2.service - Ceph metadata server daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-mds@.service; disabled; vendor preset: disabled)
   Active: active (running) since Fri 2020-01-03 02:26:16 EST; 6s ago
 Main PID: 5371 (ceph-mds)
   CGroup: /system.slice/system-ceph\x2dmds.slice/ceph-mds@ceph2.service
           +-5371 /usr/bin/ceph-mds -f --cluster ceph --id ceph2 --setuser ceph --setgroup ceph

Jan 03 02:26:16 ip-10-0-1-132.eu-west-1.compute.internal systemd[1]: Started Ceph metadata server daemon.
Jan 03 02:26:16 ip-10-0-1-132.eu-west-1.compute.internal ceph-mds[5371]: starting mds.ceph2 at -
[root@ceph2 ~]#

Once it has started correctly, we add it to the cluster configuration file:

[root@ceph1 ~]# cat /etc/ceph/ceph.conf
[global]
fsid = 6bbb8f28-46f4-4faa-8b36-bd598df8b57a
mon_initial_members = ceph1,ceph2
mon_host = 10.0.1.228,10.0.1.132
public_network = 10.0.1.0/24
auth_cluster_required = none
auth_service_required = none
auth_client_required = none
osd_journal_size = 1024
osd_pool_default_size = 3
osd_pool_default_min_size = 1

[mds.ceph1]
host = ceph1

[mds.ceph2]
host = ceph2
[root@ceph1 ~]#


[root@ceph1 ~]# systemctl restart ceph-mon@ceph1
[root@ceph1 ~]# systemctl restart ceph-mgr@ceph1
[root@ceph1 ~]# systemctl restart ceph-mds@ceph1

We apply the same configuration on node ceph2.

We check the status of the MDS service:

[root@ceph1 ~]# ceph mds stat
, 2 up:standby
[root@ceph1 ~]#
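Both MDS daemons appear as standby because no CephFS filesystem has been created yet. A minimal sketch of creating one would be the following (the pool names and PG counts are illustrative, not part of the original setup):

ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 64
ceph fs new cephfs cephfs_metadata cephfs_data
ceph mds stat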

[root@ceph1 ~]# ceph -s
  cluster:
    id:     6bbb8f28-46f4-4faa-8b36-bd598df8b57a
    health: HEALTH_OK

  services:
    mon: 2 daemons, quorum ceph2,ceph1
    mgr: ceph1(active), standbys: ceph2
    osd: 3 osds: 2 up, 2 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   2.00GiB used, 3.99GiB / 5.99GiB avail
    pgs:

[root@ceph1 ~]#
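As a quick taste of the basic object and block commands mentioned in the introduction, something along these lines exercises the cluster once it is healthy (the pool and image names are just examples):

ceph osd pool create rbd 64
ceph osd pool application enable rbd rbd
rados -p rbd put test-object /etc/hosts    # object storage: upload a file as an object
rados -p rbd ls                            # list the objects in the pool
rbd create disk01 --size 1024 --pool rbd   # block storage: create a 1 GiB RBD image
rbd ls rbd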

Automated Ceph Configuration with ceph-deploy (recommended option)

In the prerequisites we had already installed the ceph-deploy package, but we did not use it during the manual Ceph installation.

ceph-deploy automates the configuration of all the nodes in the Ceph cluster. Let's see how it works.
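Since ceph-deploy drives the other nodes over SSH, it expects passwordless root SSH from the admin node to every cluster node. A minimal sketch to set that up (default key path assumed):

ssh-keygen -t rsa
ssh-copy-id root@ceph1
ssh-copy-id root@ceph2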

Defining the nodes that will form part of the cluster

We run the ceph-deploy command as the root user, as follows:

[root@ceph1 ~]# ceph-deploy new ceph1 ceph2
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.36): /bin/ceph-deploy new ceph1 ceph2
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0x7fc8c22ca230>
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x17cecf8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['ceph1', 'ceph2']
[ceph_deploy.cli][INFO  ]  public_network                : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph1][DEBUG ] connected to host: ceph1
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph1][DEBUG ] find the location of an executable
[ceph1][INFO  ] Running command: /usr/sbin/ip link show
[ceph1][INFO  ] Running command: /usr/sbin/ip addr show
[ceph1][DEBUG ] IP addresses found: [u'10.0.1.137', u'10.0.1.228']
[ceph_deploy.new][DEBUG ] Resolving host ceph1
[ceph_deploy.new][DEBUG ] Monitor ceph1 at 10.0.1.228
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph2][DEBUG ] connected to host: ceph1
[ceph2][INFO  ] Running command: ssh -CT -o BatchMode=yes ceph2
[ceph2][DEBUG ] connected to host: ceph2
[ceph2][DEBUG ] detect platform information from remote host
[ceph2][DEBUG ] detect machine type
[ceph2][DEBUG ] find the location of an executable
[ceph2][INFO  ] Running command: /usr/sbin/ip link show
[ceph2][INFO  ] Running command: /usr/sbin/ip addr show
[ceph2][DEBUG ] IP addresses found: [u'10.0.1.132', u'10.0.1.58']
[ceph_deploy.new][DEBUG ] Resolving host ceph2
[ceph_deploy.new][DEBUG ] Monitor ceph2 at 10.0.1.132
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph1', 'ceph2']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['10.0.1.228', '10.0.1.132']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
[root@ceph1 ~]#

Copying the Ceph binaries to all the cluster nodes

[root@ceph1 ~]# ceph-deploy install ceph1 ceph2
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.36): /bin/ceph-deploy install ceph1 ceph2
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  testing                       : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1d06290>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  dev_commit                    : None
[ceph_deploy.cli][INFO  ]  install_mds                   : False
[ceph_deploy.cli][INFO  ]  stable                        : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  adjust_repos                  : True
[ceph_deploy.cli][INFO  ]  func                          : <function install at 0x7f6c1956b500>
[ceph_deploy.cli][INFO  ]  install_all                   : False
[ceph_deploy.cli][INFO  ]  repo                          : False
[ceph_deploy.cli][INFO  ]  host                          : ['ceph1', 'ceph2']
[ceph_deploy.cli][INFO  ]  install_rgw                   : False
[ceph_deploy.cli][INFO  ]  install_tests                 : False
[ceph_deploy.cli][INFO  ]  repo_url                      : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  install_osd                   : False
[ceph_deploy.cli][INFO  ]  version_kind                  : stable
[ceph_deploy.cli][INFO  ]  install_common                : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  dev                           : master
[ceph_deploy.cli][INFO  ]  nogpgcheck                    : False
[ceph_deploy.cli][INFO  ]  local_mirror                  : None
[ceph_deploy.cli][INFO  ]  release                       : None
[ceph_deploy.cli][INFO  ]  install_mon                   : False
[ceph_deploy.cli][INFO  ]  gpg_url                       : None
[ceph_deploy.install][DEBUG ] Installing stable version jewel on cluster ceph hosts ceph1 ceph2
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph1 ...
[ceph1][DEBUG ] connected to host: ceph1
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: Red Hat Enterprise Linux Server 7.2 Maipo
[ceph1][INFO  ] installing Ceph on ceph1
[ceph1][INFO  ] Running command: yum clean all
[ceph1][DEBUG ] Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
[ceph1][DEBUG ] Cleaning repos: Ceph3.3mon Ceph3.3osd Ceph3.3tools
[ceph1][DEBUG ]               : rhui-REGION-client-config-server-7
[ceph1][DEBUG ]               : rhui-REGION-rhel-server-releases
[ceph1][DEBUG ]               : rhui-REGION-rhel-server-rh-common
[ceph1][DEBUG ] Cleaning up everything
[ceph1][INFO  ] Running command: yum -y install ceph-osd ceph-mds ceph-mon ceph-radosgw
[ceph1][DEBUG ] Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
[ceph1][DEBUG ] Package 2:ceph-osd-12.2.12-45.el7cp.x86_64 already installed and latest version
[ceph1][DEBUG ] Package 2:ceph-mds-12.2.12-45.el7cp.x86_64 already installed and latest version
[ceph1][DEBUG ] Package 2:ceph-mon-12.2.12-45.el7cp.x86_64 already installed and latest version
[ceph1][DEBUG ] Package 2:ceph-radosgw-12.2.12-45.el7cp.x86_64 already installed and latest version
[ceph1][DEBUG ] Nothing to do
[ceph1][INFO  ] Running command: ceph --version
[ceph1][DEBUG ] ceph version 12.2.12-45.el7cp (60e2063ab367d6d71e55ea3b3671055c4a8cde2f) luminous (stable)
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph2 ...
[ceph2][DEBUG ] connected to host: ceph2
[ceph2][DEBUG ] detect platform information from remote host
[ceph2][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: Red Hat Enterprise Linux Server 7.2 Maipo
[ceph2][INFO  ] installing Ceph on ceph2
[ceph2][INFO  ] Running command: yum clean all
[ceph2][DEBUG ] Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
[ceph2][DEBUG ] Cleaning repos: Ceph3.3mon Ceph3.3osd Ceph3.3tools
[ceph2][DEBUG ]               : rhui-REGION-client-config-server-7
[ceph2][DEBUG ]               : rhui-REGION-rhel-server-releases
[ceph2][DEBUG ]               : rhui-REGION-rhel-server-rh-common
[ceph2][DEBUG ] Cleaning up everything
[ceph2][INFO  ] Running command: yum -y install ceph-osd ceph-mds ceph-mon ceph-radosgw
[ceph2][DEBUG ] Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
[ceph2][DEBUG ] Package 2:ceph-osd-12.2.12-45.el7cp.x86_64 already installed and latest version
[ceph2][DEBUG ] Package 2:ceph-mds-12.2.12-45.el7cp.x86_64 already installed and latest version
[ceph2][DEBUG ] Package 2:ceph-mon-12.2.12-45.el7cp.x86_64 already installed and latest version
[ceph2][DEBUG ] Package 2:ceph-radosgw-12.2.12-45.el7cp.x86_64 already installed and latest version
[ceph2][DEBUG ] Nothing to do
[ceph2][INFO  ] Running command: ceph --version
[ceph2][DEBUG ] ceph version 12.2.12-45.el7cp (60e2063ab367d6d71e55ea3b3671055c4a8cde2f) luminous (stable)
[root@ceph1 ~]#

Configuring the monitoring service (MON)

[root@ceph1 ~]# ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.36): /bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create-initial
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x26b6fc8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0x26a9140>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph1 ceph2
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph1 ...
[ceph1][DEBUG ] connected to host: ceph1
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph1][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: Red Hat Enterprise Linux Server 7.2 Maipo
[ceph1][DEBUG ] determining if provided host has same hostname in remote
[ceph1][DEBUG ] get remote short hostname
[ceph1][DEBUG ] deploying mon to ceph1
[ceph1][DEBUG ] get remote short hostname
[ceph1][DEBUG ] remote hostname: ceph1
[ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph1][DEBUG ] create the mon path if it does not exist
[ceph1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph1/done
[ceph1][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-ceph1/done
[ceph1][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-ceph1.mon.keyring
[ceph1][DEBUG ] create the monitor keyring file
[ceph1][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i ceph1 --keyring /var/lib/ceph/tmp/ceph-ceph1.mon.keyring --setuser 167 --setgroup 167
[ceph1][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-ceph1.mon.keyring
[ceph1][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph1][DEBUG ] create the init path if it does not exist
[ceph1][INFO  ] Running command: systemctl enable ceph.target
[ceph1][INFO  ] Running command: systemctl enable ceph-mon@ceph1
[ceph1][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@ceph1.service to /usr/lib/systemd/system/ceph-mon@.service
[ceph1][INFO  ] Running command: systemctl start ceph-mon@ceph1
[ceph1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph1.asok mon_status
[ceph1][DEBUG ] ********************************************************************************
[ceph1][DEBUG ] status for monitor: mon.ceph1
[ceph1][DEBUG ] {
[ceph1][DEBUG ]   "election_epoch": 0,
[ceph1][DEBUG ]   "extra_probe_peers": [
[ceph1][DEBUG ]     "10.0.1.132:6789/0"
[ceph1][DEBUG ]   ],
[ceph1][DEBUG ]   "feature_map": {
[ceph1][DEBUG ]     "mon": {
[ceph1][DEBUG ]       "group": {
[ceph1][DEBUG ]         "features": "0x3ffddff8eeacfffb",
[ceph1][DEBUG ]         "num": 1,
[ceph1][DEBUG ]         "release": "luminous"
[ceph1][DEBUG ]       }
[ceph1][DEBUG ]     }
[ceph1][DEBUG ]   },
[ceph1][DEBUG ]   "features": {
[ceph1][DEBUG ]     "quorum_con": "0",
[ceph1][DEBUG ]     "quorum_mon": [],
[ceph1][DEBUG ]     "required_con": "0",
[ceph1][DEBUG ]     "required_mon": []
[ceph1][DEBUG ]   },
[ceph1][DEBUG ]   "monmap": {
[ceph1][DEBUG ]     "created": "2020-01-09 09:51:23.076706",
[ceph1][DEBUG ]     "epoch": 0,
[ceph1][DEBUG ]     "features": {
[ceph1][DEBUG ]       "optional": [],
[ceph1][DEBUG ]       "persistent": []
[ceph1][DEBUG ]     },
[ceph1][DEBUG ]     "fsid": "307b5bff-33ea-453b-8be7-4519bbd9e8d7",
[ceph1][DEBUG ]     "modified": "2020-01-09 09:51:23.076706",
[ceph1][DEBUG ]     "mons": [
[ceph1][DEBUG ]       {
[ceph1][DEBUG ]         "addr": "10.0.1.228:6789/0",
[ceph1][DEBUG ]         "name": "ceph1",
[ceph1][DEBUG ]         "public_addr": "10.0.1.228:6789/0",
[ceph1][DEBUG ]         "rank": 0
[ceph1][DEBUG ]       },
[ceph1][DEBUG ]       {
[ceph1][DEBUG ]         "addr": "0.0.0.0:0/1",
[ceph1][DEBUG ]         "name": "ceph2",
[ceph1][DEBUG ]         "public_addr": "0.0.0.0:0/1",
[ceph1][DEBUG ]         "rank": 1
[ceph1][DEBUG ]       }
[ceph1][DEBUG ]     ]
[ceph1][DEBUG ]   },
[ceph1][DEBUG ]   "name": "ceph1",
[ceph1][DEBUG ]   "outside_quorum": [
[ceph1][DEBUG ]     "ceph1"
[ceph1][DEBUG ]   ],
[ceph1][DEBUG ]   "quorum": [],
[ceph1][DEBUG ]   "rank": 0,
[ceph1][DEBUG ]   "state": "probing",
[ceph1][DEBUG ]   "sync_provider": []
[ceph1][DEBUG ] }
[ceph1][DEBUG ] ********************************************************************************
[ceph1][INFO  ] monitor: mon.ceph1 is running
[ceph1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph1.asok mon_status
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph2 ...
[ceph2][DEBUG ] connected to host: ceph2
[ceph2][DEBUG ] detect platform information from remote host
[ceph2][DEBUG ] detect machine type
[ceph2][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: Red Hat Enterprise Linux Server 7.2 Maipo
[ceph2][DEBUG ] determining if provided host has same hostname in remote
[ceph2][DEBUG ] get remote short hostname
[ceph2][DEBUG ] deploying mon to ceph2
[ceph2][DEBUG ] get remote short hostname
[ceph2][DEBUG ] remote hostname: ceph2
[ceph2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph2][DEBUG ] create the mon path if it does not exist
[ceph2][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph2/done
[ceph2][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-ceph2/done
[ceph2][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-ceph2.mon.keyring
[ceph2][DEBUG ] create the monitor keyring file
[ceph2][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i ceph2 --keyring /var/lib/ceph/tmp/ceph-ceph2.mon.keyring --setuser 167 --setgroup 167
[ceph2][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-ceph2.mon.keyring
[ceph2][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph2][DEBUG ] create the init path if it does not exist
[ceph2][INFO  ] Running command: systemctl enable ceph.target
[ceph2][INFO  ] Running command: systemctl enable ceph-mon@ceph2
[ceph2][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@ceph2.service to /usr/lib/systemd/system/ceph-mon@.service
[ceph2][INFO  ] Running command: systemctl start ceph-mon@ceph2
[ceph2][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph2.asok mon_status
[ceph2][DEBUG ] ********************************************************************************
[ceph2][DEBUG ] status for monitor: mon.ceph2
[ceph2][DEBUG ] {
[ceph2][DEBUG ]   "election_epoch": 0,
[ceph2][DEBUG ]   "extra_probe_peers": [
[ceph2][DEBUG ]     "10.0.1.228:6789/0"
[ceph2][DEBUG ]   ],
[ceph2][DEBUG ]   "feature_map": {
[ceph2][DEBUG ]     "mon": {
[ceph2][DEBUG ]       "group": {
[ceph2][DEBUG ]         "features": "0x3ffddff8eeacfffb",
[ceph2][DEBUG ]         "num": 1,
[ceph2][DEBUG ]         "release": "luminous"
[ceph2][DEBUG ]       }
[ceph2][DEBUG ]     }
[ceph2][DEBUG ]   },
[ceph2][DEBUG ]   "features": {
[ceph2][DEBUG ]     "quorum_con": "0",
[ceph2][DEBUG ]     "quorum_mon": [],
[ceph2][DEBUG ]     "required_con": "0",
[ceph2][DEBUG ]     "required_mon": []
[ceph2][DEBUG ]   },
[ceph2][DEBUG ]   "monmap": {
[ceph2][DEBUG ]     "created": "2020-01-09 09:51:26.601370",
[ceph2][DEBUG ]     "epoch": 0,
[ceph2][DEBUG ]     "features": {
[ceph2][DEBUG ]       "optional": [],
[ceph2][DEBUG ]       "persistent": []
[ceph2][DEBUG ]     },
[ceph2][DEBUG ]     "fsid": "307b5bff-33ea-453b-8be7-4519bbd9e8d7",
[ceph2][DEBUG ]     "modified": "2020-01-09 09:51:26.601370",
[ceph2][DEBUG ]     "mons": [
[ceph2][DEBUG ]       {
[ceph2][DEBUG ]         "addr": "10.0.1.132:6789/0",
[ceph2][DEBUG ]         "name": "ceph2",
[ceph2][DEBUG ]         "public_addr": "10.0.1.132:6789/0",
[ceph2][DEBUG ]         "rank": 0
[ceph2][DEBUG ]       },
[ceph2][DEBUG ]       {
[ceph2][DEBUG ]         "addr": "10.0.1.228:6789/0",
[ceph2][DEBUG ]         "name": "ceph1",
[ceph2][DEBUG ]         "public_addr": "10.0.1.228:6789/0",
[ceph2][DEBUG ]         "rank": 1
[ceph2][DEBUG ]       }
[ceph2][DEBUG ]     ]
[ceph2][DEBUG ]   },
[ceph2][DEBUG ]   "name": "ceph2",
[ceph2][DEBUG ]   "outside_quorum": [
[ceph2][DEBUG ]     "ceph2"
[ceph2][DEBUG ]   ],
[ceph2][DEBUG ]   "quorum": [],
[ceph2][DEBUG ]   "rank": 0,
[ceph2][DEBUG ]   "state": "probing",
[ceph2][DEBUG ]   "sync_provider": []
[ceph2][DEBUG ] }
[ceph2][DEBUG ] ********************************************************************************
[ceph2][INFO  ] monitor: mon.ceph2 is running
[ceph2][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph2.asok mon_status
[ceph_deploy.mon][INFO  ] processing monitor mon.ceph1
[ceph1][DEBUG ] connected to host: ceph1
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph1][DEBUG ] find the location of an executable
[ceph1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph1.asok mon_status
[ceph_deploy.mon][INFO  ] mon.ceph1 monitor has reached quorum!
[ceph_deploy.mon][INFO  ] processing monitor mon.ceph2
[ceph2][DEBUG ] connected to host: ceph2
[ceph2][DEBUG ] detect platform information from remote host
[ceph2][DEBUG ] detect machine type
[ceph2][DEBUG ] find the location of an executable
[ceph2][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph2.asok mon_status
[ceph_deploy.mon][INFO  ] mon.ceph2 monitor has reached quorum!
[ceph_deploy.mon][INFO  ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO  ] Running gatherkeys...
[ceph_deploy.gatherkeys][INFO  ] Storing keys in temp directory /tmp/tmpjDmruJ
[ceph1][DEBUG ] connected to host: ceph1
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph1][DEBUG ] get remote short hostname
[ceph1][DEBUG ] fetch remote file
[ceph1][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.ceph1.asok mon_status
[ceph1][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph1/keyring auth get client.admin
[ceph1][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph1/keyring auth get-or-create client.admin osd allow * mds allow * mon allow *
[ceph1][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph1/keyring auth get client.bootstrap-mds
[ceph1][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph1/keyring auth get-or-create client.bootstrap-mds mon allow profile bootstrap-mds
[ceph1][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph1/keyring auth get client.bootstrap-osd
[ceph1][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph1/keyring auth get-or-create client.bootstrap-osd mon allow profile bootstrap-osd
[ceph1][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph1/keyring auth get client.bootstrap-rgw
[ceph1][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph1/keyring auth get-or-create client.bootstrap-rgw mon allow profile bootstrap-rgw
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpjDmruJ
[root@ceph1 ~]#

Distributing the cluster configuration to all the nodes

[root@ceph1 ~]# ceph-deploy --overwrite-conf admin ceph1 ceph2
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.36): /bin/ceph-deploy admin ceph1 ceph2
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x15f2638>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['ceph1', 'ceph2']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0x7f04a4fc3f50>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph1
[ceph1][DEBUG ] connected to host: ceph1
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph2
[ceph2][DEBUG ] connected to host: ceph2
[ceph2][DEBUG ] detect platform information from remote host
[ceph2][DEBUG ] detect machine type
[ceph2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[[email protected] ~]#
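
On each node that received the admin keyring, ceph-deploy leaves /etc/ceph/ceph.client.admin.keyring readable only by root. If you intend to run ceph commands as a non-root user on those nodes, a common follow-up (a sketch; adapt it to your own security policy) is to relax the keyring permissions on ceph1 and ceph2:

# Optional: allow non-root users to read the admin keyring so the ceph CLI works for them
chmod +r /etc/ceph/ceph.client.admin.keyring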

Configuring the OSD service (Object Storage Devices)

[[email protected] ~]# ceph-deploy osd create ceph1:xvdb ceph2:xvdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.36): /bin/ceph-deploy osd create ceph1:xvdb ceph2:xvdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  disk                          : [('ceph1', '/dev/xvdb', None), ('ceph2', '/dev/xvdb', None)]
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1cb1248>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x1c9d9b0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph1:/dev/xvdb: ceph2:/dev/xvdb:
[ceph1][DEBUG ] connected to host: ceph1
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Red Hat Enterprise Linux Server 7.2 Maipo
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph1
[ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph1][WARNIN] osd keyring does not exist yet, creating one
[ceph1][DEBUG ] create a keyring file
[ceph_deploy.osd][DEBUG ] Preparing host ceph1 disk /dev/xvdb journal None activate True
[ceph1][DEBUG ] find the location of an executable
[ceph1][INFO  ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/xvdb
[ceph1][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph1][WARNIN] set_type: Will colocate block with data on /dev/xvdb
[ceph1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup bluestore_block_size
[ceph1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup bluestore_block_db_size
[ceph1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup bluestore_block_size
[ceph1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup bluestore_block_wal_size
[ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[ceph1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[ceph1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[ceph1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph1][WARNIN] set_data_partition: Creating osd partition on /dev/xvdb
[ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph1][WARNIN] ptype_tobe_for_name: name = data
[ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph1][WARNIN] create_partition: Creating data partition num 1 size 100 on /dev/xvdb
[ceph1][WARNIN] command_check_call: Running command: /sbin/sgdisk --new=1:0:+100M --change-name=1:ceph data --partition-guid=1:a497f4f8-3051-4e95-a990-af50a7b61a52 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/xvdb
[ceph1][DEBUG ] Creating new GPT entries.
[ceph1][DEBUG ] The operation has completed successfully.
[ceph1][WARNIN] update_partition: Calling partprobe on created device /dev/xvdb
[ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph1][WARNIN] command: Running command: /usr/bin/flock -s /dev/xvdb /sbin/partprobe /dev/xvdb
[ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb1 uuid path is /sys/dev/block/202:17/dm/uuid
[ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph1][WARNIN] ptype_tobe_for_name: name = block
[ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph1][WARNIN] create_partition: Creating block partition num 2 size 0 on /dev/xvdb
[ceph1][WARNIN] command_check_call: Running command: /sbin/sgdisk --largest-new=2 --change-name=2:ceph block --partition-guid=2:9a0afea6-a54d-46c7-bdf2-0d05bd9c5d6b --typecode=2:cafecafe-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/xvdb
[ceph1][DEBUG ] The operation has completed successfully.
[ceph1][WARNIN] update_partition: Calling partprobe on created device /dev/xvdb
[ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph1][WARNIN] command: Running command: /usr/bin/flock -s /dev/xvdb /sbin/partprobe /dev/xvdb
[ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb2 uuid path is /sys/dev/block/202:18/dm/uuid
[ceph1][WARNIN] prepare_device: Block is GPT partition /dev/disk/by-partuuid/9a0afea6-a54d-46c7-bdf2-0d05bd9c5d6b
[ceph1][WARNIN] command_check_call: Running command: /sbin/sgdisk --typecode=2:cafecafe-9b03-4f30-b4c6-b4b80ceff106 -- /dev/xvdb
[ceph1][DEBUG ] The operation has completed successfully.
[ceph1][WARNIN] update_partition: Calling partprobe on prepared device /dev/xvdb
[ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph1][WARNIN] command: Running command: /usr/bin/flock -s /dev/xvdb /sbin/partprobe /dev/xvdb
[ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph1][WARNIN] prepare_device: Block is GPT partition /dev/disk/by-partuuid/9a0afea6-a54d-46c7-bdf2-0d05bd9c5d6b
[ceph1][WARNIN] populate_data_path_device: Creating xfs fs on /dev/xvdb1
[ceph1][WARNIN] command_check_call: Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/xvdb1
[ceph1][DEBUG ] meta-data=/dev/xvdb1             isize=2048   agcount=4, agsize=6400 blks
[ceph1][DEBUG ]          =                       sectsz=512   attr=2, projid32bit=1
[ceph1][DEBUG ]          =                       crc=0        finobt=0
[ceph1][DEBUG ] data     =                       bsize=4096   blocks=25600, imaxpct=25
[ceph1][DEBUG ]          =                       sunit=0      swidth=0 blks
[ceph1][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
[ceph1][DEBUG ] log      =internal log           bsize=4096   blocks=864, version=2
[ceph1][DEBUG ]          =                       sectsz=512   sunit=0 blks, lazy-count=1
[ceph1][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0
[ceph1][WARNIN] mount: Mounting /dev/xvdb1 on /var/lib/ceph/tmp/mnt.jzhHrf with options noatime,inode64
[ceph1][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/xvdb1 /var/lib/ceph/tmp/mnt.jzhHrf
[ceph1][WARNIN] command: Running command: /sbin/restorecon /var/lib/ceph/tmp/mnt.jzhHrf
[ceph1][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.jzhHrf
[ceph1][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.jzhHrf/ceph_fsid.13998.tmp
[ceph1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.jzhHrf/ceph_fsid.13998.tmp
[ceph1][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.jzhHrf/fsid.13998.tmp
[ceph1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.jzhHrf/fsid.13998.tmp
[ceph1][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.jzhHrf/magic.13998.tmp
[ceph1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.jzhHrf/magic.13998.tmp
[ceph1][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.jzhHrf/block_uuid.13998.tmp
[ceph1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.jzhHrf/block_uuid.13998.tmp
[ceph1][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.jzhHrf/block -> /dev/disk/by-partuuid/9a0afea6-a54d-46c7-bdf2-0d05bd9c5d6b
[ceph1][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.jzhHrf/type.13998.tmp
[ceph1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.jzhHrf/type.13998.tmp
[ceph1][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.jzhHrf
[ceph1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.jzhHrf
[ceph1][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.jzhHrf
[ceph1][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.jzhHrf
[ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph1][WARNIN] command_check_call: Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/xvdb
[ceph1][DEBUG ] Warning: The kernel is still using the old partition table.
[ceph1][DEBUG ] The new table will be used at the next reboot.
[ceph1][DEBUG ] The operation has completed successfully.
[ceph1][WARNIN] update_partition: Calling partprobe on prepared device /dev/xvdb
[ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph1][WARNIN] command: Running command: /usr/bin/flock -s /dev/xvdb /sbin/partprobe /dev/xvdb
[ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match xvdb1
[ceph1][INFO  ] Running command: systemctl enable ceph.target
[ceph1][INFO  ] checking OSD status...
[ceph1][DEBUG ] find the location of an executable
[ceph1][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph1][WARNIN] there is 1 OSD down
[ceph1][WARNIN] there is 1 OSD out
[ceph_deploy.osd][DEBUG ] Host ceph1 is now ready for osd use.
[ceph2][DEBUG ] connected to host: ceph2
[ceph2][DEBUG ] detect platform information from remote host
[ceph2][DEBUG ] detect machine type
[ceph2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Red Hat Enterprise Linux Server 7.2 Maipo
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph2
[ceph2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph2][WARNIN] osd keyring does not exist yet, creating one
[ceph2][DEBUG ] create a keyring file
[ceph_deploy.osd][DEBUG ] Preparing host ceph2 disk /dev/xvdb journal None activate True
[ceph2][DEBUG ] find the location of an executable
[ceph2][INFO  ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/xvdb
[ceph2][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph2][WARNIN] set_type: Will colocate block with data on /dev/xvdb
[ceph2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup bluestore_block_size
[ceph2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup bluestore_block_db_size
[ceph2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup bluestore_block_size
[ceph2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup bluestore_block_wal_size
[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[ceph2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[ceph2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[ceph2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph2][WARNIN] set_data_partition: Creating osd partition on /dev/xvdb
[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph2][WARNIN] ptype_tobe_for_name: name = data
[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph2][WARNIN] create_partition: Creating data partition num 1 size 100 on /dev/xvdb
[ceph2][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --new=1:0:+100M --change-name=1:ceph data --partition-guid=1:17869701-95a4-43e0-97b9-d31eae1b09f2 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/xvdb
[ceph2][DEBUG ] Creating new GPT entries.
[ceph2][DEBUG ] The operation has completed successfully.
[ceph2][WARNIN] update_partition: Calling partprobe on created device /dev/xvdb
[ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph2][WARNIN] command: Running command: /usr/bin/flock -s /dev/xvdb /usr/sbin/partprobe /dev/xvdb
[ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb1 uuid path is /sys/dev/block/202:17/dm/uuid
[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph2][WARNIN] ptype_tobe_for_name: name = block
[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph2][WARNIN] create_partition: Creating block partition num 2 size 0 on /dev/xvdb
[ceph2][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --largest-new=2 --change-name=2:ceph block --partition-guid=2:ec77b847-c605-40bb-b650-099bbea001f5 --typecode=2:cafecafe-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/xvdb
[ceph2][DEBUG ] The operation has completed successfully.
[ceph2][WARNIN] update_partition: Calling partprobe on created device /dev/xvdb
[ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph2][WARNIN] command: Running command: /usr/bin/flock -s /dev/xvdb /usr/sbin/partprobe /dev/xvdb
[ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb2 uuid path is /sys/dev/block/202:18/dm/uuid
[ceph2][WARNIN] prepare_device: Block is GPT partition /dev/disk/by-partuuid/ec77b847-c605-40bb-b650-099bbea001f5
[ceph2][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:cafecafe-9b03-4f30-b4c6-b4b80ceff106 -- /dev/xvdb
[ceph2][DEBUG ] The operation has completed successfully.
[ceph2][WARNIN] update_partition: Calling partprobe on prepared device /dev/xvdb
[ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph2][WARNIN] command: Running command: /usr/bin/flock -s /dev/xvdb /usr/sbin/partprobe /dev/xvdb
[ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph2][WARNIN] prepare_device: Block is GPT partition /dev/disk/by-partuuid/ec77b847-c605-40bb-b650-099bbea001f5
[ceph2][WARNIN] populate_data_path_device: Creating xfs fs on /dev/xvdb1
[ceph2][WARNIN] command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -- /dev/xvdb1
[ceph2][DEBUG ] meta-data=/dev/xvdb1             isize=2048   agcount=4, agsize=6400 blks
[ceph2][DEBUG ]          =                       sectsz=512   attr=2, projid32bit=1
[ceph2][DEBUG ]          =                       crc=0        finobt=0
[ceph2][DEBUG ] data     =                       bsize=4096   blocks=25600, imaxpct=25
[ceph2][DEBUG ]          =                       sunit=0      swidth=0 blks
[ceph2][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
[ceph2][DEBUG ] log      =internal log           bsize=4096   blocks=864, version=2
[ceph2][DEBUG ]          =                       sectsz=512   sunit=0 blks, lazy-count=1
[ceph2][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0
[ceph2][WARNIN] mount: Mounting /dev/xvdb1 on /var/lib/ceph/tmp/mnt.yAInIq with options noatime,inode64
[ceph2][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/xvdb1 /var/lib/ceph/tmp/mnt.yAInIq
[ceph2][WARNIN] command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.yAInIq
[ceph2][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.yAInIq
[ceph2][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.yAInIq/ceph_fsid.13071.tmp
[ceph2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.yAInIq/ceph_fsid.13071.tmp
[ceph2][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.yAInIq/fsid.13071.tmp
[ceph2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.yAInIq/fsid.13071.tmp
[ceph2][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.yAInIq/magic.13071.tmp
[ceph2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.yAInIq/magic.13071.tmp
[ceph2][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.yAInIq/block_uuid.13071.tmp
[ceph2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.yAInIq/block_uuid.13071.tmp
[ceph2][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.yAInIq/block -> /dev/disk/by-partuuid/ec77b847-c605-40bb-b650-099bbea001f5
[ceph2][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.yAInIq/type.13071.tmp
[ceph2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.yAInIq/type.13071.tmp
[ceph2][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.yAInIq
[ceph2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.yAInIq
[ceph2][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.yAInIq
[ceph2][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.yAInIq
[ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[ceph2][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/xvdb
[ceph2][DEBUG ] Warning: The kernel is still using the old partition table.
[ceph2][DEBUG ] The new table will be used at the next reboot.
[ceph2][DEBUG ] The operation has completed successfully.
[ceph2][WARNIN] update_partition: Calling partprobe on prepared device /dev/xvdb
[ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph2][WARNIN] command: Running command: /usr/bin/flock -s /dev/xvdb /usr/sbin/partprobe /dev/xvdb
[ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match xvdb1
[ceph2][INFO  ] Running command: systemctl enable ceph.target
[ceph2][INFO  ] checking OSD status...
[ceph2][DEBUG ] find the location of an executable
[ceph2][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph2][WARNIN] there is 1 OSD down
[ceph2][WARNIN] there is 1 OSD out
[ceph_deploy.osd][DEBUG ] Host ceph2 is now ready for osd use.
[[email protected] ~]#
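
The "1 OSD down / 1 OSD out" warnings at the end of each host are normal at this point: ceph-disk has only just prepared the devices and triggered their activation through udev. Once activation finishes, both OSDs should register as up and in; a quick way to confirm it from any node with the admin keyring (a sketch, no output shown):

ceph osd tree    # CRUSH tree with the status of every OSD
ceph osd stat    # summary: how many OSDs are up and in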

We check that we can create a new data pool and configure its replication (number of copies and the minimum number of copies needed to keep serving I/O):

Data Pool

[[email protected] ~]# ceph osd pool create my-userfiles 64
pool 'my-userfiles' created
[[email protected] ~]#

[[email protected] ~]# ceph osd pool set my-userfiles size 2
set pool 1 size to 2
[[email protected] ~]#

[[email protected] ~]# ceph osd pool set my-userfiles min_size 1
set pool 1 min_size to 1
[[email protected] ~]#
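
Note that size and min_size control replication (two copies of each object, and at least one available copy to keep accepting I/O); they are not space limits. If you actually want to cap the pool, a hedged sketch using the set-quota subcommand (the limits below are arbitrary example values):

# Limit the pool to roughly 1 GiB and 10,000 objects (example values)
ceph osd pool set-quota my-userfiles max_bytes 1073741824
ceph osd pool set-quota my-userfiles max_objects 10000
# Review the quotas currently applied to the pool
ceph osd pool get-quota my-userfiles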

We review the cluster configuration

[[email protected] ~]# cat /etc/ceph/ceph.conf
[global]
fsid = 307b5bff-33ea-453b-8be7-4519bbd9e8d7
mon_initial_members = ceph1, ceph2
mon_host = 10.0.1.228,10.0.1.132
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

[[email protected] ~]# ceph -s
  cluster:
    id:     307b5bff-33ea-453b-8be7-4519bbd9e8d7
    health: HEALTH_WARN
            no active mgr

  services:
    mon: 2 daemons, quorum ceph2,ceph1
    mgr: no daemons active
    osd: 2 osds: 2 up, 2 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   0B used, 0B / 0B avail
    pgs:

[[email protected] ~]#
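
The HEALTH_WARN "no active mgr" simply means that no ceph-mgr daemon has been deployed yet. With recent ceph-deploy releases (2.0 and later) it can be added in a single step; a sketch, assuming you want manager daemons on ceph1 and ceph2 (older ceph-deploy versions such as the 1.5.x shown in these logs may not have the mgr subcommand):

ceph-deploy mgr create ceph1 ceph2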

Replacing a failed OSD disk

We confirm that the OSD service is down because a disk has been lost (the osd.0 service shows as down):

[[email protected] ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME          STATUS REWEIGHT PRI-AFF
-1       0.00870 root default
-3       0.00290     host ceph-osd1
 0   ssd 0.00290         osd.0        down  1.00000 1.00000
-5       0.00290     host ceph-osd2
 1   ssd 0.00290         osd.1          up  1.00000 1.00000
-7       0.00290     host ceph-osd3
 2   ssd 0.00290         osd.2          up  1.00000 1.00000
[[email protected] ~]#

We remove the affected disk from the cluster:

[[email protected] ~]# ceph osd unset noout
noout is unset
[[email protected] ~]#
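
As an aside, the noout flag is usually set before taking an OSD down for maintenance, so the cluster does not start rebalancing data immediately, and it is cleared afterwards, which is what the command above does. A minimal sketch of that pattern:

ceph osd set noout     # before the maintenance window, to avoid automatic rebalancing
ceph osd unset noout   # once the work on the OSD is finished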

[[email protected] ~]# ceph osd crush reweight osd.0 0
reweighted item id 0 name 'osd.0' to 0 in crush map
[[email protected] ~]#

[[email protected] ~]# ceph osd out osd.0 0
osd.0 is already out. osd.0 is already out.
[[email protected] ~]#

[[email protected] ~]# ceph osd crush remove osd.0
removed item id 0 name 'osd.0' from crush map
[[email protected] ~]#

[[email protected] ~]# ceph auth del osd.0
updated
[[email protected] ~]#

[[email protected] ~]# ceph osd rm osd.0
removed osd.0
[[email protected] ~]#

[[email protected] ~]# systemctl stop ceph-osd@0.service
[[email protected] ~]# systemctl disable ceph-osd@0.service

[[email protected] ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME          STATUS REWEIGHT PRI-AFF
-1       0.00580 root default
-3             0     host ceph-osd1
-5       0.00290     host ceph-osd2
 1   ssd 0.00290         osd.1          up  1.00000 1.00000
-7       0.00290     host ceph-osd3
 2   ssd 0.00290         osd.2          up  1.00000 1.00000
[[email protected] ~]#

We add the new disk:

[[email protected] ceph]# ceph-deploy  --overwrite-conf osd create ceph-osd1 --data /dev/xvdf
[[email protected] ceph]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME          STATUS REWEIGHT PRI-AFF
-1       0.00870 root default
-3       0.00290     host ceph-osd1
 0   ssd 0.00290         osd.0          up  1.00000 1.00000
-5       0.00290     host ceph-osd2
 1   ssd 0.00290         osd.1          up  1.00000 1.00000
-7       0.00290     host ceph-osd3
 2   ssd 0.00290         osd.2          up  1.00000 1.00000
[[email protected] ceph]#
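
After the new OSD joins, Ceph backfills the affected placement groups onto it. It is worth watching the cluster until it returns to HEALTH_OK; a couple of commands for that (a sketch, no output shown):

ceph -s       # overall health and recovery/backfill progress
ceph osd df   # per-OSD usage once the backfill completes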

Growing an OSD volume after expanding a LUN

You should never do this: the OSD does not grow even if the underlying LUN is expanded, and I/O operations end up blocked. The right approach is to add a new, larger disk to the OSD cluster and, if you wish, remove the smaller ones afterwards.

  • The current size of the LUN is 4 GB:
[[email protected] ~]# ceph-volume inventory

Device Path               Size         rotates available Model name
/dev/xvda                 8.00 GB      False   False
/dev/xvdf                 4.00 GB      False   False
[[email protected] ~]#
  • The storage team expands the LUN to 5 GB:
[[email protected] ~]# rescan-scsi-bus.sh
[[email protected] ~]# fdisk -l |grep dev |grep -v mapper |grep xvdf
Disk /dev/xvdf: 5368 MB, 5368709120 bytes, 10485760 sectors
[[email protected] ~]#
  • We extend the LVM physical volume and logical volume:
[[email protected] ~]# pvresize /dev/xvdf
  Physical volume "/dev/xvdf" changed
  1 physical volume(s) resized or updated / 0 physical volume(s) not resized
[[email protected] ~]# lvextend -n ceph-af482ad5-67b3-4f52-adb3-5b80ccf70e50/osd-block-ad35ca06-03d3-4171-a835-5b4fdd915216 -l+100%FREE
  Size of logical volume ceph-af482ad5-67b3-4f52-adb3-5b80ccf70e50/osd-block-ad35ca06-03d3-4171-a835-5b4fdd915216 changed from 3.00 GiB (3 extents) to 4.00 GiB (4 extents).
  Logical volume ceph-af482ad5-67b3-4f52-adb3-5b80ccf70e50/osd-block-ad35ca06-03d3-4171-a835-5b4fdd915216 successfully resized.
[[email protected] ~]#
  • We restart the OSD service because it does not pick up the new size online:
[[email protected] ~]# systemctl stop ceph-osd@0.service
[[email protected] ~]# systemctl start ceph-osd@0.service
  • We check the OSD storage again and, indeed, the size reported for osd.0 has grown but, as noted at the beginning, the usable capacity of the OSD stays the same. Just look at the space available for data:
[[email protected] ~]# ceph osd df
ID CLASS WEIGHT  REWEIGHT SIZE   RAW USE DATA    OMAP    META     AVAIL   %USE  VAR  PGS STATUS
 0   ssd 0.00290  1.00000  4 GiB 2.0 GiB 2.8 MiB     0 B    1 GiB 2.0 GiB 50.07 1.25  96     up
 1   ssd 0.00290  1.00000  3 GiB 1.0 GiB 2.9 MiB   3 KiB 1024 MiB 2.0 GiB 33.43 0.83 148     up
 2   ssd 0.00290  1.00000  3 GiB 1.0 GiB 2.9 MiB   3 KiB 1024 MiB 2.0 GiB 33.43 0.83 148     up
                    TOTAL 10 GiB 4.0 GiB 8.6 MiB 8.0 KiB  3.0 GiB 6.0 GiB 40.08
MIN/MAX VAR: 0.83/1.25  STDDEV: 7.92
[[email protected] ~]#
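
For completeness: depending on your exact Ceph release (roughly Luminous 12.2.10 and later, including Nautilus), BlueStore ships a tool that can grow an OSD onto the enlarged logical volume, which may make this scenario workable. Treat the following as a sketch to validate in a lab first, with the OSD stopped and assuming OSD id 0:

systemctl stop ceph-osd@0.service
ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-0
systemctl start ceph-osd@0.service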

Checking which disks belong to each OSD

[[email protected] ~]# ceph-volume lvm list


====== osd.0 =======

  [block]       /dev/ceph-dcbd800c-4ed0-4ce5-9768-a79f637a949a/osd-block-2dc06989-022c-4c16-94b9-fe31c940b50a

      block device              /dev/ceph-dcbd800c-4ed0-4ce5-9768-a79f637a949a/osd-block-2dc06989-022c-4c16-94b9-fe31c940b50a
      block uuid                OA7LXG-zpOg-kzKS-tcsh-VZE3-1iUl-dR1DZb
      cephx lockbox secret
      cluster fsid              244f4818-95f7-42fc-ae33-6c45adb4521f
      cluster name              ceph
      crush device class        None
      encrypted                 0
      osd fsid                  2dc06989-022c-4c16-94b9-fe31c940b50a
      osd id                    0
      type                      block
      vdo                       0
      devices                   /dev/xvdf
[[email protected] ~]#
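
The same mapping can also be queried from the cluster side. A small sketch, assuming OSD id 0 (the JSON returned includes the backing devices and the host name):

ceph osd metadata 0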

Configuring the RADOS Gateway (distributed object storage)

[[email protected] ~]# ceph-authtool --create-keyring /etc/ceph/ceph.client.radosgw.keyring
creating /etc/ceph/ceph.client.radosgw.keyring
[[email protected] ~]# ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.gateway --gen-key
[[email protected] ~]# ceph-authtool -n client.radosgw.gateway --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring
[[email protected] ~]# ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.gateway -i /etc/ceph/ceph.client.radosgw.keyring
added key for client.radosgw.gateway
[[email protected] ~]# chown -R ceph:ceph /etc/ceph
[[email protected] ~]# scp /etc/ceph/ceph.client.radosgw.keyring ceph2:/etc/ceph
ceph.client.radosgw.keyring                                                                                                                                        100%  121     0.1KB/s   00:00

[[email protected] ~]# chown -R ceph:ceph /etc/ceph
[[email protected] ~]#

[[email protected] ~]# systemctl restart [email protected]
[[email protected] ~]# systemctl enable ceph-radosgw.target
[[email protected] ~]# systemctl enable [email protected]
Created symlink from /etc/systemd/system/ceph-radosgw.target.wants/[email protected] to /usr/lib/systemd/system/[email protected]
[[email protected] ~]#

[[email protected] ~]# cat /etc/ceph/ceph.conf
[global]
fsid = 307b5bff-33ea-453b-8be7-4519bbd9e8d7
mon_initial_members = ceph1, ceph2
mon_host = 10.0.1.228,10.0.1.132
auth_cluster_required = none
auth_service_required = none
auth_client_required = none

[client.radosgw.gateway]
host = {hostname}
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw socket path = ""
log file = /var/log/radosgw/client.radosgw.gateway.log
rgw frontends = fastcgi socket_port=9000 socket_host=0.0.0.0
rgw print continue = false
[[email protected] ~]#  scp -p /etc/ceph/ceph.conf ceph2:/etc/ceph
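
The example above uses the fastcgi frontend, which additionally needs a web server (e.g. Apache) in front of radosgw. On Mimic the embedded civetweb frontend is usually simpler; a hedged alternative for the same client section, assuming you want the gateway listening directly on port 80 (keep {hostname} as the placeholder for each node's name):

[client.radosgw.gateway]
host = {hostname}
keyring = /etc/ceph/ceph.client.radosgw.keyring
log file = /var/log/radosgw/client.radosgw.gateway.log
rgw frontends = civetweb port=80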


[[email protected] ~]# cp -p /etc/ceph/ceph.client.radosgw.keyring /var/lib/ceph/radosgw/ceph-ceph1/keyring
[[email protected] ~]# ls -la /var/lib/ceph/radosgw/ceph-ceph1/keyring
-rw------- 1 ceph ceph 121 Jan  3 03:51 /var/lib/ceph/radosgw/ceph-ceph1/keyring
[[email protected] ~]#


[[email protected] ~]# systemctl restart [email protected]
[[email protected] ~]# systemctl status [email protected]
● [email protected] - Ceph rados gateway
   Loaded: loaded (/usr/lib/systemd/system/[email protected]; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2020-01-03 04:46:52 EST; 3s ago
 Main PID: 3792 (radosgw)
   CGroup: /system.slice/system-ceph\x2dradosgw.slice/[email protected]
           └─3792 /usr/bin/radosgw -f --cluster ceph --name client.ceph1 --setuser ceph --setgroup ceph

Jan 03 04:46:52 ceph1 systemd[1]: Stopped Ceph rados gateway.
Jan 03 04:46:52 ceph1 systemd[1]: Started Ceph rados gateway.
[[email protected] ~]#

[[email protected] ~]# systemctl restart [email protected]

We create a pool for object storage and check that we can list the pools with rados:

[[email protected] ceph]# ceph osd pool create pool-test 100 100
pool 'pool-test' created
[[email protected] ceph]#

[[email protected] ceph]# rados lspools
.rgw.root
default.rgw.control
default.rgw.meta
default.rgw.log
pool-test
[[email protected] ceph]#
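
As a quick smoke test we can push an object into the new pool with the rados CLI and read it back; a sketch using /etc/hosts as an arbitrary payload:

rados -p pool-test put test-object /etc/hosts              # upload a file as an object
rados -p pool-test ls                                      # list the objects in the pool
rados -p pool-test get test-object /tmp/test-object.out    # download it again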

One more test is to create a test user with radosgw-admin:

[[email protected] ~]# radosgw-admin user create --uid="testuser" --display-name="First User"
2020-01-03 06:06:41.834257 7f4e05f0ae00  0 WARNING: detected a version of libcurl which contains a bug in curl_multi_wait(). enabling a workaround that may degrade performance slightly.
{
    "user_id": "testuser",
    "display_name": "First User",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "testuser",
            "access_key": "Q08H44HC01E5ZN0PUNAG",
            "secret_key": "fuHhEDFsecGP2ksatPH8UnZSag0CCfN1ZhiBCE3a"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw"
}

[[email protected] ~]#


[[email protected] ~]# radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full
2020-01-03 06:07:18.550398 7fbe8070be00  0 WARNING: detected a version of libcurl which contains a bug in curl_multi_wait(). enabling a workaround that may degrade performance slightly.
{
    "user_id": "testuser",
    "display_name": "First User",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [
        {
            "id": "testuser:swift",
            "permissions": "full-control"
        }
    ],
    "keys": [
        {
            "user": "testuser",
            "access_key": "Q08H44HC01E5ZN0PUNAG",
            "secret_key": "fuHhEDFsecGP2ksatPH8UnZSag0CCfN1ZhiBCE3a"
        }
    ],
    "swift_keys": [
        {
            "user": "testuser:swift",
            "secret_key": "iQHrJeorO8xOcBuY1vZJOwqYM6I4CHP5mKmRqpld"
        }
    ],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw"
}

[[email protected] ~]#

We generate the user's secret key:

[[email protected] ~]# radosgw-admin key create --subuser=testuser:swift --key-type=swift --gen-secret
2020-01-03 06:07:51.398865 7f3bfdef1e00  0 WARNING: detected a version of libcurl which contains a bug in curl_multi_wait(). enabling a workaround that may degrade performance slightly.
{
    "user_id": "testuser",
    "display_name": "First User",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [
        {
            "id": "testuser:swift",
            "permissions": "full-control"
        }
    ],
    "keys": [
        {
            "user": "testuser",
            "access_key": "Q08H44HC01E5ZN0PUNAG",
            "secret_key": "fuHhEDFsecGP2ksatPH8UnZSag0CCfN1ZhiBCE3a"
        }
    ],
    "swift_keys": [
        {
            "user": "testuser:swift",
            "secret_key": "sdl3N1xaWS7S6dCiK0AvhH0U2fwiG7dOTMIn4kuj"
        }
    ],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw"
}

[[email protected] ~]#
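
Before moving on, we can check that the Swift credentials actually work. The following is only a sketch using the python-swiftclient module (an assumption: it has been installed with pip, and the gateway answers on port 80 of ceph1 behind Apache, as in the boto example further below); replace the key with the secret generated above:

import swiftclient

conn = swiftclient.Connection(
        user='testuser:swift',
        key='sdl3N1xaWS7S6dCiK0AvhH0U2fwiG7dOTMIn4kuj',   # Swift secret generated above
        authurl='http://ceph1:80/auth/1.0',               # RGW Swift auth endpoint (assumed)
)

# Create a container and list the account's containers
conn.put_container('swift-test-container')
for container in conn.get_account()[1]:
        print(container['name'])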


Retrieving the user's information:

[[email protected] ~]# radosgw-admin user info --uid=testuser
2020-01-03 06:08:20.886360 7f037393ee00  0 WARNING: detected a version of libcurl which contains a bug in curl_multi_wait(). enabling a workaround that may degrade performance slightly.
{
    "user_id": "testuser",
    "display_name": "First User",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [
        {
            "id": "testuser:swift",
            "permissions": "full-control"
        }
    ],
    "keys": [
        {
            "user": "testuser",
            "access_key": "Q08H44HC01E5ZN0PUNAG",
            "secret_key": "fuHhEDFsecGP2ksatPH8UnZSag0CCfN1ZhiBCE3a"
        }
    ],
    "swift_keys": [
        {
            "user": "testuser:swift",
            "secret_key": "sdl3N1xaWS7S6dCiK0AvhH0U2fwiG7dOTMIn4kuj"
        }
    ],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw"
}

[[email protected] ~]#


To delete a user:

radosgw-admin user rm --uid=testuser

If you want, you can also test the S3 API with some Python code (using the boto library):

[[email protected] ~]# cat s3test.py
import boto
import boto.s3.connection

access_key = '64HHUXBFG9F4X2KAQCVA'
secret_key = 'w3oBij15mi6SKzcMaY2s8FCUV8iANg1om5cYYunU'

conn = boto.connect_s3(
        aws_access_key_id = access_key,
        aws_secret_access_key = secret_key,
        host = 'ceph1',
        port = 80,
        is_secure=False,
        calling_format = boto.s3.connection.OrdinaryCallingFormat(),
        )

bucket = conn.create_bucket('my-new-bucket')
for bucket in conn.get_all_buckets():
        print "{name}\t{created}".format(
                name = bucket.name,
                created = bucket.creation_date,
)
[[email protected] ~]#
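
The boto library used above is quite old; if you prefer, the same test can be written with boto3 by pointing endpoint_url at the gateway. A minimal sketch, assuming boto3 is installed (pip install boto3) and reusing the same access and secret keys:

import boto3
from botocore.client import Config

s3 = boto3.client(
        's3',
        endpoint_url='http://ceph1:80',
        aws_access_key_id='64HHUXBFG9F4X2KAQCVA',
        aws_secret_access_key='w3oBij15mi6SKzcMaY2s8FCUV8iANg1om5cYYunU',
        config=Config(s3={'addressing_style': 'path'}),   # avoid DNS-style bucket names
)

s3.create_bucket(Bucket='my-new-bucket')
for bucket in s3.list_buckets()['Buckets']:
        print("{}\t{}".format(bucket['Name'], bucket['CreationDate']))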

Adding an extra monitor service (MON)

It is highly recommended to add a second cluster monitor in case anything happens to the primary one. We are going to install it on a new server:

  • We configure the /etc/hosts file, adding the IP of the new monitor server on every server in the cluster.
10.0.3.247 ceph-mon rgw.ceph-mon.com ceph-mon.com
10.0.3.49 ceph-mon2
10.0.3.95 ceph-osd1
10.0.3.199 ceph-osd2
10.0.3.241 ceph-osd3
  • We install dnsmasq
  • We create the ceph user
  • We grant it sudo permissions
  • We create the SSH trust relationships with the rest of the cluster nodes
  • We configure the yum repositories for Ceph and EPEL and install the required software:
yum install -y ceph ceph-deploy ntp ntpdate ntp-doc yum-plugin-priorities httpd mod_ssl openssl fcgi mod_fcgid python2-pip

pip install s3cmd
  • We deploy the new Ceph monitor
[[email protected] ~]# cd /etc/ceph
[[email protected] ~]# ceph-deploy install ceph-mon2
[[email protected] ~]# ceph-deploy admin ceph-mon2
[[email protected] ~]# ceph-deploy mon add ceph-mon2
[[email protected] ~]# ceph-deploy mgr create ceph-mon2
  • We deploy the RADOS Gateway service on the new monitor so we can talk to the storage system:
[[email protected] ~]# ceph-deploy install --rgw ceph-mon2
[[email protected] ~]# cd /etc/ceph
[[email protected] ceph]# ceph-deploy rgw create ceph-mon2

[[email protected] ~]# systemctl enable ceph-radosgw.target
[[email protected] ~]# systemctl start ceph-radosgw.target
[[email protected] ~]#

[[email protected] ~]# lsof -i:7480
COMMAND   PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
radosgw 21290 ceph   40u  IPv4  45241      0t0  TCP *:7480 (LISTEN)
[[email protected] ~]#
  • We check the cluster status from the new monitor server (two monitor daemons now appear):
[[email protected] ~]# ceph -s
  cluster:
    id:     677b62e9-8834-407e-b73c-3f41e97597a8
    health: HEALTH_OK

  services:
    mon: 2 daemons, quorum ceph-mon,ceph-mon2 (age 37s)
    mgr: ceph-mon(active, since 79m), standbys: ceph-mon2
    osd: 3 osds: 3 up (since 4h), 3 in (since 2d)
    rgw: 1 daemon active (ceph-mon)

  data:
    pools:   7 pools, 148 pgs
    objects: 226 objects, 2.0 KiB
    usage:   3.0 GiB used, 6.0 GiB / 9 GiB avail
    pgs:     148 active+clean

[[email protected] ~]#
  • We can see that the new monitor now appears in the ceph.conf file:
[[email protected] ~]# cat /etc/ceph/ceph.conf
[global]
fsid = 677b62e9-8834-407e-b73c-3f41e97597a8
public_network = 10.0.3.0/24
mon_initial_members = ceph-mon,ceph-mon2
mon_host = 10.0.3.247:6789,10.0.3.49:6789
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

[[email protected] ~]#
  • We check that we can still see the objects we had previously uploaded to the test bucket:
[[email protected] ~]# s3cmd -c s3test.cfg ls s3://david-bucket/
2020-02-24 09:31         6   s3://david-bucket/david.txt
[[email protected] ~]#

Re-adding the removed monitor

Earlier we removed the ceph-mon2 monitor, but now we want to add it back. We will do it manually:

[[email protected] ceph]# ceph mon add ceph-mon2 10.0.3.49:6789
adding mon.ceph-mon2 at [v2:10.0.3.49:3300/0,v1:10.0.3.49:6789/0]
[[email protected] ceph]# ceph-mon -i ceph-mon2 --public-addr 192.168.1.33:6789

We start the service on the ceph-mon2 server and check the cluster status to confirm that the monitor appears:

[[email protected] ceph]# systemctl enable [email protected]
[[email protected] ceph]# systemctl start [email protected]
[[email protected] ceph]# ceph -s
  cluster:
    id:     677b62e9-8834-407e-b73c-3f41e97597a8
    health: HEALTH_WARN
            25 slow ops, oldest one blocked for 43 sec, mon.ceph-mon has slow ops

  services:
    mon: 2 daemons, quorum ceph-mon,ceph-mon2 (age 3s)
    mgr: ceph-mon(active, since 13m)
    osd: 3 osds: 3 up (since 7h), 3 in (since 2d)
    rgw: 1 daemon active (ceph-mon)

  data:
    pools:   7 pools, 148 pgs
    objects: 226 objects, 2.0 KiB
    usage:   3.0 GiB used, 6.0 GiB / 9 GiB avail
    pgs:     148 active+clean

[[email protected] ceph]# 

Let's not forget to add the monitor configuration back into the Ceph configuration file:

[[email protected] ceph]# cat /etc/ceph.conf  |grep mon |grep -v "#"
mon_initial_members = ceph-mon
mon_host = 10.0.3.247
[[email protected] ceph]#

Backing up the monitor map and removing one of the cluster's monitors

This is useful when one of the configured monitors has stopped responding. Suppose we have configured three monitors (A, B and C) but only A is still working.

What we need to do is identify the surviving monitor and extract its map, with the service stopped.

[[email protected] ~]# systemctl stop [email protected]
[[email protected] ~]# systemctl stop [email protected]
[[email protected] ~]# ceph-mon -i ceph-mon --extract-monmap /tmp/ceph-mon_map
2020-02-24 11:59:44.510 7f2cc8987040 -1 wrote monmap to /tmp/ceph-mon_map
[[email protected] ~]#

Next, we remove the monitors that are not responding:

[[email protected] ~]# monmaptool /tmp/ceph-mon_map --rm ceph-mon2
monmaptool: monmap file /tmp/ceph-mon_map
monmaptool: removing ceph-mon2
monmaptool: writing epoch 2 to /tmp/ceph-mon_map (1 monitors)
[[email protected] ~]#

We inject the corrected map into the surviving monitors:

[[email protected] ~]# ceph-mon -i ceph-mon --inject-monmap /tmp/ceph-mon_map
[[email protected] ~]#

We start the monitor service on the surviving nodes:

[[email protected] ~]# chown -R ceph:ceph /var/lib/ceph
[[email protected] ~]# systemctl start [email protected]
[[email protected] ~]# systemctl status [email protected][email protected] - Ceph cluster monitor daemon
   Loaded: loaded (/usr/lib/systemd/system/[email protected]ice; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2020-02-24 12:10:29 UTC; 3s ago
 Main PID: 4307 (ceph-mon)
   CGroup: /system.slice/system-ceph\x2dmon.slice/[email protected]
           └─4307 /usr/bin/ceph-mon -f --cluster ceph --id ceph-mon --setuser ceph --setgroup ceph

Feb 24 12:10:29 ceph-mon systemd[1]: [email protected] holdoff time over, scheduling restart.
Feb 24 12:10:29 ceph-mon systemd[1]: Stopped Ceph cluster monitor daemon.
Feb 24 12:10:29 ceph-mon systemd[1]: Started Ceph cluster monitor daemon.
[[email protected] ~]#

We can see that, once again, there is only one monitor in the cluster:

[[email protected] ~]# ceph -s
  cluster:
    id:     677b62e9-8834-407e-b73c-3f41e97597a8
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum ceph-mon (age 4s)
    mgr: ceph-mon2(active, since 6m), standbys: ceph-mon
    osd: 3 osds: 3 up (since 6h), 3 in (since 2d)
    rgw: 1 daemon active (ceph-mon)

  data:
    pools:   7 pools, 148 pgs
    objects: 226 objects, 2.0 KiB
    usage:   3.0 GiB used, 6.0 GiB / 9 GiB avail
    pgs:     148 active+clean

[[email protected] ~]#

[[email protected] ~]# ceph mon stat
e3: 1 mons at {ceph-mon=[v2:10.0.3.247:3300/0,v1:10.0.3.247:6789/0]}, election epoch 61, leader 0 ceph-mon, quorum 0 ceph-mon
[[email protected] ~]#

Installing Ceph on Linux CentOS 7 (open source version)

So far we have used Linux RedHat to install Ceph in enterprise environments, but if we want to run a proof of concept or train on the product, we can always use Linux CentOS.

Configuring the yum repository and installing the RPM packages

  • We install the EPEL repository RPM and configure the Ceph yum repository on all the nodes of the cluster:
[[email protected] ~]# yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

[[email protected] ~]# cat /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://download.ceph.com/rpm-mimic/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://download.ceph.com/rpm-mimic/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1

[[email protected] ~]#

[[email protected] ~]# mv ceph.repo.rpmnew ceph.repo
  • We install ceph-deploy and a few dependencies we are going to need:
yum install -y ceph-deploy ntp ntpdate ntp-doc yum-plugin-priorities

Creating a Ceph user

  • We create a Ceph user
groupadd -g 1001 ceph
useradd -u 1001 ceph -g ceph

[[email protected] ~]# id ceph
uid=1001(ceph) gid=1001(ceph) groups=1001(ceph)
[[email protected] ~]#
  • We create the SSH trust relationship for the ceph user between the hosts that will form the Ceph cluster:
10.0.1.253 ceph-mon
10.0.1.97 ceph-osd1
10.0.1.174 ceph-osd2
10.0.1.90 ceph-osd3
  • We grant sudo permissions to the ceph user on every server in the cluster.
ceph ALL = (root) NOPASSWD:ALL

Configuring the Ceph cluster with ceph-deploy

  • We build the Ceph cluster with ceph-deploy
[[email protected] ~]$ ceph-deploy new ceph-mon --public-network 10.0.1.0/24

[[email protected] ~]$ ceph-deploy install ceph-mon ceph-osd1 ceph-osd2 ceph-osd3 --release=mimic
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy install ceph-mon ceph-osd1 ceph-osd2 ceph-osd3 --release=mimic

...

[ceph-osd3][DEBUG ] Complete!
[ceph-osd3][INFO  ] Running command: sudo ceph --version
[ceph-osd3][DEBUG ] ceph version 13.2.8 (5579a94fafbc1f9cc913a0f5d362953a5d9c3ae0) mimic (stable)
[[email protected] ~]$

[[email protected] ~]$ ceph-deploy mon create-initial
[[email protected] ~]$ ceph-deploy --overwrite-conf admin ceph-mon ceph-osd1 ceph-osd2 ceph-osd3
[[email protected] ~]$ ceph-deploy admin ceph-mon
[[email protected] ~]$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring

Once the previous steps are complete, we can already check the cluster status:

[[email protected] ~]$ ceph -w
  cluster:
    id:     50e5aedb-9d76-4c5c-b177-c2fbe1864134
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum ceph-mon
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0  objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:


[[email protected] ~]$ ceph -s
  cluster:
    id:     50e5aedb-9d76-4c5c-b177-c2fbe1864134
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum ceph-mon
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0  objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

[[email protected] ~]$

But we still have to add the usual services:

[[email protected] ~]$ ceph-deploy gatherkeys ceph-mon
[[email protected] ~]$ ceph-deploy mgr create ceph-mon
[[email protected] ~]$ ceph-deploy osd create ceph-osd1 --data /dev/xvdf
[[email protected] ~]$ ceph-deploy osd create ceph-osd2 --data /dev/xvdf
[[email protected] ~]$ ceph-deploy osd create ceph-osd3 --data /dev/xvdf

[[email protected] ~]$ ceph -s
  cluster:
    id:     50e5aedb-9d76-4c5c-b177-c2fbe1864134
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum ceph-mon
    mgr: ceph-mon(active)
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0  objects, 0 B
    usage:   3.0 GiB used, 6.0 GiB / 9 GiB avail
    pgs:

[[email protected] ~]$

We can check the status of the OSD servers and the capacity assigned to them (the WEIGHT column is expressed in TiB):

[[email protected] ~]$ ceph osd tree
ID CLASS WEIGHT  TYPE NAME          STATUS REWEIGHT PRI-AFF
-1       0.00870 root default
-3       0.00290     host ceph-osd1
 0   ssd 0.00290         osd.0          up  1.00000 1.00000
-5       0.00290     host ceph-osd2
 1   ssd 0.00290         osd.1          up  1.00000 1.00000
-7       0.00290     host ceph-osd3
 2   ssd 0.00290         osd.2          up  1.00000 1.00000
[[email protected] ~]$

Creating a test pool

With the space now available, we can create a storage pool:

[[email protected] ~]$ ceph osd pool create pool-test 100 100
pool 'pool-test' created
[[email protected] ~]$

[[email protected] ~]$ ceph osd pool create pool-test2 100 100 replicated
pool 'pool-test2' created
[[email protected] ~]$

[[email protected] ~]$ ceph osd lspools
1 pool-test
2 pool-test2
[[email protected] ~]$

We can define the number of replicas (the number of OSDs each object is copied to) for a pool. This is a recommended option for fault tolerance. For example:

ceph osd pool set <pool_name> size <number_of_replicas>
ceph osd pool set data size 3
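
If we want to check the value from code instead of from the CLI, the librados binding exposes mon_command(), which accepts the same JSON commands the monitors understand. A minimal sketch, assuming python-rados is installed and that pool-test2 still exists:

import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# Equivalent to: ceph osd pool get pool-test2 size
cmd = json.dumps({'prefix': 'osd pool get', 'pool': 'pool-test2', 'var': 'size', 'format': 'json'})
ret, out, err = cluster.mon_command(cmd, b'')
print(out)

cluster.shutdown()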

We run a write performance test against the pool-test2 pool:

[[email protected] ~]$ rados bench -p pool-test2 50 write
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 50 seconds or 0 objects
Object prefix: benchmark_data_ceph-mon_15749
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0      16        16         0         0         0           -           0
    1      16        17         1    3.9967         4    0.999467    0.999467
    2      16        43        27   53.9725       104    0.571903     1.03934
    3      16        66        50   66.6399        92    0.512483    0.828077
    4      16        83        67   66.9769        68     1.09702    0.888848
    5      16       100        84   67.1792        68      1.1114    0.922221
    6      16       117       101   67.3141        68     1.11634    0.942233
    7      16       119       103   58.8409         8    0.264011    0.928022
    8      16       137       121   60.4839        72    0.325325    0.948485
    9      16       157       141   62.6506        80    0.354859    0.956285

We can delete a pool as follows:

[[email protected] ~]$ ceph config set mon mon_allow_pool_delete true
[[email protected] ~]$ ceph osd pool delete pool-test2 pool-test2 --yes-i-really-really-mean-it
pool 'pool-test2' removed
[[email protected] ~]$

Enabling the Dashboard for graphical management of Ceph

[[email protected] ~]$ ceph mgr module enable dashboard
[[email protected] ~]$ ceph mgr module ls |more
{
    "enabled_modules": [
        "balancer",
        "crash",
        "dashboard",
        "iostat",
        "restful",
        "status"
    ],

[[email protected] ~]$ ceph dashboard create-self-signed-cert
Self-signed certificate created
[[email protected] ~]$

[[email protected] ~]$ ceph dashboard set-login-credentials ceph MiPassword
Username and password updated
[[email protected] ~]$

[[email protected] ~]$ ceph mgr services
{
    "dashboard": "https://ceph-mon:8443/"
}
[[email protected] ~]$
(Screenshots: Ceph Dashboard console and the OSD dashboard.)

Installing the Ceph Object Gateway

[[email protected] ~] ceph-deploy install --rgw ceph-mon
[[email protected] ~]$ ceph-deploy rgw create ceph-mon
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy rgw create ceph-mon
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  rgw                           : [('ceph-mon', 'rgw.ceph-mon')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f6258ea9128>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function rgw at 0x7f625976d050>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.rgw][DEBUG ] Deploying rgw, cluster ceph hosts ceph-mon:rgw.ceph-mon
[ceph-mon][DEBUG ] connection detected need for sudo
[ceph-mon][DEBUG ] connected to host: ceph-mon
[ceph-mon][DEBUG ] detect platform information from remote host
[ceph-mon][DEBUG ] detect machine type
[ceph_deploy.rgw][INFO  ] Distro info: CentOS Linux 7.7.1908 Core
[ceph_deploy.rgw][DEBUG ] remote host will use systemd
[ceph_deploy.rgw][DEBUG ] deploying rgw bootstrap to ceph-mon
[ceph-mon][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-mon][WARNIN] rgw keyring does not exist yet, creating one
[ceph-mon][DEBUG ] create a keyring file
[ceph-mon][DEBUG ] create path recursively if it doesn't exist
[ceph-mon][INFO  ] Running command: sudo ceph --cluster ceph --name client.bootstrap-rgw --keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create client.rgw.ceph-mon osd allow rwx mon allow rw -o /var/lib/ceph/radosgw/ceph-rgw.ceph-mon/keyring
[ceph-mon][INFO  ] Running command: sudo systemctl enable [email protected]
[ceph-mon][WARNIN] Created symlink from /etc/systemd/system/ceph-radosgw.target.wants/[email protected] to /usr/lib/systemd/system/[email protected]
[ceph-mon][INFO  ] Running command: sudo systemctl start [email protected]
[ceph-mon][INFO  ] Running command: sudo systemctl enable ceph.target
[ceph_deploy.rgw][INFO  ] The Ceph Object Gateway (RGW) is now running on host ceph-mon and default port 7480
[[email protected] ~]$

Note: during the countless tests I have run while learning Ceph, the RADOS Gateway has stopped working on me more than once; the service would start but the port would not listen. What I did was deploy the gateway again, overwriting its configuration: ceph-deploy --overwrite-conf rgw create ceph-mon11

The ceph.conf configuration file

When the cluster configuration is complete, we should see something like this:

[[email protected] ~]# cat /etc/ceph/ceph.conf
[global]
fsid = 677b62e9-8834-407e-b73c-3f41e97597a8
public_network = 10.0.3.0/24
mon_initial_members = ceph-mon
mon_host = 10.0.3.247
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
[[email protected] ~]#

[[email protected] ~]# ssh ceph-osd1 cat /etc/ceph.conf
[global]
fsid = 756b46a7-70b1-410d-8121-f1f0cbd27e1a
public_network = 10.0.3.0/24
mon_initial_members = ceph-mon
mon_host = 10.0.3.247
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

[[email protected] ~]#

Block storage (RBD)

Creating a pool for data storage

We create the pool that the filesystem will use:

[[email protected] ~]# ceph osd pool create pool-test 10 10
pool 'pool-test' created
[[email protected] ~]# ceph osd pool application enable pool-test rbd
enabled application 'rbd' on pool 'pool-test'
[[email protected] ~]# 

Assigning a disk

We create the disk that will be used for block storage (1 MB):

[[email protected] ~]# rbd create disk01 --size 1 -p pool-test
[[email protected] ~]#

[[email protected] ~]# rbd ls -l -p pool-test
NAME    SIZE PARENT FMT PROT LOCK
disk01 1 MiB          2
[[email protected] ~]#

[[email protected] ~]# rbd map disk01 -p pool-test
rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable pool-test/disk01 object-map fast-diff deep-flatten".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address
[[email protected] ~]# modprobe rbd
[[email protected] ~]# rbd feature disable pool-test/disk01 object-map fast-diff deep-flatten
[[email protected] ~]# rbd map disk01 -p pool-test
/dev/rbd0
[[email protected] ~]#
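
These rbd operations can also be scripted with the Python rbd binding (python-rbd package), which works on top of librados. A minimal sketch, assuming python-rados and python-rbd are installed; disk03 is just an example image name:

import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('pool-test')

# Create a 4 MiB image and list the images in the pool
rbd_inst = rbd.RBD()
rbd_inst.create(ioctx, 'disk03', 4 * 1024 * 1024)
print(rbd_inst.list(ioctx))

# Resize it to 8 MiB and check the new size in bytes
with rbd.Image(ioctx, 'disk03') as image:
    image.resize(8 * 1024 * 1024)
    print(image.size())

ioctx.close()
cluster.shutdown()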

Creating a filesystem on the presented Ceph disk

We format the filesystem and mount it:

[[email protected] ~]# mkfs.ext4 /dev/rbd0
mke2fs 1.42.9 (28-Dec-2013)

Filesystem too small for a journal
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=4096 blocks, Stripe width=4096 blocks
128 inodes, 1024 blocks
51 blocks (4.98%) reserved for the super user
First data block=1
Maximum filesystem blocks=1048576
1 block group
8192 blocks per group, 8192 fragments per group
128 inodes per group

Allocating group tables: done
Writing inode tables: done
Writing superblocks and filesystem accounting information: done

[[email protected] ~]# mount /dev/rbd0 /rbd-test/
[[email protected] ~]# df -hP /rbd-test/
Filesystem      Size  Used Avail Use% Mounted on
/dev/rbd0      1003K   21K  911K   3% /rbd-test
[[email protected] ~]#

[[email protected] rbd-test]# echo david > david.txt
[[email protected] rbd-test]# ll
total 13
-rw-r--r-- 1 root root     6 Feb 18 10:03 david.txt
drwx------ 2 root root 12288 Feb 18 10:02 lost+found
[[email protected] rbd-test]#

Growing an RBD filesystem

Next, we are going to grow the previous filesystem to 500 MB with rbd resize:

[[email protected] ~]# rbd ls -l -p pool-test
NAME    SIZE PARENT FMT PROT LOCK
disk01 1 MiB          2      excl
disk02 1 GiB          2      excl
[[email protected] ~]#

[[email protected] ~]# rbd map disk01 -p pool-test
/dev/rbd0
[[email protected] ~]# mount /dev/rbd0 /mnt/mycephfs/
[[email protected] ~]# df -hP /mnt/mycephfs/
Filesystem      Size  Used Avail Use% Mounted on
/dev/rbd0      1003K   23K  909K   3% /mnt/mycephfs
[[email protected] ~]#

[[email protected] ~]# rbd resize --image disk01 --size 500M -p pool-test
Resizing image: 100% complete...done.
[[email protected] ~]#

[[email protected] ~]# resize2fs /dev/rbd0
resize2fs 1.42.9 (28-Dec-2013)
Filesystem at /dev/rbd0 is mounted on /mnt/mycephfs; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 4
The filesystem on /dev/rbd0 is now 512000 blocks long.

[[email protected] ~]# df -hP /mnt/mycephfs/
Filesystem      Size  Used Avail Use% Mounted on
/dev/rbd0       499M   52K  499M   1% /mnt/mycephfs
[[email protected] ~]#

Shrinking an RBD filesystem

We are going to shrink the previous 500 MB filesystem down to 200 MB:

[[email protected] ~]# e2fsck -f /dev/rbd0
e2fsck 1.42.9 (28-Dec-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/rbd0: 13/8064 files (0.0% non-contiguous), 1232/512000 blocks
[[email protected] ~]# resize2fs -M /dev/rbd0
resize2fs 1.42.9 (28-Dec-2013)
Resizing the filesystem on /dev/rbd0 to 1308 (1k) blocks.
The filesystem on /dev/rbd0 is now 1308 blocks long.

[[email protected] ~]#

[[email protected] ~]# rbd resize --size 200M disk01 --allow-shrink -p pool-test
Resizing image: 100% complete...done.
[[email protected] ~]#

[[email protected] ~]# resize2fs /dev/rbd0
resize2fs 1.42.9 (28-Dec-2013)
Resizing the filesystem on /dev/rbd0 to 204800 (1k) blocks.
The filesystem on /dev/rbd0 is now 204800 blocks long.

[[email protected] ~]#

[[email protected] ~]# mount /dev/rbd0 /mnt/mycephfs/
[[email protected] ~]# df -hP /mnt/mycephfs/
Filesystem      Size  Used Avail Use% Mounted on
/dev/rbd0       200M   52K  196M   1% /mnt/mycephfs
[[email protected] ~]#

Renaming an RBD image

If we want to customise the names of our RBD images, for example to tie them to a service or a filesystem, we can do it as follows:

[[email protected] ~]# rbd ls -l -p pool-test
NAME      SIZE PARENT FMT PROT LOCK
disk01 200 MiB          2      excl
disk02   1 GiB          2      excl
[[email protected] ~]# rbd mv disk01 rbd_david -p pool-test
[[email protected] ~]# rbd ls -l -p pool-test
NAME         SIZE PARENT FMT PROT LOCK
disk02      1 GiB          2      excl
rbd_david 200 MiB          2      excl
[[email protected] ~]#

Configuring the fstab file to mount RBD filesystems at system boot

  • We configure the /etc/ceph/rbdmap file with the RBD image we want to map:
[[email protected] ~]# tail -1 /etc/ceph/rbdmap
pool-test/rbd_david
[[email protected] ~]#
  • We enable the rbdmap service at system boot:
[[email protected] ~]# systemctl enable rbdmap
Created symlink from /etc/systemd/system/multi-user.target.wants/rbdmap.service to /usr/lib/systemd/system/rbdmap.service.
[[email protected] ~]# systemctl start rbdmap
[[email protected] ~]# systemctl status rbdmap
● rbdmap.service - Map RBD devices
   Loaded: loaded (/usr/lib/systemd/system/rbdmap.service; enabled; vendor preset: disabled)
   Active: active (exited) since Thu 2020-02-20 08:18:01 UTC; 3s ago
  Process: 2607 ExecStart=/usr/bin/rbdmap map (code=exited, status=0/SUCCESS)
 Main PID: 2607 (code=exited, status=0/SUCCESS)

Feb 20 08:18:01 ceph-mon systemd[1]: Starting Map RBD devices...
Feb 20 08:18:01 ceph-mon systemd[1]: Started Map RBD devices.
[[email protected] ~]#
  • We can see that the RBD device has been created automatically in the operating system:
[[email protected] ~]# ll /dev/rbd/pool-test/
total 0
lrwxrwxrwx 1 root root 10 Feb 20 08:28 rbd_david -> ../../rbd0
[[email protected] ~]#
  • We configure the /etc/fstab file as usual and mount the filesystem:
[[email protected] ~]# tail -1 /etc/fstab
/dev/rbd/pool-test/rbd_david    /mnt/mycephfs   ext4    defaults        0 0
[[email protected] ~]#

[[email protected] ~]# mount /mnt/mycephfs/
[[email protected] ~]# df -hP /mnt/mycephfs/
Filesystem      Size  Used Avail Use% Mounted on
/dev/rbd0       200M   52K  196M   1% /mnt/mycephfs
[[email protected] ~]#

Ceph Filesystem (CephFS) storage

Activating the MDS service

We had not activated it yet, and it is a requirement for storing data with CephFS.

We deploy and enable the MDS service on several nodes for distributed storage. In this case, we will use the same nodes where we had already deployed the OSD service:

[[email protected] ~]# cd /etc/ceph
[[email protected] ceph]# ceph-deploy mds create ceph-osd1:ceph-mds1 ceph-osd2:ceph-mds2 ceph-osd3:ceph-mds3

[[email protected] ceph]# ceph -s
  cluster:
    id:     6154923d-93fc-48a6-860d-612c71576d38
    health: HEALTH_WARN
            application not enabled on 1 pool(s)

  services:
    mon: 1 daemons, quorum ceph-mon
    mgr: ceph-mon(active)
    mds: cephfs-1/1/1 up  {0=ceph-mds2=up:active}, 1 up:standby
    osd: 3 osds: 3 up, 3 in
    rgw: 1 daemon active

  data:
    pools:   7 pools, 62 pgs
    objects: 194  objects, 1.0 MiB
    usage:   3.0 GiB used, 6.0 GiB / 9 GiB avail
    pgs:     62 active+clean

[[email protected] ceph]#

We start the MDS service on each of the nodes where we have installed it:

[[email protected] ~]# systemctl enable [email protected]
[[email protected] ~]# systemctl start [email protected]
[[email protected] ~]# systemctl status [email protected][email protected] - Ceph metadata server daemon
   Loaded: loaded (/usr/lib/systemd/system/[email protected]; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2020-02-23 17:47:40 UTC; 6min ago
 Main PID: 17162 (ceph-mds)
   CGroup: /system.slice/system-ceph\x2dmds.slice/[email protected]
           └─17162 /usr/bin/ceph-mds -f --cluster ceph --id ceph-mds1 --setuser ceph --setgroup ceph

Feb 23 17:52:57 ceph-osd1 systemd[1]: [/usr/lib/systemd/system/[email protected]:14] Unknown lvalue 'LockPersonality' in section 'Service'
Feb 23 17:52:57 ceph-osd1 systemd[1]: [/usr/lib/systemd/system/[email protected]:15] Unknown lvalue 'MemoryDenyWriteExecute' in section 'Service'
Feb 23 17:52:57 ceph-osd1 systemd[1]: [/usr/lib/systemd/system/[email protected]:18] Unknown lvalue 'ProtectControlGroups' in section 'Service'
Feb 23 17:52:57 ceph-osd1 systemd[1]: [/usr/lib/systemd/system/[email protected]:20] Unknown lvalue 'ProtectKernelModules' in section 'Service'
Feb 23 17:52:57 ceph-osd1 systemd[1]: [/usr/lib/systemd/system/[email protected]:21] Unknown lvalue 'ProtectKernelTunables' in section 'Service'
Feb 23 17:54:14 ceph-osd1 systemd[1]: [/usr/lib/systemd/system/[email protected]:14] Unknown lvalue 'LockPersonality' in section 'Service'
Feb 23 17:54:14 ceph-osd1 systemd[1]: [/usr/lib/systemd/system/[email protected]:15] Unknown lvalue 'MemoryDenyWriteExecute' in section 'Service'
Feb 23 17:54:14 ceph-osd1 systemd[1]: [/usr/lib/systemd/system/[email protected]:18] Unknown lvalue 'ProtectControlGroups' in section 'Service'
Feb 23 17:54:14 ceph-osd1 systemd[1]: [/usr/lib/systemd/system/[email protected]:20] Unknown lvalue 'ProtectKernelModules' in section 'Service'
Feb 23 17:54:14 ceph-osd1 systemd[1]: [/usr/lib/systemd/system/[email protected]:21] Unknown lvalue 'ProtectKernelTunables' in section 'Service'
[[email protected] ~]#

[[email protected] ~]# systemctl enable [email protected]
[[email protected] ~]# systemctl start [email protected]

[[email protected] ~]# systemctl enable [email protected]
[[email protected] ~]# systemctl start [email protected]
[[email protected] ~]#

We should look at the following line to know whether the service is up:

mds: cephfs-1/1/1 up  {0=ceph-mds2=up:active}, 1 up:standby

Creating the data storage pools

We create the pools for storing the data and the metadata:

[[email protected] ~]# ceph osd pool create cephfs_data 10 10
pool 'cephfs_data' created
[[email protected] ~]#

[[email protected] ~]# ceph osd pool create cephfs_metadata 10 10
pool 'cephfs_metadata' created
[[email protected] ~]#

Creating the Ceph filesystem (CephFS)

  • We create the Ceph filesystem
[[email protected] ~]# ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 7 and data pool 6
[[email protected] ~]#

[[email protected] ~]# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
[[email protected] ~]# ceph mds stat
cephfs-0/0/1 up
[[email protected] ~]#
  • We copy the authentication key so we can mount the filesystems:
[[email protected] ~]# ssh ceph-osd1 'sudo ceph-authtool -p /etc/ceph/ceph.client.admin.keyring' > ceph.key
[[email protected] ~]# chmod 600 ceph.key
[[email protected] ~]# 

[[email protected] ~]# cat /etc/ceph/ceph.client.admin.keyring
[client.admin]
        key = AQAjoUteTSDRIhAAO5zStGzdqlZgaTWI2eQy0Q==
        caps mds = "allow *"
        caps mgr = "allow *"
        caps mon = "allow *"
        caps osd = "allow *"
[[email protected] ~]#

[[email protected] ~]# cat ceph.key
AQAjoUteTSDRIhAAO5zStGzdqlZgaTWI2eQy0Q==
[[email protected] ~]#

[[email protected] ~]# ceph auth get client.admin
exported keyring for client.admin
[client.admin]
        key = AQAjoUteTSDRIhAAO5zStGzdqlZgaTWI2eQy0Q==
        caps mds = "allow *"
        caps mgr = "allow *"
        caps mon = "allow *"
        caps osd = "allow *"
[[email protected] ~]#
  • We mount the filesystem pointing at the server where the cluster monitor service (mon) is running:
[[email protected] ~]# mount -t ceph ceph-mon:6789:/ /mnt/mycephfs -o name=admin,secretfile=ceph.key -vvv
parsing options: rw,name=admin,secretfile=ceph.key
[[email protected] ~]#

[[email protected] ~]# df -hP |grep myceph
10.0.3.154:6789:/  1.9G     0  1.9G   0% /mnt/mycephfs
[[email protected] ~]#

If the mount point were served by several Ceph monitors, the syntax would be the following:

mount -t ceph <monitor1-host-name>:6789,<monitor2-host-name>:6789,<monitor3-host-name>:6789:/ <mount-point> -o name=<user-name>,secretfile=<path>
  • We write some data to the filesystem to test it:
[[email protected] mycephfs]# echo david > david.txt
[[email protected] mycephfs]# ll
total 1
-rw-r--r-- 1 root root 6 Feb 19 08:29 david.txt
[[email protected] mycephfs]#
  • We try CephFS from another client node:

Next, we mount the filesystem from another node so that the same CephFS filesystem is mounted in two places at the same time (just like a traditional NFS):

[[email protected] ~]# mkdir /mnt/mycephfs
[[email protected] ~]# mount -t ceph ceph-mon:6789:/ /mnt/mycephfs -o name=admin,secretfile=ceph.key
[[email protected] ~]# df -hP /mnt/mycephfs/
Filesystem         Size  Used Avail Use% Mounted on
10.0.3.154:6789:/  1.9G     0  1.9G   0% /mnt/mycephfs
[[email protected] ~]# ll /mnt/mycephfs/
total 1
-rw-r--r-- 1 root root 6 Feb 19 08:29 david.txt
[[email protected] ~]#
[[email protected] ~]# ll /mnt/mycephfs/
total 1
-rw-r--r-- 1 root root 7 Feb 19 08:38 david2.txt
-rw-r--r-- 1 root root 6 Feb 19 08:29 david.txt
[[email protected] ~]#

From both nodes we can see the two files that we created from different nodes pointing at the same Ceph filesystem:

[[email protected] ~]# ll /mnt/mycephfs/
total 1
-rw-r--r-- 1 root root 7 Feb 19 08:38 david2.txt
-rw-r--r-- 1 root root 6 Feb 19 08:29 david.txt
[[email protected] ~]#
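
Besides the kernel client, CephFS can also be accessed from Python through the libcephfs binding (python-cephfs package). The following is only a sketch, assuming that binding is installed and that ceph.conf and the admin keyring are available on the client; the path names are just examples:

import cephfs

fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')
fs.mount()

# Create a directory and a small file at the root of the CephFS filesystem
fs.mkdir('/from-python', 0o755)
fd = fs.open('/from-python/hello.txt', 'w', 0o644)
fs.write(fd, b'hello from libcephfs', 0)
fs.close(fd)
print(fs.stat('/from-python/hello.txt'))

fs.unmount()
fs.shutdown()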

Object storage using the S3 API

This configuration has also given me some trouble and I am still investigating it. Here is how far I have got.

Configuring the internal DNS

The S3 API uses DNS name resolution and ignores the /etc/hosts file. To work around this, we will install dnsmasq:

[[email protected] ~]# yum install -y dnsmasq
[[email protected] ~]# systemctl enable dnsmasq
Created symlink from /etc/systemd/system/multi-user.target.wants/dnsmasq.service to /usr/lib/systemd/system/dnsmasq.service.
[[email protected] ~]# systemctl start dnsmasq
[[email protected] ~]# systemctl status dnsmasq
● dnsmasq.service - DNS caching server.
   Loaded: loaded (/usr/lib/systemd/system/dnsmasq.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-02-18 08:48:34 UTC; 6s ago
 Main PID: 16493 (dnsmasq)
   CGroup: /system.slice/dnsmasq.service
           └─16493 /usr/sbin/dnsmasq -k

Feb 18 08:48:34 ceph-mon systemd[1]: Started DNS caching server..
Feb 18 08:48:34 ceph-mon dnsmasq[16493]: started, version 2.76 cachesize 150
Feb 18 08:48:34 ceph-mon dnsmasq[16493]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-DNSSEC loop-detect inotify
Feb 18 08:48:34 ceph-mon dnsmasq[16493]: reading /etc/resolv.conf
Feb 18 08:48:34 ceph-mon dnsmasq[16493]: using nameserver 10.0.0.2#53
Feb 18 08:48:34 ceph-mon dnsmasq[16493]: read /etc/hosts - 6 addresses
[[email protected] ~]#

[[email protected] ~]# cat /etc/resolv.conf
; generated by /usr/sbin/dhclient-script
#search eu-west-1.compute.internal
nameserver 10.0.3.154
nameserver 10.0.0.2
[[email protected] ~]#

[[email protected] ~]# nslookup rgw.ceph-mon.com
Server:         10.0.3.154
Address:        10.0.3.154#53

Name:   rgw.ceph-mon.com
Address: 10.0.1.253

[[email protected] ~]#

We prevent dhclient from overwriting the /etc/resolv.conf file when the system reboots:

[[email protected] ~]# cat /etc/dhcp/dhclient-enter-hooks
#!/bin/sh
make_resolv_conf(){
    :
}
[[email protected] ~]# chmod u+x /etc/dhcp/dhclient-enter-hooks

Installing Apache

The S3 API needs a URL to interact with, so we will install Apache.

[[email protected] ~]# yum install -y httpd mod_ssl openssl fcgi mod_fcgid
[[email protected] ~]# cat /etc/httpd/conf.d/rgw.conf
<VirtualHost *:80>

        ServerName rgw.ceph-mon.com
        ServerAdmin [email protected]
        ServerAlias *.ceph-mon.com
        DocumentRoot /var/www/html
        RewriteEngine On
        RewriteRule  ^/(.*) /s3gw.fcgi?%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]

        <IfModule mod_fastcgi.c>
        <Directory /var/www/html>
                        Options +ExecCGI
                        AllowOverride All
                        SetHandler fastcgi-script
                        Order allow,deny
                        Allow from all
                        AuthBasicAuthoritative Off
                </Directory>
        </IfModule>

        AllowEncodedSlashes On
        ErrorLog /var/log/httpd/error.log
        CustomLog /var/log/httpd/access.log combined
        ServerSignature Off

</VirtualHost>
[[email protected] ~]#


/etc/httpd/conf/httpd.conf
<IfModule !proxy_fcgi_module>
  LoadModule proxy_fcgi_module modules/mod_proxy_fcgi.so
</IfModule>


[[email protected] ~]# openssl genrsa -out ca.key 2048
Generating RSA private key, 2048 bit long modulus
........................+++
..................................................................................................+++
e is 65537 (0x10001)
[[email protected] ~]# openssl req -new -key ca.key -out ca.csr
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:
State or Province Name (full name) []:
Locality Name (eg, city) [Default City]:
Organization Name (eg, company) [Default Company Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:
Email Address []:

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
[[email protected] ~]#
[[email protected] ~]# openssl x509 -req -days 365 -in ca.csr -signkey ca.key -out ca.crt
Signature ok
subject=/C=XX/L=Default City/O=Default Company Ltd
Getting Private key
[[email protected] ~]# cp ca.crt /etc/pki/tls/certs
[[email protected] ~]# cp ca.key /etc/pki/tls/private/ca.key
[[email protected] ~]# cp ca.csr /etc/pki/tls/private/ca.csr
[[email protected] ~]#


/etc/httpd/conf.d/ssl.conf

SSLCertificateFile /etc/pki/tls/certs/ca.crt
SSLCertificateKeyFile /etc/pki/tls/private/ca.key

Creating a RADOS Gateway user so it can access the object storage

[[email protected] ~]# radosgw-admin user create --uid="david" --display-name="David"
{
    "user_id": "david",
    "display_name": "David",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "david",
            "access_key": "OXJF3D8RKL84ITQI7OFO",
            "secret_key": "ANRy3jLqdQNrC8lxrgJ8K3xCW59fELjKRGi7OIji"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}

[[email protected] ~]#

Installing the S3 client (s3cmd)

[[email protected] ~]# yum install -y python2-pip
[[email protected] ~]# pip install s3cmd
Collecting s3cmd
  Downloading https://files.pythonhosted.org/packages/3a/f5/c70bfb80817c9d81b472e077e390d8c97abe130c9e86b61307a1d275532c/s3cmd-2.0.2.tar.gz (124kB)
    100% |¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦| 133kB 6.7MB/s
Collecting python-dateutil (from s3cmd)
  Downloading https://files.pythonhosted.org/packages/d4/70/d60450c3dd48ef87586924207ae8907090de0b306af2bce5d134d78615cb/python_dateutil-2.8.1-py2.py3-none-any.whl (227kB)
    100% |¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦| 235kB 4.4MB/s
Collecting python-magic (from s3cmd)
  Downloading https://files.pythonhosted.org/packages/42/a1/76d30c79992e3750dac6790ce16f056f870d368ba142f83f75f694d93001/python_magic-0.4.15-py2.py3-none-any.whl
Requirement already satisfied (use --upgrade to upgrade): six>=1.5 in /usr/lib/python2.7/site-packages (from python-dateutil->s3cmd)
Installing collected packages: python-dateutil, python-magic, s3cmd
  Running setup.py install for s3cmd ... done
Successfully installed python-dateutil-2.8.1 python-magic-0.4.15 s3cmd-2.0.2
You are using pip version 8.1.2, however version 20.0.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
[[email protected] ~]#


[[email protected] ~]# s3cmd --configure -c s3test.cfg

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: OXJF3D8RKL84ITQI7OFO
Secret Key: ANRy3jLqdQNrC8lxrgJ8K3xCW59fELjKRGi7OIji
Default Region [US]:

Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: rgw.ceph-mon.com

Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: ceph-mon.com

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/bin/gpg]:

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]:

On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name:

New settings:
  Access Key: OXJF3D8RKL84ITQI7OFO
  Secret Key: ANRy3jLqdQNrC8lxrgJ8K3xCW59fELjKRGi7OIji
  Default Region: US
  S3 Endpoint: rgw.ceph-mon.com
  DNS-style bucket+hostname:port template for accessing a bucket: ceph-mon.com
  Encryption password:
  Path to GPG program: /bin/gpg
  Use HTTPS protocol: True
  HTTP Proxy server name:
  HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] n

Save settings? [y/N] y
Configuration saved to 's3test.cfg'
[[email protected]-mon ~]#

It is important to change the host names in the configuration file above. In fact, this is the reason we had to install dnsmasq earlier.

[[email protected] ~]# egrep "ceph-mon|https" s3test.cfg
host_base = ceph-mon.com:7480
host_bucket = ceph-mon.com/%(bucket)
signurl_use_https = False
use_https = False
website_endpoint = http://%(bucket)s.ceph-mon.com
[[email protected] ~]#

We successfully create an S3 bucket and upload a file:

[[email protected] ~]# s3cmd -c s3test.cfg mb --no-check-certificate s3://david-bucket/
Bucket 's3://david-bucket/' created
[[email protected] ~]#

[[email protected] ceph]# s3cmd -c s3test.cfg ls
2020-02-25 07:50  s3://david-bucket
[[email protected] ceph]#

[[email protected] ~]# s3cmd -c s3test.cfg put /tmp/david.txt s3://david-bucket/
upload: '/tmp/david.txt' -> 's3://david-bucket/david.txt'  [1 of 1]
 6 of 6   100% in    1s     3.53 B/s  done
[[email protected] ~]#

[[email protected] ~]# s3cmd -c s3test.cfg ls s3://david-bucket/
2020-02-20 09:51        13   s3://david-bucket/david.txt
[[email protected] ~]#

If we want to delete a bucket:

[[email protected] ceph]# s3cmd -c s3test.cfg rb s3://david-bucket/
Bucket 's3://david-bucket/' removed
[[email protected] ceph]# 

Downloading a file or object with s3cmd

[[email protected] ~]# s3cmd -c s3test.cfg get s3://david-bucket/david.txt
download: 's3://david-bucket/david.txt' -> './david.txt'  [1 of 1]
 6 of 6   100% in    0s   141.01 B/s  done
[[email protected] ~]# ll david.txt
-rw-r--r-- 1 root root 6 Feb 20 10:05 david.txt
[[email protected] ~]#

Copying an object from a remote bucket to a local bucket

  • We download the file from the remote bucket:
[[email protected] ~]# s3cmd -c s3mon11.cfg get s3://david-bucket/*.txt
download: 's3://david-bucket/david.txt' -> './david.txt'  [1 of 1]
 6 of 6   100% in    0s   137.01 B/s  done
[[email protected] ~]#

The s3mon11.cfg file holds the configuration needed to connect to the remote bucket.

  • We create the local bucket:
[[email protected] ~]# s3cmd -c s3mon21.cfg mb s3://david-bucket
Bucket 's3://david-bucket/' created
[[email protected] ~]#

The s3mon21.cfg file holds the configuration needed to connect to the local bucket.

  • We upload the file to the local bucket:
[[email protected] ~]# s3cmd -c s3mon21.cfg put david.txt s3://david-bucket
upload: 'david.txt' -> 's3://david-bucket/david.txt'  [1 of 1]
 6 of 6   100% in    1s     3.58 B/s  done
[[email protected] ~]#

We can also upload entire directories with the --recursive parameter, or synchronise directories or buckets with sync (s3cmd sync <local_directory> s3://<destination_bucket>).

Deleting a file or object with s3cmd

[[email protected] ~]# s3cmd -c s3test.cfg del s3://david-bucket/david.txt
delete: 's3://david-bucket/david.txt'
[[email protected] ~]# s3cmd -c s3test.cfg ls s3://david-bucket/david.txt
[[email protected] ~]#

Getting information about an object (metadata)

[[email protected] ~]# s3cmd -c s3test.cfg info s3://david-bucket/david.txt
s3://david-bucket/david.txt (object):
   File size: 6
   Last mod:  Thu, 20 Feb 2020 12:00:22 GMT
   MIME type: text/plain
   Storage:   STANDARD
   MD5 sum:   e7ad599887b1baf90b830435dac14ba3
   SSE:       none
   Policy:    none
   CORS:      none
   ACL:       David: FULL_CONTROL
   x-amz-meta-s3cmd-attrs: atime:1582193678/ctime:1582193668/gid:0/gname:root/md5:e7ad599887b1baf90b830435dac14ba3/mode:33188/mtime:1582193668/uid:0/uname:root
[[email protected] ~]#

Configuring the expiration of an object

[[email protected] ~]# s3cmd -c s3test.cfg put /tmp/david.txt s3://david-bucket/ --expiry-date=2020/02/21
upload: '/tmp/david.txt' -> 's3://david-bucket/david.txt'  [1 of 1]
 6 of 6   100% in    0s   113.70 B/s  done
[[email protected] ~]#

[[email protected] ~]# s3cmd -c s3test.cfg put /tmp/david.txt s3://david-bucket/ --expiry-days=1
upload: '/tmp/david.txt' -> 's3://david-bucket/david.txt'  [1 of 1]
 6 of 6   100% in    0s   114.29 B/s  done
[[email protected] ~]# 

Configuring expiration policies for the whole S3 bucket (lifecycle)

  • We create an XML file with the policies we are interested in:
[[email protected] ~]# cat lifecycle.xml
<LifecycleConfiguration>
    <Rule>
        <ID>delete-error-logs</ID>
        <Prefix>error</Prefix>
        <Status>Enabled</Status>
        <Expiration>
            <Days>7</Days>
        </Expiration>
    </Rule>
    <Rule>
        <ID>delete-standard-logs</ID>
        <Prefix>logs</Prefix>
        <Status>Enabled</Status>
        <Expiration>
            <Days>1</Days>
        </Expiration>
    </Rule>
</LifecycleConfiguration>
[[email protected] ~]#
  • We apply the policies:
[[email protected] ~]# s3cmd -c s3test.cfg setlifecycle lifecycle.xml s3://david-bucket
s3://david-bucket/: Lifecycle Policy updated
[[email protected] ~]#
  • We check that they have been applied:
[[email protected] ~]# s3cmd -c s3test.cfg getlifecycle s3://david-bucket
<?xml version="1.0" ?>
<LifecycleConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
        <Rule>
                <ID>delete-error-logs</ID>
                <Prefix>error</Prefix>
                <Status>Enabled</Status>
                <Expiration>
                        <Days>7</Days>
                </Expiration>
        </Rule>
        <Rule>
                <ID>delete-standard-logs</ID>
                <Prefix>logs</Prefix>
                <Status>Enabled</Status>
                <Expiration>
                        <Days>1</Days>
                </Expiration>
        </Rule>
</LifecycleConfiguration>

[[email protected] ~]#

Sharing a bucket with another user

To allow another user to read or write to a bucket or an object they do not own, we configure ACL permissions, for example like this:

s3cmd setacl --grant-acl=read:<canonical-user-id> s3://BUCKETNAME[/OBJECT]
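
The same kind of grant can also be done through the S3 API itself. For example, with boto3 against the gateway something like the following should work; this is only a sketch, and the access keys and user IDs (with radosgw the canonical user ID is simply the RGW uid) would have to be replaced with real ones:

import boto3
from botocore.client import Config

s3 = boto3.client(
        's3',
        endpoint_url='http://ceph-mon.com:7480',
        aws_access_key_id='<access_key>',
        aws_secret_access_key='<secret_key>',
        config=Config(s3={'addressing_style': 'path'}),
)

# Grant read access on the bucket to another RGW user, keeping full control for the owner
s3.put_bucket_acl(
        Bucket='david-bucket',
        GrantRead='id="testuser"',
        GrantFullControl='id="david"',
)
print(s3.get_bucket_acl(Bucket='david-bucket')['Grants'])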

Upgrading the Ceph version

We are going to upgrade from the open source Ceph Mimic release to Nautilus:

  • We check the current Ceph version
[[email protected] ~]# ceph --version
ceph version 13.2.8 (5579a94fafbc1f9cc913a0f5d362953a5d9c3ae0) mimic (stable)
[[email protected] ~]#
  • We configure the yum repository that contains the new Ceph version on every server in the cluster:
[[email protected] yum.repos.d]# cat ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://download.ceph.com/rpm-nautilus/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://download.ceph.com/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1

[[email protected] yum.repos.d]#
  • We update Ceph and the operating system:
[[email protected] yum.repos.d]# ceph-deploy install --release nautilus ceph-mon ceph-osd1 ceph-osd2 ceph-osd3
[[email protected] yum.repos.d]# yum update -y
  • We restart the monitor service
[[email protected] ~]# systemctl status [email protected][email protected] - Ceph cluster monitor daemon
   Loaded: loaded (/usr/lib/systemd/system/[email protected]; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-02-20 10:52:32 UTC; 1s ago
 Main PID: 3436 (ceph-mon)
   CGroup: /system.slice/system-ceph\x2dmon.slice/[email protected]
           └─3436 /usr/bin/ceph-mon -f --cluster ceph --id ceph-mon --setuser ceph --setgroup ceph

Feb 20 10:52:32 ceph-mon systemd[1]: Stopped Ceph cluster monitor daemon.
Feb 20 10:52:32 ceph-mon systemd[1]: Started Ceph cluster monitor daemon.
Feb 20 10:52:32 ceph-mon ceph-mon[3436]: 2020-02-20 10:52:32.761 7f5f2e8ec040 -1 [email protected](electing) e1 failed to get devid for : udev_device_new_from_subsystem_sysname failed on ''
Feb 20 10:52:32 ceph-mon ceph-mon[3436]: 2020-02-20 10:52:32.770 7f5f1516a700 -1 [email protected](electing) e2 failed to get devid for : udev_device_new_from_subsystem_sysname failed on ''
[[email protected] ~]#
  • We restart the OSD services
[[email protected] ~]# systemctl restart [email protected]
[[email protected] ~]# systemctl status [email protected][email protected] - Ceph object storage daemon osd.0
   Loaded: loaded (/usr/lib/systemd/system/[email protected]; enabled-runtime; vendor preset: disabled)
   Active: active (running) since Thu 2020-02-20 10:54:54 UTC; 5s ago
  Process: 2500 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
 Main PID: 2505 (ceph-osd)
   CGroup: /system.slice/system-ceph\x2dosd.slice/[email protected]
           └─2505 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph

Feb 20 10:54:56 ceph-osd1 ceph-osd[2505]: 2020-02-20 10:54:56.171 7f696ffe1a80 -1 bluestore(/var/lib/ceph/osd/ceph-0) fsck error: legacy statfs record found, removing
Feb 20 10:54:56 ceph-osd1 ceph-osd[2505]: 2020-02-20 10:54:56.171 7f696ffe1a80 -1 bluestore(/var/lib/ceph/osd/ceph-0) fsck error: missing Pool StatFS record for pool 1
Feb 20 10:54:56 ceph-osd1 ceph-osd[2505]: 2020-02-20 10:54:56.171 7f696ffe1a80 -1 bluestore(/var/lib/ceph/osd/ceph-0) fsck error: missing Pool StatFS record for pool 3
Feb 20 10:54:56 ceph-osd1 ceph-osd[2505]: 2020-02-20 10:54:56.171 7f696ffe1a80 -1 bluestore(/var/lib/ceph/osd/ceph-0) fsck error: missing Pool StatFS record for pool 5
Feb 20 10:54:56 ceph-osd1 ceph-osd[2505]: 2020-02-20 10:54:56.171 7f696ffe1a80 -1 bluestore(/var/lib/ceph/osd/ceph-0) fsck error: missing Pool StatFS record for pool 6
Feb 20 10:54:56 ceph-osd1 ceph-osd[2505]: 2020-02-20 10:54:56.171 7f696ffe1a80 -1 bluestore(/var/lib/ceph/osd/ceph-0) fsck error: missing Pool StatFS record for pool 7
Feb 20 10:54:56 ceph-osd1 ceph-osd[2505]: 2020-02-20 10:54:56.171 7f696ffe1a80 -1 bluestore(/var/lib/ceph/osd/ceph-0) fsck error: missing Pool StatFS record for pool 9
Feb 20 10:54:56 ceph-osd1 ceph-osd[2505]: 2020-02-20 10:54:56.171 7f696ffe1a80 -1 bluestore(/var/lib/ceph/osd/ceph-0) fsck error: missing Pool StatFS record for pool ffffffffffffffff
Feb 20 10:54:57 ceph-osd1 ceph-osd[2505]: 2020-02-20 10:54:57.335 7f696ffe1a80 -1 osd.0 83 log_to_monitors {default=true}
Feb 20 10:54:57 ceph-osd1 ceph-osd[2505]: 2020-02-20 10:54:57.350 7f69625d4700 -1 osd.0 83 set_numa_affinity unable to identify public interface 'eth0' numa node: (2) No such file or directory
[[email protected] ~]#


[[email protected] ~]# systemctl restart [email protected]
[[email protected]-osd1 ~]# systemctl status [email protected][email protected] - Ceph object storage daemon osd.0
   Loaded: loaded (/usr/lib/systemd/system/[email protected]; enabled-runtime; vendor preset: disabled)
   Active: active (running) since Thu 2020-02-20 10:55:42 UTC; 8s ago
  Process: 2706 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
 Main PID: 2711 (ceph-osd)
   CGroup: /system.slice/system-ceph\x2dosd.slice/[email protected]
           └─2711 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph

Feb 20 10:55:42 ceph-osd1 systemd[1]: Stopped Ceph object storage daemon osd.0.
Feb 20 10:55:42 ceph-osd1 systemd[1]: Starting Ceph object storage daemon osd.0...
Feb 20 10:55:42 ceph-osd1 systemd[1]: Started Ceph object storage daemon osd.0.
Feb 20 10:55:42 ceph-osd1 ceph-osd[2711]: 2020-02-20 10:55:42.816 7f1a19731a80 -1 Falling back to public interface
Feb 20 10:55:43 ceph-osd1 ceph-osd[2711]: 2020-02-20 10:55:43.625 7f1a19731a80 -1 osd.0 87 log_to_monitors {default=true}
Feb 20 10:55:43 ceph-osd1 ceph-osd[2711]: 2020-02-20 10:55:43.636 7f1a0bd24700 -1 osd.0 87 set_numa_affinity unable to identify public interface 'eth0' numa node: (2) No such file or directory
[root@ceph-osd1 ~]#


[root@ceph-osd2 ~]# systemctl restart ceph-osd@1.service
[root@ceph-osd2 ~]# systemctl status ceph-osd@1.service
● ceph-osd@1.service - Ceph object storage daemon osd.1
   Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled-runtime; vendor preset: disabled)
   Active: active (running) since Thu 2020-02-20 10:56:19 UTC; 4s ago
  Process: 2426 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
 Main PID: 2431 (ceph-osd)
   CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd@1.service
           └─2431 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph

Feb 20 10:56:21 ceph-osd2 ceph-osd[2431]: 2020-02-20 10:56:21.748 7fe3ed4f4a80 -1 bluestore(/var/lib/ceph/osd/ceph-1) fsck error: legacy statfs record found, removing
Feb 20 10:56:21 ceph-osd2 ceph-osd[2431]: 2020-02-20 10:56:21.748 7fe3ed4f4a80 -1 bluestore(/var/lib/ceph/osd/ceph-1) fsck error: missing Pool StatFS record for pool 1
Feb 20 10:56:21 ceph-osd2 ceph-osd[2431]: 2020-02-20 10:56:21.748 7fe3ed4f4a80 -1 bluestore(/var/lib/ceph/osd/ceph-1) fsck error: missing Pool StatFS record for pool 3
Feb 20 10:56:21 ceph-osd2 ceph-osd[2431]: 2020-02-20 10:56:21.748 7fe3ed4f4a80 -1 bluestore(/var/lib/ceph/osd/ceph-1) fsck error: missing Pool StatFS record for pool 5
Feb 20 10:56:21 ceph-osd2 ceph-osd[2431]: 2020-02-20 10:56:21.748 7fe3ed4f4a80 -1 bluestore(/var/lib/ceph/osd/ceph-1) fsck error: missing Pool StatFS record for pool 6
Feb 20 10:56:21 ceph-osd2 ceph-osd[2431]: 2020-02-20 10:56:21.748 7fe3ed4f4a80 -1 bluestore(/var/lib/ceph/osd/ceph-1) fsck error: missing Pool StatFS record for pool 7
Feb 20 10:56:21 ceph-osd2 ceph-osd[2431]: 2020-02-20 10:56:21.748 7fe3ed4f4a80 -1 bluestore(/var/lib/ceph/osd/ceph-1) fsck error: missing Pool StatFS record for pool 9
Feb 20 10:56:21 ceph-osd2 ceph-osd[2431]: 2020-02-20 10:56:21.748 7fe3ed4f4a80 -1 bluestore(/var/lib/ceph/osd/ceph-1) fsck error: missing Pool StatFS record for pool ffffffffffffffff
Feb 20 10:56:22 ceph-osd2 ceph-osd[2431]: 2020-02-20 10:56:22.346 7fe3ed4f4a80 -1 osd.1 91 log_to_monitors {default=true}
Feb 20 10:56:22 ceph-osd2 ceph-osd[2431]: 2020-02-20 10:56:22.356 7fe3dfae7700 -1 osd.1 91 set_numa_affinity unable to identify public interface 'eth0' numa node: (2) No such file or directory
[root@ceph-osd2 ~]#

[root@ceph-osd3 ~]# systemctl restart ceph-osd@2.service
[root@ceph-osd3 ~]# systemctl status ceph-osd@2.service
● ceph-osd@2.service - Ceph object storage daemon osd.2
   Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled-runtime; vendor preset: disabled)
   Active: active (running) since Thu 2020-02-20 10:56:52 UTC; 5s ago
  Process: 2311 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
 Main PID: 2316 (ceph-osd)
   CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd@2.service
           └─2316 /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup ceph

Feb 20 10:56:54 ceph-osd3 ceph-osd[2316]: 2020-02-20 10:56:54.803 7fd617d0ca80 -1 bluestore(/var/lib/ceph/osd/ceph-2) fsck error: legacy statfs record found, removing
Feb 20 10:56:54 ceph-osd3 ceph-osd[2316]: 2020-02-20 10:56:54.803 7fd617d0ca80 -1 bluestore(/var/lib/ceph/osd/ceph-2) fsck error: missing Pool StatFS record for pool 1
Feb 20 10:56:54 ceph-osd3 ceph-osd[2316]: 2020-02-20 10:56:54.803 7fd617d0ca80 -1 bluestore(/var/lib/ceph/osd/ceph-2) fsck error: missing Pool StatFS record for pool 3
Feb 20 10:56:54 ceph-osd3 ceph-osd[2316]: 2020-02-20 10:56:54.803 7fd617d0ca80 -1 bluestore(/var/lib/ceph/osd/ceph-2) fsck error: missing Pool StatFS record for pool 5
Feb 20 10:56:54 ceph-osd3 ceph-osd[2316]: 2020-02-20 10:56:54.803 7fd617d0ca80 -1 bluestore(/var/lib/ceph/osd/ceph-2) fsck error: missing Pool StatFS record for pool 6
Feb 20 10:56:54 ceph-osd3 ceph-osd[2316]: 2020-02-20 10:56:54.803 7fd617d0ca80 -1 bluestore(/var/lib/ceph/osd/ceph-2) fsck error: missing Pool StatFS record for pool 7
Feb 20 10:56:54 ceph-osd3 ceph-osd[2316]: 2020-02-20 10:56:54.803 7fd617d0ca80 -1 bluestore(/var/lib/ceph/osd/ceph-2) fsck error: missing Pool StatFS record for pool 9
Feb 20 10:56:54 ceph-osd3 ceph-osd[2316]: 2020-02-20 10:56:54.803 7fd617d0ca80 -1 bluestore(/var/lib/ceph/osd/ceph-2) fsck error: missing Pool StatFS record for pool ffffffffffffffff
Feb 20 10:56:55 ceph-osd3 ceph-osd[2316]: 2020-02-20 10:56:55.367 7fd617d0ca80 -1 osd.2 95 log_to_monitors {default=true}
Feb 20 10:56:55 ceph-osd3 ceph-osd[2316]: 2020-02-20 10:56:55.377 7fd60a2ff700 -1 osd.2 95 set_numa_affinity unable to identify public interface 'eth0' numa node: (2) No such file or directory
[root@ceph-osd3 ~]#
  • Restart the MDS services
[root@ceph-osd1 ~]# systemctl restart ceph-mds@ceph-mds1.service
[root@ceph-osd1 ~]# systemctl status ceph-mds@ceph-mds1.service
● ceph-mds@ceph-mds1.service - Ceph metadata server daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-mds@.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-02-20 10:58:10 UTC; 4s ago
 Main PID: 3015 (ceph-mds)
   CGroup: /system.slice/system-ceph\x2dmds.slice/ceph-mds@ceph-mds1.service
           └─3015 /usr/bin/ceph-mds -f --cluster ceph --id ceph-mds1 --setuser ceph --setgroup ceph

Feb 20 10:58:10 ceph-osd1 systemd[1]: Stopped Ceph metadata server daemon.
Feb 20 10:58:10 ceph-osd1 systemd[1]: Started Ceph metadata server daemon.
Feb 20 10:58:11 ceph-osd1 ceph-mds[3015]: starting mds.ceph-mds1 at
[root@ceph-osd1 ~]#

[root@ceph-osd2 ~]# systemctl restart ceph-mds@ceph-mds2.service
[root@ceph-osd2 ~]# systemctl status ceph-mds@ceph-mds2.service
● ceph-mds@ceph-mds2.service - Ceph metadata server daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-mds@.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-02-20 10:58:36 UTC; 5s ago
 Main PID: 2588 (ceph-mds)
   CGroup: /system.slice/system-ceph\x2dmds.slice/ceph-mds@ceph-mds2.service
           └─2588 /usr/bin/ceph-mds -f --cluster ceph --id ceph-mds2 --setuser ceph --setgroup ceph

Feb 20 10:58:36 ceph-osd2 systemd[1]: Stopped Ceph metadata server daemon.
Feb 20 10:58:36 ceph-osd2 systemd[1]: Started Ceph metadata server daemon.
Feb 20 10:58:36 ceph-osd2 ceph-mds[2588]: starting mds.ceph-mds2 at
[root@ceph-osd2 ~]#

[root@ceph-osd3 ~]# systemctl restart ceph-mds@ceph-mds3.service
[root@ceph-osd3 ~]# systemctl status ceph-mds@ceph-mds3.service
● ceph-mds@ceph-mds3.service - Ceph metadata server daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-mds@.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-02-20 10:58:57 UTC; 4s ago
 Main PID: 2473 (ceph-mds)
   CGroup: /system.slice/system-ceph\x2dmds.slice/ceph-mds@ceph-mds3.service
           └─2473 /usr/bin/ceph-mds -f --cluster ceph --id ceph-mds3 --setuser ceph --setgroup ceph

Feb 20 10:58:57 ceph-osd3 systemd[1]: Stopped Ceph metadata server daemon.
Feb 20 10:58:57 ceph-osd3 systemd[1]: Started Ceph metadata server daemon.
Feb 20 10:58:58 ceph-osd3 ceph-mds[2473]: starting mds.ceph-mds3 at
[root@ceph-osd3 ~]#
  • Check the new Ceph version on each node
[[email protected] ~]# ceph --version
ceph version 14.2.7 (3d58626ebeec02d8385a4cefb92c6cbc3a45bfe8) nautilus (stable)
[[email protected] ~]#

[[email protected] ~]# ceph --version
ceph version 14.2.7 (3d58626ebeec02d8385a4cefb92c6cbc3a45bfe8) nautilus (stable)
[[email protected] ~]#

[[email protected] ~]# ceph --version
ceph version 14.2.7 (3d58626ebeec02d8385a4cefb92c6cbc3a45bfe8) nautilus (stable)
[[email protected] ~]#

[[email protected] ~]# ceph --version
ceph version 14.2.7 (3d58626ebeec02d8385a4cefb92c6cbc3a45bfe8) nautilus (stable)
[[email protected] ~]#

Once the upgrade is finished, we should check that the cluster is working correctly (health status, that the RBD filesystems are mounted, that we can upload objects, and so on).
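
As a quick reference, these are the kinds of checks that can be run right after the upgrade (the /testrbd mount point and the bucket name are examples; adjust them to your environment):

# Cluster health, OSD tree and the version reported by every daemon
ceph -s
ceph health detail
ceph osd tree
ceph versions

# Block storage: the RBD filesystem should still be mounted and writable
df -h /testrbd
touch /testrbd/upgrade-check && rm /testrbd/upgrade-check

# Object storage: list the buckets and upload a test object
s3cmd ls
s3cmd put /etc/hosts s3://mybucket/upgrade-check.txt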

Backups

For the block-type filesystems (RBD and CephFS) we can use whatever backup software we normally use, since the operating system sees an ordinary mount point. Object storage, however, cannot be backed up in the traditional way.
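
For example, backing up a mounted RBD or CephFS filesystem is just a regular file-level backup of the mount point (a minimal sketch; /testrbd and the destination path are examples):

tar czf /backup/testrbd-$(date +%F).tar.gz /testrbd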

To make backups of the object storage we have several options:

  • Download the contents of each bucket to a local directory (we already saw the s3cmd get command…) and run our usual backup on it (see the sketch after this list).
  • Download the contents of each bucket to a local directory and upload it to another bucket located in a different availability zone (we have also seen this earlier).
  • Configure replication per geographic zone.
  • Configure disk replication with DRBD (I have not tried it with Ceph yet, but it should work).
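
A minimal sketch of the first two options with s3cmd (the bucket names and local paths are examples):

# Option 1: download the bucket locally and back it up with the usual tools
s3cmd sync s3://mybucket/ /backup/mybucket/
tar czf /backup/mybucket-$(date +%F).tar.gz /backup/mybucket

# Option 2: upload the local copy to a bucket served from another availability zone
# (using an s3cmd configuration that points at the other gateway)
s3cmd sync /backup/mybucket/ s3://mybucket-dr/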

Geographic replication

Unfortunately, I have not been able to get this part working. I followed the procedure and it did not give me any errors, but I have not seen the objects being replicated either. I will keep investigating and will update this documentation if I find the solution.

Nevertheless, I am attaching what I did, in case someone can see what I am doing wrong:

Documentation reviewed

https://docs.ceph.com/docs/master/radosgw/multisite/
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html-single/object_gateway_guide_for_red_hat_enterprise_linux/index#create-a-master-zone-group-rgw-ms-realm

Master server: https://docs.oracle.com/en/operating-systems/oracle-linux/ceph-storage/ceph-luminous-rgw-multi-realm.html
Secondary server: https://docs.oracle.com/en/operating-systems/oracle-linux/ceph-storage/ceph-luminous-rgw-multi-zone.html

Commands executed (among many other attempts)

MASTER

radosgw-admin realm create --rgw-realm=Spain --default

radosgw-admin zonegroup delete --rgw-zonegroup=default
radosgw-admin zonegroup create --rgw-zonegroup=Barcelona --endpoints=http://ceph-mon:7480 --master --default

radosgw-admin zone create --rgw-zonegroup=Barcelona --rgw-zone=SantCugat --endpoints=http://ceph-mon:7480 --access-key=$SYSTEM_ACCESS_KEY --secret=$SYSTEM_SECRET_KEY --default --master


radosgw-admin user create --uid="scsys" --display-name="Sant Cugat System User" --access-key=$SYSTEM_ACCESS_KEY --secret=$SYSTEM_SECRET_KEY --system

radosgw-admin period update --commit

Secondary

radosgw-admin zone create --rgw-zonegroup=Barcelona --rgw-zone=Sabadell --endpoints=http://ceph-mon2:7480 --access-key=$MASTER_SYSTEM_ACCESS_KEY --secret=$MASTER_SYSTEM_SECRET_KEY

radosgw-admin period update --commit
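
What I suspect is missing, based on the official multisite procedure (I have not verified this, so treat it as an assumption): before creating the secondary zone, the realm and period have to be pulled from the master gateway, and each radosgw instance must be told which zone it serves and then be restarted. A sketch, reusing the names and keys from the commands above (the instance name rgw.ceph-mon2 is a placeholder):

# On the secondary site: pull the realm and period published by the master gateway
radosgw-admin realm pull --url=http://ceph-mon:7480 --access-key=$MASTER_SYSTEM_ACCESS_KEY --secret=$MASTER_SYSTEM_SECRET_KEY
radosgw-admin period pull --url=http://ceph-mon:7480 --access-key=$MASTER_SYSTEM_ACCESS_KEY --secret=$MASTER_SYSTEM_SECRET_KEY
radosgw-admin realm default --rgw-realm=Spain

# Create the secondary zone (as above) and commit the period
radosgw-admin zone create --rgw-zonegroup=Barcelona --rgw-zone=Sabadell --endpoints=http://ceph-mon2:7480 --access-key=$MASTER_SYSTEM_ACCESS_KEY --secret=$MASTER_SYSTEM_SECRET_KEY
radosgw-admin period update --commit

# In /etc/ceph/ceph.conf of each gateway, under its [client.rgw.<instance>] section, set the zone it serves:
#   rgw_zone = Sabadell
# ...and restart the gateway
systemctl restart ceph-radosgw@rgw.ceph-mon2

# Check the replication status
radosgw-admin sync status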

Stress tests

I wanted to compare the performance of Ceph RBD against NFS, both with the dd command and with stress tests run through Sysbench. This was the result:

"dd" command (4 GB file)

  • The write speed with NFS was 322 MB/s, versus 1.6 GB/s with Ceph RBD.
  • The read speed with NFS was 1.1 GB/s, versus 3.9 GB/s with Ceph RBD.
  • Write directly on the local disk (no NFS, no Ceph): 1.6 GB/s
  • Read directly on the local disk (no NFS, no Ceph): 3.7 GB/s

Stress tests with Sysbench (4 GB file)

  • The write throughput with NFS was 41 MB/s, versus 34 MB/s with Ceph RBD.
  • The read throughput with NFS was 62 MB/s, versus 52 MB/s with Ceph RBD.

"dd" command (30 GB file)

  • Ceph RBD write: 373 MB/s
  • Ceph RBD read: 3.9 GB/s
  • NFS write: 392 MB/s
  • NFS read: 1.1 GB/s

Stress tests with Sysbench (30 GB file)

  • Ceph RBD write: 35.81 MB/s
  • Ceph RBD read: 53.72 MB/s
  • NFS write: 61.87 MB/s
  • NFS read: 41.24 MB/s

Below is the technical detail of the tests:

Scenario

Disk type: io1 (up to 1,000 MB/s) → https://aws.amazon.com/es/ebs/features/

Instance type: i3en.3xlarge (storage optimized) → https://aws.amazon.com/es/ec2/instance-types/

For the disk performance tests I use the dd command, and the Sysbench tool for the stress tests → https://wiki.gentoo.org/wiki/Sysbench
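
If sysbench is not already on the system, on CentOS 7 it is normally installed from the EPEL repository (an assumption about the packaging; my test instances already had it available):

yum install -y epel-release
yum install -y sysbench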

Performance tests

Write test with the dd command

For this, I created a script that writes a 4 GB file five times (8,192-byte blocks × 524,288 blocks = 4 GiB per file) to check that the speed is consistent:

[[email protected] testrbd]# cat ifdd.sh
for i in 1 2 3 4 5
do

   time dd if=/dev/zero of=test$i.txt bs=8192 count=524288

done

[[email protected] testrbd]#


[[email protected] testrbd]# ./ifdd.sh
524288+0 records in
524288+0 records out
4294967296 bytes (4.3 GB) copied, 2.6614 s, 1.6 GB/s

real    0m2.662s
user    0m0.278s
sys     0m2.384s
524288+0 records in
524288+0 records out
4294967296 bytes (4.3 GB) copied, 2.72038 s, 1.6 GB/s

real    0m2.721s
user    0m0.266s
sys     0m2.455s
524288+0 records in
524288+0 records out
4294967296 bytes (4.3 GB) copied, 9.88626 s, 434 MB/s

real    0m9.887s
user    0m0.275s
sys     0m2.466s
524288+0 records in
524288+0 records out
4294967296 bytes (4.3 GB) copied, 2.67709 s, 1.6 GB/s

real    0m2.678s
user    0m0.270s
sys     0m2.408s
524288+0 records in
524288+0 records out
4294967296 bytes (4.3 GB) copied, 2.90985 s, 1.5 GB/s

real    0m2.911s
user    0m0.266s
sys     0m2.396s
[[email protected] testrbd]#


Read test with the dd command

The script used is the following:

[[email protected] testrbd]# grep -v "#" ifdd.sh
for i in 1 2 3 4 5
do

   time dd if=test$i.txt of=/dev/null bs=8192 count=524288

done

[[email protected] testrbd]#


[[email protected] testrbd]# ./ifdd.sh
524288+0 records in
524288+0 records out
4294967296 bytes (4.3 GB) copied, 1.10866 s, 3.9 GB/s

real    0m1.110s
user    0m0.249s
sys     0m0.860s
524288+0 records in
524288+0 records out
4294967296 bytes (4.3 GB) copied, 1.10583 s, 3.9 GB/s

real    0m1.107s
user    0m0.257s
sys     0m0.849s
524288+0 records in
524288+0 records out
4294967296 bytes (4.3 GB) copied, 1.10505 s, 3.9 GB/s

real    0m1.106s
user    0m0.258s
sys     0m0.848s
524288+0 records in
524288+0 records out
4294967296 bytes (4.3 GB) copied, 1.10724 s, 3.9 GB/s

real    0m1.108s
user    0m0.240s
sys     0m0.868s
524288+0 records in
524288+0 records out
4294967296 bytes (4.3 GB) copied, 1.11014 s, 3.9 GB/s

real    0m1.111s
user    0m0.246s
sys     0m0.865s
[[email protected] testrbd]#



Read and write with Sysbench

Sysbench is a tool designed for stress testing, which gets us a bit closer to the reality of an environment with real users. As expected, throughput drops considerably compared with the previous dd test, since random 16 KiB reads and writes from several threads are far more demanding than a single sequential copy:


[[email protected] ~]# cd /testrbd/
[[email protected] testrbd]# sysbench --test=fileio --file-total-size=4G prepare
WARNING: the --test option is deprecated. You can pass a script name or path on the command line without any options.
sysbench 1.0.17 (using system LuaJIT 2.0.4)

128 files, 32768Kb each, 4096Mb total
Creating files for the test...
Extra file open flags: (none)
Creating file test_file.0
Creating file test_file.1
Creating file test_file.2
Creating file test_file.3
Creating file test_file.4
Creating file test_file.5
Creating file test_file.6
Creating file test_file.7
Creating file test_file.8
…
Creating file test_file.126
Creating file test_file.127
4294967296 bytes written in 9.53 seconds (429.72 MiB/sec).
[[email protected] testrbd]#


[[email protected] testrbd]# sysbench --test=fileio --file-total-size=4G --file-test-mode=rndrw --max-time=180 --max-requests=0 --threads=12 run
WARNING: the --test option is deprecated. You can pass a script name or path on the command line without any options.
WARNING: --max-time is deprecated, use --time instead
sysbench 1.0.17 (using system LuaJIT 2.0.4)

Running the test with following options:
Number of threads: 12
Initializing random number generator from current time


Extra file open flags: (none)
128 files, 32MiB each
4GiB total file size
Block size 16KiB
Number of IO requests: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Initializing worker threads...

Threads started!


File operations:
    reads/s:                      3333.60
    writes/s:                     2222.41
    fsyncs/s:                     7120.20

Throughput:
    read, MiB/s:                  52.09
    written, MiB/s:               34.73

General statistics:
    total time:                          180.0020s
    total number of events:              2280223

Latency (ms):
         min:                                    0.00
         avg:                                    0.95
         max:                                  114.65
         95th percentile:                        6.09
         sum:                              2158367.75

Threads fairness:
    events (avg/stddev):           190018.5833/1816.51
    execution time (avg/stddev):   179.8640/0.00

[[email protected] testrbd]#


If I reduce the number of threads to 6, the results are similar:
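
The command is the same as above, only with the thread count lowered (reconstructed here for reference):

sysbench --test=fileio --file-total-size=4G --file-test-mode=rndrw --max-time=180 --max-requests=0 --threads=6 run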

Throughput:
    read, MiB/s:                  47.28
    written, MiB/s:               31.52


I delete the data generated by sysbench:

[[email protected] testrbd]# sysbench --test=fileio --file-total-size=4G cleanup
WARNING: the --test option is deprecated. You can pass a script name or path on the command line without any options.
sysbench 1.0.17 (using system LuaJIT 2.0.4)

Removing test files...
[[email protected] testrbd]#



Performance tests with NFS

The NFS server is ceph-osd11 and the client is ceph-mon11. We will run the same commands as with RBD so that we can compare times:

[[email protected] testnfs]# df -hP .
Filesystem          Size  Used Avail Use% Mounted on
ceph-osd11:/nfssrv   60G   32M   60G   1% /testnfs
[[email protected] testnfs]#
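
For reference, the client mounts the export in the usual way (a sketch based on the df output above; the export options on ceph-osd11 are not shown here):

mount -t nfs ceph-osd11:/nfssrv /testnfs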

Write test with the dd command

[[email protected] testnfs]$ ./ifdd.sh
524288+0 records in
524288+0 records out
4294967296 bytes (4.3 GB) copied, 13.3575 s, 322 MB/s

real    0m13.359s
user    0m0.289s
sys     0m3.334s
524288+0 records in
524288+0 records out
4294967296 bytes (4.3 GB) copied, 13.8447 s, 310 MB/s

real    0m13.846s
user    0m0.283s
sys     0m3.328s
524288+0 records in
524288+0 records out
4294967296 bytes (4.3 GB) copied, 13.8607 s, 310 MB/s

real    0m13.862s
user    0m0.300s
sys     0m3.316s
524288+0 records in
524288+0 records out
4294967296 bytes (4.3 GB) copied, 13.8552 s, 310 MB/s

real    0m13.857s
user    0m0.254s
sys     0m3.345s
524288+0 records in
524288+0 records out
4294967296 bytes (4.3 GB) copied, 13.8514 s, 310 MB/s

real    0m13.853s
user    0m0.283s
sys     0m3.288s
[[email protected] testnfs]$


Read test with the dd command

[[email protected] testnfs]$ ./ifdd.sh
524288+0 records in
524288+0 records out
4294967296 bytes (4.3 GB) copied, 3.88214 s, 1.1 GB/s

real    0m3.883s
user    0m0.288s
sys     0m1.613s
524288+0 records in
524288+0 records out
4294967296 bytes (4.3 GB) copied, 3.88224 s, 1.1 GB/s

real    0m3.883s
user    0m0.271s
sys     0m1.695s
524288+0 records in
524288+0 records out
4294967296 bytes (4.3 GB) copied, 3.88215 s, 1.1 GB/s

real    0m3.883s
user    0m0.272s
sys     0m1.714s
524288+0 records in
524288+0 records out
4294967296 bytes (4.3 GB) copied, 3.88153 s, 1.1 GB/s

real    0m3.883s
user    0m0.299s
sys     0m1.779s
524288+0 records in
524288+0 records out
4294967296 bytes (4.3 GB) copied, 3.88125 s, 1.1 GB/s

real    0m3.882s
user    0m0.270s
sys     0m1.610s
[[email protected] testnfs]$


Read and write with Sysbench

[[email protected] testnfs]$ sysbench --test=fileio --file-total-size=4G prepare
…
4294967296 bytes written in 10.80 seconds (379.33 MiB/sec).
[[email protected] testnfs]$

[[email protected] testnfs]$ sysbench --test=fileio --file-total-size=4G --file-test-mode=rndrw --max-time=180 --max-requests=0 --threads=12 run
WARNING: the --test option is deprecated. You can pass a script name or path on the command line without any options.
WARNING: --max-time is deprecated, use --time instead
sysbench 1.0.17 (using system LuaJIT 2.0.4)

Running the test with following options:
Number of threads: 12
Initializing random number generator from current time


Extra file open flags: (none)
128 files, 32MiB each
4GiB total file size
Block size 16KiB
Number of IO requests: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Initializing worker threads...

Threads started!


File operations:
    reads/s:                      4008.25
    writes/s:                     2672.16
    fsyncs/s:                     8559.28

Throughput:
    read, MiB/s:                  62.63
    written, MiB/s:               41.75

General statistics:
    total time:                          180.0175s
    total number of events:              2741897

Latency (ms):
         min:                                    0.00
         avg:                                    0.79
         max:                                   63.30
         95th percentile:                        5.99
         sum:                              2158454.80

Threads fairness:
    events (avg/stddev):           228491.4167/993.11
    execution time (avg/stddev):   179.8712/0.00

[[email protected] testnfs]$
