Ansible Tutorial

I've recently been looking into Ansible as a way to automate tasks in bulk on remote Linux servers. Until now I've been using another tool, but Ansible is more and more widespread and I think it's worth studying, all the more so now that Red Hat is backing it as its automation standard.

How it works is very simple. We can run operating-system commands remotely, or we can rely on predefined modules that perform a multitude of tasks, orchestrated from YAML files called playbooks. We'll see an example shortly.
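For example, a quick connectivity check against every host can be done with the ping module (a minimal sketch, assuming the inventory file we will create further down):

ansible all -i inventario/servidores.txt -m ping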

Installing Ansible

We start at the beginning, by checking that Ansible is installed and which version we have:

[root ~]# rpm -qa |grep -i ansible
ansible-2.3.1.0-3.el7.noarch
[root~]#

[hpddpers~]$ ansible --version
ansible 2.3.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
python version = 2.7.5 (default, May 3 2017, 07:55:04) [GCC 4.8.5 20150623 (Red Hat 4.8.5-14)]
[hpddpers~]$
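If the package were not installed, on RHEL/CentOS 7 it can usually be pulled in with yum (a sketch, assuming the EPEL repository is enabled):

yum install ansible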

Configuring Ansible

Since every operating-system user can use Ansible, we can create our own personalized configuration. So I create an "ansible" directory in my home and copy the file /etc/ansible/ansible.cfg into it:

[[email protected]]$ pwd
/root/home/hpddpers/ansible
[hpddpersansible]$ ll ansible.cfg
-rw-r--r--. 1 hpddpers uxsup3 18186 Aug 24 10:08 ansible.cfg
[[email protected]]$
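A minimal sketch of the steps just described, using the paths from the listing:

mkdir ~/ansible
cp /etc/ansible/ansible.cfg ~/ansible/
cd ~/ansible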

We customize some directives in this file:

[[email protected]]$ grep -v "#" ansible.cfg |grep -v ^$
[defaults]
inventory = inventario/ --> Directory where I will keep the configuration of the servers I am going to work with remotely
roles_path = /etc/ansible/roles:/usr/share/ansible/roles
host_key_checking = False
timeout = 60
log_path = ./ansible.log --> Log file where the Ansible activity run with my user will be recorded
retry_files_save_path = retry --> Directory where Ansible will record the servers on which the executed tasks failed
[privilege_escalation]
[paramiko_connection]
[ssh_connection]
[persistent_connection]
connect_timeout = 30
connect_retries = 30
connect_interval = 1
[accelerate]
[selinux]
[colors]
[diff]
[[email protected]]$

[[email protected]]$ pwd
/root/home/hpddpers/ansible
[[email protected]]$ ll
total 160
-rw-r--r--. 1 hpddpers uxsup3 18186 Aug 24 10:08 ansible.cfg
-rw-r--r--. 1 hpddpers uxsup3 124595 Aug 24 11:35 ansible.log
-rw-r--r--. 1 hpddpers uxsup3 650 Aug 24 10:43 createuser.yaml
-rw-r--r--. 1 hpddpers uxsup3 348 Aug 24 10:45 deleteuser.yaml
drwxr-xr-x. 2 hpddpers uxsup3 51 Aug 24 10:48 inventario
-rwxr--r--. 1 hpddpers uxsup3 257 Aug 24 10:11 lista_groups.sh
-rwxr--r--. 1 hpddpers uxsup3 210 Aug 24 10:10 lista_hosts.sh
drwxr-xr-x. 2 hpddpers uxsup3 30 Aug 24 10:43 retry
[[email protected]]$
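Keep in mind that Ansible loads the first configuration file it finds: the ANSIBLE_CONFIG environment variable, then ansible.cfg in the current directory, then ~/.ansible.cfg, and finally /etc/ansible/ansible.cfg. To force this personal copy from any working directory, a sketch:

export ANSIBLE_CONFIG=$HOME/ansible/ansible.cfg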

Ansible access to remote servers

We create our first file with the servers that we are going to reach from Ansible:

[[email protected]]$ pwd
/root/home/hpddpers/ansible/inventario
[[email protected]]$ cat servidores.txt
# Servers that belong to the TEST group
[TEST]
# If DNS does not resolve the server names, we can specify their IPs
server1 ansible_host=30.34.79.13
server2 ansible_host=30.34.79.33

# Global variables for all the servers in the TEST group
[TEST:vars]
ansible_password=Mi_Contraseña

#ansible_user=Mi_usuario_de_sistema
# No password is needed if a trust relationship has been configured beforehand
#ansible_ssh_pass=XXXX
# We configure root's credentials so that Ansible runs the tasks as that user
ansible_become=True
ansible_become_method=su
ansible_become_user=root
ansible_become_pass=Contraseña_del_usuario_root
[[email protected]]$

We list the servers we have configured so that Ansible can act on them:

[[email protected]]$ ansible all -i inventario/servidores.txt --list-hosts

hosts (2):
server1
server2
[[email protected]]$

The same, but at group level:

[[email protected]]$ ansible TEST -i inventario/servidores.txt  -m debug -a var=group_names
server1 | SUCCESS => {
"group_names": [
"TEST"
]
}
server2 | SUCCESS => {
"group_names": [
"TEST"
]
}

We can create a second server file:

[[email protected]]$ cat inventario/servidores2.txt
[WS]
servidor1
servidor2

[BD]
bd1
servidor1

[AS]
servidor2
servidor3
[[email protected]]$

And list the servers that belong to the WS group, to the union of WS and AS, or to the intersection of both groups, for example:

[[email protected]]$ ansible all -i inventario/servidores2.txt --list-hosts --limit "WS"
hosts (2):
servidor1
servidor2
[[email protected]]$ ansible all -i inventario/servidores2.txt --list-hosts --limit "WS:AS"
hosts (3):
servidor1
servidor2
servidor3
[[email protected]]$ ansible all -i inventario/servidores2.txt --list-hosts --limit "WS:&AS"
hosts (1):
servidor2
[[email protected]]$
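Patterns also support exclusion with "!". For example, limiting to the WS hosts that are not in BD would drop servidor1, leaving only servidor2 (a sketch with the same inventory; note the single quotes so the shell does not expand "!"):

ansible all -i inventario/servidores2.txt --list-hosts --limit 'WS:!BD'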

Running commands on remote servers with Ansible

Let's run the "id" command on the servers of the TEST group (I have disabled becoming root in the configuration file). We use ad-hoc mode:

[[email protected]]$ ansible TEST -i inventario/servidores.txt -a id
server1 | SUCCESS | rc=0 >>
uid=8402895(hpddpers) gid=45005(uxsup3) groups=45005(uxsup3) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

server2| SUCCESS | rc=0 >>
uid=8402895(hpddpers) gid=45005(uxsup3) groups=45005(uxsup3)

Let's run a more complex command, with pipes. For that, we use the shell module:

[[email protected]]$ ansible TEST -i inventario/servidores.txt -m shell -a "ls -la |grep hpddpers"
server1| SUCCESS | rc=0 >>
drwx------. 4 hpddpers uxsup3 4096 Aug 24 10:19 .
drwx------ 3 hpddpers uxsup3 4096 Aug 24 10:19 .ansible
-rw-------. 1 hpddpers uxsup3 1315 Jul 19 09:30 .bash_history
-rw-r--r--. 1 hpddpers uxsup3 18 Sep 26 2014 .bash_logout
-rw-r--r--. 1 hpddpers uxsup3 176 Sep 26 2014 .bash_profile
-rw-r--r--. 1 hpddpers uxsup3 124 Sep 26 2014 .bashrc
drwx------. 2 hpddpers uxsup3 4096 Aug 24 06:15 .ssh

server2| SUCCESS | rc=0 >>
drwx------. 4 hpddpers uxsup3 4096 Aug 24 10:19 .
drwx------. 3 hpddpers uxsup3 4096 Aug 24 10:19 .ansible
-rw-------. 1 hpddpers uxsup3 1866 Jun 19 08:28 .bash_history
-rw-r--r--. 1 hpddpers uxsup3 18 Jan 27 2011 .bash_logout
-rw-r--r--. 1 hpddpers uxsup3 176 Jan 27 2011 .bash_profile
-rw-r--r--. 1 hpddpers uxsup3 124 Jan 27 2011 .bashrc
drwx------. 2 hpddpers uxsup3 4096 Aug 24 06:15 .ssh
-rw-------. 1 hpddpers uxsup3 754 May 11 2015 .viminfo

I re-enable the root credentials to create a user on the two servers of the TEST group:

[[email protected]]$ ansible TEST -i inventario/servidores.txt -m user -a "name=kk state=present"
server1| SUCCESS => {
"changed": true, --> Means that Ansible made a change on the server to carry out this task
"comment": "",
"createhome": true,
"group": 52835,
"home": "/home/kk",
"name": "kk",
"shell": "/bin/bash",
"state": "present",
"system": false,
"uid": 52835
}
server2| SUCCESS => {
"changed": true,
"comment": "",
"createhome": true,
"group": 52834,
"home": "/home/kk",
"name": "kk",
"shell": "/bin/bash",
"state": "present",
"system": false,
"uid": 52834
}
[[email protected]]$

If we run the command again, we will see that it no longer makes any change, since it was already done earlier:

[[email protected]]$ ansible TEST -i inventario/servidores.txt -m user -a "name=kk state=present"
server1| SUCCESS => {
"append": false,
"changed": false,
"comment": "",
"group": 52835,
"home": "/home/kk",
"move_home": false,
"name": "kk",
"shell": "/bin/bash",
"state": "present",
"uid": 52835
}
server2| SUCCESS => {
"append": false,
"changed": false,
"comment": "",
"group": 52834,
"home": "/home/kk",
"move_home": false,
"name": "kk",
"shell": "/bin/bash",
"state": "present",
"uid": 52834
}
[[email protected]]$

We can verify that the task was carried out, either by logging into the servers or remotely from Ansible:

[[email protected] ~]# id kk
uid=52834(kk) gid=52834(kk) groups=52834(kk)
[[email protected] ~]#

[[email protected]]$ ansible TEST -a "id kk"
la02sui1 | SUCCESS | rc=0 >>
uid=52835(kk) gid=52835(kk) groups=52835(kk)

la02sui0 | SUCCESS | rc=0 >>
uid=52834(kk) gid=52834(kk) groups=52834(kk)
[[email protected]]$

We delete the user created earlier:

[[email protected]]$ ansible TEST -m user -a "name=kk state=absent"
la02sui0 | SUCCESS => {
"changed": true,
"force": false,
"name": "kk",
"remove": false,
"state": "absent"
}
la02sui1 | SUCCESS => {
"changed": true,
"force": false,
"name": "kk",
"remove": false,
"state": "absent"
}
[[email protected]]$

[[email protected]]$ ansible TEST -a "id kk"
la02sui1 | FAILED | rc=1 >>
id: kk: No such user

la02sui0 | FAILED | rc=1 >>
id: kk: No such user

[[email protected]]$

Running shell commands

We can run shell (bash) commands from the Ansible command line. Example:

[[email protected] Ansible]# ansible TEST -i inventario/test.txt -m shell -a 'echo $TERM'
server2 | CHANGED | rc=0 >>
xterm

[[email protected] Ansible]#

Running a local script on a remote server

If we have written a local bash script, we can run it in bulk on remote servers with Ansible without copying it to each of them manually. We do it like this:

Playbook source code

[[email protected] Ansible]# cat playbooks/check_multipath.yml
- hosts: TEST
  tasks:
     - name: Chequeo multipath
       script: /planific/bin/admsys/check_multipath.sh
       register: out
       tags:
       - checkmultipath

     - debug: var=out.stdout_lines
     - local_action: copy content={{ out.stdout }} dest="/planific/bin/admsys/Ansible/log/check_multipath.{{ inventory_hostname }}.out"
       tags:
       - checkmultipath
[[email protected] Ansible]#

Running the playbook with the local script

[[email protected] Ansible]# ansible-playbook playbooks/check_multipath.yml -i inventario/test.txt -v
Using /planific/bin/admsys/Ansible/ansible.cfg as config file
/planific/bin/admsys/Ansible/inventario/test.txt did not meet host_list requirements, check plugin documentation if this is unexpected
/planific/bin/admsys/Ansible/inventario/test.txt did not meet script requirements, check plugin documentation if this is unexpected
/planific/bin/admsys/Ansible/inventario/test.txt did not meet yaml requirements, check plugin documentation if this is unexpected

PLAY [TEST] **************************************************************************************************************************************************************************************

TASK [Gathering Facts] ***************************************************************************************************************************************************************************
ok: [server1.ecs.hp.com]
ok: [server2]

TASK [Chequeo multipath] *************************************************************************************************************************************************************************
changed: [server1] => {"changed": true, "rc": 0, "stderr": "Shared connection to server1 closed.\r\n", "stderr_lines": ["Shared connection to server1 closed."], "stdout": "server1;0 paths failed\r\nserver1;54 paths active\r\nserver1;Red Hat Enterprise Linux Server release 6.9 (Santiago)\r\nserver1;2.6.32-696.20.1.el6.x86_64\r\nserver1;Device Mapper - Library version:   1.02.117-RHEL6 (2016-12-13)\r\nserver1;Device Mapper - Driver version:    4.33.1\r\nserver1;HBA 05:00.0 - QLogic Fibre Channel HBA Driver\r\nserver1;HBA 05:00.0 - Driver version: 8.07.00.08.06.0-k\r\nserver1;HBA 05:00.1 - QLogic Fibre Channel HBA Driver\r\nserver1;HBA 05:00.1 - Driver version: 8.07.00.08.06.0-k\r\n", "stdout_lines": ["server1;0 paths failed", "server1;54 paths active", "server1;Red Hat Enterprise Linux Server release 6.9 (Santiago)", "server1;2.6.32-696.20.1.el6.x86_64", "server1;Device Mapper - Library version:   1.02.117-RHEL6 (2016-12-13)", "server1;Device Mapper - Driver version:    4.33.1", "server1;HBA 05:00.0 - QLogic Fibre Channel HBA Driver", "server1;HBA 05:00.0 - Driver version: 8.07.00.08.06.0-k", "server1;HBA 05:00.1 - QLogic Fibre Channel HBA Driver", "server1;HBA 05:00.1 - Driver version: 8.07.00.08.06.0-k"]}
changed: [server2] => {"changed": true, "rc": 0, "stderr": "Shared connection to server2 closed.\r\n", "stderr_lines": ["Shared connection to server2 closed."], "stdout": "server2;0 paths failed\r\nserver2;57 paths active\r\nserver2;Red Hat Enterprise Linux Server release 6.9 (Santiago)\r\nserver2;2.6.32-696.20.1.el6.x86_64\r\nserver2;Device Mapper - Library version:   1.02.117-RHEL6 (2016-12-13)\r\nserver2;Device Mapper - Driver version:    4.33.1\r\nserver2;HBA 05:00.0 - QLogic Fibre Channel HBA Driver\r\nserver2;HBA 05:00.0 - Driver version: 8.07.00.08.06.0-k\r\nserver2;HBA 05:00.1 - QLogic Fibre Channel HBA Driver\r\nserver2;HBA 05:00.1 - Driver version: 8.07.00.08.06.0-k\r\n", "stdout_lines": ["server2;0 paths failed", "server2;57 paths active", "server2;Red Hat Enterprise Linux Server release 6.9 (Santiago)", "server2;2.6.32-696.20.1.el6.x86_64", "server2;Device Mapper - Library version:   1.02.117-RHEL6 (2016-12-13)", "server2;Device Mapper - Driver version:    4.33.1", "server2;HBA 05:00.0 - QLogic Fibre Channel HBA Driver", "server2;HBA 05:00.0 - Driver version: 8.07.00.08.06.0-k", "server2;HBA 05:00.1 - QLogic Fibre Channel HBA Driver", "server2;HBA 05:00.1 - Driver version: 8.07.00.08.06.0-k"]}

TASK [debug] *************************************************************************************************************************************************************************************
ok: [server1] => {
    "out.stdout_lines": [
        "server1;0 paths failed",
        "server1;54 paths active",
        "server1;Red Hat Enterprise Linux Server release 6.9 (Santiago)",
        "server1;2.6.32-696.20.1.el6.x86_64",
        "server1;Device Mapper - Library version:   1.02.117-RHEL6 (2016-12-13)",
        "server1;Device Mapper - Driver version:    4.33.1",
        "server1;HBA 05:00.0 - QLogic Fibre Channel HBA Driver",
        "server1;HBA 05:00.0 - Driver version: 8.07.00.08.06.0-k",
        "server1;HBA 05:00.1 - QLogic Fibre Channel HBA Driver",
        "server1;HBA 05:00.1 - Driver version: 8.07.00.08.06.0-k"
    ]
}
ok: [server2] => {
    "out.stdout_lines": [
        "server2;0 paths failed",
        "server2;57 paths active",
        "server2;Red Hat Enterprise Linux Server release 6.9 (Santiago)",
        "server2;2.6.32-696.20.1.el6.x86_64",
        "server2;Device Mapper - Library version:   1.02.117-RHEL6 (2016-12-13)",
        "server2;Device Mapper - Driver version:    4.33.1",
        "server2;HBA 05:00.0 - QLogic Fibre Channel HBA Driver",
        "server2;HBA 05:00.0 - Driver version: 8.07.00.08.06.0-k",
        "server2;HBA 05:00.1 - QLogic Fibre Channel HBA Driver",
        "server2;HBA 05:00.1 - Driver version: 8.07.00.08.06.0-k"
    ]
}

TASK [copy] **************************************************************************************************************************************************************************************
changed: [server2 -> localhost] => {"changed": true, "checksum": "879550a71645dd447bb0d0f44fb07ee2c140cb26", "dest": "/planific/bin/admsys/Ansible/log/check_multipath.server2.out", "gid": 0, "group": "root", "md5sum": "2b095c9b763c576df7cfe8d2a932ca3d", "mode": "0644", "owner": "root", "size": 506, "src": "/root/.ansible/tmp/ansible-tmp-1558444377.09-156345710637833/source", "state": "file", "uid": 0}
changed: [server1 -> localhost] => {"changed": true, "checksum": "2d7e11c889851e60abc539c059644ac8f0711345", "dest": "/planific/bin/admsys/Ansible/log/check_multipath.server1.out", "gid": 0, "group": "root", "md5sum": "8843d6629b7f0e00cd8aa544f6480232", "mode": "0644", "owner": "root", "size": 506, "src": "/root/.ansible/tmp/ansible-tmp-1558444376.75-97728370581421/source", "state": "file", "uid": 0}

PLAY RECAP ***************************************************************************************************************************************************************************************
server1 : ok=4    changed=2    unreachable=0    failed=0
server2 : ok=4    changed=2    unreachable=0    failed=0

[[email protected] Ansible]# ll log
total 175
-rw-r--r-- 1 root root 174232 May 21 15:12 ansible.log
-rw-r--r-- 1 root root    506 May 21 15:12 check_multipath.server1.out
-rw-r--r-- 1 root root    506 May 21 15:12 check_multipath.server2.out
[[email protected] Ansible]#

Script output

One file is generated per server.

[[email protected] Ansible]# cat log/check_multipath.server1.out
server1;0 paths failed
server1;54 paths active
server1;Red Hat Enterprise Linux Server release 6.9 (Santiago)
server1;2.6.32-696.20.1.el6.x86_64
server1;Device Mapper - Library version:   1.02.117-RHEL6 (2016-12-13)
server1;Device Mapper - Driver version:    4.33.1
server1;HBA 05:00.0 - QLogic Fibre Channel HBA Driver
server1;HBA 05:00.0 - Driver version: 8.07.00.08.06.0-k
server1;HBA 05:00.1 - QLogic Fibre Channel HBA Driver
server1;HBA 05:00.1 - Driver version: 8.07.00.08.06.0-k
[[email protected] Ansible]#

What is an Ansible playbook?

Ansible ships with a great many ready-made modules for all kinds of tasks, and playbooks are, basically, YAML files that invoke those modules with a series of variables. For example, the documentation of the user module can be found at this link: https://docs.ansible.com/ansible/latest/modules/user_module.html#user-module. Further down we will create a user with a playbook.
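To get a feel for the structure before the examples below, this is roughly the smallest possible playbook using the user module (a minimal sketch, not one of the files from this article; minimal.yml is a hypothetical file name):

- hosts: TEST
  tasks:
  - name: Ensure the kk user exists
    user:
      name: kk
      state: present

It would run with: ansible-playbook -i inventario/servidores.txt minimal.yml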

Running a command on a remote server from a playbook

Just as we ran a command on a remote server with Ansible by invoking the shell, we can also do it from a playbook. Let's see an example:

Playbook source code

[[email protected] Ansible]# cat playbooks/send_command.yml
- hosts: "{{ HOSTS }}"
  tasks:
  - name: Ejecutar comando
    command: id
    register: myid
[[email protected] Ansible]#
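The command output is captured in the myid variable through register, although this playbook never prints it. To display it, a debug task could be appended (a sketch; the run below does not include it):

  - name: Mostrar salida
    debug:
      var: myid.stdout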

Execution

[[email protected] Ansible]# ansible-playbook --extra-vars "HOSTS=lhpilox01" -i inventario/david playbooks/send_command.yml -vv
 [WARNING] Ansible is being run in a world writable directory (/planific/bin/admsys/Ansible), ignoring it as an ansible.cfg source. For more information see https://docs.ansible.com/ansible/devel/reference_appendices/config.html#cfg-in-world-writable-dir
ansible-playbook 2.7.9
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.6/site-packages/ansible
  executable location = /usr/bin/ansible-playbook
  python version = 2.6.6 (r266:84292, Aug  9 2016, 06:11:56) [GCC 4.4.7 20120313 (Red Hat 4.4.7-17)]
Using /etc/ansible/ansible.cfg as config file
/planific/bin/admsys/Ansible/inventario/david did not meet host_list requirements, check plugin documentation if this is unexpected

PLAYBOOK: send_command.yml ***********************************************************************************************************************************************************************
1 plays in playbooks/send_command.yml

PLAY [lhpilox01] *********************************************************************************************************************************************************************************

TASK [Gathering Facts] ***************************************************************************************************************************************************************************
task path: /planific/bin/admsys/Ansible/playbooks/send_command.yml:1
ok: [lhpilox01]
META: ran handlers

TASK [Ejecutar comando] **************************************************************************************************************************************************************************
task path: /planific/bin/admsys/Ansible/playbooks/send_command.yml:3
changed: [lhpilox01] => {"changed": true, "cmd": ["id"], "delta": "0:00:00.004737", "end": "2019-09-23 08:46:49.044521", "rc": 0, "start": "2019-09-23 08:46:49.039784", "stderr": "", "stderr_lines": [], "stdout": "uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel)", "stdout_lines": ["uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel)"]}
META: ran handlers
META: ran handlers

PLAY RECAP ***************************************************************************************************************************************************************************************
lhpilox01                  : ok=2    changed=1    unreachable=0    failed=0

[[email protected] Ansible]#

Sending a command with root credentials

Through Ansible's parameters we can also send the credentials needed to become root and run the command as if we were root. We will use the previous playbook, but this time we will also parameterize the command to send, and then run it with the corresponding credentials:

Playbook source code

[[email protected] Ansible]# cat playbooks/send_command.yml
- hosts: "{{ HOSTS }}"
  tasks:
  - name: Ejecutar comando
    command: "{{ COMANDO }}"
    register: myid
[[email protected] Ansible]#

Execution

[[email protected] Ansible]# ansible-playbook --extra-vars "HOSTS=lhpilox01 ansible_user=hpddpers ansible_password=ContraseñaSecretaDelUsuario ansible_become=True ansible_become_method=su ansible_become_user=root ansible_become_pass=ContraseñaSecretaDelUsuarioRoot COMANDO=id" -i inventario/david playbooks/send_command.yml -vv
 [WARNING] Ansible is being run in a world writable directory (/planific/bin/admsys/Ansible), ignoring it as an ansible.cfg source. For more information see https://docs.ansible.com/ansible/devel/reference_appendices/config.html#cfg-in-world-writable-dir
ansible-playbook 2.7.9
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.6/site-packages/ansible
  executable location = /usr/bin/ansible-playbook
  python version = 2.6.6 (r266:84292, Aug  9 2016, 06:11:56) [GCC 4.4.7 20120313 (Red Hat 4.4.7-17)]
Using /etc/ansible/ansible.cfg as config file
/planific/bin/admsys/Ansible/inventario/david did not meet host_list requirements, check plugin documentation if this is unexpected

PLAYBOOK: send_command.yml ***********************************************************************************************************************************************************************
1 plays in playbooks/send_command.yml

PLAY [lhpilox01] *********************************************************************************************************************************************************************************

TASK [Gathering Facts] ***************************************************************************************************************************************************************************
task path: /planific/bin/admsys/Ansible/playbooks/send_command.yml:1
ok: [lhpilox01]
META: ran handlers

TASK [Ejecutar comando] **************************************************************************************************************************************************************************
task path: /planific/bin/admsys/Ansible/playbooks/send_command.yml:3
changed: [lhpilox01] => {"changed": true, "cmd": ["id"], "delta": "0:00:00.005444", "end": "2019-09-23 09:41:42.869148", "rc": 0, "start": "2019-09-23 09:41:42.863704", "stderr": "", "stderr_lines": [], "stdout": "uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel)", "stdout_lines": ["uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel)"]}
META: ran handlers
META: ran handlers

PLAY RECAP ***************************************************************************************************************************************************************************************
lhpilox01                  : ok=2    changed=1    unreachable=0    failed=0

[[email protected] Ansible]#

As we can see in the playbook's execution parameters, we are passing the root user's credentials and becoming root through su. Here is the command again:

ansible-playbook --extra-vars "HOSTS=lhpilox01 ansible_user=hpddpers ansible_password=ContraseñaSecretaDelUsuario ansible_become=True ansible_become_method=su ansible_become_user=root ansible_become_pass=ContraseñaSecretaDelUsuarioRoot COMANDO=id" -i inventario/david playbooks/send_command.yml -vv

Prompting for the remote user's password on the command line

Test inventory

I will use the following server inventory for this test:

[[email protected] Ansible]# cat inventory/david
[CENTRALIZADOR]
lhpilox01

[PRUEBAS]
lansibd0
[[email protected] Ansible]#

Playbook contents

  • become: yes --> used to become root
  • If we set become_user, we specify the user we want to become.
[[email protected] Ansible]# cat /planific/bin/admsys/Ansible/playbooks/post-provisioning/RHEL7/limits.yml
- hosts: "{{ HOSTS }}"
  tasks:
  - name: Generar fichero limits.conf
    template:
       src: /planific/bin/admsys/Ansible/playbooks/post-provisioning/RHEL7/config_templates/limits.conf
       dest: /etc/security/limits.conf
       mode: '0644'
       backup: yes
    become: yes
    #become_user: apache
[[email protected] Ansible]#

Running the playbook

  • -e "HOSTS=PRUEBAS" --> Server group defined in my inventory
  • --user hpddpers --> My HPSA user
  • --ask-pass --> Ask me for my password
  • --become-method su --> We will become another user via the "su" command
  • --ask-become-pass --> Ask for the password of the user we are becoming
  • -i inventory/david --> Test server inventory I am using
  • /planific/bin/admsys/Ansible/playbooks/post-provisioning/RHEL7/limits.yml -vv --> Playbook to run
[[email protected] Ansible]# ansible-playbook -e "HOSTS=PRUEBAS" --user hpddpers --ask-pass --become-method su --ask-become-pass -i inventory/david /planific/bin/admsys/Ansible/playbooks/post-provisioning/RHEL7/limits.yml -vv
[WARNING] Ansible is being run in a world writable directory (/planific/bin/admsys/Ansible), ignoring it as an ansible.cfg source. For more information see https://docs.ansible.com/ansible/devel/reference_appendices/config.html#cfg-in-world-writable-dir
ansible-playbook 2.7.9
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.6/site-packages/ansible
  executable location = /usr/bin/ansible-playbook
  python version = 2.6.6 (r266:84292, Aug  9 2016, 06:11:56) [GCC 4.4.7 20120313 (Red Hat 4.4.7-17)]
Using /etc/ansible/ansible.cfg as config file
SSH password:
SU password[defaults to SSH password]:
/planific/bin/admsys/Ansible/inventory/david did not meet host_list requirements, check plugin documentation if this is unexpected

PLAYBOOK: limits.yml *****************************************************************************************************************************************************************************
1 plays in /planific/bin/admsys/Ansible/playbooks/post-provisioning/RHEL7/limits.yml

PLAY [PRUEBAS] ***********************************************************************************************************************************************************************************

TASK [Gathering Facts] ***************************************************************************************************************************************************************************
task path: /planific/bin/admsys/Ansible/playbooks/post-provisioning/RHEL7/limits.yml:1
ok: [lansibd0]
META: ran handlers

TASK [Generar fichero limits.conf] ***************************************************************************************************************************************************************
task path: /planific/bin/admsys/Ansible/playbooks/post-provisioning/RHEL7/limits.yml:7
changed: [lansibd0] => {"backup_file": "/etc/security/[email protected]:21:44~", "changed": true, "checksum": "126e109a76ba260a26f72c524ce582e75fec2017", "dest": "/etc/security/limits.conf", "gid": 0, "group": "root", "md5sum": "f832f75581b59d21122df855afb20aa0", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 2518, "src": "/root/home/hpddpers/.ansible/tmp/ansible-tmp-1570094502.03-197689005320199/source", "state": "file", "uid": 0}
META: ran handlers
META: ran handlers

PLAY RECAP ***************************************************************************************************************************************************************************************
lansibd0                   : ok=2    changed=1    unreachable=0    failed=0

[[email protected] Ansible]#

Creating and deleting users

We create the YAML file:

[[email protected]]$ cat createuser.yaml
- name: Prueba de creacion de usuario
  #hosts: all
  hosts: TEST
  #hosts: la02sui0
  gather_facts: false
  vars:
    kk_group: kk
    kk_group_id: 8481
    kk_user: kk
    kk_user_id: 8497
    kk_passwd: Pass8497ISIM
  tasks:
  - name: Creo group kk
    group:
      name: "{{ kk_group }}"
      gid: "{{ kk_group_id }}"
      state: present
  - name: Creo user kk
    user:
      name: "{{ kk_user }}"
      uid: "{{ kk_user_id }}"
      group: "{{ kk_group }}"
      password: "{{ kk_passwd |password_hash('sha512') }}"
      state: present
      update_password: on_create # Only set the password when the user is created

We run the playbook to create the user:

[[email protected]]$ ansible-playbook createuser.yaml
PLAY [Prueba de creacion de usuario] *********************************************************************************************************************

TASK [Creo group kk] *************************************************************************************************************************************
changed: [la02sui1]
changed: [la02sui0]

TASK [Creo user kk] **************************************************************************************************************************************
changed: [la02sui1]
changed: [la02sui0]

PLAY RECAP ***********************************************************************************************************************************************
server1: ok=2 changed=2 unreachable=0 failed=0
server2: ok=2 changed=2 unreachable=0 failed=0

We create the playbook to delete the user:

[[email protected]]$ cat deleteuser.yaml
- name: Prueba de creacion de usuario
  #hosts: all
  hosts: TEST
  #hosts: server1
  gather_facts: false
  vars:
    kk_group: kk
    kk_user: kk
  tasks:
  - name: Elimino user kk
    user:
      name: "{{ kk_user }}"
      state: absent
  - name: Elimino group kk
    group:
      name: "{{ kk_group }}"
      state: absent

And we run it:

[[email protected]]$ ansible-playbook deleteuser.yaml
PLAY [Prueba de creacion de usuario] *********************************************************************************************************************

TASK [Elimino user kk] ***********************************************************************************************************************************
changed: [la02sui1]
changed: [la02sui0]

TASK [Elimino group kk] **********************************************************************************************************************************
ok: [la02sui1]
ok: [la02sui0]

PLAY RECAP ***********************************************************************************************************************************************
server1: ok=2 changed=1 unreachable=0 failed=0
server2 : ok=2 changed=1 unreachable=0 failed=0

Ansible also provides internal variables (facts) that we can use inside the YAML files. We can dump them with the command: ansible TEST -m setup
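For example, a task can reference the distribution facts directly, as long as gather_facts is not disabled (a minimal sketch; ansible_distribution and ansible_distribution_version are two of the variables returned by setup):

- hosts: TEST
  tasks:
  - name: Mostrar la distribucion
    debug:
      msg: "{{ ansible_distribution }} {{ ansible_distribution_version }}"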

Appending a line to a text file

[[email protected] Ansible]# cat playbooks/add_text_to_file.yml
- hosts: TEST
  tasks:
  - lineinfile:
     path: /tmp/david.txt
     line: 'Línea 3'
[[email protected] Ansible]#

Validation

As we can see, the line "Línea 3" has been appended at the end of the text file.

Original file:
[[email protected] ~]# cat david.txt
Línea 1
Línea 2
[[email protected] ~]#

File after running the playbook:
[[email protected] tmp]# cat david.txt
Línea 1
Línea 2
Línea 3
[[email protected] tmp]#

Replacing a text string

Now we are going to replace "Línea 3" with "Línea 4".

[[email protected] Ansible]# cat playbooks/replace_text.yml
- hosts: TEST
  tasks:
  - lineinfile:
     path: /tmp/david.txt
     regexp: 'Línea 3'
     line: 'Línea 4'
[[email protected] Ansible]#

Inserting a line of text before another line

We are going to insert the text "Línea 3" before the line "Línea 4".

[[email protected] Ansible]# cat playbooks/add_text_to_file_before.yml
- hosts: TEST
  tasks:
  - lineinfile:
     path: /tmp/david.txt
     insertbefore: '^Línea 4'
     line: 'Línea 3'
[[email protected] Ansible]#

Appending a line of text at the end of the file

To append a line of text at the end of a file, we use the "insertafter: EOF" directive inside the playbook:

[[email protected] Ansible]# cat playbooks/post-provisioning/RHEL7/HIST.yml
- hosts: "{{ HOSTS }}"
  tasks:
  - lineinfile:
     path: /etc/profile
     line: "export HISTTIMEFORMAT='%F %T '"
     insertafter: EOF
     state: present
  become: yes
[[email protected] Ansible]#

Removing text

We remove the text "Línea 4".

[[email protected] Ansible]# cat playbooks/remove_text.yml
- hosts: TEST
  tasks:
  - lineinfile:
     path: /tmp/david.txt
     regexp: 'Línea 4'
     state: absent
[[email protected] Ansible]#

Inserting multiple lines into a text file

Earlier we inserted a single line into a text file, but if we want to add more than one, we use blockinfile instead of lineinfile. Let's see an example:

[[email protected] RHEL7]# cat logrotate.yml
- hosts: "{{ HOSTS }}"
  tasks:
  - name: Copiar la plantilla de rotacion de logs de Sistemas UNIX
    template:
       src: /planific/bin/admsys/Ansible/playbooks/post-provisioning/RHEL7/config_templates/LOGROTATE/script/logrotate.sh
       dest: /planific/bin/admsys/logrotate.sh
       mode: '0700'
       backup: no
  - name: Configuracion de cron
    blockinfile:
      path: /var/spool/cron/root
      marker: "# ANSIBLE - Mantenimiento logrotate.status"
      block: |
         # Mantenimiento del archivo /var/lib/logrotate.status
         00 00 1 * * /planific/bin/admsys/logrotate.sh >/dev/null 2>&1
  become: yes
[[email protected] RHEL7]#

If we run the playbook, we will see that the following entries have been added at the end of the file:

[[email protected] ~]# crontab -l |tail -6
# ANSIBLE - Mantenimiento logrotate.status
# Mantenimiento del archivo /var/lib/logrotate.status
00 00 1 * * /planific/bin/admsys/logrotate.sh >/dev/null 2>&1
# ANSIBLE - Mantenimiento logrotate.status
[[email protected] ~]#
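Notice that the same comment appears twice: because the custom marker does not include the {mark} placeholder, blockinfile writes an identical line as both the opening and closing marker. Including {mark} makes the two boundaries distinguishable; a sketch:

      marker: "# {mark} ANSIBLE - Mantenimiento logrotate.status"

which would render as "# BEGIN ANSIBLE - Mantenimiento logrotate.status" and "# END ANSIBLE - Mantenimiento logrotate.status".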

Running playbooks with command-line parameters

It is very common to create an Ansible template with variables whose values are picked up from the command line. For that, we use Jinja2 syntax and, as an example, we will write to a text file, passing its content as a command-line parameter:

Playbook source code

[[email protected] Ansible]# cat playbooks/add_text_to_file_var.yml
- hosts: lhpilox01
  tasks:
  - lineinfile:
     path: /tmp/david.txt
     line: "{{ texto }}"
[[email protected] Ansible]#

The {{ texto }} variable is the one we will pass as a parameter on the command line when we run the playbook.

Running the playbook

[[email protected] Ansible]# ansible-playbook --extra-vars "texto=David" -i inventario/test.txt playbooks/add_text_to_file_var.yml -v
Using /planific/bin/admsys/Ansible/ansible.cfg as config file
/planific/bin/admsys/Ansible/inventario/test.txt did not meet host_list requirements, check plugin documentation if this is unexpected
/planific/bin/admsys/Ansible/inventario/test.txt did not meet script requirements, check plugin documentation if this is unexpected
/planific/bin/admsys/Ansible/inventario/test.txt did not meet yaml requirements, check plugin documentation if this is unexpected

PLAY [lhpilox01] *********************************************************************************************************************************************************************************

TASK [Gathering Facts] ***************************************************************************************************************************************************************************
ok: [lhpilox01]

TASK [lineinfile] ********************************************************************************************************************************************************************************
changed: [lhpilox01] => {"backup": "", "changed": true, "msg": "line added"}

PLAY RECAP ***************************************************************************************************************************************************************************************
lhpilox01                  : ok=2    changed=1    unreachable=0    failed=0

[[email protected] Ansible]#

Note that we passed the value of the texto variable as a parameter, in "texto=David". As we can check, that name has been written to the file specified in the playbook:

[[email protected] Ansible]# cat /tmp/david.txt
David
[[email protected] Ansible]#

Authenticating with a user and password and running a remote command

We are going to run the "id" command on a remote server after authenticating with the user and password we pass as parameters:

Playbook source code

[[email protected] Ansible]# cat playbooks/nsu.yml
- hosts: lhpilox01
  vars:
    remote_user: var={{ user }}
    password: var={{ pass }}
  tasks:
   - name: Ejecutar id
     command: id

Execution

[[email protected] Ansible]# ansible-playbook --extra-vars "user=hpddpers pass=ContraseñaSecreta" -i inventario/test.txt playbooks/nsu.yml -v
 [WARNING] Ansible is being run in a world writable directory (/planific/bin/admsys/Ansible), ignoring it as an ansible.cfg source. For more information see https://docs.ansible.com/ansible/devel/reference_appendices/config.html#cfg-in-world-writable-dir
Using /etc/ansible/ansible.cfg as config file
/planific/bin/admsys/Ansible/inventario/test.txt did not meet host_list requirements, check plugin documentation if this is unexpected
/planific/bin/admsys/Ansible/inventario/test.txt did not meet yaml requirements, check plugin documentation if this is unexpected
 [WARNING]: Found variable using reserved name: remote_user


PLAY [lhpilox01] *********************************************************************************************************************************************************************************

TASK [Gathering Facts] ***************************************************************************************************************************************************************************
ok: [lhpilox01]

TASK [Ejecutar id] *******************************************************************************************************************************************************************************
changed: [lhpilox01] => {"changed": true, "cmd": ["id"], "delta": "0:00:00.004495", "end": "2019-09-19 11:14:57.645141", "rc": 0, "start": "2019-09-19 11:14:57.640646", "stderr": "", "stderr_lines": [], "stdout": "uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel)", "stdout_lines": ["uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel)"]}

PLAY RECAP ***************************************************************************************************************************************************************************************
lhpilox01                  : ok=2    changed=1    unreachable=0    failed=0

[[email protected] Ansible]#
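The warning "Found variable using reserved name: remote_user" is telling us that declaring remote_user and password as play vars is not the supported mechanism (the connection here actually succeeded with the credentials already configured). The documented connection variables are ansible_user and ansible_password, as we used earlier; a sketch of the equivalent invocation:

ansible-playbook --extra-vars "ansible_user=hpddpers ansible_password=ContraseñaSecreta" -i inventario/test.txt playbooks/nsu.yml -v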

Calling a playbook from inside another playbook

It is possible to call an existing playbook from inside another playbook we are writing. For example, earlier we created a playbook that replaces a line of text, and it contained variables.

Well, we are going to call that playbook from another one that will replace a line of the sshd_config file and restart the service. Let's see how it works:

Playbook that replaces a line of text

[[email protected] Ansible]# cat playbooks/replace_text.yml
- name: Reemplazar texto
  lineinfile:
    path: "{{ file }}"
    regexp: "{{ OriginalText }}"
    line: "{{ NewText }}"
[[email protected] Ansible]#

Playbook that calls the previous one and restarts the SSH service

[[email protected] Ansible]# cat playbooks/disallow_root_login.yml
- hosts: lhpilox01
  tasks:
  - name: Establecer PermitRootLogin no
    include: replace_text.yml
    vars:
       file: /etc/ssh/sshd_config
       OriginalText: "# PermitRootLogin yes"
       NewText: "PermitRootLogin no"

  - name: restart SSH
    service:
       name: sshd
       state: restarted
[[email protected] Ansible]#

As we can see, we call the first playbook with the include: replace_text.yml line and, right below it, we set the values of the variables that playbook needs to work.

Further down we restart the sshd service to apply the change. Let's look at the result of running it:

[[email protected] Ansible]# ansible-playbook playbooks/disallow_root_login.yml -i inventario/david -v
 [WARNING] Ansible is being run in a world writable directory (/planific/bin/admsys/Ansible), ignoring it as an ansible.cfg source. For more information see https://docs.ansible.com/ansible/devel/reference_appendices/config.html#cfg-in-world-writable-dir
Using /etc/ansible/ansible.cfg as config file
/planific/bin/admsys/Ansible/inventario/david did not meet host_list requirements, check plugin documentation if this is unexpected

PLAY [lhpilox01] *********************************************************************************************************************************************************************************

TASK [Gathering Facts] ***************************************************************************************************************************************************************************
ok: [lhpilox01]

TASK [Reemplazar texto] **************************************************************************************************************************************************************************
changed: [lhpilox01] => {"backup": "", "changed": true, "msg": "line replaced"}

TASK [restart SSH] *******************************************************************************************************************************************************************************
changed: [lhpilox01] => {"changed": true, "name": "sshd", "state": "started"}

PLAY RECAP ***************************************************************************************************************************************************************************************
lhpilox01                  : ok=3    changed=2    unreachable=0    failed=0

[[email protected] Ansible]#

If we want to restart a service on RedHat 7, that is, with systemctl, we configure the Ansible playbook as follows:

  - name: restart Network
    systemd:
       name: network
       state: restarted

As we can see, we replace "service" with "systemd".
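For changes like this, the idiomatic pattern is to notify a handler, so the service restarts only when the task actually changed something. A sketch of the same sshd change written that way:

- hosts: lhpilox01
  tasks:
  - name: Establecer PermitRootLogin no
    lineinfile:
       path: /etc/ssh/sshd_config
       regexp: "# PermitRootLogin yes"
       line: "PermitRootLogin no"
    notify: restart SSH
  handlers:
  - name: restart SSH
    service:
       name: sshd
       state: restarted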

Customizing the log of an Ansible playbook

If we want to save the log of an Ansible playbook to a custom path or name, we use the Linux tee command, as we can see below:

[[email protected] Ansible]# ansible-playbook --extra-vars "HOSTS=lhpilox01 ansible_user=hpddpers ansible_password=ContraseñaSecretaDelUsuario ansible_become=True ansible_become_method=su ansible_become_user=root ansible_become_pass=ContraseñaSecretaDelUsuarioRoot COMANDO=id" -i inventario/david playbooks/send_command.yml -vv |tee /tmp/send_command.log

We review the generated file:

[[email protected] Ansible]# head -10 /tmp/send_command.log
ansible-playbook 2.7.9
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.6/site-packages/ansible
  executable location = /usr/bin/ansible-playbook
  python version = 2.6.6 (r266:84292, Aug  9 2016, 06:11:56) [GCC 4.4.7 20120313 (Red Hat 4.4.7-17)]
Using /etc/ansible/ansible.cfg as config file
/planific/bin/admsys/Ansible/inventario/david did not meet host_list requirements, check plugin documentation if this is unexpected

PLAYBOOK: send_command.yml ***********************************************************************************************************************************************************************
[[email protected] Ansible]#

Working with files, directories and permissions

With Ansible we can write playbooks that copy a file or a whole directory recursively, change permissions, or unpack a tar file. Let's see some examples:

Copying a local file to a remote server

- name: Copiar el fichero ssh.tar
  copy:
     src: /planific/bin/admsys/Ansible/playbooks/post-provisioning/RHEL7/config_templates/USERLOGS/ssh.tar
     dest: /home/userlogs
     owner: userlogs
     group: userlogsgrp
     mode: '0700'

Copying a file that is already on the remote server to another location

In this case, we use the remote_src: yes directive:

- name: Configuracion de la zona horaria
  copy:
     src: /usr/share/zoneinfo/Europe/Madrid
     dest: /etc/localtime
     owner: root
     group: root
     mode: '0644'
     remote_src: yes

Changing a file's permissions

- name: Modificar permisos id_rsa userlogs
  file:
     path: /home/userlogs/.ssh/id_rsa
     owner: userlogs
     group: userlogsgrp
     mode: '0700'

Creating an empty file

- name: Crear el fichero wrapper.sendlogs.sh
  file:
     path: /home/userlogs/scripts/wrapper.sendlogs.sh
     state: touch
     owner: userlogs
     group: userlogsgrp
     mode: '0755'

Creating a directory

- name: Crear directorio .ssh
  file:
     path: /home/userlogs/.ssh
     state: directory
     owner: userlogs
     group: userlogsgrp
     mode: '0700'

Deleting a file from the remote servers

The playbook's source code would be the following:

[[email protected] Ansible]# cat playbooks/remove_file.yml
- hosts: TEST
  tasks:
  - file:
       path: /tmp/check_multipath.sh
       state: absent
[[email protected] Ansible]#

The directive in charge of deleting the file is "state: absent".

It is run just like all the other playbooks we have executed so far.

Unpacking a tar file

  - name: Descomprimir ssh.tar
    unarchive:
       src: /planific/bin/admsys/Ansible/playbooks/post-provisioning/RHEL7/config_templates/USERLOGS/ssh.tar
       dest: /home/userlogs
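If the tar file is already on the remote server, unarchive also accepts remote_src: yes, just like the copy module. A sketch (the source path is hypothetical):

  - name: Descomprimir un tar ya existente en el remoto
    unarchive:
       src: /home/userlogs/ssh.tar
       dest: /home/userlogs
       remote_src: yes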

when conditionals

When we build a program or a workflow, we usually need to set conditions. For example: run command "X" if the operating system is RedHat 6; otherwise, run command "Y" if the operating system is RedHat 7. It is the typical if/then/else of any programming language.

With Ansible we can also set conditions so that a playbook's tasks run only when they are met. Let's see an example:

tasks:
  - name: "Apagar Todos los Sistemas CentOS 6 y Debian 7"
    command: /sbin/shutdown -t now
    when: (ansible_facts['distribution'] == "CentOS" and ansible_facts['distribution_major_version'] == "6") or
          (ansible_facts['distribution'] == "Debian" and ansible_facts['distribution_major_version'] == "7")

Loops with loop

As in any programming language, we can also build for loops in Ansible. Note that the default loop variable is item; to use another name, such as numero, it has to be declared with loop_control:

tasks:
    - command: echo {{ numero }}
      loop: [ 0, 2, 4, 6, 8, 10 ]
      loop_control:
        loop_var: numero
      when: numero > 3
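Loops combine with any module, not only command. For example, creating several directories in a single task (a sketch; the paths are hypothetical):

tasks:
    - file:
        path: "{{ item }}"
        state: directory
        mode: '0755'
      loop:
        - /tmp/dir1
        - /tmp/dir2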

Using Ansible's global or internal variables

Ansible includes a series of internal or global variables (facts) that we can use in our playbooks. They look like this:

[[email protected] Ansible]# ansible server1 -i inventario/david -m setup
 [WARNING] Ansible is being run in a world writable directory (/planific/bin/admsys/Ansible), ignoring it as an ansible.cfg source. For more information see https://docs.ansible.com/ansible/devel/reference_appendices/config.html#cfg-in-world-writable-dir
server1 | SUCCESS => {
    "ansible_facts": {
        "ansible_all_ipv4_addresses": [
            "10.48.0.29",
            "30.32.0.15"
        ],
        "ansible_all_ipv6_addresses": [],
        "ansible_apparmor": {
            "status": "disabled"
        },
        "ansible_architecture": "x86_64",
        "ansible_bios_date": "09/21/2015",
        "ansible_bios_version": "6.00",
        "ansible_cmdline": {
            "KEYBOARDTYPE": "pc",
            "KEYTABLE": "us",
            "LANG": "en_US.UTF-8",
            "SYSFONT": "latarcyrheb-sun16",
            "crashkernel": "[email protected]",
            "noacpi": true,
            "noapic": true,
            "quiet": true,
            "rd_LVM_LV": "vg00/swapvol",
            "rd_NO_DM": true,
            "rd_NO_LUKS": true,
            "rd_NO_MD": true,
            "ro": true,
            "root": "/dev/mapper/vg00-rootvol"
        },
        "ansible_date_time": {
            "date": "2019-10-01",
            "day": "01",
            "epoch": "1569933506",
            "hour": "14",
            "iso8601": "2019-10-01T12:38:26Z",
            "iso8601_basic": "20191001T143826475593",
            "iso8601_basic_short": "20191001T143826",
            "iso8601_micro": "2019-10-01T12:38:26.475989Z",
            "minute": "38",
            "month": "10",
            "second": "26",
            "time": "14:38:26",
            "tz": "CEST",
            "tz_offset": "+0200",
            "weekday": "Tuesday",
            "weekday_number": "2",
            "weeknumber": "39",
            "year": "2019"
        },
        "ansible_default_ipv4": {
            "address": "10.48.0.29",
            "alias": "eth0",
            "broadcast": "10.48.0.255",
            "gateway": "10.48.0.1",
            "interface": "eth0",
            "macaddress": "00:50:56:40:05:31",
            "mtu": 1500,
            "netmask": "255.255.255.0",
            "network": "10.48.0.0",
            "type": "ether"
        },
        "ansible_default_ipv6": {},
        "ansible_device_links": {
            "ids": {
                "dm-0": [
                    "dm-name-vg00-rootvol",
                    "dm-uuid-LVM-Q4JH79dThqBCt0fg92JyJkEcPaJH02jClaowrdCyuxd6nBgFWTcTQz59zF6uym2v"
                ],
                "dm-1": [
                    "dm-name-vg00-swapvol",
                    "dm-uuid-LVM-Q4JH79dThqBCt0fg92JyJkEcPaJH02jCml1d0by8fdcXVq3G0CabrOhd7QX4WfwM"
                ],
                "dm-10": [
                    "dm-name-vg00-optvol",
                    "dm-uuid-LVM-Q4JH79dThqBCt0fg92JyJkEcPaJH02jCRSd4Cy464hzdd4nscD0XzRo2u7pDOq35"
                ],
                "dm-11": [
                    "dm-name-vg00-rhomevol",
                    "dm-uuid-LVM-Q4JH79dThqBCt0fg92JyJkEcPaJH02jCnPOnADsLze2q34yP0IXQXR9FDeyljNCD"
                ],
                "dm-12": [
                    "dm-name-vg00-tmpvol",
                    "dm-uuid-LVM-Q4JH79dThqBCt0fg92JyJkEcPaJH02jCRQgH5N2Zrsp8SEca6BgWgevv4mZYNQM3"
                ],
                "dm-13": [
                    "dm-name-vg00-varvol",
                    "dm-uuid-LVM-Q4JH79dThqBCt0fg92JyJkEcPaJH02jCRR6UkdFOnfARG4KisOuGoulT82zjNF9U"
                ],
                "dm-14": [
                    "dm-name-vg00-auditvol",
                    "dm-uuid-LVM-Q4JH79dThqBCt0fg92JyJkEcPaJH02jC9rJKMBCuWcoB8OqwcVGux4mMiMQJKOaS"
                ],
                "dm-15": [
                    "dm-name-vg00-lvstats",
                    "dm-uuid-LVM-Q4JH79dThqBCt0fg92JyJkEcPaJH02jCmth3Lt4kDU5PJEszjQV65BgBmawJkclx"
                ],
                "dm-16": [
                    "dm-name-vg00-lvplanific",
                    "dm-uuid-LVM-Q4JH79dThqBCt0fg92JyJkEcPaJH02jCnErxQJa7F60inZewhicngWY5NT6FJpbT"
                ],
                "dm-2": [
                    "dm-name-vgPostreSQL-lvbackuppostgres",
                    "dm-uuid-LVM-9O97Dnqf3HPQ1aPzwDnKkvGNehS7ZnuYv1LfXKUvMjrB3SuHBusDxTz04LNMQVdq"
                ],
                "dm-3": [
                    "dm-name-vgMySQL-lvbackupmysql",
                    "dm-uuid-LVM-635QxprwX3wn1G3Rx1bMLfr3k6OJVX2OYCjxRYRZCXKaDQqR7foGWhwpbdiI1FX8"
                ],
                "dm-4": [
                    "dm-name-vgrear-lvrear",
                    "dm-uuid-LVM-Mt4umDnrUMYdJWDjzHbnO9TkH3fbfOAHewn6LFj6Z4fiN022PyeSWBi73LPBXzDn"
                ],
                "dm-5": [
                    "dm-name-vgrear-lvopenv",
                    "dm-uuid-LVM-Mt4umDnrUMYdJWDjzHbnO9TkH3fbfOAHcC0VWc6ZCFTzfNWdD0yx7bg0zYf7sHS3"
                ],
                "dm-6": [
                    "dm-name-vgall01-lvall01",
                    "dm-uuid-LVM-5xjsuCwWWSUXb4VTo2QtvSKEzIEhzFy2kyMeTRSFGQn1CTpaUlDBTjpXSdMtmUr1"
                ],
                "dm-7": [
                    "dm-name-vgall01-lvISO",
                    "dm-uuid-LVM-5xjsuCwWWSUXb4VTo2QtvSKEzIEhzFy2BgmwtdQ4dXL1Ki0Wn00Y7O0IkYQk3I7R"
                ],
                "dm-8": [
                    "dm-name-vgall01-lvcg2html",
                    "dm-uuid-LVM-5xjsuCwWWSUXb4VTo2QtvSKEzIEhzFy2hxja23rkSNETXW5LfypLgFxiwMwHQZGy"
                ],
                "dm-9": [
                    "dm-name-vg00-homevol",
                    "dm-uuid-LVM-Q4JH79dThqBCt0fg92JyJkEcPaJH02jCj2kQZ7papqXRgyqbYLNkH3YToeS8Frga"
                ],
                "sda": [
                    "scsi-36000c29d91260c62ccb19b0dc0deeacb",
                    "wwn-0x6000c29d91260c62ccb19b0dc0deeacb"
                ],
                "sda1": [
                    "scsi-36000c29d91260c62ccb19b0dc0deeacb-part1",
                    "wwn-0x6000c29d91260c62ccb19b0dc0deeacb-part1"
                ],
                "sda2": [
                    "lvm-pv-uuid-M5Pni5-y74T-7tlf-pcmv-6Oib-52Vw-rYggnv",
                    "scsi-36000c29d91260c62ccb19b0dc0deeacb-part2",
                    "wwn-0x6000c29d91260c62ccb19b0dc0deeacb-part2"
                ],
                "sdb": [
                    "scsi-36000c292e059424c9b29c339e63789ea",
                    "wwn-0x6000c292e059424c9b29c339e63789ea"
                ],
                "sdb1": [
                    "lvm-pv-uuid-frVXS9-jwVV-djci-GhDG-jeFB-tZte-VmWbYn",
                    "scsi-36000c292e059424c9b29c339e63789ea-part1",
                    "wwn-0x6000c292e059424c9b29c339e63789ea-part1"
                ],
                "sdc": [
                    "scsi-36000c29bfc20fc8e375a32f9323158dc",
                    "wwn-0x6000c29bfc20fc8e375a32f9323158dc"
                ],
                "sdc1": [
                    "lvm-pv-uuid-0b2nTr-JtxA-3u4y-jFSv-O7db-G16u-TD9387",
                    "scsi-36000c29bfc20fc8e375a32f9323158dc-part1",
                    "wwn-0x6000c29bfc20fc8e375a32f9323158dc-part1"
                ],
                "sdd": [
                    "scsi-36000c2938ec36ca2fb75e037c30b8a84",
                    "wwn-0x6000c2938ec36ca2fb75e037c30b8a84"
                ],
                "sdd1": [
                    "lvm-pv-uuid-UhOknb-6uvu-aKRk-qt3p-oKof-E6xd-Odqza0",
                    "scsi-36000c2938ec36ca2fb75e037c30b8a84-part1",
                    "wwn-0x6000c2938ec36ca2fb75e037c30b8a84-part1"
                ],
                "sde": [
                    "scsi-36000c295ec89d6cb12074b793ef98328",
                    "wwn-0x6000c295ec89d6cb12074b793ef98328"
                ],
                "sde1": [
                    "lvm-pv-uuid-aylc2t-41iN-5ckz-bpgd-dTeP-ku2k-kNmqGT",
                    "scsi-36000c295ec89d6cb12074b793ef98328-part1",
                    "wwn-0x6000c295ec89d6cb12074b793ef98328-part1"
                ],
                "sdf": [
                    "scsi-36000c29633772a181f903ddf1c316676",
                    "wwn-0x6000c29633772a181f903ddf1c316676"
                ],
                "sdf1": [
                    "lvm-pv-uuid-dwADLJ-T68x-4dDi-PScW-B8sp-xmKT-dQNh37",
                    "scsi-36000c29633772a181f903ddf1c316676-part1",
                    "wwn-0x6000c29633772a181f903ddf1c316676-part1"
                ],
                "sdg": [
                    "scsi-36000c291a950c2984f08ae0bc633dc57",
                    "wwn-0x6000c291a950c2984f08ae0bc633dc57"
                ],
                "sdg1": [
                    "lvm-pv-uuid-pqIDtW-hj0b-X1og-qc1y-HddW-wcFS-xyhAnL",
                    "scsi-36000c291a950c2984f08ae0bc633dc57-part1",
                    "wwn-0x6000c291a950c2984f08ae0bc633dc57-part1"
                ],
                "sdh": [
                    "scsi-36000c29809f1442b56eb44baffb6f89f",
                    "wwn-0x6000c29809f1442b56eb44baffb6f89f"
                ],
                "sdi": [
                    "scsi-36000c29318bea84db794357ca3620d7b",
                    "wwn-0x6000c29318bea84db794357ca3620d7b"
                ],
                "sdi1": [
                    "lvm-pv-uuid-Hyog2W-uisc-bZLG-1Gyo-SxA3-2fJJ-lD6c0Q",
                    "scsi-36000c29318bea84db794357ca3620d7b-part1",
                    "wwn-0x6000c29318bea84db794357ca3620d7b-part1"
                ],
                "sdj": [
                    "scsi-36000c296afa076a2fbfafc3035ad1cd8",
                    "wwn-0x6000c296afa076a2fbfafc3035ad1cd8"
                ],
                "sdj1": [
                    "lvm-pv-uuid-zS8kl1-xWk0-ckA1-IaHR-d5vH-0kYo-LOkIf2",
                    "scsi-36000c296afa076a2fbfafc3035ad1cd8-part1",
                    "wwn-0x6000c296afa076a2fbfafc3035ad1cd8-part1"
                ],
                "sdk": [
                    "scsi-36000c29bb46aa96ed360ff416c9abc79",
                    "wwn-0x6000c29bb46aa96ed360ff416c9abc79"
                ],
                "sdk1": [
                    "lvm-pv-uuid-KWD76w-fCCp-OjPC-RVG1-pcuS-H1rs-zVZCST",
                    "scsi-36000c29bb46aa96ed360ff416c9abc79-part1",
                    "wwn-0x6000c29bb46aa96ed360ff416c9abc79-part1"
                ],
                "sdl": [
                    "lvm-pv-uuid-aVbMe2-WXTP-wp1u-p03J-Q0XG-mA0W-ItF9Jd",
                    "scsi-36000c2965bc8abf5231ef2c27b1692bd",
                    "wwn-0x6000c2965bc8abf5231ef2c27b1692bd"
                ],
                "sdm": [
                    "lvm-pv-uuid-51zrd1-E3Kc-kgGe-RBB0-zAg6-3EuT-ARbfcQ",
                    "scsi-36000c293acb7222949909466483f636b",
                    "wwn-0x6000c293acb7222949909466483f636b"
                ],
                "sdn": [
                    "scsi-36000c29b5f0d89b9c373f64d283963b5",
                    "wwn-0x6000c29b5f0d89b9c373f64d283963b5"
                ],
                "sdn1": [
                    "lvm-pv-uuid-Lc2L6v-yalf-ARzm-THyY-vR0y-2tVL-yQTE6T",
                    "scsi-36000c29b5f0d89b9c373f64d283963b5-part1",
                    "wwn-0x6000c29b5f0d89b9c373f64d283963b5-part1"
                ],
                "sdo": [
                    "lvm-pv-uuid-o1CQaT-vf3j-X0wb-eVz8-Bhag-xf0K-lrU5dH",
                    "scsi-36000c29ca6d8937536f8768d4bb47a12",
                    "wwn-0x6000c29ca6d8937536f8768d4bb47a12"
                ],
                "sr0": [
                    "ata-VMware_Virtual_IDE_CDROM_Drive_10000000000000000001"
                ]
            },
            "labels": {},
            "masters": {
                "sda2": [
                    "dm-0",
                    "dm-1",
                    "dm-10",
                    "dm-11",
                    "dm-12",
                    "dm-13",
                    "dm-14",
                    "dm-15",
                    "dm-16",
                    "dm-9"
                ],
                "sdb1": [
                    "dm-6"
                ],
                "sdc1": [
                    "dm-4"
                ],
                "sdd1": [
                    "dm-4"
                ],
                "sde1": [
                    "dm-3"
                ],
                "sdf1": [
                    "dm-4",
                    "dm-5"
                ],
                "sdg1": [
                    "dm-2"
                ],
                "sdi1": [
                    "dm-4"
                ],
                "sdj1": [
                    "dm-6",
                    "dm-7",
                    "dm-8"
                ],
                "sdk1": [
                    "dm-4"
                ],
                "sdl": [
                    "dm-2"
                ],
                "sdm": [
                    "dm-3"
                ],
                "sdn1": [
                    "dm-3"
                ],
                "sdo": [
                    "dm-4"
                ]
            },
            "uuids": {
                "dm-0": [
                    "ea0df34a-b668-4288-9d3b-2c398d22c7ff"
                ],
                "dm-1": [
                    "8e70ce75-316b-401c-9586-ce284a791df9"
                ],
                "dm-10": [
                    "4c0615d5-0799-4868-a84d-d534bdad150d"
                ],
                "dm-11": [
                    "4dbd06dd-3e26-4b72-96c2-ba32fca85cb3"
                ],
                "dm-12": [
                    "795f2f3f-5e87-4bdb-8655-6b26e95c2e22"
                ],
                "dm-13": [
                    "5b19014e-b729-4a2f-95f1-70cf708d08df"
                ],
                "dm-14": [
                    "de8e0bcb-b3a8-4440-ad76-4fb3794832fd"
                ],
                "dm-15": [
                    "0ee65968-5cbf-45ed-b0eb-a172e48f330d"
                ],
                "dm-16": [
                    "9f6c5bbf-cc6a-43a9-8fae-87e1dd35346c"
                ],
                "dm-2": [
                    "2f70082b-de00-4138-b803-d9ddf4229637"
                ],
                "dm-3": [
                    "f0504c66-364a-4e3c-bd52-fb04d9ce1dbc"
                ],
                "dm-4": [
                    "52d76fb4-e32b-4f0f-a820-913d1ad732b8"
                ],
                "dm-5": [
                    "ff996c91-4e5b-4208-8752-6877f2171ef2"
                ],
                "dm-6": [
                    "ec8e6ab0-ffc0-4b91-9487-4a5728f292bd"
                ],
                "dm-7": [
                    "3308ac24-1577-4db9-93e0-0fbaeeaf9c0d"
                ],
                "dm-8": [
                    "4f34671b-4104-4e80-95e1-588bef862974"
                ],
                "dm-9": [
                    "bdd23f20-60d0-4b4b-a534-4b342a1fc8bf"
                ],
                "sda1": [
                    "0f16c4b2-0018-44d2-aa9e-e2c4e959c064"
                ],
                "sdh": [
                    "a3aa6f96-7edf-4b8a-8915-1fdbba3ff6fe"
                ]
            }
        },
        "ansible_devices": {
            "dm-0": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [
                        "dm-name-vg00-rootvol",
                        "dm-uuid-LVM-Q4JH79dThqBCt0fg92JyJkEcPaJH02jClaowrdCyuxd6nBgFWTcTQz59zF6uym2v"
                    ],
                    "labels": [],
                    "masters": [],
                    "uuids": [
                        "ea0df34a-b668-4288-9d3b-2c398d22c7ff"
                    ]
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "18874368",
                "sectorsize": "512",
                "size": "9.00 GB",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "dm-1": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [
                        "dm-name-vg00-swapvol",
                        "dm-uuid-LVM-Q4JH79dThqBCt0fg92JyJkEcPaJH02jCml1d0by8fdcXVq3G0CabrOhd7QX4WfwM"
                    ],
                    "labels": [],
                    "masters": [],
                    "uuids": [
                        "8e70ce75-316b-401c-9586-ce284a791df9"
                    ]
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "8388608",
                "sectorsize": "512",
                "size": "4.00 GB",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "dm-10": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [
                        "dm-name-vg00-optvol",
                        "dm-uuid-LVM-Q4JH79dThqBCt0fg92JyJkEcPaJH02jCRSd4Cy464hzdd4nscD0XzRo2u7pDOq35"
                    ],
                    "labels": [],
                    "masters": [],
                    "uuids": [
                        "4c0615d5-0799-4868-a84d-d534bdad150d"
                    ]
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "4194304",
                "sectorsize": "512",
                "size": "2.00 GB",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "dm-11": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [
                        "dm-name-vg00-rhomevol",
                        "dm-uuid-LVM-Q4JH79dThqBCt0fg92JyJkEcPaJH02jCnPOnADsLze2q34yP0IXQXR9FDeyljNCD"
                    ],
                    "labels": [],
                    "masters": [],
                    "uuids": [
                        "4dbd06dd-3e26-4b72-96c2-ba32fca85cb3"
                    ]
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "4194304",
                "sectorsize": "512",
                "size": "2.00 GB",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "dm-12": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [
                        "dm-name-vg00-tmpvol",
                        "dm-uuid-LVM-Q4JH79dThqBCt0fg92JyJkEcPaJH02jCRQgH5N2Zrsp8SEca6BgWgevv4mZYNQM3"
                    ],
                    "labels": [],
                    "masters": [],
                    "uuids": [
                        "795f2f3f-5e87-4bdb-8655-6b26e95c2e22"
                    ]
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "12582912",
                "sectorsize": "512",
                "size": "6.00 GB",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "dm-13": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [
                        "dm-name-vg00-varvol",
                        "dm-uuid-LVM-Q4JH79dThqBCt0fg92JyJkEcPaJH02jCRR6UkdFOnfARG4KisOuGoulT82zjNF9U"
                    ],
                    "labels": [],
                    "masters": [],
                    "uuids": [
                        "5b19014e-b729-4a2f-95f1-70cf708d08df"
                    ]
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "20971520",
                "sectorsize": "512",
                "size": "10.00 GB",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "dm-14": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [
                        "dm-name-vg00-auditvol",
                        "dm-uuid-LVM-Q4JH79dThqBCt0fg92JyJkEcPaJH02jC9rJKMBCuWcoB8OqwcVGux4mMiMQJKOaS"
                    ],
                    "labels": [],
                    "masters": [],
                    "uuids": [
                        "de8e0bcb-b3a8-4440-ad76-4fb3794832fd"
                    ]
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "3145728",
                "sectorsize": "512",
                "size": "1.50 GB",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "dm-15": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [
                        "dm-name-vg00-lvstats",
                        "dm-uuid-LVM-Q4JH79dThqBCt0fg92JyJkEcPaJH02jCmth3Lt4kDU5PJEszjQV65BgBmawJkclx"
                    ],
                    "labels": [],
                    "masters": [],
                    "uuids": [
                        "0ee65968-5cbf-45ed-b0eb-a172e48f330d"
                    ]
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "1024000",
                "sectorsize": "512",
                "size": "500.00 MB",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "dm-16": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [
                        "dm-name-vg00-lvplanific",
                        "dm-uuid-LVM-Q4JH79dThqBCt0fg92JyJkEcPaJH02jCnErxQJa7F60inZewhicngWY5NT6FJpbT"
                    ],
                    "labels": [],
                    "masters": [],
                    "uuids": [
                        "9f6c5bbf-cc6a-43a9-8fae-87e1dd35346c"
                    ]
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "1024000",
                "sectorsize": "512",
                "size": "500.00 MB",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "dm-2": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [
                        "dm-name-vgPostreSQL-lvbackuppostgres",
                        "dm-uuid-LVM-9O97Dnqf3HPQ1aPzwDnKkvGNehS7ZnuYv1LfXKUvMjrB3SuHBusDxTz04LNMQVdq"
                    ],
                    "labels": [],
                    "masters": [],
                    "uuids": [
                        "2f70082b-de00-4138-b803-d9ddf4229637"
                    ]
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "1258266624",
                "sectorsize": "512",
                "size": "599.99 GB",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "dm-3": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [
                        "dm-name-vgMySQL-lvbackupmysql",
                        "dm-uuid-LVM-635QxprwX3wn1G3Rx1bMLfr3k6OJVX2OYCjxRYRZCXKaDQqR7foGWhwpbdiI1FX8"
                    ],
                    "labels": [],
                    "masters": [],
                    "uuids": [
                        "f0504c66-364a-4e3c-bd52-fb04d9ce1dbc"
                    ]
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "1363116032",
                "sectorsize": "512",
                "size": "649.98 GB",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "dm-4": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [
                        "dm-name-vgrear-lvrear",
                        "dm-uuid-LVM-Mt4umDnrUMYdJWDjzHbnO9TkH3fbfOAHewn6LFj6Z4fiN022PyeSWBi73LPBXzDn"
                    ],
                    "labels": [],
                    "masters": [],
                    "uuids": [
                        "52d76fb4-e32b-4f0f-a820-913d1ad732b8"
                    ]
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "11324530688",
                "sectorsize": "512",
                "size": "5.27 TB",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "dm-5": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [
                        "dm-name-vgrear-lvopenv",
                        "dm-uuid-LVM-Mt4umDnrUMYdJWDjzHbnO9TkH3fbfOAHcC0VWc6ZCFTzfNWdD0yx7bg0zYf7sHS3"
                    ],
                    "labels": [],
                    "masters": [],
                    "uuids": [
                        "ff996c91-4e5b-4208-8752-6877f2171ef2"
                    ]
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "209715200",
                "sectorsize": "512",
                "size": "100.00 GB",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "dm-6": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [
                        "dm-name-vgall01-lvall01",
                        "dm-uuid-LVM-5xjsuCwWWSUXb4VTo2QtvSKEzIEhzFy2kyMeTRSFGQn1CTpaUlDBTjpXSdMtmUr1"
                    ],
                    "labels": [],
                    "masters": [],
                    "uuids": [
                        "ec8e6ab0-ffc0-4b91-9487-4a5728f292bd"
                    ]
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "3028287488",
                "sectorsize": "512",
                "size": "1.41 TB",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "dm-7": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [
                        "dm-name-vgall01-lvISO",
                        "dm-uuid-LVM-5xjsuCwWWSUXb4VTo2QtvSKEzIEhzFy2BgmwtdQ4dXL1Ki0Wn00Y7O0IkYQk3I7R"
                    ],
                    "labels": [],
                    "masters": [],
                    "uuids": [
                        "3308ac24-1577-4db9-93e0-0fbaeeaf9c0d"
                    ]
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "67108864",
                "sectorsize": "512",
                "size": "32.00 GB",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "dm-8": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [
                        "dm-name-vgall01-lvcg2html",
                        "dm-uuid-LVM-5xjsuCwWWSUXb4VTo2QtvSKEzIEhzFy2hxja23rkSNETXW5LfypLgFxiwMwHQZGy"
                    ],
                    "labels": [],
                    "masters": [],
                    "uuids": [
                        "4f34671b-4104-4e80-95e1-588bef862974"
                    ]
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "3072000",
                "sectorsize": "512",
                "size": "1.46 GB",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "dm-9": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [
                        "dm-name-vg00-homevol",
                        "dm-uuid-LVM-Q4JH79dThqBCt0fg92JyJkEcPaJH02jCj2kQZ7papqXRgyqbYLNkH3YToeS8Frga"
                    ],
                    "labels": [],
                    "masters": [],
                    "uuids": [
                        "bdd23f20-60d0-4b4b-a534-4b342a1fc8bf"
                    ]
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "8388608",
                "sectorsize": "512",
                "size": "4.00 GB",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "loop0": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [],
                    "labels": [],
                    "masters": [],
                    "uuids": []
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "0",
                "sectorsize": "512",
                "size": "0.00 Bytes",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "loop1": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [],
                    "labels": [],
                    "masters": [],
                    "uuids": []
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "0",
                "sectorsize": "512",
                "size": "0.00 Bytes",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "loop2": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [],
                    "labels": [],
                    "masters": [],
                    "uuids": []
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "0",
                "sectorsize": "512",
                "size": "0.00 Bytes",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "loop3": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [],
                    "labels": [],
                    "masters": [],
                    "uuids": []
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "0",
                "sectorsize": "512",
                "size": "0.00 Bytes",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "loop4": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [],
                    "labels": [],
                    "masters": [],
                    "uuids": []
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "0",
                "sectorsize": "512",
                "size": "0.00 Bytes",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "loop5": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [],
                    "labels": [],
                    "masters": [],
                    "uuids": []
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "0",
                "sectorsize": "512",
                "size": "0.00 Bytes",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "loop6": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [],
                    "labels": [],
                    "masters": [],
                    "uuids": []
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "0",
                "sectorsize": "512",
                "size": "0.00 Bytes",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "loop7": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [],
                    "labels": [],
                    "masters": [],
                    "uuids": []
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "0",
                "sectorsize": "512",
                "size": "0.00 Bytes",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "ram0": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [],
                    "labels": [],
                    "masters": [],
                    "uuids": []
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "32768",
                "sectorsize": "512",
                "size": "16.00 MB",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "ram1": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [],
                    "labels": [],
                    "masters": [],
                    "uuids": []
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "32768",
                "sectorsize": "512",
                "size": "16.00 MB",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "ram10": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [],
                    "labels": [],
                    "masters": [],
                    "uuids": []
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "32768",
                "sectorsize": "512",
                "size": "16.00 MB",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "ram11": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [],
                    "labels": [],
                    "masters": [],
                    "uuids": []
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "32768",
                "sectorsize": "512",
                "size": "16.00 MB",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "ram12": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [],
                    "labels": [],
                    "masters": [],
                    "uuids": []
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "32768",
                "sectorsize": "512",
                "size": "16.00 MB",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "ram13": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [],
                    "labels": [],
                    "masters": [],
                    "uuids": []
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "32768",
                "sectorsize": "512",
                "size": "16.00 MB",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "ram14": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [],
                    "labels": [],
                    "masters": [],
                    "uuids": []
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "32768",
                "sectorsize": "512",
                "size": "16.00 MB",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "ram15": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [],
                    "labels": [],
                    "masters": [],
                    "uuids": []
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "32768",
                "sectorsize": "512",
                "size": "16.00 MB",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "ram2": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [],
                    "labels": [],
                    "masters": [],
                    "uuids": []
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "32768",
                "sectorsize": "512",
                "size": "16.00 MB",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "ram3": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [],
                    "labels": [],
                    "masters": [],
                    "uuids": []
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "32768",
                "sectorsize": "512",
                "size": "16.00 MB",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "ram4": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [],
                    "labels": [],
                    "masters": [],
                    "uuids": []
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "32768",
                "sectorsize": "512",
                "size": "16.00 MB",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "ram5": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [],
                    "labels": [],
                    "masters": [],
                    "uuids": []
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "32768",
                "sectorsize": "512",
                "size": "16.00 MB",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "ram6": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [],
                    "labels": [],
                    "masters": [],
                    "uuids": []
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "32768",
                "sectorsize": "512",
                "size": "16.00 MB",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "ram7": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [],
                    "labels": [],
                    "masters": [],
                    "uuids": []
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "32768",
                "sectorsize": "512",
                "size": "16.00 MB",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "ram8": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [],
                    "labels": [],
                    "masters": [],
                    "uuids": []
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "32768",
                "sectorsize": "512",
                "size": "16.00 MB",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "ram9": {
                "holders": [],
                "host": "",
                "links": {
                    "ids": [],
                    "labels": [],
                    "masters": [],
                    "uuids": []
                },
                "model": null,
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "",
                "sectors": "32768",
                "sectorsize": "512",
                "size": "16.00 MB",
                "support_discard": "0",
                "vendor": null,
                "virtual": 1
            },
            "sda": {
                "holders": [],
                "host": "SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)",
                "links": {
                    "ids": [
                        "scsi-36000c29d91260c62ccb19b0dc0deeacb",
                        "wwn-0x6000c29d91260c62ccb19b0dc0deeacb"
                    ],
                    "labels": [],
                    "masters": [],
                    "uuids": []
                },
                "model": "Virtual disk",
                "partitions": {
                    "sda1": {
                        "holders": [],
                        "links": {
                            "ids": [
                                "scsi-36000c29d91260c62ccb19b0dc0deeacb-part1",
                                "wwn-0x6000c29d91260c62ccb19b0dc0deeacb-part1"
                            ],
                            "labels": [],
                            "masters": [],
                            "uuids": [
                                "0f16c4b2-0018-44d2-aa9e-e2c4e959c064"
                            ]
                        },
                        "sectors": "409600",
                        "sectorsize": 512,
                        "size": "200.00 MB",
                        "start": "2048",
                        "uuid": "0f16c4b2-0018-44d2-aa9e-e2c4e959c064"
                    },
                    "sda2": {
                        "holders": [
                            "vg00-rootvol",
                            "vg00-swapvol",
                            "vg00-homevol",
                            "vg00-optvol",
                            "vg00-rhomevol",
                            "vg00-tmpvol",
                            "vg00-varvol",
                            "vg00-auditvol",
                            "vg00-lvstats",
                            "vg00-lvplanific"
                        ],
                        "links": {
                            "ids": [
                                "lvm-pv-uuid-M5Pni5-y74T-7tlf-pcmv-6Oib-52Vw-rYggnv",
                                "scsi-36000c29d91260c62ccb19b0dc0deeacb-part2",
                                "wwn-0x6000c29d91260c62ccb19b0dc0deeacb-part2"
                            ],
                            "labels": [],
                            "masters": [
                                "dm-0",
                                "dm-1",
                                "dm-10",
                                "dm-11",
                                "dm-12",
                                "dm-13",
                                "dm-14",
                                "dm-15",
                                "dm-16",
                                "dm-9"
                            ],
                            "uuids": []
                        },
                        "sectors": "209303552",
                        "sectorsize": 512,
                        "size": "99.80 GB",
                        "start": "411648",
                        "uuid": null
                    }
                },
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "cfq",
                "sectors": "209715200",
                "sectorsize": "512",
                "size": "100.00 GB",
                "support_discard": "0",
                "vendor": "VMware",
                "virtual": 1,
                "wwn": "0x6000c29d91260c62ccb19b0dc0deeacb"
            },
            "sdb": {
                "holders": [],
                "host": "SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)",
                "links": {
                    "ids": [
                        "scsi-36000c292e059424c9b29c339e63789ea",
                        "wwn-0x6000c292e059424c9b29c339e63789ea"
                    ],
                    "labels": [],
                    "masters": [],
                    "uuids": []
                },
                "model": "Virtual disk",
                "partitions": {
                    "sdb1": {
                        "holders": [
                            "vgall01-lvall01"
                        ],
                        "links": {
                            "ids": [
                                "lvm-pv-uuid-frVXS9-jwVV-djci-GhDG-jeFB-tZte-VmWbYn",
                                "scsi-36000c292e059424c9b29c339e63789ea-part1",
                                "wwn-0x6000c292e059424c9b29c339e63789ea-part1"
                            ],
                            "labels": [],
                            "masters": [
                                "dm-6"
                            ],
                            "uuids": []
                        },
                        "sectors": "2097141102",
                        "sectorsize": 512,
                        "size": "999.99 GB",
                        "start": "63",
                        "uuid": null
                    }
                },
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "cfq",
                "sectors": "2097152000",
                "sectorsize": "512",
                "size": "1000.00 GB",
                "support_discard": "0",
                "vendor": "VMware",
                "virtual": 1,
                "wwn": "0x6000c292e059424c9b29c339e63789ea"
            },
            "sdc": {
                "holders": [],
                "host": "SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)",
                "links": {
                    "ids": [
                        "scsi-36000c29bfc20fc8e375a32f9323158dc",
                        "wwn-0x6000c29bfc20fc8e375a32f9323158dc"
                    ],
                    "labels": [],
                    "masters": [],
                    "uuids": []
                },
                "model": "Virtual disk",
                "partitions": {
                    "sdc1": {
                        "holders": [
                            "vgrear-lvrear"
                        ],
                        "links": {
                            "ids": [
                                "lvm-pv-uuid-0b2nTr-JtxA-3u4y-jFSv-O7db-G16u-TD9387",
                                "scsi-36000c29bfc20fc8e375a32f9323158dc-part1",
                                "wwn-0x6000c29bfc20fc8e375a32f9323158dc-part1"
                            ],
                            "labels": [],
                            "masters": [
                                "dm-4"
                            ],
                            "uuids": []
                        },
                        "sectors": "2097141102",
                        "sectorsize": 512,
                        "size": "999.99 GB",
                        "start": "63",
                        "uuid": null
                    }
                },
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "cfq",
                "sectors": "2097152000",
                "sectorsize": "512",
                "size": "1000.00 GB",
                "support_discard": "0",
                "vendor": "VMware",
                "virtual": 1,
                "wwn": "0x6000c29bfc20fc8e375a32f9323158dc"
            },
            "sdd": {
                "holders": [],
                "host": "SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)",
                "links": {
                    "ids": [
                        "scsi-36000c2938ec36ca2fb75e037c30b8a84",
                        "wwn-0x6000c2938ec36ca2fb75e037c30b8a84"
                    ],
                    "labels": [],
                    "masters": [],
                    "uuids": []
                },
                "model": "Virtual disk",
                "partitions": {
                    "sdd1": {
                        "holders": [
                            "vgrear-lvrear"
                        ],
                        "links": {
                            "ids": [
                                "lvm-pv-uuid-UhOknb-6uvu-aKRk-qt3p-oKof-E6xd-Odqza0",
                                "scsi-36000c2938ec36ca2fb75e037c30b8a84-part1",
                                "wwn-0x6000c2938ec36ca2fb75e037c30b8a84-part1"
                            ],
                            "labels": [],
                            "masters": [
                                "dm-4"
                            ],
                            "uuids": []
                        },
                        "sectors": "2097141102",
                        "sectorsize": 512,
                        "size": "999.99 GB",
                        "start": "63",
                        "uuid": null
                    }
                },
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "cfq",
                "sectors": "2097152000",
                "sectorsize": "512",
                "size": "1000.00 GB",
                "support_discard": "0",
                "vendor": "VMware",
                "virtual": 1,
                "wwn": "0x6000c2938ec36ca2fb75e037c30b8a84"
            },
            "sde": {
                "holders": [],
                "host": "SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)",
                "links": {
                    "ids": [
                        "scsi-36000c295ec89d6cb12074b793ef98328",
                        "wwn-0x6000c295ec89d6cb12074b793ef98328"
                    ],
                    "labels": [],
                    "masters": [],
                    "uuids": []
                },
                "model": "Virtual disk",
                "partitions": {
                    "sde1": {
                        "holders": [
                            "vgMySQL-lvbackupmysql"
                        ],
                        "links": {
                            "ids": [
                                "lvm-pv-uuid-aylc2t-41iN-5ckz-bpgd-dTeP-ku2k-kNmqGT",
                                "scsi-36000c295ec89d6cb12074b793ef98328-part1",
                                "wwn-0x6000c295ec89d6cb12074b793ef98328-part1"
                            ],
                            "labels": [],
                            "masters": [
                                "dm-3"
                            ],
                            "uuids": []
                        },
                        "sectors": "1048562487",
                        "sectorsize": 512,
                        "size": "499.99 GB",
                        "start": "63",
                        "uuid": null
                    }
                },
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "cfq",
                "sectors": "1048576000",
                "sectorsize": "512",
                "size": "500.00 GB",
                "support_discard": "0",
                "vendor": "VMware",
                "virtual": 1,
                "wwn": "0x6000c295ec89d6cb12074b793ef98328"
            },
            "sdf": {
                "holders": [],
                "host": "SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)",
                "links": {
                    "ids": [
                        "scsi-36000c29633772a181f903ddf1c316676",
                        "wwn-0x6000c29633772a181f903ddf1c316676"
                    ],
                    "labels": [],
                    "masters": [],
                    "uuids": []
                },
                "model": "Virtual disk",
                "partitions": {
                    "sdf1": {
                        "holders": [
                            "vgrear-lvrear",
                            "vgrear-lvopenv"
                        ],
                        "links": {
                            "ids": [
                                "lvm-pv-uuid-dwADLJ-T68x-4dDi-PScW-B8sp-xmKT-dQNh37",
                                "scsi-36000c29633772a181f903ddf1c316676-part1",
                                "wwn-0x6000c29633772a181f903ddf1c316676-part1"
                            ],
                            "labels": [],
                            "masters": [
                                "dm-4",
                                "dm-5"
                            ],
                            "uuids": []
                        },
                        "sectors": "2097141102",
                        "sectorsize": 512,
                        "size": "999.99 GB",
                        "start": "63",
                        "uuid": null
                    }
                },
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "cfq",
                "sectors": "2097152000",
                "sectorsize": "512",
                "size": "1000.00 GB",
                "support_discard": "0",
                "vendor": "VMware",
                "virtual": 1,
                "wwn": "0x6000c29633772a181f903ddf1c316676"
            },
            "sdg": {
                "holders": [],
                "host": "SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)",
                "links": {
                    "ids": [
                        "scsi-36000c291a950c2984f08ae0bc633dc57",
                        "wwn-0x6000c291a950c2984f08ae0bc633dc57"
                    ],
                    "labels": [],
                    "masters": [],
                    "uuids": []
                },
                "model": "Virtual disk",
                "partitions": {
                    "sdg1": {
                        "holders": [
                            "vgPostreSQL-lvbackuppostgres"
                        ],
                        "links": {
                            "ids": [
                                "lvm-pv-uuid-pqIDtW-hj0b-X1og-qc1y-HddW-wcFS-xyhAnL",
                                "scsi-36000c291a950c2984f08ae0bc633dc57-part1",
                                "wwn-0x6000c291a950c2984f08ae0bc633dc57-part1"
                            ],
                            "labels": [],
                            "masters": [
                                "dm-2"
                            ],
                            "uuids": []
                        },
                        "sectors": "1048562487",
                        "sectorsize": 512,
                        "size": "499.99 GB",
                        "start": "63",
                        "uuid": null
                    }
                },
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "cfq",
                "sectors": "1048576000",
                "sectorsize": "512",
                "size": "500.00 GB",
                "support_discard": "0",
                "vendor": "VMware",
                "virtual": 1,
                "wwn": "0x6000c291a950c2984f08ae0bc633dc57"
            },
            "sdh": {
                "holders": [],
                "host": "SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)",
                "links": {
                    "ids": [
                        "scsi-36000c29809f1442b56eb44baffb6f89f",
                        "wwn-0x6000c29809f1442b56eb44baffb6f89f"
                    ],
                    "labels": [],
                    "masters": [],
                    "uuids": [
                        "a3aa6f96-7edf-4b8a-8915-1fdbba3ff6fe"
                    ]
                },
                "model": "Virtual disk",
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "cfq",
                "sectors": "2097152000",
                "sectorsize": "512",
                "size": "1000.00 GB",
                "support_discard": "0",
                "vendor": "VMware",
                "virtual": 1,
                "wwn": "0x6000c29809f1442b56eb44baffb6f89f"
            },
            "sdi": {
                "holders": [],
                "host": "SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)",
                "links": {
                    "ids": [
                        "scsi-36000c29318bea84db794357ca3620d7b",
                        "wwn-0x6000c29318bea84db794357ca3620d7b"
                    ],
                    "labels": [],
                    "masters": [],
                    "uuids": []
                },
                "model": "Virtual disk",
                "partitions": {
                    "sdi1": {
                        "holders": [
                            "vgrear-lvrear"
                        ],
                        "links": {
                            "ids": [
                                "lvm-pv-uuid-Hyog2W-uisc-bZLG-1Gyo-SxA3-2fJJ-lD6c0Q",
                                "scsi-36000c29318bea84db794357ca3620d7b-part1",
                                "wwn-0x6000c29318bea84db794357ca3620d7b-part1"
                            ],
                            "labels": [],
                            "masters": [
                                "dm-4"
                            ],
                            "uuids": []
                        },
                        "sectors": "2097141102",
                        "sectorsize": 512,
                        "size": "999.99 GB",
                        "start": "63",
                        "uuid": null
                    }
                },
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "cfq",
                "sectors": "2097152000",
                "sectorsize": "512",
                "size": "1000.00 GB",
                "support_discard": "0",
                "vendor": "VMware",
                "virtual": 1,
                "wwn": "0x6000c29318bea84db794357ca3620d7b"
            },
            "sdj": {
                "holders": [],
                "host": "SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)",
                "links": {
                    "ids": [
                        "scsi-36000c296afa076a2fbfafc3035ad1cd8",
                        "wwn-0x6000c296afa076a2fbfafc3035ad1cd8"
                    ],
                    "labels": [],
                    "masters": [],
                    "uuids": []
                },
                "model": "Virtual disk",
                "partitions": {
                    "sdj1": {
                        "holders": [
                            "vgall01-lvall01",
                            "vgall01-lvISO",
                            "vgall01-lvcg2html"
                        ],
                        "links": {
                            "ids": [
                                "lvm-pv-uuid-zS8kl1-xWk0-ckA1-IaHR-d5vH-0kYo-LOkIf2",
                                "scsi-36000c296afa076a2fbfafc3035ad1cd8-part1",
                                "wwn-0x6000c296afa076a2fbfafc3035ad1cd8-part1"
                            ],
                            "labels": [],
                            "masters": [
                                "dm-6",
                                "dm-7",
                                "dm-8"
                            ],
                            "uuids": []
                        },
                        "sectors": "1048562487",
                        "sectorsize": 512,
                        "size": "499.99 GB",
                        "start": "63",
                        "uuid": null
                    }
                },
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "cfq",
                "sectors": "1048576000",
                "sectorsize": "512",
                "size": "500.00 GB",
                "support_discard": "0",
                "vendor": "VMware",
                "virtual": 1,
                "wwn": "0x6000c296afa076a2fbfafc3035ad1cd8"
            },
            "sdk": {
                "holders": [],
                "host": "SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)",
                "links": {
                    "ids": [
                        "scsi-36000c29bb46aa96ed360ff416c9abc79",
                        "wwn-0x6000c29bb46aa96ed360ff416c9abc79"
                    ],
                    "labels": [],
                    "masters": [],
                    "uuids": []
                },
                "model": "Virtual disk",
                "partitions": {
                    "sdk1": {
                        "holders": [
                            "vgrear-lvrear"
                        ],
                        "links": {
                            "ids": [
                                "lvm-pv-uuid-KWD76w-fCCp-OjPC-RVG1-pcuS-H1rs-zVZCST",
                                "scsi-36000c29bb46aa96ed360ff416c9abc79-part1",
                                "wwn-0x6000c29bb46aa96ed360ff416c9abc79-part1"
                            ],
                            "labels": [],
                            "masters": [
                                "dm-4"
                            ],
                            "uuids": []
                        },
                        "sectors": "2097141102",
                        "sectorsize": 512,
                        "size": "999.99 GB",
                        "start": "63",
                        "uuid": null
                    }
                },
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "cfq",
                "sectors": "2097152000",
                "sectorsize": "512",
                "size": "1000.00 GB",
                "support_discard": "0",
                "vendor": "VMware",
                "virtual": 1,
                "wwn": "0x6000c29bb46aa96ed360ff416c9abc79"
            },
            "sdl": {
                "holders": [
                    "vgPostreSQL-lvbackuppostgres"
                ],
                "host": "SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)",
                "links": {
                    "ids": [
                        "lvm-pv-uuid-aVbMe2-WXTP-wp1u-p03J-Q0XG-mA0W-ItF9Jd",
                        "scsi-36000c2965bc8abf5231ef2c27b1692bd",
                        "wwn-0x6000c2965bc8abf5231ef2c27b1692bd"
                    ],
                    "labels": [],
                    "masters": [
                        "dm-2"
                    ],
                    "uuids": []
                },
                "model": "Virtual disk",
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "cfq",
                "sectors": "209715200",
                "sectorsize": "512",
                "size": "100.00 GB",
                "support_discard": "0",
                "vendor": "VMware",
                "virtual": 1,
                "wwn": "0x6000c2965bc8abf5231ef2c27b1692bd"
            },
            "sdm": {
                "holders": [
                    "vgMySQL-lvbackupmysql"
                ],
                "host": "SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)",
                "links": {
                    "ids": [
                        "lvm-pv-uuid-51zrd1-E3Kc-kgGe-RBB0-zAg6-3EuT-ARbfcQ",
                        "scsi-36000c293acb7222949909466483f636b",
                        "wwn-0x6000c293acb7222949909466483f636b"
                    ],
                    "labels": [],
                    "masters": [
                        "dm-3"
                    ],
                    "uuids": []
                },
                "model": "Virtual disk",
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "cfq",
                "sectors": "104857600",
                "sectorsize": "512",
                "size": "50.00 GB",
                "support_discard": "0",
                "vendor": "VMware",
                "virtual": 1,
                "wwn": "0x6000c293acb7222949909466483f636b"
            },
            "sdn": {
                "holders": [],
                "host": "SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)",
                "links": {
                    "ids": [
                        "scsi-36000c29b5f0d89b9c373f64d283963b5",
                        "wwn-0x6000c29b5f0d89b9c373f64d283963b5"
                    ],
                    "labels": [],
                    "masters": [],
                    "uuids": []
                },
                "model": "Virtual disk",
                "partitions": {
                    "sdn1": {
                        "holders": [
                            "vgMySQL-lvbackupmysql"
                        ],
                        "links": {
                            "ids": [
                                "lvm-pv-uuid-Lc2L6v-yalf-ARzm-THyY-vR0y-2tVL-yQTE6T",
                                "scsi-36000c29b5f0d89b9c373f64d283963b5-part1",
                                "wwn-0x6000c29b5f0d89b9c373f64d283963b5-part1"
                            ],
                            "labels": [],
                            "masters": [
                                "dm-3"
                            ],
                            "uuids": []
                        },
                        "sectors": "209712447",
                        "sectorsize": 512,
                        "size": "100.00 GB",
                        "start": "63",
                        "uuid": null
                    }
                },
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "cfq",
                "sectors": "209715200",
                "sectorsize": "512",
                "size": "100.00 GB",
                "support_discard": "0",
                "vendor": "VMware",
                "virtual": 1,
                "wwn": "0x6000c29b5f0d89b9c373f64d283963b5"
            },
            "sdo": {
                "holders": [
                    "vgrear-lvrear"
                ],
                "host": "SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)",
                "links": {
                    "ids": [
                        "lvm-pv-uuid-o1CQaT-vf3j-X0wb-eVz8-Bhag-xf0K-lrU5dH",
                        "scsi-36000c29ca6d8937536f8768d4bb47a12",
                        "wwn-0x6000c29ca6d8937536f8768d4bb47a12"
                    ],
                    "labels": [],
                    "masters": [
                        "dm-4"
                    ],
                    "uuids": []
                },
                "model": "Virtual disk",
                "partitions": {},
                "removable": "0",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "cfq",
                "sectors": "1048576000",
                "sectorsize": "512",
                "size": "500.00 GB",
                "support_discard": "0",
                "vendor": "VMware",
                "virtual": 1,
                "wwn": "0x6000c29ca6d8937536f8768d4bb47a12"
            },
            "sr0": {
                "holders": [],
                "host": "IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)",
                "links": {
                    "ids": [
                        "ata-VMware_Virtual_IDE_CDROM_Drive_10000000000000000001"
                    ],
                    "labels": [],
                    "masters": [],
                    "uuids": []
                },
                "model": "VMware IDE CDR10",
                "partitions": {},
                "removable": "1",
                "rotational": "1",
                "sas_address": null,
                "sas_device_handle": null,
                "scheduler_mode": "cfq",
                "sectors": "2097151",
                "sectorsize": "512",
                "size": "1024.00 MB",
                "support_discard": "0",
                "vendor": "NECVMWar",
                "virtual": 1
            }
        },
        "ansible_distribution": "RedHat",
        "ansible_distribution_file_parsed": true,
        "ansible_distribution_file_path": "/etc/redhat-release",
        "ansible_distribution_file_search_string": "Red Hat",
        "ansible_distribution_file_variety": "RedHat",
        "ansible_distribution_major_version": "6",
        "ansible_distribution_release": "Santiago",
        "ansible_distribution_version": "6.10",
        "ansible_dns": {
            "nameservers": [
                "10.48.33.10",
                "10.48.33.11",
                "10.48.33.12"
            ],
            "search": [
                "cpd1pre.intranet.gencat.cat",
                "cpd1.intranet.gencat.cat"
            ]
        },
        "ansible_domain": "7143.1286.ecs.hp.com",
        "ansible_effective_group_id": 45005,
        "ansible_effective_user_id": 8402895,
        "ansible_env": {
            "HOME": "/root/home/hpddpers",
            "LANG": "en_US.UTF-8",
            "LOGNAME": "hpddpers",
            "MAIL": "/var/mail/hpddpers",
            "PATH": "/usr/local/bin:/bin:/usr/bin",
            "PWD": "/root/home/hpddpers",
            "SHELL": "/bin/bash",
            "SHLVL": "2",
            "SSH_CLIENT": "30.32.0.15 44350 22",
            "SSH_CONNECTION": "30.32.0.15 44350 30.32.0.15 22",
            "SSH_TTY": "/dev/pts/3",
            "TERM": "xterm",
            "USER": "hpddpers",
            "_": "/usr/bin/python"
        },
        "ansible_eth0": {
            "active": true,
            "device": "eth0",
            "features": {
                "fcoe_mtu": "off [fixed]",
                "generic_receive_offload": "on",
                "generic_segmentation_offload": "on",
                "highdma": "off [fixed]",
                "large_receive_offload": "off [fixed]",
                "loopback": "off [fixed]",
                "netns_local": "off [fixed]",
                "ntuple_filters": "off [fixed]",
                "receive_hashing": "off [fixed]",
                "rx_checksumming": "on",
                "rx_vlan_filter": "on [fixed]",
                "rx_vlan_offload": "on [fixed]",
                "scatter_gather": "on",
                "tcp_segmentation_offload": "on",
                "tx_checksum_fcoe_crc": "off [fixed]",
                "tx_checksum_ip_generic": "on",
                "tx_checksum_ipv4": "off",
                "tx_checksum_ipv6": "off",
                "tx_checksum_sctp": "off [fixed]",
                "tx_checksum_unneeded": "off",
                "tx_checksumming": "on",
                "tx_fcoe_segmentation": "off [fixed]",
                "tx_gre_segmentation": "off [fixed]",
                "tx_gso_robust": "off [fixed]",
                "tx_lockless": "off [fixed]",
                "tx_scatter_gather": "on",
                "tx_scatter_gather_fraglist": "off [fixed]",
                "tx_tcp6_segmentation": "off",
                "tx_tcp_ecn_segmentation": "off",
                "tx_tcp_segmentation": "on",
                "tx_udp_tnl_segmentation": "off [fixed]",
                "tx_vlan_offload": "on [fixed]",
                "udp_fragmentation_offload": "off [fixed]",
                "vlan_challenged": "off [fixed]"
            },
            "hw_timestamp_filters": [],
            "ipv4": {
                "address": "10.48.0.29",
                "broadcast": "10.48.0.255",
                "netmask": "255.255.255.0",
                "network": "10.48.0.0"
            },
            "macaddress": "00:50:56:40:05:31",
            "module": "e1000",
            "mtu": 1500,
            "pciid": "0000:02:00.0",
            "promisc": false,
            "speed": 1000,
            "timestamping": [
                "rx_software",
                "software"
            ],
            "type": "ether"
        },
        "ansible_eth0_1": {
            "ipv4": {
                "address": "30.32.0.15",
                "broadcast": "30.32.0.255",
                "netmask": "255.255.255.0",
                "network": "30.32.0.0"
            }
        },
        "ansible_fips": false,
        "ansible_form_factor": "Other",
        "ansible_fqdn": "server1.7143.1286.ecs.hp.com",
        "ansible_hostname": "server1",
        "ansible_interfaces": [
            "lo",
            "eth0_1",
            "eth0"
        ],
        "ansible_is_chroot": false,
        "ansible_iscsi_iqn": "",
        "ansible_kernel": "2.6.32-754.12.1.el6.x86_64",
        "ansible_lo": {
            "active": true,
            "device": "lo",
            "features": {
                "fcoe_mtu": "off [fixed]",
                "generic_receive_offload": "on",
                "generic_segmentation_offload": "on",
                "highdma": "on [fixed]",
                "large_receive_offload": "off [fixed]",
                "loopback": "on [fixed]",
                "netns_local": "on [fixed]",
                "ntuple_filters": "off [fixed]",
                "receive_hashing": "off [fixed]",
                "rx_checksumming": "on [fixed]",
                "rx_vlan_filter": "off [fixed]",
                "rx_vlan_offload": "off [fixed]",
                "scatter_gather": "on",
                "tcp_segmentation_offload": "on",
                "tx_checksum_fcoe_crc": "off [fixed]",
                "tx_checksum_ip_generic": "on [fixed]",
                "tx_checksum_ipv4": "off [fixed]",
                "tx_checksum_ipv6": "off [fixed]",
                "tx_checksum_sctp": "off [fixed]",
                "tx_checksum_unneeded": "off [fixed]",
                "tx_checksumming": "on",
                "tx_fcoe_segmentation": "off [fixed]",
                "tx_gre_segmentation": "off [fixed]",
                "tx_gso_robust": "off [fixed]",
                "tx_lockless": "on [fixed]",
                "tx_scatter_gather": "on [fixed]",
                "tx_scatter_gather_fraglist": "on [fixed]",
                "tx_tcp6_segmentation": "on",
                "tx_tcp_ecn_segmentation": "on",
                "tx_tcp_segmentation": "on",
                "tx_udp_tnl_segmentation": "off [fixed]",
                "tx_vlan_offload": "off [fixed]",
                "udp_fragmentation_offload": "on",
                "vlan_challenged": "on [fixed]"
            },
            "hw_timestamp_filters": [],
            "ipv4": {
                "address": "127.0.0.1",
                "broadcast": "host",
                "netmask": "255.0.0.0",
                "network": "127.0.0.0"
            },
            "mtu": 65536,
            "promisc": false,
            "timestamping": [
                "rx_software",
                "software"
            ],
            "type": "loopback"
        },
        "ansible_local": {},
        "ansible_lsb": {
            "codename": "Santiago",
            "description": "Red Hat Enterprise Linux Server release 6.10 (Santiago)",
            "id": "RedHatEnterpriseServer",
            "major_release": "6",
            "release": "6.10"
        },
        "ansible_machine": "x86_64",
        "ansible_machine_id": "e51e1d0b3c2dc0830c065aaf00000034",
        "ansible_memfree_mb": 233,
        "ansible_memory_mb": {
            "nocache": {
                "free": 872,
                "used": 1004
            },
            "real": {
                "free": 233,
                "total": 1876,
                "used": 1643
            },
            "swap": {
                "cached": 28,
                "free": 3334,
                "total": 4095,
                "used": 761
            }
        },
        "ansible_memtotal_mb": 1876,
        "ansible_mounts": [
            {
                "block_available": 1325421,
                "block_size": 4096,
                "block_total": 2322270,
                "block_used": 996849,
                "device": "/dev/mapper/vg00-rootvol",
                "fstype": "ext3",
                "inode_available": 463305,
                "inode_total": 589824,
                "inode_used": 126519,
                "mount": "/",
                "options": "rw",
                "size_available": 5428924416,
                "size_total": 9512017920,
                "uuid": "ea0df34a-b668-4288-9d3b-2c398d22c7ff"
            },
            {
                "block_available": 79766,
                "block_size": 1024,
                "block_total": 198337,
                "block_used": 118571,
                "device": "/dev/sda1",
                "fstype": "ext2",
                "inode_available": 51146,
                "inode_total": 51200,
                "inode_used": 54,
                "mount": "/boot",
                "options": "rw,nodev",
                "size_available": 81680384,
                "size_total": 203097088,
                "uuid": "0f16c4b2-0018-44d2-aa9e-e2c4e959c064"
            },
            {
                "block_available": 894056,
                "block_size": 4096,
                "block_total": 1032112,
                "block_used": 138056,
                "device": "/dev/mapper/vg00-homevol",
                "fstype": "ext3",
                "inode_available": 261829,
                "inode_total": 262144,
                "inode_used": 315,
                "mount": "/home",
                "options": "rw,nosuid,nodev",
                "size_available": 3662053376,
                "size_total": 4227530752,
                "uuid": "bdd23f20-60d0-4b4b-a534-4b342a1fc8bf"
            },
            {
                "block_available": 282546,
                "block_size": 4096,
                "block_total": 516052,
                "block_used": 233506,
                "device": "/dev/mapper/vg00-optvol",
                "fstype": "ext3",
                "inode_available": 116219,
                "inode_total": 131072,
                "inode_used": 14853,
                "mount": "/opt",
                "options": "rw,nodev",
                "size_available": 1157308416,
                "size_total": 2113748992,
                "uuid": "4c0615d5-0799-4868-a84d-d534bdad150d"
            },
            {
                "block_available": 459009,
                "block_size": 4096,
                "block_total": 516052,
                "block_used": 57043,
                "device": "/dev/mapper/vg00-rhomevol",
                "fstype": "ext3",
                "inode_available": 130610,
                "inode_total": 131072,
                "inode_used": 462,
                "mount": "/root/home",
                "options": "rw,nosuid,nodev",
                "size_available": 1880100864,
                "size_total": 2113748992,
                "uuid": "4dbd06dd-3e26-4b72-96c2-ba32fca85cb3"
            },
            {
                "block_available": 1385829,
                "block_size": 4096,
                "block_total": 1548176,
                "block_used": 162347,
                "device": "/dev/mapper/vg00-tmpvol",
                "fstype": "ext3",
                "inode_available": 392618,
                "inode_total": 393216,
                "inode_used": 598,
                "mount": "/tmp",
                "options": "rw,nosuid,nodev",
                "size_available": 5676355584,
                "size_total": 6341328896,
                "uuid": "795f2f3f-5e87-4bdb-8655-6b26e95c2e22"
            },
            {
                "block_available": 1326099,
                "block_size": 4096,
                "block_total": 2580302,
                "block_used": 1254203,
                "device": "/dev/mapper/vg00-varvol",
                "fstype": "ext3",
                "inode_available": 625087,
                "inode_total": 655360,
                "inode_used": 30273,
                "mount": "/var",
                "options": "rw,nodev",
                "size_available": 5431701504,
                "size_total": 10568916992,
                "uuid": "5b19014e-b729-4a2f-95f1-70cf708d08df"
            },
            {
                "block_available": 351756,
                "block_size": 4096,
                "block_total": 387036,
                "block_used": 35280,
                "device": "/dev/mapper/vg00-auditvol",
                "fstype": "ext3",
                "inode_available": 98288,
                "inode_total": 98304,
                "inode_used": 16,
                "mount": "/var/log/audit",
                "options": "rw,noexec,nosuid,nodev",
                "size_available": 1440792576,
                "size_total": 1585299456,
                "uuid": "de8e0bcb-b3a8-4440-ad76-4fb3794832fd"
            },
            {
                "block_available": 68765265,
                "block_size": 4096,
                "block_total": 372563692,
                "block_used": 303798427,
                "device": "/dev/mapper/vgall01-lvall01",
                "fstype": "ext4",
                "inode_available": 92964202,
                "inode_total": 94633984,
                "inode_used": 1669782,
                "mount": "/AL",
                "options": "rw,acl",
                "size_available": 281662525440,
                "size_total": 1526020882432,
                "uuid": "ec8e6ab0-ffc0-4b91-9487-4a5728f292bd"
            },
            {
                "block_available": 238044,
                "block_size": 1024,
                "block_total": 487652,
                "block_used": 249608,
                "device": "/dev/mapper/vg00-lvstats",
                "fstype": "ext4",
                "inode_available": 127635,
                "inode_total": 128016,
                "inode_used": 381,
                "mount": "/stats",
                "options": "rw",
                "size_available": 243757056,
                "size_total": 499355648,
                "uuid": "0ee65968-5cbf-45ed-b0eb-a172e48f330d"
            },
            {
                "block_available": 268489,
                "block_size": 1024,
                "block_total": 487652,
                "block_used": 219163,
                "device": "/dev/mapper/vg00-lvplanific",
                "fstype": "ext4",
                "inode_available": 127729,
                "inode_total": 128016,
                "inode_used": 287,
                "mount": "/planific",
                "options": "rw",
                "size_available": 274932736,
                "size_total": 499355648,
                "uuid": "9f6c5bbf-cc6a-43a9-8fae-87e1dd35346c"
            },
            {
                "block_available": 38142548,
                "block_size": 4096,
                "block_total": 1393321310,
                "block_used": 1355178762,
                "device": "/dev/mapper/vgrear-lvrear",
                "fstype": "ext4",
                "inode_available": 278995857,
                "inode_total": 353894400,
                "inode_used": 74898543,
                "mount": "/REAR",
                "options": "rw",
                "size_available": 156231876608,
                "size_total": 5707044085760,
                "uuid": "52d76fb4-e32b-4f0f-a820-913d1ad732b8"
            },
            {
                "block_available": 33606733,
                "block_size": 4096,
                "block_total": 167683180,
                "block_used": 134076447,
                "device": "/dev/mapper/vgMySQL-lvbackupmysql",
                "fstype": "ext4",
                "inode_available": 42594658,
                "inode_total": 42598400,
                "inode_used": 3742,
                "mount": "/Backup_MySQL",
                "options": "rw",
                "size_available": 137653178368,
                "size_total": 686830305280,
                "uuid": "f0504c66-364a-4e3c-bd52-fb04d9ce1dbc"
            },
            {
                "block_available": 2942657,
                "block_size": 4096,
                "block_total": 8224220,
                "block_used": 5281563,
                "device": "/dev/mapper/vgall01-lvISO",
                "fstype": "ext4",
                "inode_available": 2097043,
                "inode_total": 2097152,
                "inode_used": 109,
                "mount": "/ISO",
                "options": "rw",
                "size_available": 12053123072,
                "size_total": 33686405120,
                "uuid": "3308ac24-1577-4db9-93e0-0fbaeeaf9c0d"
            },
            {
                "block_available": 116483,
                "block_size": 4096,
                "block_total": 369772,
                "block_used": 253289,
                "device": "/dev/mapper/vgall01-lvcg2html",
                "fstype": "ext4",
                "inode_available": 94384,
                "inode_total": 96000,
                "inode_used": 1616,
                "mount": "/cfg2html",
                "options": "rw",
                "size_available": 477114368,
                "size_total": 1514586112,
                "uuid": "4f34671b-4104-4e80-95e1-588bef862974"
            },
            {
                "block_available": 5391462,
                "block_size": 4096,
                "block_total": 25770312,
                "block_used": 20378850,
                "device": "/dev/mapper/vgrear-lvopenv",
                "fstype": "ext4",
                "inode_available": 6551271,
                "inode_total": 6553600,
                "inode_used": 2329,
                "mount": "/usr/openv",
                "options": "rw",
                "size_available": 22083428352,
                "size_total": 105555197952,
                "uuid": "ff996c91-4e5b-4208-8752-6877f2171ef2"
            },
            {
                "block_available": 13428522,
                "block_size": 4096,
                "block_total": 154782658,
                "block_used": 141354136,
                "device": "/dev/mapper/vgPostreSQL-lvbackuppostgres",
                "fstype": "ext4",
                "inode_available": 39307783,
                "inode_total": 39321600,
                "inode_used": 13817,
                "mount": "/BackupPostgreSQL",
                "options": "rw",
                "size_available": 55003226112,
                "size_total": 633989767168,
                "uuid": "2f70082b-de00-4138-b803-d9ddf4229637"
            },
            {
                "block_available": 244872551,
                "block_size": 4096,
                "block_total": 257998016,
                "block_used": 13125465,
                "device": "/dev/sdh",
                "fstype": "ext4",
                "inode_available": 65535989,
                "inode_total": 65536000,
                "inode_used": 11,
                "mount": "/CASOS_ESPECIALES",
                "options": "rw",
                "size_available": 1002997968896,
                "size_total": 1056759873536,
                "uuid": "a3aa6f96-7edf-4b8a-8915-1fdbba3ff6fe"
            }
        ],
        "ansible_nodename": "server1",
        "ansible_os_family": "RedHat",
        "ansible_pkg_mgr": "yum",
        "ansible_processor": [
            "0",
            "AuthenticAMD",
            "AMD Opteron(TM) Processor 6238"
        ],
        "ansible_processor_cores": 1,
        "ansible_processor_count": 1,
        "ansible_processor_threads_per_core": 1,
        "ansible_processor_vcpus": 1,
        "ansible_product_name": "VMware Virtual Platform",
        "ansible_product_serial": "NA",
        "ansible_product_uuid": "NA",
        "ansible_product_version": "None",
        "ansible_python": {
            "executable": "/usr/bin/python",
            "has_sslcontext": false,
            "type": "CPython",
            "version": {
                "major": 2,
                "micro": 6,
                "minor": 6,
                "releaselevel": "final",
                "serial": 0
            },
            "version_info": [
                2,
                6,
                6,
                "final",
                0
            ]
        },
        "ansible_python_version": "2.6.6",
        "ansible_real_group_id": 45005,
        "ansible_real_user_id": 8402895,
        "ansible_selinux": {
            "status": "disabled"
        },
        "ansible_selinux_python_present": true,
        "ansible_service_mgr": "upstart",
        "ansible_ssh_host_key_dsa_public": "AAAAB3NzaC1kc3MAAACBAIR0ZC1exbxG31Ulg8kb0SKxZaz42ocQ4SseWoaIGZpD+be8AaALccO4SsS8p8yj2klIznlAx3E6aYurxmnuEV++5SjbmsRL0F2fyAxx+t4Pc8blOdm+N635rCYvGR+rNtw0X8Lx9YLTFmiyr3OitOmnkRvF3OXezcK/CoAJtCVtAAAAFQCOh8ibRhyaba9a/tZmKlHK7wZpSQAAAIAERFpLppj/9rcDqM2S5XxmAn01/YgBK9QbISCbNpTQjtg7s8PBT+5JW//1mj4K3pNMOoJx47yRjCCPOlw6ns9dIgyxYl8ur6AOr4Hp4akmzg1SWvU3/ydNTSl8FHLr3+mFamEM0GhUrHXI90nRSXqqpfwYawJtZpUuuCdQMKyOtQAAAIAzHSAlHi8emeojUBm5o4+3d7ZJnpKUFwONHJ3xfJ8pfrrGh76abMRzzF89zNpUVAc8m1zyqYX17wNREc29RAF+B4wmKD6mbkaxS5w86yHUxj1aSA0mA2HRo5kurtqHRJfIusigDqGG51neN+RaPuxey+Bd1kjHuGiy85q7Ltq7Kg==",
        "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAABIwAAAQEAnoU9A1+CN7EFHDVSHMY6FnxXVA3xrjJeW3/WmjoWY2hPFF6UcP4iFf12t5nqCPK4mv5m9TAtviBtmI5rbeqw0JI0NSv0lYqLt8GCHly9vPguRp863QUAvqm9nRlOrdW/IGHhREpJYwXG2tfvgccVdLSbNkwctOvuRvg9DBhB1YxOemgQEi3Q742qi+3QkLZcEyoWiwLzxK/AW3u+R5e81z4Z9xL68aqOz2NJQ5QLSJ1iGgUU9nWoQMos+VywylFQKQCN1T8Q4MeoH+UVOZcskgrQylMcAsOvdbNsMQxU0nKGVkDNEugEM+X1K0xZOMu4nSHA+ezJ71/939xDihFdEw==",
        "ansible_swapfree_mb": 3334,
        "ansible_swaptotal_mb": 4095,
        "ansible_system": "Linux",
        "ansible_system_capabilities": [
            ""
        ],
        "ansible_system_capabilities_enforced": "True",
        "ansible_system_vendor": "VMware, Inc.",
        "ansible_uptime_seconds": 11415836,
        "ansible_user_dir": "/root/home/hpddpers",
        "ansible_user_gecos": "David Martinez,,personal,[email protected]",
        "ansible_user_gid": 45005,
        "ansible_user_id": "hpddpers",
        "ansible_user_shell": "/bin/bash",
        "ansible_user_uid": 8402895,
        "ansible_userspace_architecture": "x86_64",
        "ansible_userspace_bits": "64",
        "ansible_virtualization_role": "guest",
        "ansible_virtualization_type": "VMware",
        "gather_subset": [
            "all"
        ],
        "module_setup": true
    },
    "changed": false
}
[[email protected] Ansible]#

If we wanted to access any of these facts, we would reference the Ansible variable by its name. For example, ansible_distribution:

[[email protected] playbooks]# cat send_command_if_os.yml
- hosts: "{{ HOSTS }}"
  tasks:
  - name: Ejecutar comando
    command: "{{ COMANDO }}"
    when: ansible_distribution == 'CentOS' or ansible_distribution == 'RedHat'
[[email protected] playbooks]#
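
Before using a fact in a playbook, we can quickly check its value with the setup module's filter parameter. A minimal sketch, reusing the server1 host from the inventory (connection options as configured earlier):

ansible server1 -m setup -a "filter=ansible_distribution"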

Or we can work with variables from the Ansible inventory, for example with groups, combined with the JINJA2 syntax, which I discuss further below:

{% for host in groups['PRE'] + groups['WEBSERVERS'] %}

or

{% for host in groups['PRE']|union(groups['WEBSERVERS']) %}
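
As an illustration of this loop syntax, the following hypothetical template (hosts_list.j2) would write one "IP hostname" line per host of both groups, assuming facts have already been gathered for those hosts and that the PRE and WEBSERVERS groups exist in the inventory:

{# hosts_list.j2: one line per host in PRE or WEBSERVERS #}
{% for host in groups['PRE'] + groups['WEBSERVERS'] %}
{{ hostvars[host]['ansible_default_ipv4']['address'] }} {{ host }}
{% endfor %}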

We can also work with Ansible groups without using JINJA2. The YAML file would have to include the name of the groups that meet the condition, as follows:

when: ('PRE' in group_names) or ('WEBSERVERS' in group_names)

or

when: ['PRE', 'WEBSERVERS'] | intersect(group_names) | count > 0
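
A minimal playbook sketch using this condition, reusing the HOSTS and COMANDO variables from the earlier example:

- hosts: "{{ HOSTS }}"
  tasks:
  - name: Run only on hosts belonging to PRE or WEBSERVERS
    command: "{{ COMANDO }}"
    when: ('PRE' in group_names) or ('WEBSERVERS' in group_names)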

If we want to work with "if…else" conditionals, we can do that too. Let's look at an example:

Playbook source code

[[email protected] Ansible]# cat playbooks/post-provisioning/RHEL7/resolv.yml
- hosts: "{{ HOSTS }}"
  tasks:
  - name: Generar fichero resolv.conf
    template:
       src: /planific/bin/admsys/Ansible/playbooks/post-provisioning/RHEL7/config_templates/resolv.conf.j2
       dest: /etc/resolv.conf
       mode: '0644'
       backup: yes
    become: yes
[[email protected] Ansible]#

JINJA2 template source code

If the group listed in the Ansible inventory is PRO, we will configure different DNS servers in the resolv.conf file. This is where the if…else conditional comes in, as we can see below:

[[email protected] Ansible]# cat playbooks/post-provisioning/RHEL7/config_templates/resolv.conf.j2
{% if 'PRO' in group_names %}
search cpd1.intranet.gencat.cat cpd1pre.intranet.gencat.cat
nameserver 10.48.33.10
nameserver 10.48.33.11
nameserver 10.48.33.12
{% else  %}
search cpd1pre.intranet.gencat.cat cpd1.intranet.gencat.cat
nameserver 10.49.33.10
nameserver 10.49.33.11
nameserver 10.49.33.12
{% endif %}
[[email protected] Ansible]#

Test inventory configuration

Logically, the Ansible inventory must include the server group named "PRO":

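A minimal illustrative sketch of such an inventory; lansibd0 is the host targeted in the execution below:

[PRO]
lansibd0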

Playbook execution

Pay special attention to the variable HOSTS=PRO, which refers to the servers of the Ansible group called PRO in the inventory file:

[[email protected] Ansible]# ansible-playbook -e "HOSTS=PRO" --user hpddpers --ask-pass --become-method su --ask-become-pass -i inventory/david /planific/bin/admsys/Ansible/playbooks/post-provisioning/RHEL7/resolv.yml -vv
ansible-playbook 2.7.9
  config file = /planific/bin/admsys/Ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.6/site-packages/ansible
  executable location = /usr/bin/ansible-playbook
  python version = 2.6.6 (r266:84292, Aug  9 2016, 06:11:56) [GCC 4.4.7 20120313 (Red Hat 4.4.7-17)]
Using /planific/bin/admsys/Ansible/ansible.cfg as config file
SSH password:
SU password[defaults to SSH password]:
/planific/bin/admsys/Ansible/inventory/david did not meet host_list requirements, check plugin documentation if this is unexpected

PLAYBOOK: resolv.yml *****************************************************************************************************************************************************************************
1 plays in /planific/bin/admsys/Ansible/playbooks/post-provisioning/RHEL7/resolv.yml

PLAY [PRO] ***************************************************************************************************************************************************************************************

TASK [Gathering Facts] ***************************************************************************************************************************************************************************
task path: /planific/bin/admsys/Ansible/playbooks/post-provisioning/RHEL7/resolv.yml:1
ok: [lansibd0]
META: ran handlers

TASK [Generar fichero resolv.conf] ***************************************************************************************************************************************************************
task path: /planific/bin/admsys/Ansible/playbooks/post-provisioning/RHEL7/resolv.yml:3
changed: [lansibd0] => {"backup_file": "/etc/[email protected]:47:30~", "changed": true, "checksum": "d5c7b7b3e8b921aff6176f4bdc04d890f72c6b3f", "dest": "/etc/resolv.conf", "gid": 0, "group": "root", "md5sum": "d03d899fac7afaaf2e576db51e380672", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:net_conf_t:s0", "size": 129, "src": "/root/home/hpddpers/.ansible/tmp/ansible-tmp-1570171641.29-54998608965087/source", "state": "file", "uid": 0}
META: ran handlers
META: ran handlers

PLAY RECAP ***************************************************************************************************************************************************************************************
lansibd0                   : ok=2    changed=1    unreachable=0    failed=0

[[email protected] Ansible]#

We verify that the Playbook has configured the correct DNS servers:

[[email protected] ~]# cat /etc/resolv.conf
search cpd1.intranet.gencat.cat cpd1pre.intranet.gencat.cat
nameserver 10.48.33.10
nameserver 10.48.33.11
nameserver 10.48.33.12
[[email protected] ~]#

Hiding credentials in a YML file

In the previous playbooks we passed the SSH access variables on the command line or configured them directly in the server inventory, but this can be a security problem, especially if several users run the same playbooks, each with their own credentials.

To solve this, we can create a personalized credentials file with restrictive permissions so that no other user on the system can read it.

Credentials file

[[email protected] LUNs_PVs_Ansible]# cat root_credentials.yml
ansible_become: yes
ansible_become_method: su
ansible_become_user: root
ansible_become_pass: Contraseña_Secreta_de_root
[[email protected] LUNs_PVs_Ansible]#
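
The restrictive permissions mentioned above can be applied with chmod, for example (a sketch; the path is the one shown in the listing):

# Only the owner can read or modify the credentials file
chmod 600 /planific/bin/admsys/LUNs_PVs_Ansible/root_credentials.yml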

Playbook configuration

Inside the playbook we call the credentials file. For this example I have hardcoded the file I just created, but we can also configure it as a variable so that every user can point to their own credentials file (see the sketch after the listing below):

[[email protected] LUNs_PVs_Ansible]# cat inventario.yml
- hosts: "{{ HOSTS }}"

  tasks:
  - include_vars: /planific/bin/admsys/LUNs_PVs_Ansible/root_credentials.yml

  - name: Ejecutando luns.sh
    script: /planific/bin/admsys/LUNs_PVs_Ansible/luns.sh
    register: luns

  - debug: var={{ item }}
    with_items:
    - luns.stdout

  - local_action: copy content={{ luns.stdout }} dest="/planific/bin/admsys/LUNs_PVs_Ansible/luns.txt"

  - name: Ejecutando pvs.sh
    script: /planific/bin/admsys/LUNs_PVs_Ansible/pvs.sh
    register: pvs

  - debug: var={{ item }}
    with_items:
    - pvs.stdout

  - local_action: copy content={{ pvs.stdout }} dest="/planific/bin/admsys/LUNs_PVs_Ansible/pvs.txt"
[[email protected] LUNs_PVs_Ansible]#
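
A sketch of the parameterized variant mentioned above; CREDENTIALS_FILE is a hypothetical variable name, not part of the original playbook:

- hosts: "{{ HOSTS }}"
  tasks:
  - include_vars: "{{ CREDENTIALS_FILE }}"
  # ...the remaining tasks stay as in the listing above

Each user would then pass their own file on the command line, for example with -e "CREDENTIALS_FILE=/root/home/hpddpers/my_credentials.yml".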

Playbook execution

[[email protected] LUNs_PVs_Ansible]#  ansible-playbook -e "HOSTS=la01wai0.7376.1286.ecs.hp.com"  --user hpddpers --ask-pass -i /planific/bin/admsys/LUNs_PVs_Ansible/servers.txt /planific/bin/admsys/LUNs_PVs_Ansible/inventario.yml -vv
ansible-playbook 2.7.9
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.6/site-packages/ansible
  executable location = /usr/bin/ansible-playbook
  python version = 2.6.6 (r266:84292, Aug  9 2016, 06:11:56) [GCC 4.4.7 20120313 (Red Hat 4.4.7-17)]
Using /etc/ansible/ansible.cfg as config file
SSH password:
/planific/bin/admsys/LUNs_PVs_Ansible/servers.txt did not meet host_list requirements, check plugin documentation if this is unexpected
/planific/bin/admsys/LUNs_PVs_Ansible/servers.txt did not meet script requirements, check plugin documentation if this is unexpected
/planific/bin/admsys/LUNs_PVs_Ansible/servers.txt did not meet yaml requirements, check plugin documentation if this is unexpected

PLAYBOOK: inventario.yml *************************************************************************************************************************************************************************
1 plays in /planific/bin/admsys/LUNs_PVs_Ansible/inventario.yml

PLAY [la01wai0.7376.1286.ecs.hp.com] *************************************************************************************************************************************************************

TASK [Gathering Facts] ***************************************************************************************************************************************************************************
task path: /planific/bin/admsys/LUNs_PVs_Ansible/inventario.yml:1
ok: [la01wai0.7376.1286.ecs.hp.com]
META: ran handlers

TASK [include_vars] ******************************************************************************************************************************************************************************
task path: /planific/bin/admsys/LUNs_PVs_Ansible/inventario.yml:4
ok: [la01wai0.7376.1286.ecs.hp.com] => {"ansible_facts": {"ansible_become": true, "ansible_become_method": "su", "ansible_become_pass": "Contraseña_Secreta_de_root", "ansible_become_user": "root"}, "ansible_included_var_files": ["/planific/bin/admsys/LUNs_PVs_Ansible/root_credentials.yml"], "changed": false}

TASK [Ejecutando luns.sh] ************************************************************************************************************************************************************************
task path: /planific/bin/admsys/LUNs_PVs_Ansible/inventario.yml:6
changed: [la01wai0.7376.1286.ecs.hp.com] => {"changed": true, "rc": 0, "stderr": "Shared connection to la01wai0.7376.1286.ecs.hp.com closed.\r\n", "stderr_lines": ["Shared connection to la01wai0.7376.1286.ecs.hp.com closed."], "stdout": "\r\nla01wai0;/dev/sda;107.4;GB\r\nla01wai0;/dev/sdb;53.7;GB\r\n", "stdout_lines": ["", "la01wai0;/dev/sda;107.4;GB", "la01wai0;/dev/sdb;53.7;GB"]}

TASK [debug] *************************************************************************************************************************************************************************************
task path: /planific/bin/admsys/LUNs_PVs_Ansible/inventario.yml:10
ok: [la01wai0.7376.1286.ecs.hp.com] => (item=luns.stdout) => {
    "item": "luns.stdout",
    "luns.stdout": "\r\nla01wai0;/dev/sda;107.4;GB\r\nla01wai0;/dev/sdb;53.7;GB\r\n"
}

TASK [copy] **************************************************************************************************************************************************************************************
task path: /planific/bin/admsys/LUNs_PVs_Ansible/inventario.yml:14
ok: [la01wai0.7376.1286.ecs.hp.com -> localhost] => {"changed": false, "checksum": "204d5129a0f710160d1862fcb13214f750b54aa7", "dest": "/planific/bin/admsys/LUNs_PVs_Ansible/luns.txt", "gid": 0, "group": "root", "mode": "0644", "owner": "root", "path": "/planific/bin/admsys/LUNs_PVs_Ansible/luns.txt", "size": 57, "state": "file", "uid": 0}

TASK [Ejecutando pvs.sh] *************************************************************************************************************************************************************************
task path: /planific/bin/admsys/LUNs_PVs_Ansible/inventario.yml:16
changed: [la01wai0.7376.1286.ecs.hp.com] => {"changed": true, "rc": 0, "stderr": "Shared connection to la01wai0.7376.1286.ecs.hp.com closed.\r\n", "stderr_lines": ["Shared connection to la01wai0.7376.1286.ecs.hp.com closed."], "stdout": "\r\nla01wai0;/dev/sda2;vg00;99.80g;14.30g\r\nla01wai0;/dev/sdb1;vgserveis;50.00g;40.00g\r\n", "stdout_lines": ["", "la01wai0;/dev/sda2;vg00;99.80g;14.30g", "la01wai0;/dev/sdb1;vgserveis;50.00g;40.00g"]}

TASK [debug] *************************************************************************************************************************************************************************************
task path: /planific/bin/admsys/LUNs_PVs_Ansible/inventario.yml:20
ok: [la01wai0.7376.1286.ecs.hp.com] => (item=pvs.stdout) => {
    "item": "pvs.stdout",
    "pvs.stdout": "\r\nla01wai0;/dev/sda2;vg00;99.80g;14.30g\r\nla01wai0;/dev/sdb1;vgserveis;50.00g;40.00g\r\n"
}

TASK [copy] **************************************************************************************************************************************************************************************
task path: /planific/bin/admsys/LUNs_PVs_Ansible/inventario.yml:24
ok: [la01wai0.7376.1286.ecs.hp.com -> localhost] => {"changed": false, "checksum": "b704a769985d6ff170139b2e2cdc8d48dd9e4f92", "dest": "/planific/bin/admsys/LUNs_PVs_Ansible/pvs.txt", "gid": 0, "group": "root", "mode": "0644", "owner": "root", "path": "/planific/bin/admsys/LUNs_PVs_Ansible/pvs.txt", "size": 85, "state": "file", "uid": 0}
META: ran handlers
META: ran handlers

PLAY RECAP ***************************************************************************************************************************************************************************************
la01wai0.7376.1286.ecs.hp.com : ok=8    changed=2    unreachable=0    failed=0

[[email protected] LUNs_PVs_Ansible]# 

Ansible Tower

Ansible Tower is a web interface for managing Ansible visually.

Installing the Official Red Hat Version

I am going to install Ansible Tower on CentOS 7 Linux with 4 GB of RAM. The first thing I will do is install the corresponding yum packages:

yum -y install epel-release
yum -y install ansible vim curl

Next, I download the latest version of the product:

mkdir /tmp/ansibletower
cd /tmp/ansibletower/
curl -k -O https://releases.ansible.com/ansible-tower/setup/ansible-tower-setup-latest.tar.gz
tar xvf ansible-tower-setup-latest.tar.gz
cd ansible-tower-setup-3.5.2-1

We edit the inventory file information:

vi inventory

[[email protected] ansible-tower-setup-3.5.2-1]# cat inventory
[tower]
localhost ansible_connection=local

[database]

[all:vars]
admin_password='MyPassword'

pg_host=''
pg_port=''

pg_database='awx'
pg_username='awx'
pg_password='PgPassword'

rabbitmq_username=tower
rabbitmq_password='RBPassword'
rabbitmq_cookie=cookiemonster

# Isolated Tower nodes automatically generate an RSA key for authentication;
# To disable this behavior, set this value to false
# isolated_key_generation=true
[[email protected] ansible-tower-setup-3.5.2-1]#

We proceed with the installation of Ansible Tower:

./setup.sh

We set a console access password for the administrator user (admin):

[[email protected] ~]# awx-manage changepassword admin
Changing password for user 'admin'
Password:
Password (again):
Password changed successfully for user 'admin'
[[email protected] ~]#

Once the installation has finished, we will be able to access the Ansible Tower console over HTTPS:

Ansible Tower console

Installing the Free or Community Version (AWX) with Docker

As with the previous version, we install the necessary repositories and dependencies:

yum install -y epel-release

yum install -y yum-utils device-mapper-persistent-data lvm2 ansible git python-devel python-pip python-docker-py vim-enhanced

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce -y

We start the Docker service:

systemctl start docker
systemctl enable docker

We download the AWX repository from GitHub:

git clone https://github.com/ansible/awx.git
cd awx
git clone https://github.com/ansible/awx-logos.git

We configure the Ansible inventory file:

cd installer
vi inventory

[[email protected] installer]# grep -v "#" inventory |grep -v ^$
localhost ansible_connection=local ansible_python_interpreter="/usr/bin/env python"
[all:vars]
dockerhub_base=ansible
awx_secret_key=awxpassword
awx_task_hostname=awx
awx_web_hostname=awxweb
postgres_data_dir=/var/lib/pgdocker
host_port=80
host_port_ssl=443
docker_compose_dir=/var/lib/awx
pg_username=awx
pg_password=awxpass
pg_admin_password=postgrespass
pg_database=awx
pg_port=5432
rabbitmq_password=awxpass
rabbitmq_erlang_cookie=cookiemonster
admin_user=admin
admin_password=password
create_preload_data=True
secret_key=awxsecret
awx_official=true
project_data_dir=/var/lib/awx/projects
[[email protected] installer]#

We install Ansible Tower (AWX):

ansible-playbook -i inventory install.yml -vv

After the installation we will be able to access the console over HTTPS, as in the previous section.

Creating Ansible Playbooks with JINJA2

JINJA2 is a templating language compatible with Ansible, in which we can insert the following types of code (a combined mini-example follows this list):

  • {% … %} – Statements, such as if…else conditions and for loops
  • {{ … }} – Variable insertion (we have already seen this above)
  • {# … #} – Comments inside the code to describe what it does
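
A minimal template sketch combining the three constructs (illustrative only; ENV is the variable used in the example below):

{# example.j2: choose an environment label depending on ENV #}
{% if ENV == "PRO" %}
environment: production
{% else %}
environment: preproduction
{% endif %}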

To show its usefulness, we will create a template for the /etc/resolv.conf configuration file with conditions; that is, if it is for a production server, specific DNS servers will be used, and if not, different ones will be used.

To know which kind of environment we are going to work in, we will pass an Ansible variable that the JINJA2 template will read.

JINJA2 template source code

[[email protected] Ansible]# cat playbooks/resolv.j2
{% if ENV == "PRO" %}
nameserver 10.48.33.10
nameserver 10.48.33.11
nameserver 10.48.33.12
{% else %}
nameserver 10.49.33.10
nameserver 10.49.33.11
nameserver 10.49.33.12
{% endif %}
[[email protected] Ansible]#

Playbook source code

[[email protected] Ansible]# cat playbooks/resolv.yml
- hosts: "{{ HOSTS }}"
  tasks:
  - name: Generar fichero resolv.conf
    template:
       src: resolv.j2
       dest: /tmp/resolv.conf
[[email protected] Ansible]#

Playbook execution

[[email protected] Ansible]# ansible-playbook --extra-vars "HOSTS=lhpilox01 ENV=PRO" -i inventario/david playbooks/resolv.yml -vv
 [WARNING] Ansible is being run in a world writable directory (/planific/bin/admsys/Ansible), ignoring it as an ansible.cfg source. For more information see https://docs.ansible.com/ansible/devel/reference_appendices/config.html#cfg-in-world-writable-dir
ansible-playbook 2.7.9
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.6/site-packages/ansible
  executable location = /usr/bin/ansible-playbook
  python version = 2.6.6 (r266:84292, Aug  9 2016, 06:11:56) [GCC 4.4.7 20120313 (Red Hat 4.4.7-17)]
Using /etc/ansible/ansible.cfg as config file
/planific/bin/admsys/Ansible/inventario/david did not meet host_list requirements, check plugin documentation if this is unexpected

PLAYBOOK: resolv.yml *****************************************************************************************************************************************************************************
1 plays in playbooks/resolv.yml

PLAY [lhpilox01] *********************************************************************************************************************************************************************************

TASK [Gathering Facts] ***************************************************************************************************************************************************************************
task path: /planific/bin/admsys/Ansible/playbooks/resolv.yml:1
ok: [lhpilox01]
META: ran handlers

TASK [Generar fichero resolv.conf] ***************************************************************************************************************************************************************
task path: /planific/bin/admsys/Ansible/playbooks/resolv.yml:3
changed: [lhpilox01] => {"changed": true, "checksum": "4d9da048ea4febaa3beab549ce3f7c46e5b2b746", "dest": "./resolv.conf", "gid": 45005, "group": "uxsup3", "md5sum": "1cef5857e7d86192cf72ae83c3b2617b", "mode": "0644", "owner": "hpddpers", "size": 69, "src": "/root/home/hpddpers/.ansible/tmp/ansible-tmp-1569840462.78-109557576510887/source", "state": "file", "uid": 8402895}
META: ran handlers
META: ran handlers

PLAY RECAP ***************************************************************************************************************************************************************************************
lhpilox01                  : ok=2    changed=1    unreachable=0    failed=0

[[email protected] Ansible]#

Result

As we can see, we passed the variable ENV=PRO on the Playbook command line so that JINJA2 knows we are dealing with the production environment. As a result, the /tmp/resolv.conf file configured in the Playbook has been generated with the production DNS servers defined in the JINJA2 template:

[[email protected] Ansible]# cat /tmp/resolv.conf
nameserver 10.48.33.10
nameserver 10.48.33.11
nameserver 10.48.33.12
[[email protected] Ansible]#
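
If we instead pass any other value for ENV (for example ENV=TEST, a value chosen here purely for illustration), the template takes the else branch and the generated file would contain the 10.49.33.x resolvers:

ansible-playbook --extra-vars "HOSTS=lhpilox01 ENV=TEST" -i inventario/david playbooks/resolv.yml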

Calling Ansible variables from JINJA2 templates

From a JINJA2 template we can use Ansible variables. As an example, we will create the file /etc/sysconfig/network-scripts/route-eth0, where static routes are configured on Red Hat Linux systems.

In this file we will configure the same routes for all servers, except for the default gateway, which is not always the same, so we have to obtain that value from an Ansible variable.

Source code of the JINJA2 template

[[email protected] RHEL7]# cat /planific/bin/admsys/Ansible/playbooks/post-provisioning/RHEL7/config_templates/route-eth0.j2
ADDRESS0=15.84.40.0
NETMASK0=255.255.255.0
GATEWAY0={{ ansible_default_ipv4.gateway }}

We would configure all the remaining routes with the same syntax (ADDRESS1...).

As we can see, the GATEWAY0 field is filled by the Ansible variable that returns the default gateway already configured in the operating system.
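
If we want to check beforehand what value this fact holds on a given server, we can query it with Ansible's setup module in an ad-hoc command (shown here as an illustration, reusing the inventory from the previous example):

ansible lhpilox01 -i /planific/bin/admsys/Ansible/inventario/david -m setup -a "filter=ansible_default_ipv4"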

Source code of the Playbook

[[email protected] RHEL7]# cat addroutes.yml
- hosts: "{{ HOSTS }}"
  tasks:
  - name: Generar fichero route-eth0
    template:
       src: /planific/bin/admsys/Ansible/playbooks/post-provisioning/RHEL7/config_templates/route-eth0.j2
       dest: /etc/sysconfig/network-scripts/route-eth0
       mode: '0644'
       backup: yes
  - name: restart Network
    systemd:
       name: network
       state: restarted
  become: yes
[[email protected] RHEL7]# 
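
We would run this playbook the same way as the resolv.conf example, passing the target hosts as an extra variable (a sketch under the same assumptions as before, launched from the RHEL7 directory):

ansible-playbook --extra-vars "HOSTS=lhpilox01" -i /planific/bin/admsys/Ansible/inventario/david addroutes.yml

Note that become: yes is a play-level keyword here, so both tasks (writing the file and restarting the network service) run with privilege escalation.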
