What is ElasticSearch?
ElasticSearch is an open-source NoSQL database service designed to run searches across large volumes of data.
That is why it is widely used in Big Data environments, where many different data sources feed in tons of information, and ElasticSearch is a high-performance solution for quickly extracting exactly the information we are looking for.
Its data is structured as text documents in JSON format.
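For example, a single document stored in ElasticSearch (a made-up record, just to illustrate the format) could look like this:
{ "title": "ElasticSearch for Big Data", "author": "David", "published": "2019-06-05" }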
Installing ElasticSearch
I am going to set up a test environment on a CentOS 7 Linux server to install ElasticSearch.
Installing Java
ElasticSearch is written in Java, so we need to install it for the product to work.
yum install java-1.8.0-openjdk-devel -y
[root@server1 ~]# java -version
openjdk version "1.8.0_212"
OpenJDK Runtime Environment (build 1.8.0_212-b04)
OpenJDK 64-Bit Server VM (build 25.212-b04, mixed mode)
[root@server1 ~]#
Installing ElasticSearch on Linux CentOS 7
Once Java is installed, we can install the database itself, as I explain below:
Configuring the software repository
[root@server1 ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
[root@server1 ~]#
[root@server1 ~]# cat /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
[root@server1 ~]#
Installing ElasticSearch with yum
yum install elasticsearch -y
[root@server1 ~]# rpm -qa |grep -i elasticsearch
elasticsearch-7.1.1-1.x86_64
[root@server1 ~]#
Enabling the service
[root@server1 ~]# systemctl enable elasticsearch.service
Created symlink from /etc/systemd/system/multi-user.target.wants/elasticsearch.service to /usr/lib/systemd/system/elasticsearch.service.
[root@server1 ~]# systemctl start elasticsearch.service
[root@server1 ~]# systemctl status elasticsearch.service
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2019-06-05 08:54:12 CEST; 7s ago
Docs: http://www.elastic.co
Main PID: 8307 (java)
Tasks: 18
CGroup: /system.slice/elasticsearch.service
├─8307 /bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Des.networkaddress.cache.ttl=60 -Des.networkaddress...
└─8376 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller
Jun 05 08:54:12 server1 systemd[1]: Started Elasticsearch.
[root@server1 ~]#
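Optionally, we can verify that the service is listening on its default ports, 9200 (HTTP REST API) and 9300 (node-to-node transport). A quick check, whose exact output will vary from system to system:
ss -tlnp | grep -E '9200|9300'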
Running a quick test after the installation
I will hit the ElasticSearch API with curl:
[root@server1 ~]# curl -s http://localhost:9200
{
"name" : "5uhXUnJ",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "Z--VOxHFRmivIuiCoPG0Lw",
"version" : {
"number" : "6.8.0",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "65b6179",
"build_date" : "2019-05-15T20:06:13.172855Z",
"build_snapshot" : false,
"lucene_version" : "7.7.0",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
[root@server1 ~]#
Configuration files
The ElasticSearch configuration files live in /etc/elasticsearch. A common task, for security reasons, is to restrict the addresses the service listens on. To do that, we set the "network.host" field in /etc/elasticsearch/elasticsearch.yml to the network range we are interested in and restart the ElasticSearch service.
[root@server1 ~]# cat /etc/elasticsearch/elasticsearch.yml |grep network
#network.host: 192.168.0.1
# For more information, consult the network module documentation.
[root@server1 ~]#
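A minimal sketch of that change, assuming we want ElasticSearch to listen on a hypothetical private address 192.168.1.50 in addition to localhost:
# /etc/elasticsearch/elasticsearch.yml
network.host: ["192.168.1.50", "localhost"]
After saving the file, we restart the service so the change takes effect:
systemctl restart elasticsearch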
Inserting a document into the ElasticSearch database
We insert a document whose ID will be 1:
[root@server1 ~]# curl -XPOST -H "Content-Type: application/json" "localhost:9200/article/news/1" -d '{ "title" : "David" }'
{"_index":"article","_type":"news","_id":"1","_version":1,"result":"created","_shards":{"total":2,"successful":1,"failed":0},"_seq_no":0,"_primary_term":1}[root@server1 ~]#
[root@server1 ~]#
If I wanted to create documents related to another project, for example Twitter, the URL would be:
curl -XPOST -H "Content-Type: application/json" "localhost:9200/article/twitter/1" -d '{ "title" : "Trending Topic" }'
If we don't want to set the ID manually, we can also use this other syntax:
[root@server1 ~]# curl -XPOST -H "Content-Type: application/json" 'localhost:9200/article/news?' -d '{ "title" : "David Garcia" }'
{"_index":"article","_type":"news","_id":"zAe0JmsBqAv9hEcDnq5E","_version":1,"result":"created","_shards":{"total":2,"successful":1,"failed":0},"_seq_no":0,"_primary_term":1}[root@server1 ~]#
[root@server1 ~]#
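As a side note, mapping types such as news are deprecated in ElasticSearch 7.x, so the same insert can also be written against the generic _doc endpoint (a sketch; this call was not part of the original session):
curl -XPOST -H "Content-Type: application/json" "localhost:9200/article/_doc/1" -d '{ "title" : "David" }'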
Retrieving a document from the ElasticSearch database by its ID
We query the document we inserted earlier with ID 1:
[root@server1 ~]# curl -XGET -H "Content-Type: application/json" 'localhost:9200/article/news/1'
{"_index":"article","_type":"news","_id":"1","_version":1,"_seq_no":0,"_primary_term":1,"found":true,"_source":{ "title" : "David" }}[root@server1 ~]#
[root@server1 ~]#
Deleting a document from the ElasticSearch database by its ID
[root@server1 ~]# curl -XDELETE -H "Content-Type: application/json" 'http://localhost:9200/article/news/1'
{"_index":"article","_type":"news","_id":"1","_version":2,"result":"deleted","_shards":{"total":2,"successful":1,"failed":0},"_seq_no":1,"_primary_term":1}[root@server1 ~]#
[root@server1 ~]#
If we query it again, it no longer exists:
[root@server1 ~]# curl -XGET -H "Content-Type: application/json" 'localhost:9200/article/news/1'
{"_index":"article","_type":"news","_id":"1","found":false}[root@server1 ~]#
[root@server1 ~]#
Updating a document in the ElasticSearch database by its ID
Before updating the document, I run the insert command again, since I deleted it earlier. And now we update it:
[root@server1 ~]# curl -XPOST -H "Content-Type: application/json" 'localhost:9200/article/news/1/_update?' -d'
> {
> "doc": { "title": "David Martinez" }
> }'
{"_index":"article","_type":"news","_id":"1","_version":2,"result":"updated","_shards":{"total":2,"successful":1,"failed":0},"_seq_no":3,"_primary_term":1}[root@server1 ~]#
[root@server1 ~]#
If I run the query again, we can see it has been updated:
[root@server1 ~]# curl -XGET -H "Content-Type: application/json" 'localhost:9200/article/news/1'
{"_index":"article","_type":"news","_id":"1","_version":2,"_seq_no":3,"_primary_term":1,"found":true,"_source":{"title":"David Martinez"}}[root@server1 ~]#
Searching by keyword in the ElasticSearch database
In this case, we are going to look for all documents that contain the keyword "David".
[root@server1 ~]# curl -XGET -H "Content-Type: application/json" 'localhost:9200/article/_search?q=David'
{"took":5,"timed_out":false,"_shards":{"total":5,"successful":5,"skipped":0,"failed":0},"hits":{"total":2,"max_score":0.6931472,"hits":[{"_index":"article","_type":"news","_id":"zQe2JmsBqAv9hEcDTK7U","_score":0.6931472,"_source":{ "title" : "David Garcia" }},{"_index":"article","_type":"news","_id":"1","_score":0.2876821,"_source":{"title":"David Martinez"}}]}}[root@server1 ~]#
[root@server1 ~]#
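The same search can also be expressed as a JSON body using the Query DSL, which is more flexible than the ?q= shortcut (a sketch; the response is analogous to the one above):
curl -XGET -H "Content-Type: application/json" 'localhost:9200/article/_search?pretty' -d '{ "query": { "match": { "title": "David" } } }'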
Deleting an ElasticSearch database
For this example, we are going to delete the "article" database (index):
[root@server1 ~]# curl -XDELETE 'localhost:9200/article'
{"acknowledged":true}[root@server1 ~]#
If we try to query any document, we no longer can:
[root@server1 ~]# curl -XGET -H "Content-Type: application/json" 'localhost:9200/article/_search?q=David'
{"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index","resource.type":"index_or_alias","resource.id":"article","index_uuid":"_na_","index":"article"}],"type":"index_not_found_exception","reason":"no such index","resource.type":"index_or_alias","resource.id":"article","index_uuid":"_na_","index":"article"},"status":404}[root@server1 ~]#
[root@server1 ~]#
Backing up the ElasticSearch database
We configure the directory where the backups will be stored:
[root@server1 ~]# grep repo /etc/elasticsearch/elasticsearch.yml
path.repo: ["/backup/elasticsearch"]
[root@server1 ~]#
[root@server1 ~]# curl -XPUT 'localhost:9200/_snapshot/my_backup?pretty' -H 'Content-Type: application/json' -d '{ "type": "fs", "settings": { "location": "/backup/elasticsearch" } }'
{
"acknowledged" : true
}
[root@server1 ~]#
We create a snapshot of the database:
[root@server1 ~]# curl -X PUT localhost:9200/_snapshot/my_backup/snapshot_1?pretty -H 'Content-Type: application/json'
{
"accepted" : true
}
[root@server1 ~]#
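By default the snapshot request returns immediately with "accepted" : true and the copy runs in the background. If we prefer the call to block until the snapshot has finished, we can add the wait_for_completion parameter (a sketch with a hypothetical snapshot name):
curl -X PUT 'localhost:9200/_snapshot/my_backup/snapshot_2?wait_for_completion=true&pretty' -H 'Content-Type: application/json'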
We check that the corresponding backup files have been generated:
[root@server1 ~]# ll /backup/elasticsearch/
total 36
-rw-r--r-- 1 elasticsearch elasticsearch 92 Jun 5 11:13 index-0
-rw-r--r-- 1 elasticsearch elasticsearch 8 Jun 5 11:13 index.latest
-rw-r--r-- 1 elasticsearch elasticsearch 22627 Jun 5 11:13 meta--Ckr0WAySsWLXmvY7Z-WXg.dat
-rw-r--r-- 1 elasticsearch elasticsearch 236 Jun 5 11:13 snap--Ckr0WAySsWLXmvY7Z-WXg.dat
[root@server1 ~]#
We list the snapshots that have been created:
[root@server1 ~]# curl -X GET localhost:9200/_snapshot/my_backup/_all?pretty -H 'Content-Type: application/json'
{
"snapshots" : [
{
"snapshot" : "snapshot_1",
"uuid" : "-Ckr0WAySsWLXmvY7Z-WXg",
"version_id" : 7010199,
"version" : "7.1.1",
"indices" : [ ],
"include_global_state" : true,
"state" : "SUCCESS",
"start_time" : "2019-06-05T09:13:50.178Z",
"start_time_in_millis" : 1559726030178,
"end_time" : "2019-06-05T09:13:50.275Z",
"end_time_in_millis" : 1559726030275,
"duration_in_millis" : 97,
"failures" : [ ],
"shards" : {
"total" : 0,
"failed" : 0,
"successful" : 0
}
}
]
}
[root@server1 ~]#
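If we only want the details of a specific snapshot, or its progress while it is still running, we can query it by name (standard snapshot API calls, not captured in the original session):
curl -X GET 'localhost:9200/_snapshot/my_backup/snapshot_1?pretty'
curl -X GET 'localhost:9200/_snapshot/my_backup/snapshot_1/_status?pretty'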
If we want to delete the snapshot, we run the following command:
curl -X DELETE localhost:9200/_snapshot/my_backup/snapshot_1?pretty -H 'Content-Type: application/json'
Restoring a snapshot
[root@server1 ~]# curl -XPOST 'localhost:9200/_snapshot/my_backup/snapshot_1/_restore'
{"snapshot":{"snapshot":"snapshot_1","indices":[],"shards":{"total":0,"failed":0,"successful":0}}}[root@server1 ~]#
[root@server1 ~]#
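The restore endpoint also accepts a body to restore only specific indices instead of the whole snapshot; a sketch, assuming the snapshot contains the article index:
curl -XPOST 'localhost:9200/_snapshot/my_backup/snapshot_1/_restore?pretty' -H 'Content-Type: application/json' -d '{ "indices": "article", "include_global_state": false }'
Keep in mind that an existing open index has to be closed or deleted before a snapshot can be restored over it.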
Configuring an ElasticSearch cluster
In production environments it is very important to configure high availability so that a hardware failure or the loss of one of the servers is completely transparent to the end user, because there will always be a server with the service running.
ElasticSearch works by replicating data across all the nodes of the cluster every time a document is inserted, updated or deleted.
The nodes do not share a single LUN; each one has its own independent disks and the data is replicated over the network.
We are going to build a two-node cluster: a master node (elkbnmaster) and a data node (elkbn). Both hostnames resolve via /etc/hosts:
10.0.1.193 elkbn
10.0.1.105 elkbnmaster
To do this, we edit the ElasticSearch configuration file on each node (/etc/elasticsearch/elasticsearch.yml) as follows:
Master
#give your cluster a name.
cluster.name: ElasticCluster
#give your nodes a name (change node number from node to node).
node.name: "elkbnmaster"
#define node 1 as master-eligible:
node.master: true
node.data: false
#enter the private IP and port of your node:
network.host: ["elkbnmaster", "localhost"]
http.port: 9200
cluster.initial_master_nodes: ["elkbnmaster"]
#detail the private IPs of your nodes:
discovery.zen.ping.unicast.hosts: ["elkbn", "elkbnmaster"]
Data node
#give your cluster a name.
cluster.name: ElasticCluster
#give your nodes a name (change node number from node to node).
node.name: "elkbn"
#define node 2 as a data-only node:
node.master: false
node.data: true
#enter the private IP and port of your node:
network.host: ["elkbn", "localhost"]
http.port: 9200
cluster.initial_master_nodes: ["elkbnmaster"]
#detail the private IPs of your nodes:
discovery.zen.ping.unicast.hosts: ["elkbn", "elkbnmaster"]
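Note that discovery.zen.ping.unicast.hosts still works in 7.x but is deprecated; the equivalent 7.x setting would be the following (a sketch, not used in my setup):
discovery.seed_hosts: ["elkbn", "elkbnmaster"]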
Once both configuration files are in place, we restart the ElasticSearch service on both nodes with the following commands:
Master node:
systemctl stop elasticsearch
systemctl start elasticsearch
Data node:
systemctl stop elasticsearch
systemctl start elasticsearch
We can check the cluster status with the following command run from the master:
[root@elkbnmaster ~]# curl 'localhost:9200/_cat/health?v'
epoch timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1561709520 08:12:00 ElasticCluster yellow 2 1 2 2 0 0 2 0 - 50.0%
[root@elkbnmaster ~]#
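We can also list the cluster members and see which one is the elected master (marked with an asterisk in the master column); this _cat endpoint was not captured in the original session:
curl 'localhost:9200/_cat/nodes?v'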
If we want more detailed information:
[root@elkbnmaster ~]# curl -XGET 'http://localhost:9200/_cluster/state?pretty' |more
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{
"cluster_name" : "ElasticCluster",
"cluster_uuid" : "qU2DLWbiRS6rbwTEoSs06w",
"version" : 37,
"state_uuid" : "-wFPfY9eRq2QJI5HMHEINQ",
"master_node" : "7jxbP8c7TS2Zeo4Xr0G0BQ",
"blocks" : { },
"nodes" : {
"7jxbP8c7TS2Zeo4Xr0G0BQ" : {
"name" : "elkbnmaster",
"ephemeral_id" : "6U-y3GgHRyyhoAxfUnu2Dg",
"transport_address" : "10.0.1.105:9300",
"attributes" : {
"ml.machine_memory" : "3971977216",
"xpack.installed" : "true",
"ml.max_open_jobs" : "20"
}
},
"u7GsXNtGSbaGsv3AP_eU5Q" : {
"name" : "elkbn",
"ephemeral_id" : "52EoN4OfS_2SIObNcLWQuA",
"transport_address" : "10.0.1.193:9300",
"attributes" : {
"ml.machine_memory" : "3971964928",
"ml.max_open_jobs" : "20",
"xpack.installed" : "true"
}
}
},
Now let's repeat the insert and query operations on one of the nodes to check that everything works normally:
[root@elkbn ~]# curl -XPOST -H "Content-Type: application/json" "localhost:9200/article/news/1" -d '{ "title" : "David" }'
{"_index":"article","_type":"news","_id":"1","_version":1,"result":"created","_shards":{"total":2,"successful":1,"failed":0},"_seq_no":0,"_primary_term":1}[root@elkbn ~]#
[root@elkbn ~]#
[root@elkbn ~]# curl -XGET -H "Content-Type: application/json" '10.0.1.193:9200/article/_search?q=David'
{"took":29,"timed_out":false,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0},"hits":{"total":{"value":1,"relation":"eq"},"max_score":0.2876821,"hits":[{"_index":"article","_type":"news","_id":"1","_score":0.2876821,"_source":{ "title" : "David" }}]}}[root@elkbn ~]#
[root@elkbn ~]#
Adding a new data node to the cluster
If we want to grow the cluster by adding a new data node, we create a new server and follow the ElasticSearch installation procedure described earlier in this guide.
For this test we are going to add the server elkbn2 and add its corresponding entry to the /etc/hosts file of the three cluster nodes:
10.0.1.193 elkbn
10.0.1.45 elkbn2
10.0.1.105 elkbnmaster
The ElasticSearch configuration file on the new node would look like this:
[root@elkbn2 ~]# grep -v "#" /etc/elasticsearch/elasticsearch.yml
cluster.name: ElasticCluster
node.name: elkbn2
node.master: false
node.data: true
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: ["elkbn2", "localhost"]
http.port: 9200
cluster.initial_master_nodes: ["elkbnmaster"]
discovery.zen.ping.unicast.hosts: ["elkbn", "elkbn2", "elkbnmaster"]
[root@elkbn2 ~]#
On the elkbn and elkbnmaster nodes, we would only modify the following line, adding the new node:
discovery.zen.ping.unicast.hosts: ["elkbn", "elkbn2", "elkbnmaster"]
Then we restart the ElasticSearch service, first on the master and then on the two data nodes:
systemctl restart elasticsearch
If we check the cluster status again, we can see that the three nodes now appear:
[root@elkbnmaster ~]# curl 'localhost:9200/_cat/health?v'
epoch timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1561711184 08:39:44 ElasticCluster green 3 2 6 3 0 0 0 0 - 100.0%
[root@elkbnmaster ~]#
[root@elkbnmaster ~]# curl -XGET 'http://localhost:9200/_cluster/state?pretty' |more
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{
"cluster_name" : "ElasticCluster",
"cluster_uuid" : "qU2DLWbiRS6rbwTEoSs06w",
"version" : 53,
"state_uuid" : "-H68f96XSLS8iV0O1vDwAg",
"master_node" : "7jxbP8c7TS2Zeo4Xr0G0BQ",
"blocks" : { },
"nodes" : {
"7jxbP8c7TS2Zeo4Xr0G0BQ" : {
"name" : "elkbnmaster",
"ephemeral_id" : "bvr-MSmXRneox8DtRQBHUw",
"transport_address" : "10.0.1.105:9300",
"attributes" : {
"ml.machine_memory" : "3971977216",
"xpack.installed" : "true",
"ml.max_open_jobs" : "20"
}
},
"u7GsXNtGSbaGsv3AP_eU5Q" : {
"name" : "elkbn",
"ephemeral_id" : "SQR4Qhm5TceHs6U-DPS7OQ",
"transport_address" : "10.0.1.193:9300",
"attributes" : {
"ml.machine_memory" : "3971964928",
"ml.max_open_jobs" : "20",
"xpack.installed" : "true"
}
},
"cN3Z2uBCRWKGjZXjazsDnA" : {
"name" : "elkbn2",
"ephemeral_id" : "ndUVKEjqRKSxS75k8gAlgw",
"transport_address" : "10.0.1.45:9300",
"attributes" : {
"ml.machine_memory" : "1925652480",
"ml.max_open_jobs" : "20",
"xpack.installed" : "true"
}
}
},
Now let's check that we can access the previously inserted document by pointing at both ElasticSearch data nodes (elkbn and elkbn2):
[root@elkbnmaster ~]# curl -XGET -H "Content-Type: application/json" 'elkbn:9200/article/_search?q=David'
{"took":30,"timed_out":false,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0},"hits":{"total":{"value":1,"relation":"eq"},"max_score":0.2876821,"hits":[{"_index":"article","_type":"news","_id":"1","_score":0.2876821,"_source":{ "title" : "David" }}]}}[root@elkbnmaster ~]#
[root@elkbnmaster ~]#
[root@elkbnmaster ~]# curl -XGET -H "Content-Type: application/json" 'elkbn2:9200/article/_search?q=David'
{"took":73,"timed_out":false,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0},"hits":{"total":{"value":1,"relation":"eq"},"max_score":0.2876821,"hits":[{"_index":"article","_type":"news","_id":"1","_score":0.2876821,"_source":{ "title" : "David" }}]}}[root@elkbnmaster ~]#
[root@elkbnmaster ~]#
Obviously, if we insert a new document from any node, we can also query it from any of the nodes.
We index the value "Pepe" (same document ID 1) from elkbn2:
[root@elkbn2 ~]# curl -XPOST -H "Content-Type: application/json" "localhost:9200/article/news/1" -d '{ "title" : "Pepe" }'
{"_index":"article","_type":"news","_id":"1","_version":2,"result":"updated","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":1,"_primary_term":2}[root@elkbn2 ~]#
[root@elkbn2 ~]#
We query it from the elkbn node:
[root@elkbn ~]# curl -XGET -H "Content-Type: application/json" 'elkbn:9200/article/_search?q=Pepe'
{"took":386,"timed_out":false,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0},"hits":{"total":{"value":1,"relation":"eq"},"max_score":0.2876821,"hits":[{"_index":"article","_type":"news","_id":"1","_score":0.2876821,"_source":{ "title" : "Pepe" }}]}}[root@elkbn ~]#
[root@elkbn ~]#
And from the elkbn2 node:
[root@elkbn ~]# curl -XGET -H "Content-Type: application/json" 'elkbn2:9200/article/_search?q=Pepe'
{"took":7,"timed_out":false,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0},"hits":{"total":{"value":1,"relation":"eq"},"max_score":0.2876821,"hits":[{"_index":"article","_type":"news","_id":"1","_score":0.2876821,"_source":{ "title" : "Pepe" }}]}}[root@elkbn ~]#
[root@elkbn ~]#
What happens if the master server goes down?
When I stopped the ElasticSearch service on the master node (to simulate an incident), I was still able to query data but not insert new documents:
[root@elkbn2 ~]# curl -XGET -H "Content-Type: application/json" 'elkbn:9200/article/_search?q=Pepe'
{"took":6,"timed_out":false,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0},"hits":{"total":{"value":1,"relation":"eq"},"max_score":0.2876821,"hits":[{"_index":"article","_type":"news","_id":"1","_score":0.2876821,"_source":{ "title" : "Pepe" }}]}}[root@elkbn2 ~]#
[root@elkbn2 ~]#
[root@elkbn2 ~]# curl -XPOST -H "Content-Type: application/json" "localhost:9200/article/news/1" -d '{ "title" : "Manuel" }'
{"error":{"root_cause":[{"type":"cluster_block_exception","reason":"blocked by: [SERVICE_UNAVAILABLE/2/no master];"}],"type":"cluster_block_exception","reason":"blocked by: [SERVICE_UNAVAILABLE/2/no master];"},"status":503}[root@elkbn2 ~]#
[root@elkbn2 ~]#
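A common way to avoid this single point of failure is to make the data nodes master-eligible as well, so that a majority of master-eligible nodes survives the loss of any single server. A minimal sketch of the change on elkbn and elkbn2, followed by a rolling restart (this is an assumption on my part, not part of the original setup):
# /etc/elasticsearch/elasticsearch.yml on elkbn and elkbn2
node.master: true
node.data: true
systemctl restart elasticsearch
With three master-eligible nodes, the remaining two can still form a quorum and elect a new master when one server goes down, so both reads and writes keep working.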
I hope you enjoyed the tutorial and, above all, that you find it useful.