1. Elasticsearch 6.x full-text search: an introduction to its distributed features.

  1) Elasticsearch supports clustering out of the box and is a distributed system; the main benefits are twofold.
    a. Larger system capacity (memory, disk), so that an ES cluster can hold PB-scale data.
    b. Higher system availability: even if some nodes stop serving, the cluster as a whole keeps working.
  2) An Elasticsearch cluster consists of multiple ES instances.
    a. Different clusters are told apart by the cluster name, which is set with cluster.name and defaults to elasticsearch.
    b. Each ES instance is essentially a JVM process with its own name, set with node.name.
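
Either setting can be checked quickly against a running instance (a minimal sketch; localhost and the default HTTP port 9200 are assumed here, adjust to your environment):

# The root endpoint returns "cluster_name" (default: elasticsearch) and "name" (the node name)
curl -s 'http://localhost:9200'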

2. Installing and running cerebro. GitHub: https://github.com/lmenezes/cerebro. Download, extract and deploy it as follows.

[root@slaver4 package]# wget https://github.com/lmenezes/cerebro/releases/download/v0.7.2/cerebro-0.7.2.tgz
--2019-11-01 09:20:22--  https://github.com/lmenezes/cerebro/releases/download/v0.7.2/cerebro-0.7.2.tgz
正在解析主机 github.com (github.com)... 13.250.177.223
正在连接 github.com (github.com)|13.250.177.223|:443... 已连接。
已发出 HTTP 请求,正在等待回应... 302 Found
位置:https://github-production-release-asset-2e65be.s3.amazonaws.com/54560347/a5bf160e-d454-11e7-849b-758511101a2f?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20191101%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20191101T012023Z&X-Amz-Expires=300&X-Amz-Signature=8b121e4e2a72d997441ebf78e2d8bea9deeeb322d1a3fbc676bc8398099b73a3&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Dcerebro-0.7.2.tgz&response-content-type=application%2Foctet-stream [跟随至新的 URL]
--2019-11-01 09:20:23--  https://github-production-release-asset-2e65be.s3.amazonaws.com/54560347/a5bf160e-d454-11e7-849b-758511101a2f?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20191101%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20191101T012023Z&X-Amz-Expires=300&X-Amz-Signature=8b121e4e2a72d997441ebf78e2d8bea9deeeb322d1a3fbc676bc8398099b73a3&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Dcerebro-0.7.2.tgz&response-content-type=application%2Foctet-stream
正在解析主机 github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)... 52.216.130.123
正在连接 github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)|52.216.130.123|:443... 已连接。
已发出 HTTP 请求,正在等待回应... 200 OK
长度:52121825 (50M) [application/octet-stream]
正在保存至: “cerebro-0.7.2.tgz”

100%[======================================================================================================================================================================================>] 52,121,825  1.34MB/s 用时 30s

2019-11-01 09:20:55 (1.65 MB/s) - 已保存 “cerebro-0.7.2.tgz” [52121825/52121825])

[root@slaver4 package]# ls
cerebro-0.7.2.tgz  elasticsearch-6.7.0.tar.gz  erlang-solutions-1.0-1.noarch.rpm  filebeat-6.7.0-linux-x86_64.tar.gz  kibana-6.7.0-linux-x86_64.tar.gz  logstash-6.7.0.tar.gz  materials  rabbitmq-server-3.5.1-1.noarch.rpm
[root@slaver4 package]# tar -zxvf cerebro-0.7.2.tgz -C /home/hadoop/soft/

Hand the cerebro directory over to the newly created elsearch user and group (a recursive chown -R also covers the files inside, if needed).

[root@slaver4 package]# cd ../soft/
[root@slaver4 soft]# ls
cerebro-0.7.2  elasticsearch-6.7.0  filebeat-6.7.0-linux-x86_64  kibana-6.7.0-linux-x86_64  logstash-6.7.0
[root@slaver4 soft]# chown elsearch:elsearch cerebro-0.7.2/
[root@slaver4 soft]# su elsearch
[elsearch@slaver4 soft]$ ls
cerebro-0.7.2  elasticsearch-6.7.0  filebeat-6.7.0-linux-x86_64  kibana-6.7.0-linux-x86_64  logstash-6.7.0
[elsearch@slaver4 soft]$ ll
总用量 0
drwxr-xr-x.  5 elsearch elsearch  57 11月 28 2017 cerebro-0.7.2
drwxr-xr-x.  9 elsearch elsearch 155 10月 25 15:09 elasticsearch-6.7.0
drwxr-xr-x.  6 elsearch elsearch 252 10月 26 11:22 filebeat-6.7.0-linux-x86_64
drwxr-xr-x. 13 elsearch elsearch 246 10月 25 16:13 kibana-6.7.0-linux-x86_64
drwxr-xr-x. 12 elsearch elsearch 255 10月 26 14:37 logstash-6.7.0
[elsearch@slaver4 soft]$

With a JVM (i.e. Java) installed, cerebro can be started right away. My first start reported the error below, so the log file path needs to be adjusted.

[elsearch@slaver4 bin]$ ./cerebro
09:27:05,150 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback.groovy]
09:27:05,151 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback-test.xml]
09:27:05,151 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Found resource [logback.xml] at [file:/home/hadoop/soft/cerebro-0.7.2/conf/logback.xml]
09:27:05,363 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - debug attribute not set
09:27:05,371 |-INFO in ch.qos.logback.core.joran.action.ConversionRuleAction - registering conversion word coloredLevel with class [play.api.libs.logback.ColoredLevel]
09:27:05,371 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.FileAppender]
09:27:05,381 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [FILE]
09:27:05,393 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
09:27:05,427 |-INFO in ch.qos.logback.core.FileAppender[FILE] - File property is set to [./logs/application.log]
09:27:05,428 |-ERROR in ch.qos.logback.core.FileAppender[FILE] - Failed to create parent directories for [/home/hadoop/soft/cerebro-0.7.2/./logs/application.log]
09:27:05,429 |-ERROR in ch.qos.logback.core.FileAppender[FILE] - openFile(./logs/application.log,true) call failed. java.io.FileNotFoundException: ./logs/application.log (没有那个文件或目录)
    java.io.FileNotFoundException: ./logs/application.log (没有那个文件或目录)
        at java.io.FileOutputStream.open0(Native Method)
        at java.io.FileOutputStream.open(FileOutputStream.java:270)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
        at ch.qos.logback.core.recovery.ResilientFileOutputStream.<init>(ResilientFileOutputStream.java:26)
        at ch.qos.logback.core.FileAppender.openFile(FileAppender.java:186)
        at ch.qos.logback.core.FileAppender.start(FileAppender.java:121)
        at ch.qos.logback.core.joran.action.AppenderAction.end(AppenderAction.java:90)
        at ch.qos.logback.core.joran.spi.Interpreter.callEndAction(Interpreter.java:309)
        at ch.qos.logback.core.joran.spi.Interpreter.endElement(Interpreter.java:193)
        at ch.qos.logback.core.joran.spi.Interpreter.endElement(Interpreter.java:179)
        at ch.qos.logback.core.joran.spi.EventPlayer.play(EventPlayer.java:62)
        at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:155)
        at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:142)
        at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:103)
        at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:53)
        at ch.qos.logback.classic.util.ContextInitializer.configureByResource(ContextInitializer.java:75)
        at ch.qos.logback.classic.util.ContextInitializer.autoConfig(ContextInitializer.java:150)
        at org.slf4j.impl.StaticLoggerBinder.init(StaticLoggerBinder.java:84)
        at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:55)
        at play.api.libs.logback.LogbackLoggerConfigurator.configure(LogbackLoggerConfigurator.scala:80)
        at play.api.libs.logback.LogbackLoggerConfigurator.configure(LogbackLoggerConfigurator.scala:62)
        at play.api.inject.guice.GuiceApplicationBuilder$$anonfun$applicationModule$1.apply(GuiceApplicationBuilder.scala:102)
        at play.api.inject.guice.GuiceApplicationBuilder$$anonfun$applicationModule$1.apply(GuiceApplicationBuilder.scala:102)
        at scala.Option.foreach(Option.scala:257)
        at play.api.inject.guice.GuiceApplicationBuilder.applicationModule(GuiceApplicationBuilder.scala:101)
        at play.api.inject.guice.GuiceBuilder.injector(GuiceInjectorBuilder.scala:181)
        at play.api.inject.guice.GuiceApplicationBuilder.build(GuiceApplicationBuilder.scala:123)
        at play.api.inject.guice.GuiceApplicationLoader.load(GuiceApplicationLoader.scala:21)
        at play.core.server.ProdServerStart$.start(ProdServerStart.scala:47)
        at play.core.server.ProdServerStart$.main(ProdServerStart.scala:22)
        at play.core.server.ProdServerStart.main(ProdServerStart.scala)
09:27:05,429 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.ConsoleAppender]
09:27:05,431 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [STDOUT]
09:27:05,431 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
09:27:05,439 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [play] to INFO
09:27:05,440 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [application] to INFO
09:27:05,440 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [com.avaje.ebean.config.PropertyMapLoader] to OFF
09:27:05,440 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [com.avaje.ebeaninternal.server.core.XmlConfigLoader] to OFF
09:27:05,440 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [com.avaje.ebeaninternal.server.lib.BackgroundThread] to OFF
09:27:05,440 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [com.gargoylesoftware.htmlunit.javascript] to OFF
09:27:05,440 |-INFO in ch.qos.logback.classic.joran.action.RootLoggerAction - Setting level of ROOT logger to ERROR
09:27:05,440 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [STDOUT] to Logger[ROOT]
09:27:05,441 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [FILE] to Logger[ROOT]
09:27:05,441 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - End of configuration.
09:27:05,442 |-INFO in ch.qos.logback.classic.joran.JoranConfigurator@53aac487 - Registering current configuration as safe fallback point

09:27:05,457 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - debug attribute not set
09:27:05,457 |-INFO in ch.qos.logback.core.joran.action.ConversionRuleAction - registering conversion word coloredLevel with class [play.api.libs.logback.ColoredLevel]
09:27:05,457 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.FileAppender]
09:27:05,457 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [FILE]
09:27:05,458 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
09:27:05,459 |-INFO in ch.qos.logback.core.FileAppender[FILE] - File property is set to [/home/hadoop/soft/cerebro-0.7.2/logs/application.log]
09:27:05,459 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.ConsoleAppender]
09:27:05,459 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [STDOUT]
09:27:05,461 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
09:27:05,473 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [play] to INFO
09:27:05,474 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [application] to INFO
09:27:05,474 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [com.avaje.ebean.config.PropertyMapLoader] to OFF
09:27:05,474 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [com.avaje.ebeaninternal.server.core.XmlConfigLoader] to OFF
09:27:05,474 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [com.avaje.ebeaninternal.server.lib.BackgroundThread] to OFF
09:27:05,474 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [com.gargoylesoftware.htmlunit.javascript] to OFF
09:27:05,474 |-INFO in ch.qos.logback.classic.joran.action.RootLoggerAction - Setting level of ROOT logger to ERROR
09:27:05,474 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [STDOUT] to Logger[ROOT]
09:27:05,474 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [FILE] to Logger[ROOT]
09:27:05,474 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - End of configuration.
09:27:05,474 |-INFO in ch.qos.logback.classic.joran.JoranConfigurator@534a5a98 - Registering current configuration as safe fallback point

[info] play.api.Play - Application started (Prod)
[info] p.c.s.NettyServer - Listening for HTTP on /0:0:0:0:0:0:0:0:9000

Edit the configuration file conf/logback.xml (vim logback.xml) and point <file>/home/hadoop/soft/cerebro-0.7.2/logs/application.log</file> at your own log file path.

<configuration>

    <conversionRule conversionWord="coloredLevel" converterClass="play.api.libs.logback.ColoredLevel"/>

    <appender name="FILE" class="ch.qos.logback.core.FileAppender">
        <file>/home/hadoop/soft/cerebro-0.7.2/logs/application.log</file>
        <encoder>
            <pattern>%date - [%level] - from %logger in %thread %n%message%n%xException%n</pattern>
        </encoder>
    </appender>

    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%coloredLevel %logger{15} - %message%n%xException{5}</pattern>
        </encoder>
    </appender>

    <logger name="play" level="INFO"/>
    <logger name="application" level="INFO"/>

    <!-- Off these ones as they are annoying, and anyway we manage configuration ourself -->
    <logger name="com.avaje.ebean.config.PropertyMapLoader" level="OFF"/>
    <logger name="com.avaje.ebeaninternal.server.core.XmlConfigLoader" level="OFF"/>
    <logger name="com.avaje.ebeaninternal.server.lib.BackgroundThread" level="OFF"/>
    <logger name="com.gargoylesoftware.htmlunit.javascript" level="OFF"/>

    <root level="ERROR">
        <appender-ref ref="STDOUT"/>
        <appender-ref ref="FILE"/>
    </root>

</configuration>
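
If the root cause is simply that the logs directory does not exist yet, pre-creating it under the cerebro installation directory with the right owner may also be enough, in which case the relative ./logs/application.log path can stay; a hedged sketch using the paths from above:

# Create the directory that cerebro's default ./logs/application.log resolves to
mkdir -p /home/hadoop/soft/cerebro-0.7.2/logs
chown elsearch:elsearch /home/hadoop/soft/cerebro-0.7.2/logs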

Restart cerebro, as shown below:

[root@slaver4 bin]$ ./cerebro
[info] play.api.Play - Application started (Prod)
[info] p.c.s.NettyServer - Listening for HTTP on /0:0:0:0:0:0:0:0:9000

Now start your Elasticsearch instance and open http://192.168.110.133:9000 in a browser; cerebro's start page appears, and once it is connected to the cluster it shows the cluster overview.

3. Quickly building an Elasticsearch cluster. Specify the cluster name (cluster.name), the data path (path.data), the node name (node.name) and the HTTP port (http.port); the trailing -d flag runs the process in the background.

[elsearch@slaver4 bin]$ ./elasticsearch -Ecluster.name=my_cluster -Epath.data=my_cluster_node1 -Enode.name=node1 -Ehttp.port=5200 -d
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
[elsearch@slaver4 bin]$
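
To confirm the node actually came up, its HTTP endpoint can be queried (a small sketch; port 5200 matches the http.port used above):

# Basic node info: node name, cluster name, version
curl -s 'http://localhost:5200'
# Nodes that have joined the cluster so far
curl -s 'http://localhost:5200/_cat/nodes?v'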

4. Cluster state.

  A: The cluster-level data that Elasticsearch maintains is called the cluster state. It is versioned (creating an index, for example, bumps the version) and mainly records the following; it can be inspected through the REST API, as shown after this list.
    Node information, such as node names and connection addresses.
    Index information, such as index names and settings.
    (In the cluster diagram, a five-pointed star marks the master node, a circle a coordinating node, and a square a data node.)
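
A sketch of pulling the cluster state from the REST API (the port 5200 used in the start command above is assumed):

# The full cluster state (can be large): version, nodes, index metadata, routing table, ...
curl -s 'http://localhost:5200/_cluster/state?pretty'
# Only the state version and the id of the current master node
curl -s 'http://localhost:5200/_cluster/state/version,master_node?pretty'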

5. Master Node.

  A: The node that is allowed to modify the cluster state is the master node; a cluster has only one at a time.
    The cluster state is stored on every node; the master maintains the latest version and synchronizes it to the other nodes.
    The master is elected by the nodes of the cluster; nodes that can be elected are called master-eligible nodes and are configured with node.master: true (see the role settings sketch after item 7).

6. Coordinating Node.

  A: A node that handles client requests acts as a coordinating node; this is the default role of every node and cannot be disabled.
    It routes each request to the node that should handle it; a create-index request, for instance, is forwarded to the master node.

7. Data Node.

  A: Nodes that store data are data nodes; by default every node is a data node, configured with node.data: true. A sketch of the role-related settings follows.
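
The roles described in items 5, 6 and 7 are set per node in config/elasticsearch.yml. A minimal sketch showing the 6.x defaults:

# config/elasticsearch.yml (defaults shown; set a value to false to drop that role)
node.master: true    # master-eligible: may be elected master
node.data: true      # data node: stores shard data
# A node with both set to false is left with little more than the coordinating
# role, which every node always has and which cannot be switched off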

8. Avoiding a single point of failure: with a single node, the cluster stops serving the moment that node dies; adding nodes makes the cluster more robust. The following command starts another ES node instance; additional nodes are started the same way.

  ./elasticsearch -Ecluster.name=my_cluster -Epath.data=my_cluster_node2 -Enode.name=node2 -Ehttp.port=5300 -d

[root@slaver4 ~]# su elsearch
[elsearch@slaver4 root]$ cd /home/hadoop/soft/
[elsearch@slaver4 soft]$ ls
cerebro-0.7.2  elasticsearch-6.7.0  filebeat-6.7.0-linux-x86_64  kibana-6.7.0-linux-x86_64  logstash-6.7.0
[elsearch@slaver4 soft]$ cd elasticsearch-6.7.0/
[elsearch@slaver4 elasticsearch-6.7.0]$ ls
bin  config  data  lib  LICENSE.txt  logs  modules  my_cluster_node1  NOTICE.txt  plugins  README.textile
[elsearch@slaver4 elasticsearch-6.7.0]$ cd bin/
[elsearch@slaver4 bin]$ ./elasticsearch -Ecluster.name=my_cluster -Epath.data=my_cluster_node2 -Enode.name=node2 -Ehttp.port=5300 -d
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
[elsearch@slaver4 bin]$ ./elasticsearch -Ecluster.name=my_cluster -Epath.data=my_cluster_node1 -Enode.name=node1 -Ehttp.port=5200 -d
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
[elsearch@slaver4 bin]$

If you are running inside a virtual machine, be sure to change the following setting, otherwise you will not be able to get the nodes into one cluster.

[elsearch@slaver4 config]$ vim elasticsearch.yml

network.host: 0.0.0.0

Both nodes should now show up in the same cluster in cerebro; the same can be checked from the command line, as shown below.
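
A sketch of checking this with the _cat APIs (either node's HTTP port should work):

# Both node1 and node2 should be listed; the elected master is marked with '*'
curl -s 'http://localhost:5200/_cat/nodes?v'
# Show the current master explicitly
curl -s 'http://localhost:5200/_cat/master?v'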

9. Improving system availability.

  A: Service availability: with 2 nodes, the cluster can tolerate 1 of them going down.
    Data availability: a) introduce replicas (replication); b) every node then holds a complete copy of the data.

10. Elasticsearch replicas and shards.

  How is the data spread across all nodes? By introducing shards.

  Shards are the foundation that lets ES scale to PB-level data. Their characteristics:
    a. A shard stores part of the data and can live on any node.
    b. The number of shards is fixed when the index is created and cannot be changed later; the default is 5.
    c. Shards come in two kinds, primary shards and replica shards, which is what makes the data highly available.
    d. A replica shard's data is synchronized from its primary; a primary can have several replicas, which raises read throughput.

There are two ways to create an index. The first is to create it directly through the API: the request below creates the test_index index with 3 primary shards and 1 replica per primary (primary shards vs. replica shards).

# Create test_index with 3 primary shards and 1 replica per primary
PUT test_index
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1
  }
}
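
A sketch of verifying the result from the command line (ports as in the earlier examples):

# Check the index settings (number_of_shards / number_of_replicas)
curl -s 'http://localhost:5200/test_index/_settings?pretty'
# See how the 3 primaries (p) and their replicas (r) are spread over the nodes
curl -s 'http://localhost:5200/_cat/shards/test_index?v'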

The second way is to create the index directly from cerebro's web page.

In the shard view, a solid border marks a primary shard and a dashed border a replica shard; each primary has a corresponding replica.

11. If the cluster has three nodes, does adding another node increase the data capacity of test_index?

  A: No. There are only 3 primary shards and they are already spread over the 3 existing nodes, so the new node cannot be used by this index.

12. If the cluster has three nodes, does increasing the number of replicas improve the read throughput of test_index?

  A: No. The extra replicas are still distributed over the same 3 nodes and use the same hardware resources; to raise throughput you also need to add nodes. (The replica count itself can be changed at any time; see the sketch below.)
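
Unlike the primary shard count, the number of replicas can be changed at any time, which is how read throughput would be scaled once additional nodes exist; a sketch of the settings update:

# Raise the replica count of test_index from 1 to 2 (only useful once more nodes are available)
curl -s -X PUT -H 'Content-Type: application/json' \
  'http://localhost:5200/test_index/_settings' \
  -d '{"index": {"number_of_replicas": 2}}'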

13. Choosing the number of shards is very important and needs to be planned up front.

  Too few shards, and you cannot scale out later by adding nodes.
  Too many shards, and a single node ends up hosting many shards, which wastes resources and hurts query performance.

14. Cluster Health. The three health states only describe how the shards are allocated; on their own they do not tell you whether the whole ES cluster can serve requests.

  The cluster health can be retrieved with the REST API below and is one of three states:
    a. green: all primary and replica shards are allocated normally.
    b. yellow: all primary shards are allocated, but at least one replica shard is not.
    c. red: at least one primary shard is not allocated. The cluster can still be accessed, however.

The status is fetched with the REST API: GET _cluster/health.
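
A sketch of calling it with curl (any node's HTTP port will do):

# Overall health: status (green/yellow/red), number_of_nodes, active_shards, ...
curl -s 'http://localhost:5200/_cluster/health?pretty'
# Health broken down per index
curl -s 'http://localhost:5200/_cluster/health?level=indices&pretty'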

15. Elasticsearch failover.

  Assume the cluster consists of 3 nodes and its status is green. Initially node1 holds shards P0 and R1, node2 holds P1 and R2, and node3 holds P2 and R0.
  If the machine running node1 (the current master) goes down and its service stops, how does the cluster react?
    Step 1: after node2 and node3 get no response from node1 for a while, they start a master election; say node2 is elected master. Because primary shard P0 is now offline, the cluster status turns red.
    Step 2: node2 notices that primary P0 is unassigned and promotes R0 to primary. All primaries are now allocated, so the status becomes yellow. node2 holds P1 and R2; node3 holds P2 and P0.
    Step 3: node2 creates new replicas for primaries P0 and P1, and the cluster status returns to green. node2 holds P1, R2 and R0; node3 holds P2, P0 and R1.
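
The failover above can be watched from the outside (a sketch; port 5300 of the surviving node2 from the earlier example is assumed):

# Cluster status should go red -> yellow -> green while recovery runs
watch -n 2 "curl -s 'http://localhost:5300/_cluster/health?pretty'"
# Shows on which node each primary (p) and replica (r) shard currently lives
curl -s 'http://localhost:5300/_cat/shards/test_index?v'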

16. Distributed document storage in Elasticsearch.

  Documents are ultimately stored on shards. Suppose doc1 ends up on shard P1: how does it get there? We need an algorithm that maps documents to shards, and its goal is to spread documents evenly over all shards so that resources are fully used. Elasticsearch's distributed document storage uses a hash-based routing algorithm for this.
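
Concretely, in 6.x the target shard is derived from the document's routing value (the _id unless an explicit routing parameter is given): shard_num = hash(_routing) % number_of_primary_shards, which is also why the primary shard count cannot change after index creation. A small sketch (the sample document and its fields are made up):

# Index a document; with no routing parameter the _id is used as the routing value
curl -s -X PUT -H 'Content-Type: application/json' \
  'http://localhost:5200/test_index/doc/1' \
  -d '{"username": "alfred", "age": 20}'
# A search with explain=true reports the _shard and _node each hit came from
curl -s 'http://localhost:5200/test_index/_search?q=_id:1&explain=true&pretty'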

 

To be continued...
