1. Overall Architecture
The lab uses four hosts:

192.168.43.16    JDK, elasticsearch-master, Logstash, Kibana
192.168.43.17    JDK, elasticsearch-node1
192.168.43.18    JDK, elasticsearch-node2
192.168.43.19    Linux, Filebeat
Install the JDK on the three Elasticsearch nodes (as root):

# Extract the JDK
tar -zxvf jdk-12.0.2_linux-x64_bin.tar.gz -C /usr/

# Set the environment variables
vim /etc/profile
export JAVA_HOME=/usr/jdk-12.0.2/
export JRE_HOME=$JAVA_HOME/jre    # note: JDK 12 no longer ships a separate jre/ directory
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH

# Apply the changes
source /etc/profile
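A quick check that the JDK is picked up correctly (assuming a fresh shell, or after sourcing the profile):

source /etc/profile
java -version    # should report java 12.0.2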
Raise the kernel and per-user limits Elasticsearch requires:

# Edit the limits file
vim /etc/security/limits.conf

# Append the following
* soft nofile 65536
* hard nofile 65536
* soft nproc 2048
* hard nproc 4096

# Edit the per-user process limit
vim /etc/security/limits.d/20-nproc.conf

# Adjust to the following
* soft nproc 4096
root soft nproc unlimited

vim /etc/sysctl.conf
# Append at the end
vm.max_map_count=262144
fs.file-max=655360

# Apply and verify the changes with sysctl -p
sysctl -p
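The limits.conf changes only apply to new login sessions, so after logging in again they can be verified (values should match what was configured above):

ulimit -n                  # open-file limit, expect 65536
ulimit -u                  # max user processes
sysctl vm.max_map_count    # expect 262144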
Map the node hostnames on all three Elasticsearch nodes:

vim /etc/hosts
192.168.43.16 elk-master-node
192.168.43.17 elk-data-node1
192.168.43.18 elk-data-node2
Disable SELinux and the firewall (acceptable for a lab environment):

sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
setenforce 0
systemctl stop firewalld
systemctl disable firewalld
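To confirm both changes took effect (the config-file edit only fully applies after a reboot):

getenforce                      # Permissive now, Disabled after reboot
systemctl is-active firewalld   # inactive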
Create a dedicated elk group and user (Elasticsearch will not run as root):

groupadd elk
useradd -g elk elk
Create the installation directory and hand it to the elk user:

mkdir -p /home/app/elk
chown -R elk:elk /home/app/elk
Download and extract the 7.3.2 packages:

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.3.2-linux-x86_64.tar.gz
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.3.2.tar.gz
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.3.2-linux-x86_64.tar.gz
tar -zxvf elasticsearch-7.3.2-linux-x86_64.tar.gz -C /home/app/elk && \
tar -zxvf logstash-7.3.2.tar.gz -C /home/app/elk && \
tar -zxvf kibana-7.3.2-linux-x86_64.tar.gz -C /home/app/elk
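Since the archives were extracted as root, ownership of the newly created directories should be handed back to the elk user before going any further:

chown -R elk:elk /home/app/elk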
2. Install Elasticsearch
1. Configure Elasticsearch (switch to the elk user)
Create the Elasticsearch data directory:
mkdir -p /home/app/elk/elasticsearch-7.3.2/data

Create the Elasticsearch log directory:
mkdir -p /home/app/elk/elasticsearch-7.3.2/logs

Master node configuration: vim /home/app/elk/elasticsearch-7.3.2/config/elasticsearch.yml
# Cluster name
cluster.name: es
# Node name
node.name: es-master
# Data directory (create it beforehand)
path.data: /home/app/elk/elasticsearch-7.3.2/data
# Log directory (create it beforehand)
path.logs: /home/app/elk/elasticsearch-7.3.2/logs
# Node IP
network.host: 192.168.43.16
# Transport (TCP) port
transport.tcp.port: 9300
# HTTP port
http.port: 9200
# Seed host list; the master's IP must appear in seed_hosts
discovery.seed_hosts: ["192.168.43.16:9300","192.168.43.17:9300","192.168.43.18:9300"]
# Initial master-eligible nodes; list every master-eligible node here if there are several
cluster.initial_master_nodes: ["192.168.43.16:9300"]

# Node role settings
# May this node be elected master?
node.master: true
# Does this node hold data?
node.data: true
node.ingest: false
node.ml: false
cluster.remote.connect: false

# CORS
http.cors.enabled: true
http.cors.allow-origin: "*"
Data-node configuration on 192.168.43.17: vim /home/app/elk/elasticsearch-7.3.2/config/elasticsearch.yml
# Cluster name
cluster.name: es
# Node name
node.name: es-data1
# Data directory (create it beforehand)
path.data: /home/app/elk/elasticsearch-7.3.2/data
# Log directory (create it beforehand)
path.logs: /home/app/elk/elasticsearch-7.3.2/logs
# Node IP
network.host: 192.168.43.17
# Transport (TCP) port
transport.tcp.port: 9300
# HTTP port
http.port: 9200
# Seed host list; the master's IP must appear in seed_hosts
discovery.seed_hosts: ["192.168.43.16:9300","192.168.43.17:9300","192.168.43.18:9300"]
# Initial master-eligible nodes; list every master-eligible node here if there are several
cluster.initial_master_nodes: ["192.168.43.16:9300"]

# Node role settings
# May this node be elected master?
node.master: false
# Does this node hold data?
node.data: true
node.ingest: false
node.ml: false
cluster.remote.connect: false

# CORS
http.cors.enabled: true
http.cors.allow-origin: "*"
Data-node configuration on 192.168.43.18: vim /home/app/elk/elasticsearch-7.3.2/config/elasticsearch.yml

# Cluster name
cluster.name: es
# Node name
node.name: es-data2
# Data directory (create it beforehand)
path.data: /home/app/elk/elasticsearch-7.3.2/data
# Log directory (create it beforehand)
path.logs: /home/app/elk/elasticsearch-7.3.2/logs
# Node IP
network.host: 192.168.43.18
# Transport (TCP) port
transport.tcp.port: 9300
# HTTP port
http.port: 9200
# Seed host list; the master's IP must appear in seed_hosts
discovery.seed_hosts: ["192.168.43.16:9300","192.168.43.17:9300","192.168.43.18:9300"]
# Initial master-eligible nodes; list every master-eligible node here if there are several
cluster.initial_master_nodes: ["192.168.43.16:9300"]

# Node role settings
# May this node be elected master?
node.master: false
# Does this node hold data?
node.data: true
node.ingest: false
node.ml: false
cluster.remote.connect: false

# CORS
http.cors.enabled: true
http.cors.allow-origin: "*"
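Since the three files differ only in node.name, network.host, and node.master, the data-node configs can also be generated from the master's copy; a sketch, assuming SSH access from the master node to the data nodes:

scp /home/app/elk/elasticsearch-7.3.2/config/elasticsearch.yml 192.168.43.17:/home/app/elk/elasticsearch-7.3.2/config/
ssh 192.168.43.17 "sed -i \
    -e 's/^node.name:.*/node.name: es-data1/' \
    -e 's/^network.host:.*/network.host: 192.168.43.17/' \
    -e 's/^node.master:.*/node.master: false/' \
    /home/app/elk/elasticsearch-7.3.2/config/elasticsearch.yml"
# repeat for 192.168.43.18 with es-data2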
2. Start Elasticsearch
# Run as the elk user on each node; -d starts Elasticsearch as a daemon
sh /home/app/elk/elasticsearch-7.3.2/bin/elasticsearch -d
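A quick sanity check per node (output abbreviated):

curl http://192.168.43.16:9200
# {
#   "name" : "es-master",
#   "cluster_name" : "es",
#   "version" : { "number" : "7.3.2", ... },
#   ...
# }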
3. Check cluster health
[root@localhost elk]# curl -X GET 'http://192.168.43.16:9200/_cluster/health?pretty'
{
  "cluster_name" : "es",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 5,
  "active_shards" : 10,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
# status=green means the cluster is healthy
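The _cat/nodes API is also handy here; the elected master is flagged with * in the master column:

curl 'http://192.168.43.16:9200/_cat/nodes?v'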
3. Install Kibana
1. Edit the configuration file
cd /home/app/elk/kibana-7.3.2-linux-x86_64/config
vim kibana.yml

# Kibana port
server.port: 5601
# Listen address
server.host: "192.168.43.16"
# Elasticsearch address; for a cluster, point at the master node
elasticsearch.hosts: "http://192.168.43.16:9200/"
# Kibana log file; without this, logging goes to /var/log/messages
logging.dest: /home/app/elk/kibana-7.3.2-linux-x86_64/logs/kibana.log
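logging.dest points at a directory that does not exist yet; create it (as the elk user) before starting Kibana:

mkdir -p /home/app/elk/kibana-7.3.2-linux-x86_64/logs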
2. Start Kibana
# Start as the elk user (Kibana 7 refuses to run as root unless --allow-root is given)
nohup /home/app/elk/kibana-7.3.2-linux-x86_64/bin/kibana &
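Startup takes a little while; the log file and a simple HTTP probe confirm it is up:

tail -f /home/app/elk/kibana-7.3.2-linux-x86_64/logs/kibana.log
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.43.16:5601    # 200 or 302 once ready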
4. Install Filebeat (192.168.43.19 already runs a JumpServer service)
In this exercise we install Filebeat on 192.168.43.19 to collect just the nginx access and error logs. Guides online often configure nginx to ship its logs as JSON; here, to get some grok practice, we ship the logs in their original format instead.
1. Download Filebeat
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.3.2-linux-x86_64.tar.gz
mkdir -p /opt/software
tar -zxvf filebeat-7.3.2-linux-x86_64.tar.gz -C /opt/software
2. Configure filebeat.yml
vim /opt/software/filebeat-7.3.2-linux-x86_64/filebeat.yml

#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
  paths:
    - /var/log/nginx/access.log
  fields:
    log_source: nginx-access
- type: log
  paths:
    - /var/log/nginx/error.log
  fields:
    log_source: nginx-error
#============================== Dashboards =====================================
setup.dashboards.enabled: false
#============================== Kibana =====================================
# Kibana endpoint (used for loading dashboards)
setup.kibana:
  host: "192.168.43.16:5601"
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.43.16:5044"]
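Filebeat can validate this file before starting; the output test needs Logstash already listening on 5044 (set up in the next section), so it can be rerun later:

cd /opt/software/filebeat-7.3.2-linux-x86_64
./filebeat test config -c filebeat.yml    # expect: Config OK
./filebeat test output -c filebeat.yml    # checks the connection to 192.168.43.16:5044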
3. Start Filebeat
cd /opt/software/filebeat-7.3.2-linux-x86_64
nohup ./filebeat -c filebeat.yml &
5. Install Logstash
1. Create the logstash.conf file
vim /home/app/elk/logstash-7.3.2/config/logstash.conf

input {
  beats {
    port => 5044
  }
}
filter {
  if [fields][log_source] == "nginx-access" {
    grok {
      match => {
        "message" => '%{IP:clientip}\s*%{DATA}\s*%{DATA}\s*\[%{HTTPDATE:requesttime}\]\s*"%{WORD:requesttype}.*?"\s*%{NUMBER:status:int}\s*%{NUMBER:bytes_read:int}\s*"%{DATA:requesturl}"\s*%{QS:ua}'
      }
      overwrite => ["message"]
    }
  }
  if [fields][log_source] == "nginx-error" {
    grok {
      match => {
        "message" => '(?<time>.*?)\s*\[%{LOGLEVEL:loglevel}\]\s*%{DATA}:\s*%{DATA:errorinfo},\s*%{WORD}:\s*%{IP:clientip},\s*%{WORD}:%{DATA:server},\s*%{WORD}:\s*%{QS:request},\s*%{WORD}:\s*%{QS:upstream},\s*%{WORD}:\s*"%{IP:hostip}",\s*%{WORD}:\s*%{QS:referrer}'
      }
      overwrite => ["message"]
    }
  }
}
output {
  if [fields][log_source] == "nginx-access" {
    elasticsearch {
      hosts  => ["http://192.168.43.16:9200"]
      action => "index"
      index  => "nginx-access-%{+YYYY.MM.dd}"
    }
  }
  if [fields][log_source] == "nginx-error" {
    elasticsearch {
      hosts  => ["http://192.168.43.16:9200"]
      action => "index"
      index  => "nginx-error-%{+YYYY.MM.dd}"
    }
  }
  stdout { codec => rubydebug }
}
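Before starting, the pipeline syntax can be validated with Logstash's built-in check:

/home/app/elk/logstash-7.3.2/bin/logstash -f /home/app/elk/logstash-7.3.2/config/logstash.conf --config.test_and_exit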
2. Start Logstash
/home/app/elk/logstash-7.3.2/bin/logstash -f /home/app/elk/logstash-7.3.2/config/logstash.conf
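Once the pipeline is up, Logstash should be listening on the beats port:

ss -lntp | grep 5044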
6. Log in to the Kibana platform
In Kibana, go to Management -> Index Management; the indices holding the nginx access-log and error-log data should now be visible.
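The same check works from the command line (index names carry a date suffix):

curl 'http://192.168.43.16:9200/_cat/indices/nginx-*?v'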
Next, create an index pattern for each of the access-log and error-log indices. Once they are created, open Discover to browse the log data.
[Screenshots: the nginx-access and nginx-error index patterns in Discover]
Reference:
https://elkguide.elasticsearch.cn/logstash/plugins/filter/mutate.html