Installing Kibana 6.5 for the ELK stack, using the RPM package here:

https://www.elastic.co/guide/en/kibana/current/rpm.html

# wget https://artifacts.elastic.co/downloads/kibana/kibana-6.5.4-x86_64.rpm
# rpm --install kibana-6.5.4-x86_64.rpm
# systemctl daemon-reload
# systemctl enable kibana.service
# systemctl start kibana.service
# systemctl status kibana.service
[root@node1 ELK]# netstat -tnlp|grep 5601
tcp 0 0 127.0.0.1:5601 0.0.0.0:* LISTEN 7673/node

The Kibana service listens on port 5601, but only on 127.0.0.1 by default (as the netstat output above shows), so make the corresponding changes to the Kibana configuration file:

[root@node1 ELK]# cd /etc/kibana/
[root@node1 kibana]# ll
total 8
-rw-r--r--. 1 root root 5054 Dec 18 05:40 kibana.yml
[root@node1 kibana]# vim kibana.yml
[root@node1 kibana]# egrep -v "^$|^#" kibana.yml
server.port: 5601
server.host: "172.16.23.129"
server.name: "node1"
elasticsearch.url: "http://172.16.23.129:9200"

Restart the Kibana service:

# systemctl restart kibana
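
Before opening a browser, a quick sanity check that Kibana answers on the new address (any 2xx/3xx status code means the service is up; the exact code may vary by version):

# curl -s -o /dev/null -w "%{http_code}\n" http://172.16.23.129:5601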

Access Kibana through a browser at http://172.16.23.129:5601:

Check that the nginx logs collected earlier are stored in Elasticsearch:

[root@node1 kibana]# curl -X GET "localhost:9200/_cat/indices?v"
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open test1 ZAjj9y_sSPmGz8ZscIXUsA 5 1 0 0 1.2kb 1.2kb
green open .kibana_1 CV1LRTOXQV-I04AEh7hcow 1 0 3 0 11.8kb 11.8kb
yellow open nginx-log-2018.12.25 Zr4q_U5bTk2dY9PfEpZz_Q 5 1 14 0 31.8kb 31.8kb

The nginx-log-2018.12.25 index above holds the collected nginx logs (its yellow health only means replica shards are unassigned on a single-node cluster). Now wire ES and Kibana together to display them:

Select Management in the left sidebar to see the ES index management, and click into it.

Select Management ----> Kibana -----> Create index pattern (a pattern such as nginx-log-* will match the index above):

Once the index pattern is created, select Discover:

No data is displayed at first because the time range in the top-right corner is set to Today; change it to This week:

Now output the nginx logs to Redis and have Logstash consume them from Redis into ES. Here nginx is accessed manually to generate log entries, and the data queued in Redis is then inspected:

1. Output the nginx logs to Redis:
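
The contents of nginx_output_redis.conf are not shown in this post; a minimal sketch consistent with the startup logs below (which reveal the input path /var/log/nginx/access.log and the Redis list nginx_log on 172.16.23.129:6379, db 0) might look like:

input {
  file {
    path => "/var/log/nginx/access.log"   # tail the nginx access log
    start_position => "beginning"         # assumption: read from the start on first run
  }
}
output {
  redis {
    host => "172.16.23.129"
    port => 6379
    db => 0
    data_type => "list"                   # push each event onto a Redis list
    key => "nginx_log"                    # list name seen in the consumer's logs
  }
}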

[root@node1 conf.d]# /usr/share/logstash/bin/logstash -f nginx_output_redis.conf
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2018-12-29T14:04:14,403][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-12-29T14:04:14,470][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.5.4"}
[2018-12-29T14:04:20,600][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-12-29T14:04:31,474][INFO ][logstash.inputs.file ] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/var/lib/logstash/plugins/inputs/file/.sincedb_d883144359d3b4f516b37dba51fab2a2", :path=>["/var/log/nginx/access.log"]}
[2018-12-29T14:04:31,574][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x7223aec6 run>"}
[2018-12-29T14:04:31,871][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections
[2018-12-29T14:04:31,917][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-12-29T14:04:33,725][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
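
After accessing nginx a few times, the backlog in Redis can be checked with redis-cli (LLEN returns the number of events waiting on the list; this assumes redis-cli is available on the host):

# redis-cli -h 172.16.23.129 -n 0 llen nginx_log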

2. Consume the nginx logs from Redis with Logstash and index them into ES:
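
Likewise, redis_output_es.conf is not reproduced here; a minimal sketch matching the logs below (a Redis input on list nginx_log and an Elasticsearch output at http://172.16.23.129:9200, with the daily nginx-log index naming seen later) might be:

input {
  redis {
    host => "172.16.23.129"
    port => 6379
    db => 0
    data_type => "list"
    key => "nginx_log"                      # same list the shipper writes to
  }
}
output {
  elasticsearch {
    hosts => ["http://172.16.23.129:9200"]
    index => "nginx-log-%{+YYYY.MM.dd}"     # daily index, as in the _cat output below
  }
}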

[root@node1 conf.d]# /usr/share/logstash/bin/logstash -f redis_output_es.conf
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2018-12-29T14:04:44,604][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-12-29T14:04:44,965][FATAL][logstash.runner ] Logstash could not be started because there is already another instance using the configured data directory. If you wish to run multiple instances, you must change the "path.data" setting.
[2018-12-29T14:04:45,058][ERROR][org.logstash.Logstash ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit

The error above shows that Logstash refuses to start a second instance on the same host when it shares the data directory, so another host is used to run the Logstash instance that ships into ES:
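
Alternatively, as the FATAL message itself suggests, a second instance can run on the same host if it is given its own data directory via path.data (the directory below is an arbitrary example; any writable path works):

# /usr/share/logstash/bin/logstash -f redis_output_es.conf --path.data /tmp/logstash-redis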

[root@master conf.d]# /usr/share/logstash/bin/logstash -f redis_output_es.conf
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2018-12-29T14:40:06,749][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/var/lib/logstash/queue"}
[2018-12-29T14:40:06,765][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/var/lib/logstash/dead_letter_queue"}
[2018-12-29T14:40:07,651][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-12-29T14:40:07,670][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.5.4"}
[2018-12-29T14:40:07,742][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"7059ccab-6ba6-4082-ad0c-6320a1121ed2", :path=>"/var/lib/logstash/uuid"}
[2018-12-29T14:40:13,024][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-12-29T14:40:13,957][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://172.16.23.129:9200/]}}
[2018-12-29T14:40:14,439][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://172.16.23.129:9200/"}
[2018-12-29T14:40:14,558][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-12-29T14:40:14,567][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-12-29T14:40:14,640][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//172.16.23.129"]}
[2018-12-29T14:40:14,678][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-12-29T14:40:14,766][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-12-29T14:40:14,823][INFO ][logstash.inputs.redis ] Registering Redis {:identity=>"redis://@172.16.23.129:6379/0 list:nginx_log"}
[2018-12-29T14:40:14,892][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x3f08e08c run>"}
[2018-12-29T14:40:15,134][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-12-29T14:40:16,254][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}

3. Check the generated index:

[root@node1 conf.d]# curl -X GET "localhost:9200/_cat/indices?v"
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open .kibana_1 CV1LRTOXQV-I04AEh7hcow 1 0 4 0 19kb 19kb
yellow open nginx-log-2018.12.25 Zr4q_U5bTk2dY9PfEpZz_Q 5 1 14 0 31.8kb 31.8kb
yellow open nginx-log-2018.12.29 KTWG3qeGTCeuknJCE4juaA 5 1 10 0 35.1kb 35.1kb
yellow open test1 ZAjj9y_sSPmGz8ZscIXUsA 5 1 0 0 1.2kb 1.2kb
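
To peek at the documents inside the new index, a standard ES search request can be used (size=1 returns a single sample document):

[root@node1 conf.d]# curl -X GET "localhost:9200/nginx-log-2018.12.29/_search?pretty&size=1"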

Then check the index in Kibana:
