Ready to load some data and build a dashboard? This tutorial shows you how to:

  • Load a data set into Elasticsearch
  • Define an index pattern
  • Discover and explore the data
  • Visualize the data
  • Add visualizations to a dashboard
  • Inspect the data behind a visualization

Loading sample data

This tutorial requires three data sets:

  • The complete works of William Shakespeare, suitably parsed into fields. Download shakespeare.json.
  • A set of fictitious accounts with randomly generated data. Download accounts.zip.
  • A set of randomly generated log files. Download logs.jsonl.gz.

Two of the data sets are compressed. To extract the files, use these commands:

unzip accounts.zip
gunzip logs.jsonl.gz

Structure of the data sets

The Shakespeare data set has this structure:

{
  "line_id": INT,
  "play_name": "String",
  "speech_number": INT,
  "line_number": "String",
  "speaker": "String",
  "text_entry": "String"
}

The accounts data set is structured as follows:

{
  "account_number": INT,
  "balance": INT,
  "firstname": "String",
  "lastname": "String",
  "age": INT,
  "gender": "M or F",
  "address": "String",
  "employer": "String",
  "email": "String",
  "city": "String",
  "state": "String"
}

The logs data set has dozens of different fields. Here are the notable fields for this tutorial:

{
  "memory": INT,
  "geo.coordinates": "geo_point",
  "@timestamp": "date"
}

Set up mappings

Before you load the Shakespeare and logs data sets, you must set up mappings for the fields.

Mappings divide the documents in the index into logical groups and specify the characteristics of the fields, such as a field's searchability and whether it's tokenized (broken up into separate words).

In Kibana Dev Tools > Console, set up a mapping for the Shakespeare data set:

PUT /shakespeare
{
  "mappings": {
    "doc": {
      "properties": {
        "speaker": {"type": "keyword"},
        "play_name": {"type": "keyword"},
        "line_id": {"type": "integer"},
        "speech_number": {"type": "integer"}
      }
    }
  }
}

This mapping specifies field characteristics for the data set:

  • The speaker and play_name fields are keyword fields. These fields are not analyzed. The strings are treated as a single unit even if they contain multiple words.
  • The line_id and speech_number fields are integers.

Response:

{
  "acknowledged" : true,
  "shards_acknowledged" : true,
  "index" : "shakespeare"
}
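The difference between keyword and analyzed (text) fields can be sketched in a few lines of Python. This is only an illustration of the concept, not Elasticsearch's actual analyzer, and the sample values are hypothetical:

```python
def analyze(value):
    # A simplified stand-in for Elasticsearch's standard analyzer:
    # lowercase the string and split it into individual terms.
    return value.lower().split()

def text_match(doc_value, term):
    # An analyzed (text) field matches if any token equals the term.
    return term.lower() in analyze(doc_value)

def keyword_match(doc_value, term):
    # A keyword field is treated as a single unit: only an exact match counts.
    return doc_value == term

# As a text field, "King Henry IV" matches the single word "henry"...
print(text_match("King Henry IV", "henry"))              # True
# ...but as a keyword field it only matches the whole string.
print(keyword_match("King Henry IV", "henry"))           # False
print(keyword_match("King Henry IV", "King Henry IV"))   # True
```

This is why keyword is the right type for speaker and play_name: queries and aggregations should treat "King Henry IV" as one value, not three words.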

The logs data set requires a mapping to label the latitude and longitude pairs as geographic locations by applying the geo_point type.

PUT /logstash-2015.05.18
{
  "mappings": {
    "log": {
      "properties": {
        "geo": {
          "properties": {
            "coordinates": {
              "type": "geo_point"
            }
          }
        }
      }
    }
  }
}

Response:

{
  "acknowledged" : true,
  "shards_acknowledged" : true,
  "index" : "logstash-2015.05.18"
}

The accounts data set doesn’t require any mappings.
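The accounts index works without explicit mappings because Elasticsearch infers each field's type from the first value it sees (dynamic mapping). A rough, simplified sketch of that inference in Python — the real rules are more elaborate (date detection, numeric strings, etc.), and the sample record is illustrative:

```python
def infer_type(value):
    # Simplified version of Elasticsearch's dynamic type inference.
    # bool must be checked before int, since bool is a subclass of int.
    if isinstance(value, bool):
        return "boolean"
    if isinstance(value, int):
        return "long"
    if isinstance(value, float):
        return "float"
    if isinstance(value, str):
        return "text"
    raise TypeError(f"unhandled JSON type: {type(value)!r}")

account = {"account_number": 1, "balance": 39225, "firstname": "Amber", "age": 32}
mapping = {field: infer_type(v) for field, v in account.items()}
print(mapping)
```

Dynamic mapping is convenient for flat records like these accounts; explicit mappings are only needed when the inferred type would be wrong, as with the geo_point coordinates above.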

Query all of the current indices:

GET /_cat/indices?v

The newly created logstash-2015.05.18, logstash-2015.05.19, and logstash-2015.05.20 indices each have a docs.count of 0.

The bank index was imported earlier while learning Elasticsearch.

health status index             uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   logstash-2015.05. dL2ZaIelR_uvKMnPYy_8Eg                                     .2kb           .2kb
yellow open   logstash-2015.05. M1PWnqXLRgClt-iwqN4OUg                                     .2kb           .2kb
yellow open   customer          p6H8gEOdQAWBuSN2HDEjZA                                     .4kb           .4kb
yellow open   shakespeare       I8mqiFkkTdK9IlcarIZA4A                                     .2kb           .2kb
yellow open   bank              l45mhl-7QNibqbmbi2Jmbw                                     .1kb           .1kb
green  open   .kibana_1         CUsQj9zkSCSC-XiDJgXYQQ                                     .6kb           .6kb
yellow open   logstash-2015.05. 14rDFdQFTQK-GNgDXtlmeQ                                     .2kb           .2kb

Load the data sets

At this point, you’re ready to use the Elasticsearch bulk API to load the data sets:

curl -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/bank/account/_bulk?pretty' --data-binary @accounts.json
curl -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/shakespeare/doc/_bulk?pretty' --data-binary @shakespeare_6.0.json
curl -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/_bulk?pretty' --data-binary @logs.jsonl

Or, for Windows users, in PowerShell:

Invoke-RestMethod "http://localhost:9200/bank/account/_bulk?pretty" -Method Post -ContentType 'application/x-ndjson' -InFile "accounts.json"
Invoke-RestMethod "http://localhost:9200/shakespeare/doc/_bulk?pretty" -Method Post -ContentType 'application/x-ndjson' -InFile "shakespeare_6.0.json"
Invoke-RestMethod "http://localhost:9200/_bulk?pretty" -Method Post -ContentType 'application/x-ndjson' -InFile "logs.jsonl"

You can save these commands in a .ps1 script file and run the script to perform the import.

These commands might take some time to execute, depending on the available computing resources.
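The files passed via --data-binary are in the bulk API's NDJSON format: each document is a pair of lines, an action line followed by the document source, and the body must end with a newline. A minimal sketch of building such a payload (the helper and sample documents are hypothetical, not part of the tutorial data):

```python
import json

def to_bulk_ndjson(docs):
    # Each document becomes two lines: an action line and the source line.
    lines = []
    for doc in docs:
        # An empty index action targets the index/type given in the request URL.
        lines.append(json.dumps({"index": {}}))
        lines.append(json.dumps(doc))
    # The bulk API requires the body to be terminated by a newline.
    return "\n".join(lines) + "\n"

payload = to_bulk_ndjson([
    {"speaker": "KING HENRY IV", "line_id": 2},
    {"speaker": "FALSTAFF", "line_id": 99},
])
print(payload)
```

This pairing is why the Content-Type must be application/x-ndjson rather than application/json: the body is a sequence of independent JSON lines, not one JSON document.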

Verify the data loaded successfully by querying all of the indices again:

GET /_cat/indices?v

Your output should look similar to this:

health status index             uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   logstash-2015.05. dL2ZaIelR_uvKMnPYy_8Eg                                     .5mb           .5mb
yellow open   logstash-2015.05. M1PWnqXLRgClt-iwqN4OUg                                     .1mb           .1mb
yellow open   customer          p6H8gEOdQAWBuSN2HDEjZA                                     .4kb           .4kb
yellow open   shakespeare       I8mqiFkkTdK9IlcarIZA4A                                     .5mb           .5mb
yellow open   bank              l45mhl-7QNibqbmbi2Jmbw                                     .1kb           .1kb
green  open   .kibana_1         CUsQj9zkSCSC-XiDJgXYQQ                                     .6kb           .6kb
yellow open   logstash-2015.05. 14rDFdQFTQK-GNgDXtlmeQ                                     .6mb           .6mb

Defining your index patterns

Index patterns tell Kibana which Elasticsearch indices you want to explore. An index pattern can match the name of a single index, or include a wildcard (*) to match multiple indices.

For example, Logstash typically creates a series of indices in the format logstash-YYYY.MM.DD. To explore all of the log data from May 2018, you could specify the index pattern logstash-2018.05*.
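This wildcard matching behaves like ordinary shell-style globbing, which can be illustrated with Python's fnmatch (the index names here are examples, not output from a real cluster):

```python
from fnmatch import fnmatch

indices = [
    "logstash-2018.05.01",
    "logstash-2018.05.31",
    "logstash-2018.06.01",
    "shakespeare",
    "bank",
]

# The pattern logstash-2018.05* matches only the May 2018 indices.
matched = [name for name in indices if fnmatch(name, "logstash-2018.05*")]
print(matched)  # ['logstash-2018.05.01', 'logstash-2018.05.31']
```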

You’ll create patterns for the Shakespeare data set, which has an index named shakespeare, and the accounts data set, which has an index named bank. These data sets don’t contain time-series data.

  1. In Kibana, open Management, and then click Index Patterns.
  2. If this is your first index pattern, the Create index pattern page opens automatically. Otherwise, click Create index pattern in the upper left.
  3. Enter shakes* in the Index pattern field.
