# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what you are trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: hna-es
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: hna-es-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /var/lib/elasticsearch
path.data: /data/components/elasticsearch
#
# Path to log files:
#
path.logs: /data/logs/elasticsearch
#path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
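# A hedged example for a machine with 8 GB of RAM (an assumption, not taken from the
# original setup): the heap is configured in jvm.options rather than in this file,
# and -Xms and -Xmx should be set to the same value, for example:
#
#   -Xms4g
#   -Xmx4g
#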
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when a new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["192.168.100.130"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 3
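#
# For this two-node cluster (hna-es-1 and hna-es-2) the formula gives 2 / 2 + 1 = 2,
# so a hedged value for this setup would be the line below (not part of the original
# file; note that with only two nodes this also means no master can be elected while
# either node is down):
#
#discovery.zen.minimum_master_nodes: 2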
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
# ---------------------------------- X-Pack ----------------------------------
xpack.ssl.key: /etc/elasticsearch/config/hna-es-1/hna-es-1.key
xpack.ssl.certificate: /etc/elasticsearch/config/hna-es-1/hna-es-1.crt
xpack.ssl.certificate_authorities: /etc/elasticsearch/config/ca/ca.crt
xpack.security.transport.ssl.enabled: true
xpack.ssl.verification_mode: certificate

elasticsearch1.yml
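
The xpack.ssl.* paths above assume that a CA certificate and one certificate/key pair per node already exist under /etc/elasticsearch/config/. A minimal sketch of how such files could be produced with the certgen utility shipped with X-Pack 5.x/6.x follows; the binary path and the unzip target are assumptions based on a package install, not details from the original setup.

# Run interactively on one node; certgen prompts for instance names (hna-es-1, hna-es-2)
# and writes a zip containing ca/ca.crt plus one <instance>.crt / <instance>.key per instance.
/usr/share/elasticsearch/bin/x-pack/certgen

# Unpack the bundle where elasticsearch.yml expects it (output file name chosen at the prompt):
unzip certificate-bundle.zip -d /etc/elasticsearch/config/

The same bundle would then be copied to the second node, so that both nodes share ca/ca.crt while each uses its own certificate/key pair.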

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what you are trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: hna-es
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: hna-es-2
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /var/lib/elasticsearch
path.data: /data/components/elasticsearch
#
# Path to log files:
#
path.logs: /data/logs/elasticsearch
#path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when a new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["192.168.100.129"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 3
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
# ---------------------------------- X-Pack ----------------------------------
xpack.ssl.key: /etc/elasticsearch/config/hna-es-2/hna-es-2.key
xpack.ssl.certificate: /etc/elasticsearch/config/hna-es-2/hna-es-2.crt
xpack.ssl.certificate_authorities: /etc/elasticsearch/config/ca/ca.crt
xpack.security.transport.ssl.enabled: true
xpack.ssl.verification_mode: certificate

elasticsearch2.yml

# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 8080

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This only affects
# the URLs generated by Kibana, your proxy is expected to remove the basePath value before forwarding requests
# to Kibana. This setting cannot end in a slash.
#server.basePath: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"

# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://192.168.100.129:9200"

# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"

# The default application to load.
#kibana.defaultAppId: "discover"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
elasticsearch.username: "kibana"
elasticsearch.password: "123456"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 0

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000

# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid

# Enables you specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# The default locale. This locale can be used in certain circumstances to substitute any missing
# translations.
#i18n.defaultLocale: "en"

kibana.yml
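
The elasticsearch.username / elasticsearch.password pair above assumes the built-in kibana user's password has already been changed to "123456". A hedged sketch of doing that through the X-Pack security API (the elastic superuser credentials used here are an assumption, not taken from the original setup):

# Change the built-in kibana user's password so it matches kibana.yml
# (replace elastic:changeme with the actual superuser credentials):
curl -u elastic:changeme -H 'Content-Type: application/json' \
     -X PUT 'http://192.168.100.129:9200/_xpack/security/user/kibana/_password' \
     -d '{ "password" : "123456" }'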

Elasticsearch is written in Java, so a JDK must be installed before it can run.

Kibana is written in Node.js.

When installing, make sure the Elasticsearch version is no lower than the Kibana version. Once both are installed, edit the configuration files as shown above and the stack is ready to use.

For installing the X-Pack security component offline, see https://www.elastic.co/guide/en/x-pack/current/installing-xpack.html#xpack-installing-offline
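
A minimal sketch of that offline installation on a package-based install, assuming the X-Pack zip has already been downloaded to /tmp and its version matches the installed Elasticsearch and Kibana (both the path and the version number are placeholders):

# Install the X-Pack plugin into Elasticsearch and Kibana from a local file
# (paths are those of an RPM/DEB install; the version number is a placeholder):
/usr/share/elasticsearch/bin/elasticsearch-plugin install file:///tmp/x-pack-6.2.4.zip
/usr/share/kibana/bin/kibana-plugin install file:///tmp/x-pack-6.2.4.zip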

The installation and configuration of Logstash are the part that deserves the most attention and will be covered in detail; a brief sketch of a typical nginx pipeline follows below.
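
As a preview, a minimal Logstash pipeline for parsing nginx access logs might look like the following sketch. It is not the author's actual configuration: the file name, log path, index name, and credentials are illustrative assumptions.

# /etc/logstash/conf.d/nginx.conf (hypothetical file name)
input {
  file {
    path => "/var/log/nginx/access.log"   # assumed nginx access-log location
    start_position => "beginning"
  }
}

filter {
  grok {
    # COMBINEDAPACHELOG matches the default nginx "combined" log format
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}

output {
  elasticsearch {
    hosts    => ["http://192.168.100.129:9200"]
    index    => "nginx-access-%{+YYYY.MM.dd}"
    user     => "elastic"        # assumed X-Pack user
    password => "changeme"       # placeholder
  }
}

Such a pipeline can be tested in the foreground with bin/logstash -f /etc/logstash/conf.d/nginx.conf before Logstash is run as a service.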
