Hadoop Ecosystem Setup (3 Nodes) - 09. Flume Configuration
# http://archive.apache.org/dist/flume/1.8.0/
# ==================================================================Install Flume
tar -zxvf ~/apache-flume-1.8.0-bin.tar.gz -C /usr/local
mv /usr/local/apache-flume-1.8.0-bin /usr/local/flume-1.8.0
rm -r ~/apache-flume-1.8.0-bin.tar.gz
# Environment variables
# ==================================================================node1 node2 node3
vi /etc/profile
# add the following below the line "export PATH USER LOGNAME MAIL HOSTNAME HISTSIZE HISTCONTROL"
export JAVA_HOME=/usr/java/jdk1.8.0_111
export ZOOKEEPER_HOME=/usr/local/zookeeper-3.4.12
export HADOOP_HOME=/usr/local/hadoop/hadoop-2.7.6
export MYSQL_HOME=/usr/local/mysql
export HBASE_HOME=/usr/local/hbase-1.2.4
export HIVE_HOME=/usr/local/hive-2.1.1
export SCALA_HOME=/usr/local/scala-2.12.4
export KAFKA_HOME=/usr/local/kafka_2.12-0.10.2.1
export FLUME_HOME=/usr/local/flume-1.8.0

export PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$ZOOKEEPER_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$MYSQL_HOME/bin:$HBASE_HOME/bin:$HIVE_HOME/bin:$SCALA_HOME/bin:$KAFKA_HOME/bin:$FLUME_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
# ==================================================================node1
# Apply the environment variables
source /etc/profile

# Check the result
echo $FLUME_HOME
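
# Optional sanity check: the flume-ng launcher ships with Flume and should now be on PATH
flume-ng version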
# ==================================================================node1
cp $FLUME_HOME/conf/flume-env.sh.template $FLUME_HOME/conf/flume-env.sh
vi $FLUME_HOME/conf/flume-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_111

# rpm -qa | grep telnet-server
# rpm -qa | grep telnet
# rpm -qa | grep xinetd
# rpm -qa | grep net-tools

yum -y install xinetd telnet telnet-server net-tools

systemctl enable telnet.socket
systemctl start telnet.socket
systemctl enable xinetd
systemctl start xinetd

# These jars conflict with Hadoop's logging jars; remove the following files
rm -r $FLUME_HOME/lib/slf4j-api-1.6.1.jar
rm -r $FLUME_HOME/lib/slf4j-log4j12-1.6.1.jar
rm -r $HBASE_HOME/lib/slf4j-log4j12-1.7.5.jar

# For the HDFS sink (where hdfs.path is required), copy the Hadoop jars below onto Flume's classpath
cp $HADOOP_HOME/share/hadoop/common/lib/commons-configuration-1.6.jar $FLUME_HOME/lib/
cp $HADOOP_HOME/share/hadoop/common/lib/hadoop-auth-2.7.6.jar $FLUME_HOME/lib/
cp $HADOOP_HOME/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar $FLUME_HOME/lib/
cp $HADOOP_HOME/share/hadoop/common/hadoop-common-2.7.6.jar $FLUME_HOME/lib/
cp $HADOOP_HOME/share/hadoop/common/hadoop-nfs-2.7.6.jar $FLUME_HOME/lib/
cp $HADOOP_HOME/share/hadoop/hdfs/hadoop-hdfs-2.7.6.jar $FLUME_HOME/lib/
cp $HADOOP_HOME/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.6.jar $FLUME_HOME/lib/
cp $HADOOP_HOME/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar $FLUME_HOME/lib/

scp -r $FLUME_HOME node2:/usr/local/
scp -r $FLUME_HOME node3:/usr/local/
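
# A quick illustrative check (run on node1) that the Hadoop jars landed on Flume's classpath
ls $FLUME_HOME/lib | grep -E 'hadoop|commons-daemon'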
# ==================================================================node2 node3
# Apply the environment variables
source /etc/profile

# Check the result
echo $FLUME_HOME
shutdown -h now
# Take a VM snapshot: flume
# ==================================================================Reference
Source (log input): where events enter the agent. Built-in types include Avro Source, Thrift Source, Exec Source, JMS Source,
Spooling Directory Source, Kafka Source, NetCat Source, Sequence Generator Source, Syslog Source, HTTP Source, Stress Source,
Legacy Source, Custom Source, Scribe Source, and Twitter 1% firehose Source.
Channel (log pipeline): all data arriving from a Source is queued here until a Sink consumes it. Types include
Memory Channel, JDBC Channel, Kafka Channel, File Channel, Spillable Memory Channel, Pseudo Transaction Channel, and Custom Channel.
Sink (log output): events leave the agent through a Sink. Types include HDFS Sink, Hive Sink, Logger Sink, Avro Sink, Thrift Sink,
IRC Sink, File Roll Sink, Null Sink, HBase Sink, Async HBase Sink, Morphline Solr Sink, Elastic Search Sink, Kite Dataset Sink,
Kafka Sink, and Custom Sink.
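
To make the Source-to-Channel-to-Sink wiring concrete, here is a minimal sketch of an agent configuration, following the layout of the NetCat example in the Flume user guide (the agent name a1, component names r1/c1/k1, the file name, and the port are illustrative):

# Save as $FLUME_HOME/conf/example.conf
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# NetCat source: listens on a TCP port and turns each line of text into an event
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444
a1.sources.r1.channels = c1

# Memory channel: buffers events in RAM between source and sink
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Logger sink: writes events to Flume's log, handy for smoke tests
a1.sinks.k1.type = logger
a1.sinks.k1.channel = c1

# Start the agent, then feed it lines from another shell with the telnet installed earlier:
# flume-ng agent --conf $FLUME_HOME/conf --conf-file $FLUME_HOME/conf/example.conf --name a1 -Dflume.root.logger=INFO,console
# telnet localhost 44444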
# ==================================================================Example
To be updated in a later post.
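
In the meantime, a hedged sketch of an agent that writes to HDFS, which is what the Hadoop jars copied above enable (the exec command, the file name, and the namenode URI are illustrative; point hdfs.path at your cluster's fs.defaultFS):

# Save as $FLUME_HOME/conf/hdfs-example.conf
a2.sources = r2
a2.channels = c2
a2.sinks = k2

# Exec source: tails a local log file
a2.sources.r2.type = exec
a2.sources.r2.command = tail -F /var/log/messages
a2.sources.r2.channels = c2

# File channel: durable on-disk buffering (uses the default checkpoint/data dirs under ~/.flume)
a2.channels.c2.type = file

# HDFS sink: hdfs.path is the required setting the note above refers to
a2.sinks.k2.type = hdfs
a2.sinks.k2.channel = c2
a2.sinks.k2.hdfs.path = hdfs://node1:9000/flume/events/%Y-%m-%d
a2.sinks.k2.hdfs.fileType = DataStream
a2.sinks.k2.hdfs.useLocalTimeStamp = true
a2.sinks.k2.hdfs.rollInterval = 60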