Logs => Flume => Kafka => Spark Streaming => HBase

Log generation

#coding=UTF-8
import random
import time

url_paths = [
    "class/112.html",
    "class/128.html",
    "learn/821",
    "class/145.html",
    "class/146.html",
    "class/131.html",
    "class/130.html",
    "course/list"
]

ip_slices = [132, 156, 124, 10, 29, 167, 143, 187, 30, 46, 55, 63, 72, 87, 98, 168]

http_referers = [
    "http://www.baidu.com/s?wd={query}",
    "http://www.sogou.com/web?query={query}",
    "https://search.yahoo.com/search?p={query}",
    "http://www.bing.com/search?q={query}"
]

search_keyword = ["Spark SQL实战", "Hadoop基础", "Storm实战", "Spark Streaming实战", "大数据面试"]

# HTTP status codes to sample from
status_codes = ["200", "404", "500"]

def sample_url():
    return random.sample(url_paths, 1)[0]

def sample_ip():
    # build a fake IP from four random slices
    slice = random.sample(ip_slices, 4)
    return ".".join([str(item) for item in slice])

def sample_status_code():
    return random.sample(status_codes, 1)[0]

def sample_referer():
    # ~80% of entries carry no referer
    if random.uniform(0, 1) > 0.2:
        return "-"
    refer_str = random.sample(http_referers, 1)
    query_str = random.sample(search_keyword, 1)
    return refer_str[0].format(query=query_str[0])

def generate_log(count=3):
    time_str = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
    f = open("/home/hadoop/data/project/logs/access.log", "w+")
    while count >= 1:
        # no brackets around the time, so the Spark job can parse infos(1) as yyyy-MM-dd HH:mm:ss
        query_log = "{ip}\t{local_time}\t\"GET /{url} HTTP/1.1\"\t{status_code}\t\"{referer}\"".format(ip=sample_ip(), local_time=time_str, url=sample_url(), status_code=sample_status_code(), referer=sample_referer())
        print(query_log)
        f.write(query_log + "\n")
        count = count - 1
    f.close()

if __name__ == '__main__':
    # print(sample_ip())
    # print(sample_url())
    generate_log(10)

Flume ingestion of the logs

exec-memory-kafka.conf
#exec-memory-kafka

exec-memory-kafka.sources = exec-source
exec-memory-kafka.channels = memory-channel
exec-memory-kafka.sinks = kafka-sink

exec-memory-kafka.sources.exec-source.type = exec
exec-memory-kafka.sources.exec-source.command = tail -F /home/hadoop/data/project/logs/access.log
exec-memory-kafka.sources.exec-source.shell = /bin/sh -c
exec-memory-kafka.sources.exec-source.channels = memory-channel

exec-memory-kafka.channels.memory-channel.type = memory

exec-memory-kafka.sinks.kafka-sink.type = org.apache.flume.sink.kafka.KafkaSink
exec-memory-kafka.sinks.kafka-sink.topic = streamingtopic
exec-memory-kafka.sinks.kafka-sink.brokerList = hadoop:9092
exec-memory-kafka.sinks.kafka-sink.batchSize = 5
exec-memory-kafka.sinks.kafka-sink.requiredAcks = 1
exec-memory-kafka.sinks.kafka-sink.channel = memory-channel

flume-ng agent \
--name exec-memory-kafka \
--conf $FLUME_HOME/conf \
--conf-file /home/hadoop/data/project/exec-memory-kafka.conf \
-Dflume.root.logger=INFO,console

Start a Kafka console consumer to verify the flow: kafka-console-consumer.sh --zookeeper hadoop:2181 --topic streamingtopic --from-beginning

Start Hadoop: start-dfs.sh

Start HBase: start-hbase.sh

Enter the HBase shell with hbase shell, then list tables: list
HBase table design:
create 'lin_course_clickcount','info'
create 'lin_course_search_clickcount','info'
Scan a table: scan 'lin_course_clickcount'
Rowkey design:
day_courseid
day_search_courseid
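For example, DateUtils.parseToMinute (shown below) formats times as yyyyMMddHHmmss, so the first 8 characters give the day: a click on /class/128.html on 2019-06-06 is counted under rowkey 20190606_128 in lin_course_clickcount, and the same click referred from Baidu under 20190606_www.baidu.com_128 in lin_course_search_clickcount.

The project's pom.xml: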

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.lin.spark</groupId>
    <artifactId>SparkStreaming</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <scala.version>2.11.8</scala.version>
        <kafka.version>0.9.0.0</kafka.version>
        <spark.version>2.2.0</spark.version>
        <hadoop.version>2.6.0-cdh5.7.0</hadoop.version>
        <hbase.version>1.2.0-cdh5.7.0</hbase.version>
    </properties>

    <!-- Add the Cloudera repository -->
    <repositories>
        <repository>
            <id>cloudera</id>
            <url>https://repository.cloudera.com/artifactory/cloudera-repos</url>
        </repository>
    </repositories>

    <dependencies>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>${scala.version}</version>
        </dependency>

        <!-- Kafka dependency -->
        <!--
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka_2.11</artifactId>
            <version>${kafka.version}</version>
        </dependency>
        -->

        <!-- Hadoop dependency -->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>${hadoop.version}</version>
        </dependency>

        <!-- HBase dependencies -->
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-client</artifactId>
            <version>${hbase.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-server</artifactId>
            <version>${hbase.version}</version>
        </dependency>

        <!-- Spark Streaming dependency -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>

        <!-- Spark Streaming + Flume integration dependencies -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming-flume_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming-flume-sink_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>

        <!-- Spark Streaming + Kafka integration dependency -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming-kafka-0-8_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>

        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-lang3</artifactId>
            <version>3.5</version>
        </dependency>

        <!-- Spark SQL dependency -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>

        <dependency>
            <groupId>com.fasterxml.jackson.module</groupId>
            <artifactId>jackson-module-scala_2.11</artifactId>
            <version>2.6.5</version>
        </dependency>

        <dependency>
            <groupId>net.jpountz.lz4</groupId>
            <artifactId>lz4</artifactId>
            <version>1.3.0</version>
        </dependency>

        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>5.1.38</version>
        </dependency>

        <dependency>
            <groupId>org.apache.flume.flume-ng-clients</groupId>
            <artifactId>flume-ng-log4jappender</artifactId>
            <version>1.6.0</version>
        </dependency>
    </dependencies>

    <build>
        <!--
        <sourceDirectory>src/main/scala</sourceDirectory>
        <testSourceDirectory>src/test/scala</testSourceDirectory>
        -->
        <plugins>
            <plugin>
                <groupId>org.scala-tools</groupId>
                <artifactId>maven-scala-plugin</artifactId>
                <executions>
                    <execution>
                        <goals>
                            <goal>compile</goal>
                            <goal>testCompile</goal>
                        </goals>
                    </execution>
                </executions>
                <configuration>
                    <scalaVersion>${scala.version}</scalaVersion>
                    <args>
                        <arg>-target:jvm-1.5</arg>
                    </args>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-eclipse-plugin</artifactId>
                <configuration>
                    <downloadSources>true</downloadSources>
                    <buildcommands>
                        <buildcommand>ch.epfl.lamp.sdt.core.scalabuilder</buildcommand>
                    </buildcommands>
                    <additionalProjectnatures>
                        <projectnature>ch.epfl.lamp.sdt.core.scalanature</projectnature>
                    </additionalProjectnatures>
                    <classpathContainers>
                        <classpathContainer>org.eclipse.jdt.launching.JRE_CONTAINER</classpathContainer>
                        <classpathContainer>ch.epfl.lamp.sdt.launching.SCALA_CONTAINER</classpathContainer>
                    </classpathContainers>
                </configuration>
            </plugin>
        </plugins>
    </build>

    <reporting>
        <plugins>
            <plugin>
                <groupId>org.scala-tools</groupId>
                <artifactId>maven-scala-plugin</artifactId>
                <configuration>
                    <scalaVersion>${scala.version}</scalaVersion>
                </configuration>
            </plugin>
        </plugins>
    </reporting>
</project>
The Spark Streaming job:

package com.lin.spark.streaming.project.spark

import com.lin.spark.streaming.project.dao.{CourseClickCountDAO, CourseSearchClickCountDAO}
import com.lin.spark.streaming.project.domain.{ClickLog, CourseClickCount, CourseSearchClickCount}
import com.lin.spark.streaming.project.utils.DateUtils
import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

import scala.collection.mutable.ListBuffer

/**
 * Created by Administrator on 2019/6/6.
 */
object StatStreamingApp {

  def main(args: Array[String]): Unit = {
    if (args.length != 4) {
      System.err.println("Usage: StatStreamingApp <zkQuorum> <group> <topics> <numThreads>")
      System.exit(1)
    }
    // e.g. hadoop:2181 test streamingtopic 2
    val Array(zkQuorum, group, topics, numThreads) = args

    val conf = new SparkConf().setAppName("KafkaUtil").setMaster("local[4]")
    val ssc = new StreamingContext(conf, Seconds(60))

    val topicMap = topics.split(",").map((_, numThreads.toInt)).toMap
    val clickLog = KafkaUtils.createStream(ssc, zkQuorum, group, topicMap).map(_._2)

    val cleanData = clickLog.map(line => {
      val infos = line.split("\t")
      // 29.98.156.124  2019-06-06 05:37:01  "GET /class/131.html HTTP/1.1"  500  http://www.baidu.com/s?wd=Storm实战
      // case class ClickLog(ip:String, time:String, courseId:Int, statusCode:Int, referer:String)
      var courseId = 0
      val url = infos(2).split(" ")(1)
      if (url.startsWith("/class")) {
        val urlHTML = url.split("/")(2)
        courseId = urlHTML.substring(0, urlHTML.lastIndexOf(".")).toInt
      }
      ClickLog(infos(0), DateUtils.parseToMinute(infos(1)), courseId, infos(3).toInt, infos(4))
    }).filter(clickLog => clickLog.courseId != 0)

    // Persist course click counts
    cleanData.map(log => {
      (log.time.substring(0, 8) + "_" + log.courseId, 1)
    }).reduceByKey(_ + _).foreachRDD(rdd => {
      rdd.foreachPartition(partitionRecords => {
        val list = new ListBuffer[CourseClickCount]
        partitionRecords.foreach(pair => {
          list.append(CourseClickCount(pair._1, pair._2))
        })
        CourseClickCountDAO.save(list)
      })
    })

    // Persist search-engine-referred click counts
    cleanData.map(log => {
      val referer = log.referer.replaceAll("//", "/")
      val splits = referer.split("/")
      var host = ""
      if (splits.length > 2) {
        host = splits(1)
      }
      (host, log.courseId, log.time)
    }).filter(x => {
      x._1 != ""
    }).map(searchLog => {
      (searchLog._3.substring(0, 8) + "_" + searchLog._1 + "_" + searchLog._2, 1)
    }).reduceByKey(_ + _).foreachRDD(rdd => {
      rdd.foreachPartition(partitionRecords => {
        val list = new ListBuffer[CourseSearchClickCount]
        partitionRecords.foreach(pair => {
          list.append(CourseSearchClickCount(pair._1, pair._2))
        })
        CourseSearchClickCountDAO.save(list)
      })
    })

    ssc.start()
    ssc.awaitTermination()
  }
}
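To make the cleaning and rowkey logic concrete, here is a minimal, Spark-free sketch (ParseExample is a hypothetical name, and the day is hardcoded where the job would call DateUtils) that traces one generated line through the same splits:

object ParseExample {
  def main(args: Array[String]): Unit = {
    // One line in the generator's format: ip \t time \t request \t status \t referer
    val line = "29.98.156.124\t2019-06-06 05:37:01\t\"GET /class/131.html HTTP/1.1\"\t500\t\"http://www.baidu.com/s?wd=Storm实战\""
    val infos = line.split("\t")

    // Same courseId extraction as StatStreamingApp
    val url = infos(2).split(" ")(1)                                    // "/class/131.html"
    val urlHTML = url.split("/")(2)                                     // "131.html"
    val courseId = urlHTML.substring(0, urlHTML.lastIndexOf(".")).toInt // 131

    // Rowkey for lin_course_clickcount: first 8 chars of yyyyMMddHHmmss + courseId
    val day = "20190606" // i.e. DateUtils.parseToMinute(infos(1)).substring(0, 8)
    println(day + "_" + courseId)              // 20190606_131

    // Rowkey for lin_course_search_clickcount: day + referer host + courseId
    val splits = infos(4).replaceAll("//", "/").split("/")
    val host = if (splits.length > 2) splits(1) else ""
    println(day + "_" + host + "_" + courseId) // 20190606_www.baidu.com_131
  }
}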
The date utility:

package com.lin.spark.streaming.project.utils

import java.util.Date

import org.apache.commons.lang3.time.FastDateFormat

/**
 * Created by Administrator on 2019/6/6.
 */
object DateUtils {

  val YYYYMMDDHHMMSS_FORMAT = FastDateFormat.getInstance("yyyy-MM-dd HH:mm:ss")
  val TARGET_FORMAT = FastDateFormat.getInstance("yyyyMMddHHmmss")

  // source timestamp -> epoch millis
  def getTime(time: String) = {
    YYYYMMDDHHMMSS_FORMAT.parse(time).getTime
  }

  // reformat as yyyyMMddHHmmss; the first 8 characters are the day used in rowkeys
  def parseToMinute(time: String) = {
    TARGET_FORMAT.format(new Date(getTime(time)))
  }

  def main(args: Array[String]): Unit = {
    println(parseToMinute("2017-10-22 14:46:01"))
  }
}
The domain case classes:

package com.lin.spark.streaming.project.domain

case class ClickLog(ip: String, time: String, courseId: Int, statusCode: Int, referer: String)

package com.lin.spark.streaming.project.domain

/**
 * Created by Administrator on 2019/6/7.
 */
case class CourseClickCount(day_course: String, click_course: Long)

package com.lin.spark.streaming.project.domain

/**
 * Created by Administrator on 2019/6/7.
 */
case class CourseSearchClickCount(day_search_course: String, click_count: Long)
The course click-count DAO:

package com.lin.spark.streaming.project.dao

import com.lin.spark.project.utils.HBaseUtils
import com.lin.spark.streaming.project.domain.CourseClickCount
import org.apache.hadoop.hbase.client.Get
import org.apache.hadoop.hbase.util.Bytes

import scala.collection.mutable.ListBuffer

/**
 * Created by Administrator on 2019/6/7.
 */
object CourseClickCountDAO {

  val tableName = "lin_course_clickcount"
  val cf = "info"
  val qualifier = "click_count"

  // Atomically add each batch's count to the stored total
  def save(list: ListBuffer[CourseClickCount]): Unit = {
    val table = HBaseUtils.getInstance().getTable(tableName)
    for (ele <- list) {
      table.incrementColumnValue(Bytes.toBytes(ele.day_course),
        Bytes.toBytes(cf),
        Bytes.toBytes(qualifier),
        ele.click_course)
    }
  }

  // Read back the accumulated count for one rowkey
  def count(day_course: String): Long = {
    val table = HBaseUtils.getInstance().getTable(tableName)
    val get = new Get(Bytes.toBytes(day_course))
    val value = table.get(get).getValue(cf.getBytes, qualifier.getBytes)
    if (value == null) {
      0L
    } else {
      Bytes.toLong(value)
    }
  }

  def main(args: Array[String]): Unit = {
    val list = new ListBuffer[CourseClickCount]
    list.append(CourseClickCount("20190606", 99))
    list.append(CourseClickCount("20190608", 89))
    list.append(CourseClickCount("20190609", 100))
    // save(list)
    println(count("20190609"))
  }
}
The search click-count DAO:

package com.lin.spark.streaming.project.dao

import com.lin.spark.project.utils.HBaseUtils
import com.lin.spark.streaming.project.domain.CourseSearchClickCount
import org.apache.hadoop.hbase.client.Get
import org.apache.hadoop.hbase.util.Bytes

import scala.collection.mutable.ListBuffer

/**
 * Created by Administrator on 2019/6/7.
 */
object CourseSearchClickCountDAO {

  val tableName = "lin_course_search_clickcount"
  val cf = "info"
  val qualifier = "click_count"

  // Atomically add each batch's count to the stored total
  def save(list: ListBuffer[CourseSearchClickCount]): Unit = {
    val table = HBaseUtils.getInstance().getTable(tableName)
    for (ele <- list) {
      table.incrementColumnValue(Bytes.toBytes(ele.day_search_course),
        Bytes.toBytes(cf),
        Bytes.toBytes(qualifier),
        ele.click_count)
    }
  }

  // Read back the accumulated count for one rowkey
  def count(day_course: String): Long = {
    val table = HBaseUtils.getInstance().getTable(tableName)
    val get = new Get(Bytes.toBytes(day_course))
    val value = table.get(get).getValue(cf.getBytes, qualifier.getBytes)
    if (value == null) {
      0L
    } else {
      Bytes.toLong(value)
    }
  }

  def main(args: Array[String]): Unit = {
    val list = new ListBuffer[CourseSearchClickCount]
    list.append(CourseSearchClickCount("20190606_www.baidu.com_99", 99))
    list.append(CourseSearchClickCount("20190608_www.bing.com_89", 89))
    list.append(CourseSearchClickCount("20190609_www.csdn.net_100", 100))
    save(list)
    // println(count("20190609"))
  }
}
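Both DAOs depend on com.lin.spark.project.utils.HBaseUtils, which is not listed in this post. A minimal sketch of what it has to provide, assuming ZooKeeper at hadoop:2181 and HBase data under hdfs://hadoop:8020/hbase (both connection values are assumptions; adjust them to your cluster):

package com.lin.spark.project.utils

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.HTable

// Stand-in for the helper the DAOs import; the getInstance() indirection
// suggests a singleton, so here the object simply returns itself.
object HBaseUtils {

  private val conf = HBaseConfiguration.create()
  // Assumed connection settings
  conf.set("hbase.zookeeper.quorum", "hadoop:2181")
  conf.set("hbase.rootdir", "hdfs://hadoop:8020/hbase")

  def getInstance(): HBaseUtils.type = this

  // HTable is deprecated in HBase 1.2 but matches the DAO usage above
  def getTable(tableName: String): HTable = new HTable(conf, tableName)
}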
