Approach 1: Using reduceByKey

Data file word.txt:

张三
李四
王五
李四
王五
李四
王五
李四
王五
王五
李四
李四
李四
李四
李四

Code:

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.rdd.RDD;
import org.apache.spark.sql.SparkSession;
import scala.Tuple2;

public class HelloWord {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().master("local[*]").appName("Spark").getOrCreate();
        final JavaSparkContext ctx = JavaSparkContext.fromSparkContext(spark.sparkContext());

        // Read the file; each line holds one name.
        RDD<String> rdd = spark.sparkContext().textFile("C:\\Users\\boco\\Desktop\\word.txt", 1);
        JavaRDD<String> javaRDD = rdd.toJavaRDD();

        // Map each line to a (word, 1) pair.
        JavaPairRDD<String, Integer> javaRDDMap = javaRDD.mapToPair(new PairFunction<String, String, Integer>() {
            public Tuple2<String, Integer> call(String s) {
                return new Tuple2<String, Integer>(s, 1);
            }
        });

        // Sum the counts for each key.
        JavaPairRDD<String, Integer> result = javaRDDMap.reduceByKey(new Function2<Integer, Integer, Integer>() {
            @Override
            public Integer call(Integer integer, Integer integer2) throws Exception {
                return integer + integer2;
            }
        });

        System.out.println(result.collect());
    }
}

Output:

[(张三,1), (李四,9), (王五,5)]
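As a side note, on Java 8 or newer the two anonymous classes above can be collapsed into lambdas. A minimal sketch of the same count (the class name HelloWordLambda is just illustrative; the input path is the same as above):

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.sql.SparkSession;
import scala.Tuple2;

public class HelloWordLambda {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().master("local[*]").appName("Spark").getOrCreate();

        // One name per line, same file as above.
        JavaRDD<String> lines = spark.sparkContext()
                .textFile("C:\\Users\\boco\\Desktop\\word.txt", 1)
                .toJavaRDD();

        // (word, 1) pairs, then sum per key; lambdas replace PairFunction/Function2.
        JavaPairRDD<String, Integer> counts = lines
                .mapToPair(s -> new Tuple2<>(s, 1))
                .reduceByKey((a, b) -> a + b);

        System.out.println(counts.collect());
        spark.stop();
    }
}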

Approach 2: Using Spark SQL

Implementation with Spark SQL:

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

import java.util.ArrayList;

public class HelloWord {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().master("local[*]").appName("Spark").getOrCreate();
        final JavaSparkContext ctx = JavaSparkContext.fromSparkContext(spark.sparkContext());

        // Read the text file; spark.read().text() yields one Row per line.
        JavaRDD<Row> rows = spark.read().text("C:\\Users\\boco\\Desktop\\word.txt").toJavaRDD();

        // Build a schema with a single nullable string column named "key".
        ArrayList<StructField> fields = new ArrayList<StructField>();
        fields.add(DataTypes.createStructField("key", DataTypes.StringType, true));
        StructType schema = DataTypes.createStructType(fields);

        // Register the DataFrame as a temp view and run a GROUP BY query.
        Dataset<Row> ds = spark.createDataFrame(rows, schema);
        ds.createOrReplaceTempView("words");
        Dataset<Row> result = spark.sql("select key, count(0) as key_count from words group by key");
        result.show();
    }
}

Result:

+---+---------+
|key|key_count|
+---+---------+
| 王五| 5|
| 李四| 9|
| 张三| 1|
+---+---------+
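The same aggregation can also be written against the DataFrame API directly, without a SQL string. A sketch, reusing the ds built above (groupBy/count produces a column named "count", renamed here to match the SQL version):

// Equivalent to the SQL query above, expressed with the DataFrame API.
Dataset<Row> result2 = ds.groupBy("key").count().withColumnRenamed("count", "key_count");
result2.show();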

Approach 3: Real-time stream analysis with Spark Streaming

Reference: the Spark Streaming Programming Guide, http://spark.apache.org/docs/latest/streaming-programming-guide.html

First, we create a JavaStreamingContext object, which is the main entry point for all streaming functionality. We create a local StreamingContext with two execution threads, and a batch interval of 1 second.

import org.apache.spark.*;
import org.apache.spark.api.java.function.*;
import org.apache.spark.streaming.*;
import org.apache.spark.streaming.api.java.*;
import scala.Tuple2;

// Create a local StreamingContext with two working threads and a batch interval of 1 second
SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("NetworkWordCount");
JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(1));

Using this context, we can create a DStream that represents streaming data from a TCP source, specified as hostname (e.g. localhost) and port (e.g. 9999).

// Create a DStream that will connect to hostname:port, like localhost:9999
JavaReceiverInputDStream<String> lines = jssc.socketTextStream("localhost", 9999);

This lines DStream represents the stream of data that will be received from the data server. Each record in this stream is a line of text. Then, we want to split the lines by space into words.

// Split each line into words
JavaDStream<String> words = lines.flatMap(x -> Arrays.asList(x.split(" ")).iterator());

flatMap is a DStream operation that creates a new DStream by generating multiple new records from each record in the source DStream. In this case, each line will be split into multiple words and the stream of words is represented as the words DStream. Note that we defined the transformation using a FlatMapFunction object. As we will discover along the way, there are a number of such convenience classes in the Java API that help define DStream transformations.
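For comparison, here is a sketch of the same split written against the FlatMapFunction interface the guide mentions, as it looks without lambdas:

import java.util.Arrays;
import java.util.Iterator;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.streaming.api.java.JavaDStream;

// Same split as above, with the FlatMapFunction spelled out.
JavaDStream<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
    @Override
    public Iterator<String> call(String x) {
        return Arrays.asList(x.split(" ")).iterator();
    }
});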

Next, we want to count these words.

// Count each word in each batch
JavaPairDStream<String, Integer> pairs = words.mapToPair(s -> new Tuple2<>(s, 1));
JavaPairDStream<String, Integer> wordCounts = pairs.reduceByKey((i1, i2) -> i1 + i2);

// Print the first ten elements of each RDD generated in this DStream to the console
wordCounts.print();

The words DStream is further mapped (one-to-one transformation) to a DStream of (word, 1) pairs, using a PairFunction object. Then, it is reduced to get the frequency of words in each batch of data, using a Function2 object. Finally, wordCounts.print() will print a few of the counts generated every second.
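Spelled out with the PairFunction and Function2 objects the paragraph refers to, the two steps look like this (a sketch; the lambda form above is equivalent):

import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import scala.Tuple2;

// Map each word to a (word, 1) pair with an explicit PairFunction...
JavaPairDStream<String, Integer> pairs = words.mapToPair(new PairFunction<String, String, Integer>() {
    @Override
    public Tuple2<String, Integer> call(String s) {
        return new Tuple2<>(s, 1);
    }
});

// ...and sum the counts per batch with an explicit Function2.
JavaPairDStream<String, Integer> wordCounts = pairs.reduceByKey(new Function2<Integer, Integer, Integer>() {
    @Override
    public Integer call(Integer i1, Integer i2) {
        return i1 + i2;
    }
});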

Note that when these lines are executed, Spark Streaming only sets up the computation it will perform after it is started, and no real processing has started yet. To start the processing after all the transformations have been set up, we finally call the start method.

jssc.start();              // Start the computation
jssc.awaitTermination(); // Wait for the computation to terminate

The complete code can be found in the Spark Streaming example JavaNetworkWordCount.

If you have already downloaded and built Spark, you can run this example as follows. You will first need to run Netcat (a small utility found in most Unix-like systems) as a data server by using

$ nc -lk 9999

Then, in a different terminal, you can start the example by using

$ ./bin/run-example streaming.JavaNetworkWordCount localhost 9999

Complete code:

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

import scala.Tuple2;

public class HelloWord {
    public static void main(String[] args) throws InterruptedException {
        // Create a local StreamingContext with a batch interval of 60 seconds
        SparkConf conf = new SparkConf().setMaster("local[*]").setAppName("NetworkWordCount");
        JavaSparkContext jsc = new JavaSparkContext(conf);
        jsc.setLogLevel("WARN");
        JavaStreamingContext jssc = new JavaStreamingContext(jsc, Durations.seconds(60));

        // Create a DStream that will connect to hostname:port, like localhost:9999
        JavaReceiverInputDStream<String> lines = jssc.socketTextStream("xx.xx.xx.xx", 19999);

        // Split each line into words
        JavaDStream<String> words = lines.flatMap(x -> Arrays.asList(x.split(" ")).iterator());

        // Count each word in each batch
        JavaPairDStream<String, Integer> pairs = words.mapToPair(s -> new Tuple2<>(s, 1));
        JavaPairDStream<String, Integer> wordCounts = pairs.reduceByKey((i1, i2) -> i1 + i2);

        // Print the first ten elements of each RDD generated in this DStream to the console
        wordCounts.print();

        jssc.start();            // Start the computation
        jssc.awaitTermination(); // Wait for the computation to terminate
    }
}

Test (the port matches the 19999 used in the code above):

[root@abced dx]# nc -lk 19999
hellow wrd
hello word
hello word
hello dkk
hl
hello
hello
hello word
hello word
hello java
hello c@
hello hadoop]
hello spark
hello word
hello kafka
hello c
hello c#
hello .net core
net cre
workd
hle
hello words
hke hjh
hek
hel
hl3
hhk
hke

Program output:

-------------------------------------------
Time: ms
-------------------------------------------
(c,)
(spark,)
(kafka,)
(c#,)
(hello,)
(java,)
(c@,)
(hadoop],)
(word,)
-------------------------------------------
Time: ms
-------------------------------------------
(hle,)
(words,)
(.net,)
(hello,)
(workd,)
(cre,)
(net,)
(core,)
-------------------------------------------
Time: ms
-------------------------------------------
(,)
(hhk,)
(hek,)
(hel,)
(,)
(hjh,)
(,)
(hke,)
(,)
(hl3,)

Conclusion: the stream is processed batch by batch with no accumulation; each batch's counts cover only the data received in that batch, not a running total over earlier batches.
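If a running total across batches is wanted, Spark Streaming provides stateful operators. A minimal sketch using updateStateByKey on the pairs stream from the complete code above (the checkpoint path is a placeholder; state tracking requires one):

import java.util.List;
import org.apache.spark.api.java.Optional;
import org.apache.spark.streaming.api.java.JavaPairDStream;

// updateStateByKey needs a checkpoint directory (placeholder path).
jssc.checkpoint("/tmp/spark-checkpoint");

// Fold each batch's counts into a cumulative count per word.
JavaPairDStream<String, Integer> totals = pairs.updateStateByKey(
        (List<Integer> newValues, Optional<Integer> state) -> {
            int sum = state.orElse(0);
            for (Integer v : newValues) {
                sum += v;
            }
            return Optional.of(sum);
        });
totals.print();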
