MapReduce Demo
Goal: for each employee's phone number, compute the month's mobile upload traffic, download traffic, and total traffic.
Sample input (tab-separated: phone number, upload, download):
13612345678 6000 1000
13612345678 2000 3000
Code:
Driver (entry point) class: DataCount.java
package cn.terry.mr;

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
// BUG: wrong Text class — this causes the failure in the first run below
// (it should be org.apache.hadoop.io.Text)
import com.sun.jersey.core.impl.provider.entity.XMLJAXBElementProvider.Text;

public class DataCount {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);
        job.setJarByClass(DataCount.class);
        job.setMapperClass(MRMap.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        job.setReducerClass(MRReduce.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(DataBean.class);
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.waitForCompletion(true);
    }
}

Data entity class: DataBean.java
package cn.terry.mr;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

public class DataBean implements Writable {
    private String telNo;
    private Long upPayLoad;
    private Long downPayLoad;
    private Long totalPayLoad;

    public DataBean() {
    }

    public DataBean(String telNo, Long upPayLoad, Long downPayLoad) {
        this.telNo = telNo;
        this.upPayLoad = upPayLoad;
        this.downPayLoad = downPayLoad;
        this.totalPayLoad = this.upPayLoad + this.downPayLoad;
    }

    public String getTelNo() { return telNo; }
    public void setTelNo(String telNo) { this.telNo = telNo; }
    public Long getUpPayLoad() { return upPayLoad; }
    public void setUpPayLoad(Long upPayLoad) { this.upPayLoad = upPayLoad; }
    public Long getDownPayLoad() { return downPayLoad; }
    public void setDownPayLoad(Long downPayLoad) { this.downPayLoad = downPayLoad; }
    public Long getTotalPayLoad() { return totalPayLoad; }
    public void setTotalPayLoad(Long totalPayLoad) { this.totalPayLoad = totalPayLoad; }

    // serialize: write the fields in a fixed order
    @Override
    public void write(DataOutput out) throws IOException {
        out.writeUTF(telNo);
        out.writeLong(upPayLoad);
        out.writeLong(downPayLoad);
        out.writeLong(totalPayLoad);
    }

    // deserialize: read the fields back in the same order
    @Override
    public void readFields(DataInput in) throws IOException {
        this.telNo = in.readUTF();
        this.upPayLoad = in.readLong();
        this.downPayLoad = in.readLong();
        this.totalPayLoad = in.readLong();
    }

    @Override
    public String toString() {
        return this.upPayLoad + "\t" + this.downPayLoad + "\t" + this.totalPayLoad;
    }
}

Mapper class: MRMap.java
package cn.terry.mr;

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MRMap extends Mapper<LongWritable, Text, Text, DataBean> {
    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String line = value.toString();
        String[] fields = line.split("\t");
        String telNo = fields[0];
        Long up = Long.parseLong(fields[1]);
        Long down = Long.parseLong(fields[2]);
        DataBean bean = new DataBean(telNo, up, down);
        context.write(new Text(telNo), bean);
    }
}

Reducer class: MRReduce.java
package cn.terry.mr;

import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class MRReduce extends Reducer<Text, DataBean, Text, DataBean> {
    @Override
    protected void reduce(Text key, Iterable<DataBean> v2, Context context) throws IOException, InterruptedException {
        long up_sum = 0;
        long down_sum = 0;
        for (DataBean bean : v2) {
            up_sum += bean.getUpPayLoad();
            down_sum += bean.getDownPayLoad();
        }
        DataBean bean = new DataBean("", up_sum, down_sum);
        context.write(key, bean);
    }
}
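Writable serialization only works if write() and readFields() handle the fields in exactly the same order. That round-trip can be checked with plain JDK streams, no Hadoop required; the sketch below merely mirrors DataBean's field layout (the WritableLayoutDemo class and its serialize helper are illustrative, not part of the job):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class WritableLayoutDemo {
    // Mirrors DataBean's serialized layout: a UTF string followed by three longs.
    static byte[] serialize(String telNo, long up, long down) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeUTF(telNo);
        out.writeLong(up);
        out.writeLong(down);
        out.writeLong(up + down); // totalPayLoad is derived in DataBean's constructor
        return buf.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] bytes = serialize("13612345678", 6000L, 1000L);
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
        // Fields must be read back in exactly the order they were written.
        String telNo = in.readUTF();
        long up = in.readLong();
        long down = in.readLong();
        long total = in.readLong();
        System.out.println(telNo + "\t" + up + "\t" + down + "\t" + total);
        // prints: 13612345678	6000	1000	7000
    }
}
```

If the read order drifted out of sync with the write order, the longs would come back scrambled, which is exactly the kind of bug a fixed-layout Writable has to avoid.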
Run the job:

17/11/08 11:34:25 INFO client.RMProxy: Connecting to ResourceManager at master/1:8032
17/11/08 11:34:27 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
17/11/08 11:34:27 INFO input.FileInputFormat: Total input paths to process : 1
17/11/08 11:34:28 INFO mapreduce.JobSubmitter: number of splits:1
17/11/08 11:34:28 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1509957441313_0002
17/11/08 11:34:29 INFO impl.YarnClientImpl: Submitted application application_1509957441313_0002
17/11/08 11:34:29 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1509957441313_0002/
17/11/08 11:34:29 INFO mapreduce.Job: Running job: job_1509957441313_0002
17/11/08 11:34:46 INFO mapreduce.Job: Job job_1509957441313_0002 running in uber mode : false
17/11/08 11:34:46 INFO mapreduce.Job: map 0% reduce 0%
17/11/08 11:34:55 INFO mapreduce.Job: Task Id : attempt_1509957441313_0002_m_000000_0, Status : FAILED
Error: java.io.IOException: Initialization of all the collectors failed. Error in last collector was :class com.sun.jersey.core.impl.provider.entity.XMLJAXBElementProvider$Text
        at org.apache.hadoop.mapred.MapTask.createSortingCollector(MapTask.java:415)
        at org.apache.hadoop.mapred.MapTask.access$100(MapTask.java:81)
        at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:698)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:770)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
The error shows that DataCount imported the wrong Text class (the Jersey XMLJAXBElementProvider.Text instead of Hadoop's). Change the import in DataCount to: import org.apache.hadoop.io.Text;
Run it again:
[root@master bin]# hadoop jar /home/hadoop/mpCount.jar cn.terry.mr.DataCount /data3.txt /MROut4
17/11/08 16:23:45 INFO client.RMProxy: Connecting to ResourceManager at master/x.x.x.x:8032
17/11/08 16:23:46 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
17/11/08 16:23:47 INFO input.FileInputFormat: Total input paths to process : 1
17/11/08 16:23:47 INFO mapreduce.JobSubmitter: number of splits:1
17/11/08 16:23:47 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1509957441313_0008
17/11/08 16:23:48 INFO impl.YarnClientImpl: Submitted application application_1509957441313_0008
17/11/08 16:23:48 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1509957441313_0008/
17/11/08 16:23:48 INFO mapreduce.Job: Running job: job_1509957441313_0008
17/11/08 16:24:02 INFO mapreduce.Job: Job job_1509957441313_0008 running in uber mode : false
17/11/08 16:24:02 INFO mapreduce.Job: map 0% reduce 0%
17/11/08 16:24:14 INFO mapreduce.Job: map 100% reduce 0%
17/11/08 16:24:25 INFO mapreduce.Job: map 100% reduce 100%
17/11/08 16:24:26 INFO mapreduce.Job: Job job_1509957441313_0008 completed successfully

View the result:
[root@master bin]# hdfs dfs -ls /MROut4
Found 2 items
-rw-r--r--   2 root supergroup          0 2017-11-08 16:24 /MROut4/_SUCCESS
-rw-r--r--   2 root supergroup        106 2017-11-08 16:24 /MROut4/part-r-00000
[root@master bin]# hdfs dfs -cat /MROut4/part-r-00000
13112345678	1800	400	2200
13512345678	9500	400	9900
13612345678	8000	4000	12000
13812345678	3500	400	3900

(My versions of Chrome and IE are not compatible with cnblogs' code- and picture-insertion features, so I apologize for not presenting the code and results in a friendlier format.)
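As a sanity check against the sample input: phone 13612345678 appears in two records, (6000, 1000) and (2000, 3000), and its output row sums to 8000 up, 4000 down, 12000 total. The reduce-side accumulation can be reproduced in plain Java (a standalone sketch; the SumCheck class is illustrative and not part of the job):

```java
public class SumCheck {
    // Sums the (up, down) pairs for one phone number,
    // the same accumulation MRReduce.reduce() performs.
    static long[] aggregate(long[][] records) {
        long upSum = 0, downSum = 0;
        for (long[] rec : records) {
            upSum += rec[0];
            downSum += rec[1];
        }
        return new long[]{upSum, downSum, upSum + downSum};
    }

    public static void main(String[] args) {
        // The two test records for 13612345678: (up, down)
        long[][] records = {{6000, 1000}, {2000, 3000}};
        long[] r = aggregate(records);
        System.out.println(r[0] + "\t" + r[1] + "\t" + r[2]);
        // prints: 8000	4000	12000
    }
}
```

The printed row matches the 13612345678 line in part-r-00000, confirming the job aggregated the sample records as intended.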