MapReduce: read files from HDFS and build a word-frequency inverted index in HBase
The data files on HDFS are T0, T1, and T2 (no file extension):
T0:
What has come into being in him was life, and the life was the light of all people.
The light shines in the darkness, and the darkness did not overcome it. Enter through the narrow gate;
for the gate is wide and the road is easy that leads to destruction, and there are many who take it.
For the gate is narrow and the road is hard that leads to life, and there are few who find it
T1:
Where, O death, is your victory? Where, O death, is your sting? The sting of death is sin, and.
The power of sin is the law. But thanks be to God, who gives us the victory through our Lord Jesus Christ.
The grass withers, the flower fades, when the breath of the LORD blows upon it; surely the people are grass.
The grass withers, the flower fades; but the word of our God will stand forever.
T2:
What has come into being in him was life, and the life was the light of all people.
The light shines in the darkness, and the darkness did not overcome it. Enter through the narrow gate;
for the gate is wide and the road is easy that leads to destruction, and there are many who take it.
For the gate is narrow and the road is hard that leads to life, and there are few who find it.
The implementation code is as follows:
package com.pro.bq;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableReducer;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.util.GenericOptionsParser;

public class DataFromHdfs {

    // Map: for every token in the input line, emit <"word:filename", "1">.
    public static class LocalMap extends Mapper<Object, Text, Text, Text> {
        private FileSplit split = null;
        private Text keydata = null;

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            split = (FileSplit) context.getInputSplit();
            StringTokenizer tokenStr = new StringTokenizer(value.toString());
            while (tokenStr.hasMoreTokens()) {
                String token = tokenStr.nextToken();
                // Strip a single trailing punctuation mark, if present.
                if (token.contains(",") || token.contains(".") || token.contains(";") || token.contains("?")) {
                    token = token.substring(0, token.length() - 1);
                }
                // Recover the file name (T0/T1/T2) from the input split's path.
                String filePath = split.getPath().toString();
                int index = filePath.indexOf("T");
                keydata = new Text(token + ":" + filePath.substring(index));
                context.write(keydata, new Text("1"));
            }
        }
    }

    // Combiner: sum the 1s for each "word:filename" key and re-key by word,
    // emitting <"word", "filename:count">.
    public static class LocalCombiner extends Reducer<Text, Text, Text, Text> {
        public void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            int index = key.toString().indexOf(":");
            Text keydata = new Text(key.toString().substring(0, index));
            String filename = key.toString().substring(index + 1);
            int sum = 0;
            for (Text val : values) {
                sum++;
            }
            context.write(keydata, new Text(filename + ":" + String.valueOf(sum)));
        }
    }

    // Reduce: write one Put per "filename:count" value into the "index" table,
    // using the word as the row key and the "filesum" column family.
    // Note: since every file writes to the same qualifiers ("filename"/"count"),
    // the last file processed for a word overwrites the earlier ones; using the
    // file name as the qualifier would keep one column per file.
    public static class TableReduce extends TableReducer<Text, Text, ImmutableBytesWritable> {
        public void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            for (Text val : values) {
                int index = val.toString().indexOf(":");
                String filename = val.toString().substring(0, index);
                int sum = Integer.parseInt(val.toString().substring(index + 1));
                String row = key.toString();
                Put put = new Put(Bytes.toBytes(row));
                // put.add(Bytes.toBytes("word"), Bytes.toBytes("content"), Bytes.toBytes(key.toString()));
                put.add(Bytes.toBytes("filesum"), Bytes.toBytes("filename"), Bytes.toBytes(filename));
                put.add(Bytes.toBytes("filesum"), Bytes.toBytes("count"), Bytes.toBytes(String.valueOf(sum)));
                context.write(new ImmutableBytesWritable(Bytes.toBytes(row)), put);
            }
        }
    }

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Configuration conf = new Configuration();
        conf = HBaseConfiguration.create(conf);
        // conf.set("hbase.zookeeper.quorum", "localhost");

        String hdfsPath = "hdfs://localhost:9000/user/haduser/";
        String[] argsStr = new String[] { hdfsPath + "input/reverseIndex" };
        String[] otherArgs = new GenericOptionsParser(conf, argsStr).getRemainingArgs();

        Job job = new Job(conf);
        job.setJarByClass(DataFromHdfs.class);
        job.setMapperClass(LocalMap.class);
        job.setCombinerClass(LocalCombiner.class);
        job.setReducerClass(TableReduce.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(Text.class); // the combiner's input/output types are the same as the map's
        // The "index" table must be created beforehand, otherwise the job fails.
        TableMapReduceUtil.initTableReducerJob("index", TableReduce.class, job);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
Before running the job, create the "index" table from the HBase shell with: create 'index','filesum' (the column family name must match the "filesum" family used in the reducer).
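If you would rather create the table from Java than from the shell, a minimal sketch using the old HBaseAdmin client API (the same API generation as the put.add calls above; the class name CreateIndexTable is made up for illustration) could look like this:

package com.pro.bq;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class CreateIndexTable {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        // Create the "index" table with the "filesum" column family used by the reducer.
        if (!admin.tableExists("index")) {
            HTableDescriptor desc = new HTableDescriptor("index");
            desc.addFamily(new HColumnDescriptor("filesum"));
            admin.createTable(desc);
        }
        admin.close();
    }
}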
After the job finishes, run the shell command scan 'index' to view the resulting table.
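As an alternative to the shell scan, the table can also be read back from Java. A minimal sketch with the old HTable/Scan client API (the class name PrintIndexTable is made up for illustration) might be:

package com.pro.bq;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class PrintIndexTable {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "index");
        ResultScanner scanner = table.getScanner(new Scan());
        for (Result r : scanner) {
            // The row key is the word; the "filesum" family holds filename and count.
            String word = Bytes.toString(r.getRow());
            String filename = Bytes.toString(r.getValue(Bytes.toBytes("filesum"), Bytes.toBytes("filename")));
            String count = Bytes.toString(r.getValue(Bytes.toBytes("filesum"), Bytes.toBytes("count")));
            System.out.println(word + " -> " + filename + ":" + count);
        }
        scanner.close();
        table.close();
    }
}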
