Exporting MSSQL Data to HDFS with MapReduce
Today I wanted to run some real data through the algorithm from my earlier post, "A Dictionary-Free Word Segmentation Algorithm Based on Information Entropy," to check its correctness. So I wrote a MapReduce program that pulls data out of an MSSQL SERVER 2008 database for analysis. When I deployed the job to the Hadoop machine, it failed with a SQLException complaining about a LIMIT clause.
Strange — my SQL statement contains no LIMIT at all, so where did this LIMIT come from? I dug into the source of DBInputFormat:
protected RecordReader<LongWritable, T> createDBRecordReader(DBInputSplit split,
    Configuration conf) throws IOException {

  @SuppressWarnings("unchecked")
  Class<T> inputClass = (Class<T>) (dbConf.getInputClass());
  try {
    // use database product name to determine appropriate record reader.
    if (dbProductName.startsWith("ORACLE")) {
      // use Oracle-specific db reader.
      return new OracleDBRecordReader<T>(split, inputClass,
          conf, createConnection(), getDBConf(), conditions, fieldNames,
          tableName);
    } else if (dbProductName.startsWith("MYSQL")) {
      // use MySQL-specific db reader.
      return new MySQLDBRecordReader<T>(split, inputClass,
          conf, createConnection(), getDBConf(), conditions, fieldNames,
          tableName);
    } else {
      // Generic reader.
      return new DBRecordReader<T>(split, inputClass,
          conf, createConnection(), getDBConf(), conditions, fieldNames,
          tableName);
    }
  } catch (SQLException ex) {
    throw new IOException(ex.getMessage());
  }
}
and then the source of DBRecordReader's getSelectQuery():
protected String getSelectQuery() {
  StringBuilder query = new StringBuilder();

  // Default codepath for MySQL, HSQLDB, etc. Relies on LIMIT/OFFSET for splits.
  if (dbConf.getInputQuery() == null) {
    query.append("SELECT ");

    for (int i = 0; i < fieldNames.length; i++) {
      query.append(fieldNames[i]);
      if (i != fieldNames.length - 1) {
        query.append(", ");
      }
    }

    query.append(" FROM ").append(tableName);
    query.append(" AS ").append(tableName); // in hsqldb this is necessary
    if (conditions != null && conditions.length() > 0) {
      query.append(" WHERE (").append(conditions).append(")");
    }

    String orderBy = dbConf.getInputOrderBy();
    if (orderBy != null && orderBy.length() > 0) {
      query.append(" ORDER BY ").append(orderBy);
    }
  } else {
    // PREBUILT QUERY
    query.append(dbConf.getInputQuery());
  }

  try {
    query.append(" LIMIT ").append(split.getLength());   // here is the problem
    query.append(" OFFSET ").append(split.getStart());
  } catch (IOException ex) {
    // Ignore, will not throw.
  }

  return query.toString();
}
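To see the failure concretely: with the input query used by my job later in this post and a hypothetical split of 500 rows starting at row 0, this code path hands SQL Server a statement of roughly this shape (the numbers are illustrative, not taken from the original error log):

select id,source from tablename where id<1000 LIMIT 500 OFFSET 0

T-SQL simply has no LIMIT/OFFSET clause — SQL Server 2008 pages with TOP or ROW_NUMBER() instead — so the driver throws a SQLException with a syntax error near 'LIMIT': the LIMIT I never wrote.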
So that was the cause.
It turns out Hadoop ships database-specific record readers only for MySQL (MySQLDBRecordReader) and Oracle (OracleDBRecordReader); every other product name — SQL Server reports "Microsoft SQL Server", which matches neither prefix — falls through to the generic, LIMIT-based DBRecordReader.
With the cause identified, I used OracleDBRecordReader as a reference and implemented a DBRecordReader for MSSQL SERVER. The code follows.
The code of MSSQLDBInputFormat:
/**
 *
 */
package org.apache.hadoop.mapreduce.lib.db;

import java.io.IOException;
import java.sql.SQLException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.RecordReader;

/**
 * @author summer
 * MICROSOFT SQL SERVER
 */
public class MSSQLDBInputFormat<T extends DBWritable> extends DBInputFormat<T> {

  public static void setInput(Job job,
      Class<? extends DBWritable> inputClass,
      String inputQuery, String inputCountQuery, String rowId) {
    job.setInputFormatClass(MSSQLDBInputFormat.class);
    DBConfiguration dbConf = new DBConfiguration(job.getConfiguration());
    dbConf.setInputClass(inputClass);
    dbConf.setInputQuery(inputQuery);
    dbConf.setInputCountQuery(inputCountQuery);
    // the reader needs one unique, ordered column (the row id) to page on
    dbConf.setInputFieldNames(new String[]{rowId});
  }

  @Override
  protected RecordReader<LongWritable, T> createDBRecordReader(
      org.apache.hadoop.mapreduce.lib.db.DBInputFormat.DBInputSplit split,
      Configuration conf) throws IOException {

    @SuppressWarnings("unchecked")
    Class<T> inputClass = (Class<T>) (dbConf.getInputClass());
    try {
      // always hand out the SQL Server-specific reader
      return new MSSQLDBRecordReader<T>(split, inputClass,
          conf, createConnection(), getDBConf(), conditions, fieldNames,
          tableName);
    } catch (SQLException ex) {
      throw new IOException(ex.getMessage());
    }
  }
}
The code of MSSQLDBRecordReader:
/**
 *
 */
package org.apache.hadoop.mapreduce.lib.db;

import java.io.IOException;
import java.sql.Connection;
import java.sql.SQLException;

import org.apache.hadoop.conf.Configuration;

/**
 * @author summer
 *
 */
public class MSSQLDBRecordReader<T extends DBWritable> extends DBRecordReader<T> {

  public MSSQLDBRecordReader(DBInputFormat.DBInputSplit split,
      Class<T> inputClass, Configuration conf, Connection conn, DBConfiguration dbConfig,
      String cond, String[] fields, String table) throws SQLException {
    super(split, inputClass, conf, conn, dbConfig, cond, fields, table);
  }

  @Override
  protected String getSelectQuery() {
    StringBuilder query = new StringBuilder();
    DBConfiguration dbConf = getDBConf();
    String conditions = getConditions();
    String tableName = getTableName();
    String[] fieldNames = getFieldNames();

    // SQL Server-specific codepath: page with TOP ... NOT IN instead of LIMIT/OFFSET.
    if (dbConf.getInputQuery() == null) {
      query.append("SELECT ");
      for (int i = 0; i < fieldNames.length; i++) {
        query.append(fieldNames[i]);
        if (i != fieldNames.length - 1) {
          query.append(", ");
        }
      }
      query.append(" FROM ").append(tableName);
      if (conditions != null && conditions.length() > 0)
        query.append(" WHERE ").append(conditions);
      String orderBy = dbConf.getInputOrderBy();
      if (orderBy != null && orderBy.length() > 0) {
        query.append(" ORDER BY ").append(orderBy);
      }
    } else {
      // PREBUILT QUERY
      query.append(dbConf.getInputQuery());
    }

    try {
      DBInputFormat.DBInputSplit split = getSplit();
      if (split.getLength() > 0) {
        String querystring = query.toString();
        String id = fieldNames[0];
        // Classic SQL Server 2008 paging: skip the first split.getStart()
        // rows by id, then take the next split.getLength() rows.
        query = new StringBuilder();
        query.append("SELECT TOP " + split.getLength() + " * FROM ( ");
        query.append(querystring);
        query.append(" ) a WHERE " + id + " NOT IN (SELECT TOP ").append(split.getStart());
        query.append(" " + id + " FROM (");
        query.append(querystring);
        query.append(" ) b");
        query.append(" )");
        System.out.println("----------------------MICROSOFT SQL SERVER QUERY STRING---------------------------");
        System.out.println(query.toString());
        System.out.println("----------------------MICROSOFT SQL SERVER QUERY STRING---------------------------");
      }
    } catch (IOException ex) {
      // ignore, will not throw.
    }
    return query.toString();
  }
}
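For a concrete picture of the rewritten query: with the same input query and a hypothetical third split covering rows 200–299 (getStart() = 200, getLength() = 100 — numbers are illustrative), the debug print between the separator lines would show roughly:

SELECT TOP 100 * FROM ( select id,source from tablename where id<1000 ) a WHERE id NOT IN (SELECT TOP 200 id FROM ( select id,source from tablename where id<1000 ) b )

Note the pattern's assumptions: the id column must be unique, since the inner subquery selects the ids of the rows earlier splits already claimed and NOT IN excludes them; and strictly speaking TOP without an ORDER BY does not guarantee a stable row order, so a unique, effectively ordered rowId is what keeps the splits disjoint in practice.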
The MapReduce job:
/**
 *
 */
package com.nltk.sns.mapreduce;

import java.io.IOException;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.MRJobConfig;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
import org.apache.hadoop.mapreduce.lib.db.MSSQLDBInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import com.nltk.utils.ETLUtils;

/**
 * @author summer
 *
 */
public class LawDataEtl {

  public static class CaseETLMapper extends
      Mapper<LongWritable, LawCaseRecord, LongWritable, Text> {

    static final int step = 6;

    LongWritable key = new LongWritable(1);
    Text value = new Text();

    @Override
    protected void map(
        LongWritable key,
        LawCaseRecord lawCaseRecord,
        Mapper<LongWritable, LawCaseRecord, LongWritable, Text>.Context context)
        throws IOException, InterruptedException {

      System.out.println("-----------------------------" + lawCaseRecord + "------------------------------");

      key.set(lawCaseRecord.id);
      String source = ETLUtils.format(lawCaseRecord.source);
      List<LawCaseWord> words = ETLUtils.split(lawCaseRecord.id, source, step);
      for (LawCaseWord w : words) {
        value.set(w.toString());
        context.write(key, value);
      }
    }
  }

  static final String driverClass = "com.microsoft.sqlserver.jdbc.SQLServerDriver";
  static final String dbUrl = "jdbc:sqlserver://192.168.0.1:1433;DatabaseName=XXX";
  static final String uid = "XXX";
  static final String pwd = "XXX";
  static final String inputQuery = "select id,source from tablename where id<1000";
  static final String inputCountQuery = "select count(1) from LawDB.dbo.case_source where id<1000";
  static final String jarClassPath = "/user/lib/sqljdbc4.jar";
  static final String outputPath = "hdfs://ubuntu:9000/user/test";
  static final String rowId = "id";

  public static Job configureJob(Configuration conf) throws Exception {
    String jobName = "etlcase";
    Job job = Job.getInstance(conf, jobName);

    job.addFileToClassPath(new Path(jarClassPath));
    MSSQLDBInputFormat.setInput(job, LawCaseRecord.class, inputQuery, inputCountQuery, rowId);
    job.setJarByClass(LawDataEtl.class);

    FileOutputFormat.setOutputPath(job, new Path(outputPath));

    job.setMapOutputKeyClass(LongWritable.class);
    job.setMapOutputValueClass(Text.class);
    job.setOutputKeyClass(LongWritable.class);
    job.setOutputValueClass(Text.class);
    job.setMapperClass(CaseETLMapper.class);

    return job;
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    fs.delete(new Path(outputPath), true);

    DBConfiguration.configureDB(conf, driverClass, dbUrl, uid, pwd);
    conf.set(MRJobConfig.NUM_MAPS, String.valueOf(10));
    Job job = configureJob(conf);

    System.out.println("------------------------------------------------");
    System.out.println(conf.get(DBConfiguration.DRIVER_CLASS_PROPERTY));
    System.out.println(conf.get(DBConfiguration.URL_PROPERTY));
    System.out.println(conf.get(DBConfiguration.USERNAME_PROPERTY));
    System.out.println(conf.get(DBConfiguration.PASSWORD_PROPERTY));
    System.out.println("------------------------------------------------");

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
The helper class:
/**
 *
 */
package com.nltk.sns;

import java.util.ArrayList;
import java.util.List;

import org.apache.commons.lang.StringUtils;

/**
 * @author summer
 *
 */
public class ETLUtils {

  public final static String NULL_CHAR = "";
  public final static String PUNCTUATION_REGEX = "[(\\pP)&&[^\\|\\{\\}\\#]]+";
  public final static String WHITESPACE_REGEX = "[\\p{Space}]+";

  public static String format(String s) {
    return s.replaceAll(PUNCTUATION_REGEX, NULL_CHAR).replaceAll(WHITESPACE_REGEX, NULL_CHAR);
  }

  public static List<String> split(String s, int stepN) {
    List<String> splits = new ArrayList<String>();
    if (StringUtils.isEmpty(s) || stepN < 1)
      return splits;
    int len = s.length();
    if (len <= stepN)
      splits.add(s);
    else {
      for (int j = 1; j <= stepN; j++)
        for (int i = 0; i <= len - j; i++) {
          String key = StringUtils.mid(s, i, j);
          if (StringUtils.isEmpty(key))
            continue;
          splits.add(key);
        }
    }
    return splits;
  }

  public static void main(String[] args) {
    String s = "谢婷婷等与姜波等";
    int stepN = 2;
    List<String> splits = split(s, stepN);
    System.out.println(splits);
  }
}
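To make the splitting rule concrete: split enumerates every substring of length 1 through stepN — the n-gram stream that the entropy-based segmentation consumes. A quick illustrative call (using an ASCII string for readability; not from the original post):

// Illustrative only: all 1- and 2-character substrings of "abc".
List<String> grams = ETLUtils.split("abc", 2);
System.out.println(grams);  // prints [a, b, c, ab, bc]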
It ran successfully.
This is a rough implementation, written mainly to satisfy my own needs — adapt it as you see fit.
Frankly, DBRecordReader is not well designed. Read the source of DBRecordReader, MySQLDBRecordReader and OracleDBRecordReader side by side: DBRecordReader and MySQLDBRecordReader are far too tightly coupled, with the supposedly generic base class hard-coding MySQL's LIMIT/OFFSET syntax. A reasonable default would be for DBRecordReader to run without exceptions even against databases that have no dedicated implementation — simply by falling back to a single split and a single map, as sketched below.
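A minimal sketch of that fallback, under the assumption that a single split covers the whole result set (GenericDBRecordReader is a hypothetical name, not an actual Hadoop class). It reuses the same accessors the MSSQL reader above uses and simply emits the SELECT with no paging clause:

package org.apache.hadoop.mapreduce.lib.db;

import java.sql.Connection;
import java.sql.SQLException;

import org.apache.hadoop.conf.Configuration;

// Hypothetical fallback reader: with one split there is nothing to page,
// so the query needs no LIMIT/OFFSET and stays portable across databases.
public class GenericDBRecordReader<T extends DBWritable> extends DBRecordReader<T> {

  public GenericDBRecordReader(DBInputFormat.DBInputSplit split,
      Class<T> inputClass, Configuration conf, Connection conn, DBConfiguration dbConfig,
      String cond, String[] fields, String table) throws SQLException {
    super(split, inputClass, conf, conn, dbConfig, cond, fields, table);
  }

  @Override
  protected String getSelectQuery() {
    DBConfiguration dbConf = getDBConf();
    if (dbConf.getInputQuery() != null) {
      return dbConf.getInputQuery();  // prebuilt query, used verbatim
    }
    // Same construction as the default reader, minus the LIMIT/OFFSET tail.
    StringBuilder query = new StringBuilder("SELECT ");
    String[] fieldNames = getFieldNames();
    for (int i = 0; i < fieldNames.length; i++) {
      query.append(fieldNames[i]);
      if (i != fieldNames.length - 1) {
        query.append(", ");
      }
    }
    query.append(" FROM ").append(getTableName());
    String conditions = getConditions();
    if (conditions != null && conditions.length() > 0) {
      query.append(" WHERE (").append(conditions).append(")");
    }
    String orderBy = dbConf.getInputOrderBy();
    if (orderBy != null && orderBy.length() > 0) {
      query.append(" ORDER BY ").append(orderBy);
    }
    return query.toString();
  }
}

The remaining piece would be teaching DBInputFormat to produce exactly one DBInputSplit when it does not recognize the product name, so that every row is read exactly once by a single map.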