Greenplum and Hadoop HDFS integration
Step 1: Install Java on the Greenplum hosts
Step 2: Set JAVA_HOME and HADOOP_HOME for the gpadmin user
export JAVA_HOME=/usr/java/jdk1.6.0_26
export HADOOP_HOME=/home/hadoop
Step 3: Modify the postgresql.conf file on the master:
/data/disk1/gp/master/gpseg-1/postgresql.conf
gp_external_enable_exec=on
gp_external_grant_privileges=on
gp_hadoop_target_version = hadoop2
gp_hadoop_home = '/opt/17173/hadoop'
Step 4: Restart the Greenplum database
gpstop -a; gpstart -a
Alternatively, set the two parameters with gpconfig instead of editing postgresql.conf by hand:
gpconfig -c gp_hadoop_target_version -v "'hadoop2'"
gpconfig -c gp_hadoop_home -v "'/opt/17173/hadoop'"
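Either way, once the database is back up, the two settings can be confirmed from any psql session:

SHOW gp_hadoop_target_version;
SHOW gp_hadoop_home;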
Step 5: Copy a test file to HDFS
Sample contents of /tmp/test-data.txt (pipe-delimited earthquake records):
nc|Wednesday, October 10, 2012 03:45:36 UTC|39.5662|-123.3917|1.8|8.9|9|Northern California
hv|Wednesday, October 10, 2012 03:32:29 UTC|19.4028|-155.2697|2.9|1.9|24|Island of Hawaii, Hawaii
hv|Wednesday, October 10, 2012 03:24:59 UTC|19.4048|-155.2673|2.6|2.1|17|Island of Hawaii, Hawaii
nn|Wednesday, October 10, 2012 03:21:16 UTC|36.7553|-115.5388|1.2|7|20|Nevada
nn|Wednesday, October 10, 2012 03:09:13 UTC|38.583|-119.4507|1.3|7|7|Central California
uw|Wednesday, October 10, 2012 03:07:14 UTC|47.7083|-122.325|2|29.7|36|Seattle-Tacoma urban area, Washington
ci|Wednesday, October 10, 2012 02:52:38 UTC|32.8157|-116.1407|1.3|7.4|22|Southern California
ci|Wednesday, October 10, 2012 02:46:21 UTC|33.932|-116.8478|1.8|7.6|87|Southern California
hv|Wednesday, October 10, 2012 02:17:29 UTC|19.4042|-155.2688|1.9|1.7|17|Island of Hawaii, Hawaii
hadoop fs -copyFromLocal /tmp/test-data.txt /tmp
Step 6: Grant privileges on the gphdfs protocol
To let a user create external tables over HDFS files, grant the protocol privileges to that user (SELECT for readable external tables, INSERT for writable ones, or ALL for both):
GRANT INSERT ON PROTOCOL gphdfs TO user01;
GRANT SELECT ON PROTOCOL gphdfs TO user01;
GRANT ALL ON PROTOCOL gphdfs TO user01;
Step 7: Create an external table
CREATE EXTERNAL TABLE earthquake_raw_ext(
source text,
period text,
latitude double precision,
longitude double precision,
magnitude double precision,
depth double precision,
NST double precision,
region text
)
LOCATION ('gphdfs://sea2:8020/tmp/test-data.txt')
FORMAT 'text' (delimiter '|')
ENCODING 'UTF8';
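With the external table defined, the HDFS file can be queried like any ordinary table; for example, to list the strongest quakes in the sample file:

SELECT region, magnitude, depth
FROM earthquake_raw_ext
ORDER BY magnitude DESC
LIMIT 5;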
Use \d followed by a table name to inspect a table's definition, e.g. \d tb01.
CREATE TABLE syntax in Greenplum (GPDB_AdminGuide_4_2.pdf, page 439: http://media.gpadmin.me/wp-content/uploads/2012/11/GPDB_AdminGuide_4_2.pdf):
CREATE [[GLOBAL | LOCAL] {TEMPORARY | TEMP}] TABLE table_name (
  [ { column_name data_type [ DEFAULT default_expr ]
      [ column_constraint [ ... ]
      [ ENCODING ( storage_directive [, ...] ) ]
      ]
    | table_constraint
    | LIKE other_table [ {INCLUDING | EXCLUDING}
                         {DEFAULTS | CONSTRAINTS} ] ... }
    [, ... ] ]
  )
  [ INHERITS ( parent_table [, ... ] ) ]
  [ WITH ( storage_parameter=value [, ... ] ) ]
  [ ON COMMIT {PRESERVE ROWS | DELETE ROWS | DROP} ]
  [ TABLESPACE tablespace ]
  [ DISTRIBUTED BY (column [, ... ] ) | DISTRIBUTED RANDOMLY ]
  [ PARTITION BY partition_type (column)
      [ SUBPARTITION BY partition_type (column) ]
        [ SUBPARTITION TEMPLATE ( template_spec ) ]
      [...]
    ( partition_spec )
    | [ SUBPARTITION BY partition_type (column) ]
      [...]
    ( partition_spec
      [ ( subpartition_spec
          [(...)]
        ) ]
    ) ]
where storage_parameter is:
APPENDONLY={TRUE|FALSE}
BLOCKSIZE={8192-2097152}
ORIENTATION={COLUMN|ROW}
COMPRESSTYPE={ZLIB|QUICKLZ|RLE_TYPE|NONE}
COMPRESSLEVEL={0-9}
FILLFACTOR={10-100}
OIDS[=TRUE|FALSE]
where column_constraint is:
[CONSTRAINT constraint_name]
NOT NULL | NULL
| UNIQUE [USING INDEX TABLESPACE tablespace]
[WITH ( FILLFACTOR = value )]
| PRIMARY KEY [USING INDEX TABLESPACE tablespace]
[WITH ( FILLFACTOR = value )]
| CHECK ( expression )
and table_constraint is:
[CONSTRAINT constraint_name]
UNIQUE ( column_name [, ... ] )
[USING INDEX TABLESPACE tablespace]
[WITH ( FILLFACTOR=value )]
| PRIMARY KEY ( column_name [, ... ] )
[USING INDEX TABLESPACE tablespace]
[WITH ( FILLFACTOR=value )]
| CHECK ( expression )
where partition_type is:
LIST
| RANGE
where partition_specification is:
partition_element [, ...]
and partition_element is:
DEFAULT PARTITION name
| [PARTITION name] VALUES (list_value [,...] )
| [PARTITION name]
START ([datatype] 'start_value') [INCLUSIVE | EXCLUSIVE]
[ END ([datatype] 'end_value') [INCLUSIVE | EXCLUSIVE] ]
[ EVERY ([datatype] [number | INTERVAL] 'interval_value') ]
| [PARTITION name]
END ([datatype] 'end_value') [INCLUSIVE | EXCLUSIVE]
[ EVERY ([datatype] [number | INTERVAL] 'interval_value') ]
[ WITH ( partition_storage_parameter=value [, ... ] ) ]
[ column_reference_storage_directive [, ...] ]
[ TABLESPACE tablespace ]
where subpartition_spec or template_spec is:
subpartition_element [, ...]
and subpartition_element is:
DEFAULT SUBPARTITION name
| [SUBPARTITION name] VALUES (list_value [,...] )
| [SUBPARTITION name]
START ([datatype] 'start_value') [INCLUSIVE | EXCLUSIVE]
[ END ([datatype] 'end_value') [INCLUSIVE | EXCLUSIVE] ]
[ EVERY ([datatype] [number | INTERVAL] 'interval_value') ]
| [SUBPARTITION name]
END ([datatype] 'end_value') [INCLUSIVE | EXCLUSIVE]
[ EVERY ([datatype] [number | INTERVAL] 'interval_value') ]
[ WITH ( partition_storage_parameter=value [, ... ] ) ]
[ column_reference_storage_directive [, ...] ]
[ TABLESPACE tablespace ]
where storage_parameter is:
APPENDONLY={TRUE|FALSE}
BLOCKSIZE={8192-2097152}
ORIENTATION={COLUMN|ROW}
COMPRESSTYPE={ZLIB|QUICKLZ|RLE_TYPE|NONE}
COMPRESSLEVEL={0-9}
FILLFACTOR={10-100}
OIDS[=TRUE|FALSE]
where storage_directive is:
COMPRESSTYPE={ZLIB | QUICKLZ | RLE_TYPE | NONE}
| COMPRESSLEVEL={0-9}
| BLOCKSIZE={8192-2097152}
where column_reference_storage_directive is:
COLUMN column_name ENCODING (storage_directive [, ... ] ), ...
|
DEFAULT COLUMN ENCODING (storage_directive [, ... ] )
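To make the partition syntax above concrete, here is a minimal sketch of a monthly range-partitioned, column-oriented table (the sales table and its columns are invented for illustration):

CREATE TABLE sales (id int, sale_date date, amount numeric)
WITH (APPENDONLY=true, ORIENTATION=column, COMPRESSTYPE=QUICKLZ)
DISTRIBUTED BY (id)
PARTITION BY RANGE (sale_date)
( START (date '2016-01-01') INCLUSIVE
  END (date '2017-01-01') EXCLUSIVE
  EVERY (INTERVAL '1 month') );  -- one child partition per month of 2016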
Create the fact table and load the data:
CREATE TABLE test (id, name)
WITH (APPENDONLY=true, BLOCKSIZE=8192, ORIENTATION=column, COMPRESSTYPE=QUICKLZ)
AS SELECT * FROM external_test
DISTRIBUTED BY (id);
Check how the rows are distributed across the segments:
select gp_segment_id,count(1) from test group by 1;
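Since the table was created append-only with QUICKLZ compression, the effective compression ratio can also be checked; a sketch using Greenplum's built-in helper (my understanding is that it returns -1 when the ratio is not available, e.g. on an empty table):

SELECT get_ao_compression_ratio('test');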
CREATE EXTERNAL TABLE faq_logs_ext(
logtime bigint,
appid text,
gamecode text,
page text,
sessionid text,
userid text,
query text,
questionid int,
questionorder text,
staytime int,
iswant boolean
)
LOCATION ('gphdfs://sea2:8020/faq/faq_logs/20160101')
FORMAT 'TEXT' (DELIMITER E'\x01')
LOG ERRORS INTO err_metrics SEGMENT REJECT LIMIT 100 ROWS;
CREATE TABLE faq_logs AS SELECT * FROM faq_logs_ext;
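Rows rejected during the load (up to 100 per segment before the load aborts, per the reject limit above) are recorded in the err_metrics error table; assuming the standard Greenplum error-table layout with relname and errmsg columns, a quick summary looks like:

SELECT relname, errmsg, count(*)
FROM err_metrics
GROUP BY relname, errmsg;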