Loading Data into HAWQ

Loading data into the database is required before you can start using it, but how? There are several approaches that achieve this basic requirement, each solving the problem in a different way. This lets you pick the loading technique that best matches your use case.

Table Setup
The following table will be used for testing in HAWQ. I created it in a single-node VM running Hortonworks HDP with HAWQ 2.0 installed, using the default Resource Manager.

CREATE TABLE test_data
(id int,
fname text,
lname text)
DISTRIBUTED RANDOMLY;

Singleton
Let’s start with probably the worst way: singleton inserts. Occasionally this approach is fine because you have very little data to load, but in most cases you should avoid it. Each statement inserts a single tuple in its own transaction.

head si_test_data.sql
insert into test_data (id, fname, lname) values (1, 'jon_00001', 'roberts_00001');
insert into test_data (id, fname, lname) values (2, 'jon_00002', 'roberts_00002');
insert into test_data (id, fname, lname) values (3, 'jon_00003', 'roberts_00003');
insert into test_data (id, fname, lname) values (4, 'jon_00004', 'roberts_00004');
insert into test_data (id, fname, lname) values (5, 'jon_00005', 'roberts_00005');
insert into test_data (id, fname, lname) values (6, 'jon_00006', 'roberts_00006');
insert into test_data (id, fname, lname) values (7, 'jon_00007', 'roberts_00007');
insert into test_data (id, fname, lname) values (8, 'jon_00008', 'roberts_00008');
insert into test_data (id, fname, lname) values (9, 'jon_00009', 'roberts_00009');
insert into test_data (id, fname, lname) values (10, 'jon_00010', 'roberts_00010');

This repeats for 10,000 tuples.

time psql -f si_test_data.sql > /dev/null
real 5m49.527s

As you can see, this is pretty slow and not recommended for inserting large amounts of data. Nearly 6 minutes to load 10,000 tuples is crawling.
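
If you must use INSERT statements, batching many tuples into a single statement helps because it amortizes the per-transaction overhead. A minimal sketch, assuming your HAWQ build supports PostgreSQL 8.2-style multi-row VALUES lists:

INSERT INTO test_data (id, fname, lname) VALUES
(1, 'jon_00001', 'roberts_00001'),
(2, 'jon_00002', 'roberts_00002'),
(3, 'jon_00003', 'roberts_00003');

Even batched, the rows still funnel through the master one statement at a time, so for real volume use one of the techniques below.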

COPY
If you are familiar with PostgreSQL, you will feel right at home
with this technique. This time, the data is in a file named
test_data.txt, and it is not wrapped in INSERT statements.

head test_data.txt
1|jon_00001|roberts_00001
2|jon_00002|roberts_00002
3|jon_00003|roberts_00003
4|jon_00004|roberts_00004
5|jon_00005|roberts_00005
6|jon_00006|roberts_00006
7|jon_00007|roberts_00007
8|jon_00008|roberts_00008
9|jon_00009|roberts_00009
10|jon_00010|roberts_00010
COPY test_data FROM '/home/gpadmin/test_data.txt' WITH DELIMITER '|';
COPY 10000
Time: 128.580 ms

This method is significantly faster, but it loads the data through the master. That means it doesn’t scale well: the master becomes the bottleneck. It does, however, let you load data from any host on your network, so long as that host can reach the master.
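
A related option: psql’s \copy meta-command runs COPY on your behalf but reads the file from the client machine rather than the master’s filesystem, which is handy when the file lives on your workstation. A minimal sketch, assuming the same pipe-delimited file:

\copy test_data FROM '/home/gpadmin/test_data.txt' WITH DELIMITER '|'

The data still flows through the master either way, so the same scaling caveat applies.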

gpfdist
gpfdist is a web server that serves POSIX files for the segments to
fetch. Segment processes get the data directly from gpfdist,
bypassing the master entirely. This lets you scale by adding
more gpfdist processes and/or more segments.

gpfdist -p 8888 &
[1] 128836
[gpadmin@hdb ~]$ Serving HTTP on port 8888, directory /home/gpadmin

Now you’ll need to create a new external table to read the data from gpfdist.

CREATE EXTERNAL TABLE gpfdist_test_data
(id int,
fname text,
lname text)
LOCATION ('gpfdist://hdb:8888/test_data.txt')
FORMAT 'TEXT' (DELIMITER '|');

And to load the data.

INSERT INTO test_data SELECT * FROM gpfdist_test_data;
INSERT 0 10000
Time: 98.362 ms

gpfdist is blazing fast and scales easily. You can list more than one gpfdist location in the external table, use wildcards, use different formats, and much more. The downside is that the file must be on a host that all segments can reach, and you have to run a separate gpfdist process on that host.
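
For example, a sketch of an external table that reads pipe-delimited files from two gpfdist processes with wildcards (etl1 and etl2 are hypothetical hostnames):

CREATE EXTERNAL TABLE gpfdist_test_data_multi
(id int,
fname text,
lname text)
LOCATION ('gpfdist://etl1:8888/test_data*.txt',
          'gpfdist://etl2:8888/test_data*.txt')
FORMAT 'TEXT' (DELIMITER '|');

Each segment fetches its share of the data from the locations listed, so adding gpfdist instances spreads the I/O across more hosts.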

gpload
gpload is a utility that automates the loading process using gpfdist.
Review the documentation for more on this utility. Technically, it is
the same as gpfdist plus external tables; it just automates the commands
for you.
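
For reference, a minimal sketch of a gpload control file for the test table; the hostname, ports, database, and user shown are assumptions for this single-node environment:

VERSION: 1.0.0.1
DATABASE: gpadmin
USER: gpadmin
HOST: hdb
PORT: 5432
GPLOAD:
   INPUT:
    - SOURCE:
         LOCAL_HOSTNAME:
           - hdb
         PORT: 8888
         FILE:
           - /home/gpadmin/test_data.txt
    - FORMAT: text
    - DELIMITER: '|'
   OUTPUT:
    - TABLE: test_data
    - MODE: insert

Running gpload -f test_data.yml then starts a gpfdist process, creates a temporary external table, performs the INSERT, and cleans up when it finishes.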

Pivotal Extension Framework (PXF)
PXF allows you to read and write data in HDFS using external tables.
As with gpfdist, the work is done by each segment, so it scales and executes
in parallel.

For this example, I’ve loaded the test data into HDFS.

hdfs dfs -cat /test_data/* | head
1|jon_00001|roberts_00001
2|jon_00002|roberts_00002
3|jon_00003|roberts_00003
4|jon_00004|roberts_00004
5|jon_00005|roberts_00005
6|jon_00006|roberts_00006
7|jon_00007|roberts_00007
8|jon_00008|roberts_00008
9|jon_00009|roberts_00009
10|jon_00010|roberts_00010

The external table definition.

CREATE EXTERNAL TABLE et_test_data
(id int,
fname text,
lname text)
LOCATION ('pxf://hdb:51200/test_data?Profile=HdfsTextSimple')
FORMAT 'TEXT' (DELIMITER '|');

And now to load it.

INSERT INTO test_data SELECT * FROM et_test_data;
INSERT 0 10000
Time: 227.599 ms

PXF is probably the best way to load data when using the “Data Lake” design. You load your raw data into HDFS and then consume it with a variety of tools in the Hadoop ecosystem. PXF can also read and write other formats.
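
PXF works in the other direction too. A sketch of a writable external table that writes pipe-delimited text back to HDFS; the target path is an assumption:

CREATE WRITABLE EXTERNAL TABLE et_test_data_out
(id int,
fname text,
lname text)
LOCATION ('pxf://hdb:51200/test_data_out?Profile=HdfsTextSimple')
FORMAT 'TEXT' (DELIMITER '|');

INSERT INTO et_test_data_out SELECT * FROM test_data;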

Outsourcer and gplink
Last but not least are two tools I created. Outsourcer
automates table creation and loads data directly into Greenplum or
HAWQ using gpfdist. It sources data from SQL Server and Oracle, as these
are the two most common OLTP databases.

gplink is another tool that reads external data, but it can connect
to any valid JDBC source. It doesn’t automate as many of the
steps as Outsourcer does, but it is a convenient way to get data from a
JDBC source.

You might be thinking that Sqoop does this, but not exactly. gplink
and Outsourcer load data directly into HAWQ and Greenplum tables. They are
optimized for these databases and fix data for you automatically:
both remove null and newline characters and escape the escape and
delimiter characters. With Sqoop, you have to read the data from
HDFS using PXF and then fix whatever errors may be in the files.
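
For comparison, a sketch of the Sqoop route just described: Sqoop lands delimited files in HDFS, and PXF then reads them. The connection string, credentials, and paths below are assumptions:

sqoop import \
  --connect jdbc:oracle:thin:@oraclehost:1521/orcl \
  --username scott --password tiger \
  --table TEST_DATA \
  --target-dir /test_data_sqoop \
  --fields-terminated-by '|' \
  -m 4

Any embedded newlines, delimiters, or nulls in the source data land in HDFS as-is, which is exactly the cleanup work Outsourcer and gplink would otherwise do for you.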

Summary
This post gives a brief description of the various ways to load data
into HAWQ. Pick the right technique for your use case. As you can see,
HAWQ is very flexible and can handle a variety of ways to load data.

This entry was posted in Hadoop on July 14, 2016.
