Creating, Showing, Altering, and Dropping Tables

See Hive Data Definition Language for detailed information about creating, showing, altering, and dropping tables.

Creating Tables

An example statement that creates the page_view table mentioned above is:

CREATE TABLE page_view(viewTime INT, userid BIGINT,
                page_url STRING, referrer_url STRING,
                ip STRING COMMENT 'IP Address of the User')
COMMENT 'This is the page view table'
PARTITIONED BY(dt STRING, country STRING)
STORED AS SEQUENCEFILE;

In this example the columns of the table are specified with the corresponding types. Comments can be attached both at the column level and at the table level. Additionally, the PARTITIONED BY clause defines the partitioning columns, which are different from the data columns and are not actually stored with the data. When specified in this way, the data in the files is assumed to be delimited with ASCII 001 (Ctrl-A) as the field delimiter and newline as the row delimiter.

The field delimiter can be parameterized if the data is not in the above format, as illustrated in the following example:

CREATE TABLE page_view(viewTime INT, userid BIGINT,
                page_url STRING, referrer_url STRING,
                ip STRING COMMENT 'IP Address of the User')
COMMENT 'This is the page view table'
PARTITIONED BY(dt STRING, country STRING)
ROW FORMAT DELIMITED
        FIELDS TERMINATED BY '\001'
STORED AS SEQUENCEFILE;

The row delimiter currently cannot be changed, since it is determined not by Hive but by Hadoop.

It is also a good idea to bucket the tables on certain columns so that efficient sampling queries can be executed against the data set. If bucketing is absent, random sampling can still be done on the table, but it is not efficient, as the query has to scan all the data. The following example illustrates the case of the page_view table that is bucketed on the userid column:

CREATE TABLE page_view(viewTime INT, userid BIGINT,
                page_url STRING, referrer_url STRING,
                ip STRING COMMENT 'IP Address of the User')
COMMENT 'This is the page view table'
PARTITIONED BY(dt STRING, country STRING)
CLUSTERED BY(userid) SORTED BY(viewTime) INTO 32 BUCKETS
ROW FORMAT DELIMITED
        FIELDS TERMINATED BY '\001'
        COLLECTION ITEMS TERMINATED BY '\002'
        MAP KEYS TERMINATED BY '\003'
STORED AS SEQUENCEFILE;

In the example above, the table is clustered by a hash function of userid into 32 buckets. Within each bucket the data is sorted in increasing order of viewTime. Such an organization allows the user to do efficient sampling on the clustered column, in this case userid. The sorting property allows internal operators to take advantage of the better-known data structure, evaluating queries with greater efficiency.
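
For example, a sampling query such as the following sketch (the bucket number is arbitrary) can read a single bucket file instead of scanning the whole data set:

-- A minimal sampling sketch: page_view is clustered into 32 buckets
-- on userid, so sampling one bucket touches roughly 1/32 of the data.
SELECT * FROM page_view TABLESAMPLE(BUCKET 3 OUT OF 32 ON userid) pv
WHERE pv.dt = '2008-06-08' AND pv.country = 'US';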

Complex column types can also be declared. The following variant of the table adds an ARRAY column for the user's friends and a MAP column of page properties:

CREATE TABLE page_view(viewTime INT, userid BIGINT,
                page_url STRING, referrer_url STRING,
                friends ARRAY<BIGINT>, properties MAP<STRING, STRING>,
                ip STRING COMMENT 'IP Address of the User')
COMMENT 'This is the page view table'
PARTITIONED BY(dt STRING, country STRING)
CLUSTERED BY(userid) SORTED BY(viewTime) INTO 32 BUCKETS
ROW FORMAT DELIMITED
        FIELDS TERMINATED BY '\001'
        COLLECTION ITEMS TERMINATED BY '\002'
        MAP KEYS TERMINATED BY '\003'
STORED AS SEQUENCEFILE;

In this example the columns that comprise the table row are specified along with their types. Comments can be attached both at the column level and at the table level. Additionally, the PARTITIONED BY clause defines the partitioning columns, which are different from the data columns and are not actually stored with the data. The CLUSTERED BY clause specifies which column to use for bucketing as well as how many buckets to create. The delimited row format specifies how the rows are stored in the Hive table. In the case of the delimited format, this specifies how the fields are terminated, how the items within collections (arrays or maps) are terminated, and how the map keys are terminated. STORED AS SEQUENCEFILE indicates that this data is stored in a binary format (using Hadoop SequenceFiles) on HDFS. The delimiter values shown for the ROW FORMAT clause in the above example are the system defaults.
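
Since the delimiter values above are the system defaults, a sketch of an equivalent definition could simply omit the ROW FORMAT clause:

-- Equivalent sketch relying on the default delimiters
-- (\001 for fields, \002 for collection items, \003 for map keys):
CREATE TABLE page_view(viewTime INT, userid BIGINT,
                page_url STRING, referrer_url STRING,
                friends ARRAY<BIGINT>, properties MAP<STRING, STRING>,
                ip STRING COMMENT 'IP Address of the User')
COMMENT 'This is the page view table'
PARTITIONED BY(dt STRING, country STRING)
CLUSTERED BY(userid) SORTED BY(viewTime) INTO 32 BUCKETS
STORED AS SEQUENCEFILE;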

Table names and column names are case insensitive.

Browsing Tables and Partitions

SHOW TABLES;

To list existing tables in the warehouse; there are many of these, likely more than you want to browse.

SHOW TABLES 'page.*';

To list tables with prefix 'page'. The pattern follows Java regular expression syntax (so the period is a wildcard).

SHOW PARTITIONS page_view;

To list partitions of a table. If the table is not a partitioned table then an error is thrown.
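
The output contains one line per partition in key=value path form, for example (hypothetical partitions):

dt=2008-06-08/country=CA
dt=2008-06-08/country=US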

DESCRIBE page_view;

To list columns and column types of the table.
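
For the page_view table defined above, the output would look roughly like this (illustrative only; exact formatting varies by Hive version, and the partition columns are listed along with the data columns):

viewTime        int
userid          bigint
page_url        string
referrer_url    string
friends         array<bigint>
properties      map<string,string>
ip              string  IP Address of the User
dt              string
country         string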

DESCRIBE EXTENDED page_view;

To list columns and all other properties of the table. This prints a lot of information, and not in a pretty format. Usually used for debugging.

DESCRIBE EXTENDED page_view PARTITION (dt='2008-08-08', country='US');

To list columns and all other properties of a partition. This also prints a lot of information, and is usually used for debugging.

Altering Tables

To rename an existing table to a new name. If a table with the new name already exists, an error is returned:

ALTER TABLE old_table_name RENAME TO new_table_name;

To rename the columns of an existing table. Be sure to use the same column types, and to include an entry for each preexisting column:

ALTER TABLE old_table_name REPLACE COLUMNS (col1 TYPE, ...);
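
For example, the following sketch renames referrer_url to referrer in the first page_view definition above; all other columns are repeated unchanged, because any column left out of the list would be dropped from the schema:

ALTER TABLE page_view REPLACE COLUMNS (
    viewTime INT, userid BIGINT,
    page_url STRING, referrer STRING,
    ip STRING COMMENT 'IP Address of the User');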

To add columns to an existing table:

ALTER TABLE tab1 ADD COLUMNS (c1 INT COMMENT 'a new int column', c2 STRING DEFAULT 'def val');

Note that a change in the schema (such as adding columns) preserves the schema for the old partitions of the table, in case it is a partitioned table. All queries that access these columns and run over the old partitions implicitly return a null value or the specified default values for these columns.
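
For instance, assuming tab1 is partitioned by ds and had data loaded before the ADD COLUMNS statement above, a query over an old partition would behave as in this sketch:

-- Rows written before the schema change carry no data for c1 or c2:
-- c1 comes back as NULL and c2 as its declared default 'def val'.
SELECT c1, c2 FROM tab1 WHERE ds = '2008-08-08';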

In later versions, the behavior of assuming certain values, as opposed to throwing an error when the column is not found in a particular partition, can be made configurable.

Dropping Tables and Partitions

Dropping tables is fairly trivial. A drop on the table would implicitly drop any indexes (indexes are a future feature) that would have been built on the table. The associated command is:

DROP TABLE pv_users;

To drop a partition, alter the table to drop the partition:

ALTER TABLE pv_users DROP PARTITION (ds='2008-08-08');
Note that any data for this table or partition will be dropped and may not be recoverable.

Loading Data

There are multiple ways to load data into Hive tables. The user can create an external table that points to a specified location within HDFS. In this particular usage, the user can copy a file into the specified location using the HDFS put or copy commands and create a table pointing to this location with all the relevant row format information. Once this is done, the user can transform the data and insert it into any other Hive table. For example, if the file /tmp/pv_2008-06-08.txt contains comma-separated page views served on 2008-06-08, and this needs to be loaded into the page_view table in the appropriate partition, the following sequence of commands can achieve this:

From an existing large table, read a portion of the data (one partition) and load it into the table.

CREATE EXTERNAL TABLE page_view_stg(viewTime INT, userid BIGINT,
                page_url STRING, referrer_url STRING,
                ip STRING COMMENT 'IP Address of the User',
                country STRING COMMENT 'country of origination')
COMMENT 'This is the staging page view table'
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'
STORED AS TEXTFILE
LOCATION '/user/data/staging/page_view';
 
hadoop dfs -put /tmp/pv_2008-06-08.txt /user/data/staging/page_view
 
FROM page_view_stg pvs
INSERT OVERWRITE TABLE page_view PARTITION(dt='2008-06-08', country='US')
SELECT pvs.viewTime, pvs.userid, pvs.page_url, pvs.referrer_url, null, null, pvs.ip
WHERE pvs.country = 'US';

In the example above, nulls are inserted for the array and map types in the destination table, but potentially these can also come from the external table if the proper row formats are specified.

This method is useful if there is already legacy data in HDFS on which the user wants to put some metadata so that the data can be queried and manipulated using Hive.

Additionally, the system supports syntax for loading data from a file in the local file system directly into a Hive table, where the input data format is the same as the table format. If /tmp/pv_2008-06-08_us.txt already contains the data for US, then we do not need any additional filtering, as in the previous example. The load in this case can be done using the following syntax:

Loading a data file from the local disk:

LOAD DATA LOCAL INPATH '/tmp/pv_2008-06-08_us.txt' INTO TABLE page_view PARTITION(dt='2008-06-08', country='US');

The path argument can take a directory (in which case all the files in the directory are loaded), a single file name, or a wildcard (in which case all the matching files are uploaded). If the argument is a directory, it cannot contain subdirectories. Similarly, the wildcard must match file names only.
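
For example, if the day's US data were split across several local files, a wildcard sketch such as the following (hypothetical file names) would load all of them in one statement:

LOAD DATA LOCAL INPATH '/tmp/pv_2008-06-08_us_*.txt'
INTO TABLE page_view PARTITION(dt='2008-06-08', country='US');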

In the case that the input file /tmp/pv_2008-06-08_us.txt is very large, the user may decide to do a parallel load of the data (using tools that are external to Hive). Once the file is in HDFS, the following syntax can be used to load the data into a Hive table:

Loading a data file from HDFS:

LOAD DATA INPATH '/user/data/pv_2008-06-08_us.txt' INTO TABLE page_view PARTITION(dt='2008-06-08', country='US');

It is assumed that the array and map fields in the input files are null fields for these examples.

See Hive Data Manipulation Language for more information about loading data into Hive tables.
