Archiving for File Count Reduction

Note: Archiving should be considered an advanced command due to the caveats involved.

Overview

Due to the design of HDFS, the number of files in the filesystem directly affects memory consumption in the NameNode. While this is normally not a problem for small clusters, memory usage may hit the limits of available memory on a single machine once there are more than 50-100 million files. In such situations, it is advantageous to have as few files as possible.

The use of Hadoop Archives is one approach to reducing the number of files in partitions. Hive has built-in support to convert the files in existing partitions to a Hadoop Archive (HAR), so that a partition that may once have consisted of hundreds of files can occupy just ~3 files (depending on settings). However, the trade-off is that queries may be slower due to the additional overhead of reading from the HAR.

Note that archiving does NOT compress the files – HAR is analogous to the Unix tar command.

Archiving does not compress the files; it is very much like the Unix tar command (as I understand it: it only packages the files, it does not compress them).

tar -zcvf /tmp/etc.tar.gz  /etc  <== package, then compress with gzip
tar -jcvf /tmp/etc.tar.bz2 /etc  <== package, then compress with bzip2
tar -zxvf /tmp/etc.tar.gz        <== extract the gzip-compressed archive
tar -jxvf /tmp/etc.tar.bz2       <== extract the bzip2-compressed archive

Settings

There are 3 settings that should be configured before archiving is used. (Example values are shown.)

hive> set hive.archive.enabled=true;
hive> set hive.archive.har.parentdir.settable=true;
hive> set har.partfile.size=1099511627776;

hive.archive.enabled controls whether archiving operations are enabled.

hive.archive.har.parentdir.settable informs Hive whether the parent directory can be set while creating the archive. In recent versions of Hadoop the -p option can specify the root directory of the archive. For example, if /dir1/dir2/file is archived with /dir1 as the parent directory, then the resulting archive file will contain the directory structure dir2/file. In older versions of Hadoop (prior to 2011), this option was not available, so Hive must be configured to accommodate this limitation.

har.partfile.size controls the size of the files that make up the archive. The archive will contain size_of_partition/har.partfile.size files, rounded up; for example, with the value shown above (1099511627776 bytes, i.e. 1 TiB), a 2.5 TiB partition is archived into 3 part files. Higher values mean fewer files, but result in longer archiving times due to the reduced number of mappers.

Usage

Archive

Once the configuration values are set, a partition can be archived with the command:

ALTER TABLE table_name ARCHIVE PARTITION (partition_col = partition_col_value, partition_col = partition_col_value, ...)

For example:

ALTER TABLE srcpart ARCHIVE PARTITION(ds='2008-04-08', hr='12')

Once the command is issued, a MapReduce job will perform the archiving. Unlike Hive queries, there is no output on the CLI to indicate progress.
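
One way to confirm that the job finished and the partition is now backed by the archive is to look at the partition's storage descriptor from the Hive CLI. This is only a sketch using the srcpart example above, not part of the original manual:

-- Sketch: check the partition's storage descriptor after archiving.
-- Once archiving completes, the Location field should point at the data.har
-- archive rather than the partition's original directory (see Under the Hood below).
DESCRIBE FORMATTED srcpart PARTITION (ds='2008-04-08', hr='12');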

Unarchive

The partition can be reverted to its original files with the unarchive command:

ALTER TABLE srcpart UNARCHIVE PARTITION(ds='2008-04-08', hr='12')

Cautions and Limitations

  • In some older versions of Hadoop, HAR had a few bugs that could cause data loss or other errors. Be sure that these patches are integrated into your version of Hadoop:

https://issues.apache.org/jira/browse/HADOOP-6591 (fixed in Hadoop 0.21.0)

https://issues.apache.org/jira/browse/MAPREDUCE-1548 (fixed in Hadoop 0.22.0)

https://issues.apache.org/jira/browse/MAPREDUCE-2143 (fixed in Hadoop 0.22.0)

https://issues.apache.org/jira/browse/MAPREDUCE-1752 (fixed in Hadoop 0.23.0)

  • The HarFileSystem class still has a bug that has yet to be fixed:

https://issues.apache.org/jira/browse/MAPREDUCE-1877 (moved to https://issues.apache.org/jira/browse/HADOOP-10906 in 2014)

Hive comes with the HiveHarFileSystem class, which addresses some of these issues and is the default value for fs.har.impl. Keep this in mind if you are rolling your own version of HarFileSystem:

  • The default HiveHarFileSystem.getFileBlockLocations() has no locality. That means it may introduce higher network loads or reduced performance.
  • Archived partitions cannot be overwritten with INSERT OVERWRITE. The partition must be unarchived first (see the sketch after this list).
  • If two processes attempt to archive the same partition at the same time, bad things could happen. (Need to implement concurrency support.)
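
As a sketch of the workaround for the INSERT OVERWRITE restriction above, the partition can be unarchived, rewritten, and then archived again. The source table and columns used here (src, key, value) are illustrative assumptions, not from the original manual:

-- Sketch: rewriting an archived partition by unarchiving it first.
ALTER TABLE srcpart UNARCHIVE PARTITION (ds='2008-04-08', hr='12');
INSERT OVERWRITE TABLE srcpart PARTITION (ds='2008-04-08', hr='12')
  SELECT key, value FROM src;  -- hypothetical source table and columns
ALTER TABLE srcpart ARCHIVE PARTITION (ds='2008-04-08', hr='12');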

Under the Hood

Internally, when a partition is archived, a HAR is created using the files from the partition's original location (such as /warehouse/table/ds=1). The parent directory of the partition is specified to be the same as the original location and the resulting archive is named 'data.har'. The archive is moved under the original directory (such as /warehouse/table/ds=1/data.har), and the partition's location is changed to point to the archive.
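
As a hedged illustration of that layout (using the example path above; the actual warehouse path depends on your configuration), listing the partition directory from the Hive CLI after archiving should show the data.har archive in place of the original data files:

hive> dfs -ls /warehouse/table/ds=1;

The data.har entry is itself a directory containing the HAR index files and the part files that hold the archived data.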
