http://parquet.apache.org

Hierarchy:

file -> row groups -> column chunks -> pages(data/index/dictionary)

Motivation

We created Parquet to make the advantages of compressed, efficient columnar data representation available to any project in the Hadoop ecosystem.

Parquet is built from the ground up with complex nested data structures in mind, and uses the record shredding and assembly algorithm described in the Dremel paper. We believe this approach is superior to simple flattening of nested name spaces.

Parquet is built to support very efficient compression and encoding schemes. Multiple projects have demonstrated the performance impact of applying the right compression and encoding scheme to the data. Parquet allows compression schemes to be specified on a per-column level, and is future-proofed to allow adding more encodings as they are invented and implemented.
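As a concrete illustration of per-column compression, here is a minimal sketch using pyarrow; the table, column names and codec choices are made up for the example, and "example.parquet" is just an assumed output path:

import pyarrow as pa
import pyarrow.parquet as pq

# Hypothetical two-column table, just for illustration.
table = pa.table({
    "user_id": [1, 2, 3],
    "payload": ["aaa", "bbb", "ccc"],
})

# Compression is chosen per column: snappy for one column, gzip for the other.
# New codecs/encodings can be adopted later without changing the file layout.
pq.write_table(
    table,
    "example.parquet",
    compression={"user_id": "snappy", "payload": "gzip"},
)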

Parquet is built to be used by anyone. The Hadoop ecosystem is rich with data processing frameworks, and we are not interested in playing favorites. We believe that an efficient, well-implemented columnar storage substrate should be useful to all frameworks without the cost of extensive and difficult to set up dependencies.

In short: Parquet was created so that any project in the Hadoop ecosystem can benefit from compressed, columnar storage; it supports complex nested data structures from the start, using the record shredding and assembly algorithm described in the Dremel paper; and it supports efficient compression and encoding schemes, which many projects have shown can greatly improve query performance.

Glossary

Block (hdfs block): This means a block in hdfs and the meaning is unchanged for describing this file format. The file format is designed to work well on top of hdfs.

File: An hdfs file that must include the metadata for the file. It does not need to actually contain the data.

Row group: A logical horizontal partitioning of the data into rows. There is no physical structure that is guaranteed for a row group. A row group consists of a column chunk for each column in the dataset.

Column chunk: A chunk of the data for a particular column. These live in a particular row group and are guaranteed to be contiguous in the file.

Page: Column chunks are divided up into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). There can be multiple page types, which are interleaved in a column chunk.

Hierarchically, a file consists of one or more row groups. A row group contains exactly one column chunk per column. Column chunks contain one or more pages.

In short: a file contains one or more row groups; within a row group, each column has exactly one column chunk; and a column chunk contains one or more pages.
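The same hierarchy can be walked from the file metadata alone; a minimal sketch with pyarrow, assuming the "example.parquet" file written above:

import pyarrow.parquet as pq

# file -> row groups -> column chunks, read purely from the footer metadata;
# no page data is touched here.
meta = pq.ParquetFile("example.parquet").metadata

print("row groups:", meta.num_row_groups)
for rg_idx in range(meta.num_row_groups):
    rg = meta.row_group(rg_idx)
    print(f"row group {rg_idx}: {rg.num_rows} rows, {rg.total_byte_size} bytes")
    for col_idx in range(rg.num_columns):
        chunk = rg.column(col_idx)  # exactly one column chunk per column
        print("  ", chunk.path_in_schema, chunk.compression, chunk.num_values)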

Metadata

There are three types of metadata: file metadata, column (chunk) metadata and page header metadata. All thrift structures are serialized using the TCompactProtocol.

The file metadata contains the start locations of all the column metadata.

Metadata is written after the data to allow for single pass writing.
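Concretely, the file metadata ends up in a footer at the end of the file; per the Parquet spec (not spelled out in the text above), the file ends with the Thrift-encoded footer, a 4-byte little-endian footer length, and the magic bytes PAR1. A minimal sketch of locating it, assuming "example.parquet" exists:

import os
import struct

# The last 8 bytes are: 4-byte little-endian footer length + magic b"PAR1".
with open("example.parquet", "rb") as f:
    f.seek(-8, os.SEEK_END)
    footer_len, magic = struct.unpack("<I4s", f.read(8))
    assert magic == b"PAR1", "not a Parquet file"
    f.seek(-(8 + footer_len), os.SEEK_END)
    footer = f.read(footer_len)  # FileMetaData, serialized with TCompactProtocol

print("file metadata footer:", footer_len, "bytes")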

Readers are expected to first read the file metadata to find all the column chunks they are interested in. The column chunks should then be read sequentially.

In short: there are three kinds of metadata: file metadata, column metadata, and page header metadata; the file metadata records the start locations of all column metadata; readers should read the file metadata first to locate the column chunks they are interested in.
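This read pattern is what makes column projection cheap; a small illustration with pyarrow, again assuming the "example.parquet" file from the sketches above:

import pyarrow.parquet as pq

# Only the "user_id" column chunks are read; the footer tells the reader
# where those chunks start, and the other columns are skipped entirely.
table = pq.read_table("example.parquet", columns=["user_id"])
print(table.num_rows, table.column_names)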

The format is explicitly designed to separate the metadata from the data. This allows splitting columns into multiple files, as well as having a single metadata file reference multiple parquet files.

In short: the format is designed to separate metadata from data, so that the data for different columns can be split across multiple files, and a single metadata file can reference multiple data files.
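One place this shows up in practice is the standalone _metadata file of a partitioned dataset. A sketch with pyarrow: the directory name is made up, and the metadata_collector / write_metadata combination is assumed to behave as documented in recent pyarrow releases:

import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({"user_id": [1, 2, 3], "payload": ["aaa", "bbb", "ccc"]})

# Write the data files, collecting each file's FileMetaData along the way.
collector = []
pq.write_to_dataset(table, root_path="dataset_dir", metadata_collector=collector)

# Write a single metadata-only file that references all of the data files.
pq.write_metadata(table.schema, "dataset_dir/_metadata", metadata_collector=collector)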
