Overview

  • Apache Impala (incubating) is the open source, native analytic database for Apache Hadoop.

Features

  • Do BI-style Queries on Hadoop:

    • Low latency and high concurrency for BI/analytic queries on Hadoop (not delivered by batch frameworks such as Apache Hive).
    • Scales linearly, even in multitenant environments.
  • Unify Your Infrastructure: Utilize the same file and data formats and metadata, security, and resource management frameworks as your Hadoop deployment; no redundant infrastructure or data conversion/duplication.
  • Implement Quickly: Supports SQL.
  • Count on Enterprise-class Security
  • Retain Freedom from Lock-in: Open source.
  • Expand the Hadoop User-verse

Architecture

  • Circumvents MapReduce to avoid latency; instead, it accesses the data directly through a specialized distributed query engine that is very similar to those found in commercial parallel RDBMSs.
  • Some advantages:
    • Thanks to local processing on data nodes, network bottlenecks are avoided.
    • A single, open, and unified metadata store can be utilized.
    • Costly data format conversion is unnecessary and thus no overhead is incurred.
    • All data is immediately query-able, with no delays for ETL.
    • All hardware is utilized for Impala queries as well as for MapReduce.
    • Only a single machine pool is needed to scale.

Documentation

... skip

Impala User-Defined Functions (UDFs)

  • UDFs let you code your own application logic for processing column values during an Impala query.

UDF Concepts

  • You can code either scalar functions, which produce a result one row at a time, or more complex aggregate functions, which perform analysis across sets of rows.

UDFs and UDAFs

  • The most general kind of UDF takes a single input value and produces a single output value. When used in a query, it is called once for each row in the result set. For example:

    select customer_name, is_frequent_customer(customer_id) from customers;
    select obfuscate(sensitive_column) from sensitive_data;
  • A user-defined aggregate function (UDAF) accepts a group of values and returns a single value. You can use UDAFs to summarize and condense sets of rows, in the same style as the built-in COUNT(), MAX(), SUM(), and AVG() functions. When called in a query that uses the GROUP BY clause, the function is called once for each combination of GROUP BY values. For example:
    -- Evaluates multiple rows but returns a single value.
    select closest_restaurant(latitude, longitude) from places;
    -- Evaluates batches of rows and returns a separate value for each batch.
    select most_profitable_location(store_id, sales, expenses, tax_rate, depreciation) from franchise_data group by year;
  • Currently, Impala does not support other categories of UDFs, such as user-defined table functions (UDTFs) or window functions.

Native Impala UDFs

  • Impala supports UDFs written in C++, in addition to supporting existing Hive UDFs written in Java.
  • Where practical, use C++ UDFs because the compiled native code can yield higher performance, with UDF execution time often 10x faster for a C++ UDF than the equivalent Java UDF.
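  • As a minimal sketch, a native UDF is compiled into a shared library and then registered with a CREATE FUNCTION statement whose LOCATION clause points to the .so file in HDFS and whose SYMBOL clause names the C++ entry point. The library path and function names below are hypothetical placeholders, not taken from the Impala documentation:

    -- Hypothetical example: the HDFS path, library name, and symbol are placeholders.
    create function my_lower(string) returns string
    location '/user/impala/udfs/libudfsample.so'
    symbol='MyLower';
    -- Once registered, the native UDF is called like a built-in function,
    -- here reusing the customers table from the earlier example.
    select my_lower(customer_name) from customers;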

Using Hive UDFs with Impala

  • Impala can run Java-based user-defined functions (UDFs), originally written for Hive, with no changes, subject to the following conditions:

    • The parameters and return value must all use scalar data types supported by Impala; that is to say, complex or nested types are not supported.
    • Currently, Hive UDFs that accept or return the TIMESTAMP type are not supported.
    • Hive UDAFs and UDTFs are not supported.
    • Typically, a Java UDF will execute several times slower in Impala than the equivalent native UDF written in C++.
  • What to do next?
    • Write your UDF.
    • Upload the JAR to an HDFS path that Impala can read.
    • For each Java-based UDF that you want to call through Impala, issue a CREATE FUNCTION statement, with a LOCATION clause containing the full HDFS path of the JAR file, and a SYMBOL clause with the fully qualified name of the class, using dots as separators and without the .class extension. For example:
      create function my_neg(bigint)
      returns bigint location '/user/hive/udfs/hive.jar'
      symbol = 'org.apache.hadoop.hive.ql.udf.UDFOPNegative';
    • Call the function from your queries, passing arguments of the correct type to match the function signature, as shown below.
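    • For example, assuming the my_neg function registered above and a hypothetical table t1 with a BIGINT column c1 (placeholder names, not from the source), the Java-based UDF is called like any built-in function:
      -- Hypothetical usage; t1 and c1 are placeholder names.
      select c1, my_neg(c1) from t1;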
