How to use a PDI job to move a file into HDFS.

Prerequisites

To follow along with this how-to guide you will need the following:

  • Hadoop
  • Pentaho Data Integration

Sample Files

The sample data file needed for this guide is:

File Name                  Content
weblogs_rebuild.txt.zip    Unparsed, raw weblog data

Step-By-Step Instructions

Setup

Start Hadoop if it is not already running.
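If you are running a local pseudo-distributed Hadoop installation, a quick way to verify and start it is sketched below. The start script's name and location vary by Hadoop distribution and version, so treat the $HADOOP_HOME path as an assumption to adjust for your install:

    # check whether the HDFS daemons are already running
    jps | grep -E 'NameNode|DataNode'

    # if nothing is listed, start HDFS (script location depends on your Hadoop version)
    $HADOOP_HOME/sbin/start-dfs.sh

    # confirm the filesystem is reachable
    hadoop fs -ls /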

Create a Job to Put the Files into Hadoop

In this task you will load a file into HDFS.

Speed Tip
You can download the completed Kettle job load_hdfs.kjb if you don't want to do every step yourself.
  1. Start PDI on your desktop. Once it is running choose 'File' -> 'New' -> 'Job' from the menu system or click on the 'New file' icon on the toolbar and choose the 'Job' option.
  2. Add a Start Job Entry: You need to tell PDI where to start the job, so expand the 'General' section of the Design palette and drag a 'Start' job entry onto the job canvas.
  3. Add a Copy Files Job Entry: You will copy files from your local disk to HDFS, so expand the 'Big Data' section of the Design palette and drag a 'Hadoop Copy Files' job entry onto the job canvas. The 'Hadoop Copy Files' entry should now appear on the canvas alongside the 'Start' entry.
  4. Connect the Start and Copy Files Job Entries: Hover the mouse over the 'Start' node and a tooltip will appear. Click on the output connector (the green arrow pointing to the right) and drag a connector arrow to the 'Hadoop Copy Files' node. The two entries should now be connected by a hop.
  5. Edit the Copy Files Job Entry: Double-click on the 'Hadoop Copy Files' node to edit its properties. Enter this information:
    1. File/Folder source(s): The folder containing the sample files you want to add to HDFS.
    2. File/Folder destination(s): hdfs://<NAMENODE>:<PORT>/user/pdi/weblogs/raw
    3. Wildcard (RegExp): Enter ^.*\.txt
    4. Click the Add button to add the above entries to the list of files you wish to copy.
    5. Check the "Create destination folder" option to ensure that the weblogs folder is created in HDFS the first time this job is executed.
      When you are done, the source folder, destination, and wildcard should appear as a single row in the entry's file list (your file paths may be different). For reference, an equivalent command-line copy is sketched after this list.

      Click 'OK' to close the window.
  6. Save the Job: Choose 'File' -> 'Save as...' from the menu system. Save the job as 'load_hdfs.kjb' into a folder of your choice.
  7. Run the Job: Choose 'Action' -> 'Run' from the menu system or click on the green run button on the job toolbar. An 'Execute a job' window will open. Click on the 'Launch' button. An 'Execution Results' panel will open at the bottom of the PDI window and show you the progress of the job as it runs. After a few seconds the job should finish successfully.

    If any errors occurred, the job entry that failed will be highlighted in red and you can use the 'Logging' tab to view error messages.
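For reference, the copy this job performs is roughly equivalent to the shell commands below. This is only a sketch of what the 'Hadoop Copy Files' entry does; the local folder /path/to/sample_files is a placeholder, and <NAMENODE>:<PORT> must be replaced with your cluster's values:

    # create the destination folder (the job does this because 'Create destination folder' is checked)
    hadoop fs -mkdir -p hdfs://<NAMENODE>:<PORT>/user/pdi/weblogs/raw

    # copy every local file matching the ^.*\.txt wildcard into HDFS
    hadoop fs -put /path/to/sample_files/*.txt hdfs://<NAMENODE>:<PORT>/user/pdi/weblogs/raw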

Check Hadoop

  1. Run the following command:

    hadoop fs -ls /user/pdi/weblogs/raw

    This should return:
    -rwxrwxrwx 3 demo demo 77908174 2011-12-28 07:16 /user/pdi/weblogs/raw/weblog_raw.txt
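    If you want to inspect the copied file further, the standard HDFS shell commands below can help (the file name is taken from the listing above; the -h flag assumes a reasonably recent Hadoop release):

      # show the size of everything under the raw folder in human-readable form
      hadoop fs -du -h /user/pdi/weblogs/raw

      # print the last kilobyte of the file to confirm it contains weblog records
      hadoop fs -tail /user/pdi/weblogs/raw/weblog_raw.txt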

Summary

In this guide you learned how to copy local files into HDFS using PDI's graphical design tool. You can use this tool to put files into HDFS from many different sources.

Troubleshooting

  • Make sure you have the correct shim configured and that it matches your Hadoop cluster's distribution and version.
  • Problem: The Hadoop Copy Files step creates an empty file in HDFS and hangs, or never writes any data.
    Check: The Hadoop client-side API that Pentaho calls to copy files to HDFS requires that PDI has network connectivity to the nodes in the cluster. The DNS names or IP addresses used within the cluster must resolve the same way from the PDI machine as they do inside the cluster. When PDI requests to put a file into HDFS, the NameNode returns the DNS names (or IP addresses, depending on the configuration) of the actual data nodes that the data will be copied to, so those nodes must be reachable from the PDI machine.
  • Problem: Permission denied: user=XXXX, access=EXECUTE, inode="/user/pdi/weblogs/raw":raw:hadoop:drwxr-x---
    Check: When not using Kerberos security, the Hadoop API used by this step sends the username of the logged-in user when copying the file(s), regardless of what username was used in the connect field. To change the user you must set the environment variable HADOOP_USER_NAME. You can modify spoon.bat or spoon.sh by changing the OPT variable:

    OPT="$OPT .... -DHADOOP_USER_NAME=HadoopNameToSpoof"
 
 

