This guide shows how to use a PDI (Pentaho Data Integration) job to move a file into the Hadoop Distributed File System (HDFS).

Prerequisites

To follow along with this how-to guide you will need the following:

  • A running Hadoop cluster (a single-node installation is fine)
  • Pentaho Data Integration (Kettle)

Sample Files

The sample data file needed for this guide is:

  File Name                  Content
  weblogs_rebuild.txt.zip    Unparsed, raw weblog data

Step-By-Step Instructions

Setup

Start Hadoop if it is not already running.
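If you are working against a local single-node installation, a quick way to confirm that HDFS is up before running the job is to start the daemons and check that the file system answers. This is a minimal sketch; the exact location of the start script depends on your Hadoop version and distribution:

    # start HDFS if it is not already running
    # (start-dfs.sh lives under bin/ in Hadoop 1.x and under sbin/ in Hadoop 2.x)
    start-dfs.sh

    # the NameNode and DataNode daemons should appear in the process list
    jps

    # a simple listing confirms the client can reach the file system
    hadoop fs -ls /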

Create a Job to Put the Files into Hadoop

In this task you will load a file into HDFS.

Speed Tip
You can download the completed Kettle job load_hdfs.kjb if you do not want to perform every step yourself.
  1. Start PDI on your desktop. Once it is running, choose 'File' -> 'New' -> 'Job' from the menu system, or click on the 'New file' icon on the toolbar and choose the 'Job' option.
  2. Add a Start Job Entry: You need to tell PDI where to start the job, so expand the 'General' section of the Design palette and drag a 'Start' job entry onto the job canvas.
  3. Add a Copy Files Job Entry: You will copy files from your local disk to HDFS, so expand the 'Big Data' section of the Design palette and drag a 'Hadoop Copy Files' job entry onto the job canvas.
  4. Connect the Start and Copy Files Job Entries: Hover the mouse over the 'Start' node and a tooltip will appear. Click on the output connector (the green arrow pointing to the right) and drag a connector arrow to the 'Hadoop Copy Files' node.
  5. Edit the Copy Files Job Entry: Double-click on the 'Hadoop Copy Files' node to edit its properties. Enter this information:
    1. File/Folder source(s): The local folder containing the sample file you want to add to HDFS (the folder where you unzipped weblogs_rebuild.txt.zip).
    2. File/Folder destination(s): hdfs://<NAMENODE>:<PORT>/user/pdi/weblogs/raw
    3. Wildcard (RegExp): Enter ^.*\.txt
    4. Click the Add button to add the above entries to the list of files you wish to copy.
    5. Check the "Create destination folder" option to ensure that the weblogs folder is created in HDFS the first time this job is executed.
      When you are done (your file paths may be different), click 'OK' to close the window. For reference, the command-line equivalent of this copy is sketched after this list.
  6. Save the Job: Choose 'File' -> 'Save as...' from the menu system. Save the job as 'load_hdfs.kjb' into a folder of your choice.
  7. Run the Job: Choose 'Action' -> 'Run' from the menu system or click on the green run button on the job toolbar. An 'Execute a job' window will open. Click on the 'Launch' button. An 'Execution Results' panel will open at the bottom of the PDI window and show the progress of the job as it runs. After a few seconds the job should finish successfully.

    If any errors occurred, the job entry that failed will be highlighted in red and you can use the 'Logging' tab to view error messages. If you would rather run the saved job outside of the graphical tool, a command-line alternative is sketched below.
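For reference, the copy performed by the 'Hadoop Copy Files' entry is roughly equivalent to the following Hadoop shell commands; the local source path is a placeholder for wherever you unzipped the sample file:

    # create the destination folder in HDFS
    # (-p is accepted by Hadoop 2.x; Hadoop 1.x creates parent directories by default)
    hadoop fs -mkdir -p /user/pdi/weblogs/raw

    # copy every local .txt file from the source folder into the destination folder
    hadoop fs -put /path/to/local/weblogs/*.txt /user/pdi/weblogs/raw/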
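You can also execute the saved job without opening the graphical tool by using Kitchen, the command-line job runner that ships with PDI. The installation and job paths below are assumptions; adjust them to your environment:

    # run the job headless with Kitchen
    cd /opt/pentaho/data-integration
    ./kitchen.sh -file=/home/user/jobs/load_hdfs.kjb -level=Basic

Kitchen returns a non-zero exit code when the job fails, which makes it convenient to call from cron or another scheduler.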

Check Hadoop

  1. Run the following command:

    hadoop fs -ls /user/pdi/weblogs/raw

    This should return output similar to:
    -rwxrwxrwx 3 demo demo 77908174 2011-12-28 07:16 /user/pdi/weblogs/raw/weblog_raw.txt
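
    As an additional sanity check (and to rule out the empty-file problem described in the Troubleshooting section below), you can look at the size of the copied data and peek at its first few lines:

    # show the size of the data under the destination folder
    hadoop fs -du /user/pdi/weblogs/raw

    # print the first few lines of the copied file(s)
    hadoop fs -cat /user/pdi/weblogs/raw/*.txt | head -n 5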

Summary

In this guide you learned how to copy local files into HDFS using PDI's graphical design tool. You can use the same approach to put files into HDFS from many different sources.

Troubleshooting

  • Make sure the correct Hadoop shim is configured in PDI and that it matches your Hadoop cluster's distribution and version.
  • Problem: The Hadoop Copy Files step creates an empty file in HDFS and hangs, or never writes any data.
    Check: The Hadoop client-side API that Pentaho calls to copy files to HDFS requires that PDI has network connectivity to the nodes in the cluster. The DNS names or IP addresses used within the cluster must resolve the same way from the PDI machine as they do inside the cluster. When PDI asks to put a file into HDFS, the NameNode returns the DNS names (or IP addresses, depending on the configuration) of the DataNodes that the data will be copied to, and PDI then writes the file blocks directly to those nodes. A quick name-resolution and reachability check is sketched after this list.
  • Problem: Permission denied: user=XXXX, access=EXECUTE, inode="/user/pdi/weblogs/raw":raw:hadoop:drwxr-x---
    Check: When Kerberos security is not in use, the Hadoop API used by this step sends the username of the locally logged-in user when copying the file(s), regardless of the username used in the connection field. To change the user, set the environment variable HADOOP_USER_NAME. You can modify spoon.bat or spoon.sh by changing the OPT variable:

    OPT="$OPT .... -DHADOOP_USER_NAME=HadoopNameToSpoof"
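
    Alternatively, instead of editing the launch script, you can export the variable in the shell session that starts Spoon (or Kitchen). A minimal sketch, where 'pdi' is a placeholder for whatever HDFS user owns the destination folder:

    # spoof the HDFS user for this session only ('pdi' is a placeholder)
    export HADOOP_USER_NAME=pdi
    ./spoon.sh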
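For the connectivity problem described above (the empty-file symptom), a quick check from the PDI machine that the cluster's node names resolve and that the DataNode port is reachable looks roughly like this; the host name is a placeholder, and 50010 is the default DataNode data-transfer port in Hadoop 1.x and 2.x:

    # does the DataNode host name resolve on the PDI machine?
    nslookup datanode1.example.com

    # is the DataNode data-transfer port reachable from the PDI machine?
    nc -vz datanode1.example.com 50010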
 
 

