How to use a PDI job to move a file into HDFS.

Prerequisites

To follow along with this how-to guide you will need the following:

  • Hadoop
  • Pentaho Data Integration

Sample Files

The sample data file needed for this guide is:

File Name                  Content
weblogs_rebuild.txt.zip    Unparsed, raw weblog data
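
The sample download is a zip archive. Assuming a standard zip file and a Unix-like shell with the unzip utility installed, you can extract it into a working folder like this (the ~/pentaho/samples path is only an example; use any local folder you like):

    unzip weblogs_rebuild.txt.zip -d ~/pentaho/samples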

Step-By-Step Instructions

Setup

Start Hadoop if it is not already running.
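
If you are not sure whether the cluster is up, a quick sanity check is to list the running Hadoop Java processes and the HDFS root. This is a minimal sketch, assuming the hadoop command is on your PATH and that the Hadoop daemons run on the machine you are checking:

    jps                # should list NameNode and DataNode (plus JobTracker/YARN daemons if used)
    hadoop fs -ls /    # should return a directory listing without connection errors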

Create a Job to Put the Files into Hadoop

In this task you will load a file into HDFS.

Speed Tip
You can download the Kettle job load_hdfs.kjb if you don't want to do every step by hand.
  1. Start PDI on your desktop. Once it is running, choose 'File' -> 'New' -> 'Job' from the menu system, or click the 'New file' icon on the toolbar and choose the 'Job' option.
  2. Add a Start Job Entry: You need to tell PDI where to start the job, so expand the 'General' section of the Design palette and drag a 'Start' job entry onto the job canvas.
  3. Add a Copy Files Job Entry: You will copy files from your local disk to HDFS, so expand the 'Big Data' section of the Design palette and drag a 'Hadoop Copy Files' job entry onto the job canvas.
  4. Connect the Start and Copy Files Job Entries: Hover the mouse over the 'Start' node until a tooltip appears. Click on the output connector (the green arrow pointing to the right) and drag a connector arrow to the 'Hadoop Copy Files' node.
  5. Edit the Copy Files Job Entry: Double-click on the 'Hadoop Copy Files' node to edit its properties. Enter this information:
    1. File/Folder source(s): The local folder containing the sample file(s) you want to add to HDFS.
    2. File/Folder destination(s): hdfs://<NAMENODE>:<PORT>/user/pdi/weblogs/raw
    3. Wildcard (RegExp): Enter ^.*\.txt
    4. Click the Add button to add the above entries to the list of files you wish to copy.
    5. Check the "Create destination folder" option to ensure that the weblogs folder is created in HDFS the first time this job is executed.
      When you are done entering these values (your file paths may differ from the example), click 'OK' to close the window.
  6. Save the Job: Choose 'File' -> 'Save as...' from the menu system. Save the job as 'load_hdfs.kjb' into a folder of your choice.
  7. Run the Job: Choose 'Action' -> 'Run' from the menu system or click the green run button on the job toolbar. An 'Execute a job' window will open; click its 'Launch' button. An 'Execution Results' panel will open at the bottom of the PDI window and show the progress of the job as it runs. After a few seconds the job should finish successfully.

    If any errors occurred the job step that failed will be highlighted in red and you can use the 'Logging' tab to view error messages.
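
For reference, the job you just built does roughly what the following HDFS shell commands do by hand. This is only an illustrative equivalent, not part of the guide's steps; /tmp/weblogs is an assumed example location for the extracted sample file(s), and older Hadoop versions may not need (or support) the -p flag:

    hadoop fs -mkdir -p /user/pdi/weblogs/raw
    hadoop fs -put /tmp/weblogs/*.txt /user/pdi/weblogs/raw/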

Check Hadoop

  1. Run the following command:

    hadoop fs -ls /user/pdi/weblogs/raw

    This should return a listing similar to:
    -rwxrwxrwx 3 demo demo 77908174 2011-12-28 07:16 /user/pdi/weblogs/raw/weblog_raw.txt
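
    To spot-check that the file actually contains data, you can also look at its size and first few lines with standard HDFS shell commands:

    hadoop fs -du /user/pdi/weblogs/raw
    hadoop fs -cat /user/pdi/weblogs/raw/weblog_raw.txt | head -n 5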

Summary

In this guide you learned how to copy local files into HDFS using PDI's graphical design tool. You can use this tool to put files into HDFS from many different sources.

Troubleshooting

  • Make sure you have the correct shim configured and that it matches your Hadoop cluster's distro and version.
  • Problem: The Hadoop Copy Files step creates an empty file in HDFS and hangs, never writing any data.
    Check: The Hadoop client-side API that Pentaho calls to copy files to HDFS requires that PDI has network connectivity to the nodes in the cluster. The DNS names or IP addresses used within the cluster must resolve the same way from the PDI machine as they do inside the cluster, because when PDI asks to put a file into HDFS, the Name Node returns the DNS names (or IP addresses, depending on the configuration) of the actual Data Nodes the data will be copied to. See the quick resolution check after this list.
  • Problem: Permission denied: user=XXXX, access=EXECUTE, inode="/user/pdi/weblogs/raw":raw:hadoop:drwxr-x---
    Check: When Kerberos security is not in use, the Hadoop API used by this step sends the username of the locally logged-in user when copying the file(s), regardless of the username entered in the connect field. To change the user, set the HADOOP_USER_NAME environment variable. You can modify spoon.bat or spoon.sh by changing the OPT variable:

    OPT="$OPT .... -DHADOOP_USER_NAME=HadoopNameToSpoof"
 
 
