I suspect many people have never heard of the ZModem protocol, let alone of convenient tools like rz/sz. Good things should not be kept to oneself, so here is the little I know. The following paragraph is copied from SecureCRT's help:
ZModem is a full-duplex file transfer protocol that supports fast data transfer rates and effective error detection. ZModem is very user friendly, allowing either the sending or receiving party to initiate a file transfer. ZModem supports multiple file ("batch") transfers, and allows the use of wildcards when specifying filenames. ZModem also supports resuming most prior ZModem file transfer attempts.
rz and sz are command-line tools for ZModem file transfer between Linux/Unix and Windows. On the Windows side you need a telnet/ssh client that supports ZModem; SecureCRT works. Log in to the Unix/Linux host with SecureCRT (telnet or ssh both work):
- Run rz to receive a file: SecureCRT pops up a file-selection dialog; pick the file and close the dialog, and it is uploaded to the current directory.
- Run sz file1 file2 to send files to Windows (the directory they are saved to is configurable).
This is far more convenient than the FTP command, and the server no longer needs to run an FTP service.
PS: On Linux, installing lrzsz-x.x.xx.rpm gives you these two small tools; on Unix you can build them from source; for Solaris SPARC, binaries can be downloaded from sunfreeware.
If you installed hadoop-0.20.2, the eclipse-plugin lives under /home/hadoop/hadoop-0.20.2/contrib/eclipse-plugin.
If you installed hadoop-0.21.0, it is at /home/hadoop/hadoop-0.21.0/mapred/contrib/eclipse/hadoop-0.21.0-eclipse-plugin.jar.
Copy hadoop-0.21.0-eclipse-plugin.jar into the plugins directory of your Eclipse installation and Eclipse will pick it up automatically.
My local environment:
Eclipse 3.6
Hadoop-0.20.2
Hive-0.5.0-dev
1. Install the hadoop-0.20.2-eclipse-plugin. Note: the /hadoop-0.20.2/contrib/eclipse-plugin/hadoop-0.20.2-eclipse-plugin.jar shipped in the Hadoop tree has problems under Eclipse 3.6 and cannot run jobs on the Hadoop server; download a working build from http://code.google.com/p/hadoop-eclipse-plugin/ instead.
2. Switch to the Map/Reduce perspective: Window -> Open Perspective -> Other... -> Map/Reduce
3. Add a DFS Location: in the Map/Reduce Locations view, click New Hadoop Location and fill in the corresponding host and port:
- Map/Reduce Master:
  - Host: 10.10.xx.xx
  - Port: 9001
- DFS Master:
  - Host: 10.10.xx.xx (just check "Use M/R Master host")
  - Port: 9000
- User name: root
- Under Advanced parameters, change hadoop.job.ugi from its default DrWho,Tardis to root,Tardis. If the option does not show up, restart Eclipse with eclipse -clean.
- Otherwise you may hit org.apache.hadoop.security.AccessControlException.
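These two endpoints map onto the standard 0.20.x connection properties (DFS Master -> fs.default.name, Map/Reduce Master -> mapred.job.tracker). A quick way to sanity-check them outside Eclipse is a throwaway class like the following; this is a minimal sketch under my own naming (DfsConnectionCheck is illustrative), and the 10.10.xx.xx placeholder from the article must be replaced with the real address:

package com.sohu.hadoop.test;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DfsConnectionCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Same values as the plugin dialog (placeholder IP from the article):
        conf.set("fs.default.name", "hdfs://10.10.xx.xx:9000");
        conf.set("mapred.job.tracker", "10.10.xx.xx:9001");
        // Same identity as the hadoop.job.ugi Advanced parameter above.
        conf.set("hadoop.job.ugi", "root,Tardis");
        // If this lists the HDFS root without an AccessControlException,
        // the plugin settings should work too.
        FileSystem fs = FileSystem.get(conf);
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }
    }
}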
4. Set up the local hosts entry:
- 10.10.xx.xx zw-hadoop-master. zw-hadoop-master
- # Note the extra "zw-hadoop-master." entry (with a trailing dot); without it, running a Map/Reduce job fails with:
- java.lang.IllegalArgumentException: Wrong FS: hdfs://zw-hadoop-master:9000/user/root/oplog/out/_temporary/_attempt_201008051742_0135_m_000007_0, expected: hdfs://zw-hadoop-master.:9000
- at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:352)
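To confirm the hosts entry before launching a job, a plain-JDK sketch like this (HostCheck is an illustrative name of mine; no Hadoop dependency) prints how the master name actually resolves on your machine:

package com.sohu.hadoop.test;

import java.net.InetAddress;

public class HostCheck {
    public static void main(String[] args) throws Exception {
        // Resolve the master hostname the same way the Hadoop client will.
        InetAddress addr = InetAddress.getByName("zw-hadoop-master");
        System.out.println("address:        " + addr.getHostAddress());
        // The canonical name should match what the cluster reports
        // (here, the form with the trailing dot).
        System.out.println("canonical name: " + addr.getCanonicalHostName());
    }
}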
5. Create a Map/Reduce Project and add Mapper, Reducer, and Driver classes. Note that the auto-generated code targets the old Hadoop API, so fix it up by hand:
// MapperTest.java
package com.sohu.hadoop.test;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MapperTest extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);

    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        // Input lines are pipe-delimited; the third field is the user id.
        String userid = value.toString().split("[|]")[2];
        context.write(new Text(userid), one);
    }
}

// ReducerTest.java
package com.sohu.hadoop.test;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class ReducerTest extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // Sum the counts emitted for each user id.
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        result.set(sum);
        context.write(key, result);
    }
}

// DriverTest.java
package com.sohu.hadoop.test;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class DriverTest {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args)
                .getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: DriverTest <in> <out>");
            System.exit(2);
        }
        // Gzip-compress the job output. Set these before the Job is created,
        // because Job copies the Configuration it is handed.
        conf.setBoolean("mapred.output.compress", true);
        conf.setClass("mapred.output.compression.codec", GzipCodec.class,
                CompressionCodec.class);
        Job job = new Job(conf, "Driver Test");
        job.setJarByClass(DriverTest.class);
        job.setMapperClass(MapperTest.class);
        job.setCombinerClass(ReducerTest.class);
        job.setReducerClass(ReducerTest.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
6. On DriverTest, click Run As -> Run on Hadoop and pick the corresponding Hadoop Location.
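MapperTest assumes pipe-delimited input with the user id in the third field. A quick local sketch of that extraction (the sample line and the SplitDemo class are made up for illustration) shows what ends up as the map output key:

package com.sohu.hadoop.test;

public class SplitDemo {
    public static void main(String[] args) {
        // Hypothetical log line in the pipe-delimited format the mapper expects.
        String line = "20100805|GET /index.html|user42|200";
        // Same extraction as MapperTest: "[|]" is a regex character class,
        // so split() splits on the literal pipe; index 2 is the user id.
        String userid = line.split("[|]")[2];
        System.out.println(userid); // prints "user42"
    }
}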