Please credit the source when reposting: http://www.cnblogs.com/wubdut/p/4681286.html

platform: Ubuntu 14.04 LTS

Hadoop 1.2.1

1. install ssh:

$sudo apt-get install openssh-server

$sudo apt-get install openssh-client
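Optionally, confirm the SSH daemon is running before setting up keys (a quick sanity check; on Ubuntu 14.04 the ssh service is managed by upstart):

$sudo service ssh status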

2. Set up passwordless SSH access:

$ssh wubin (log in to your own machine once; "wubin" is this computer's hostname)

$ssh-keygen (press Enter at each prompt to accept the default key files)

$cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

$ssh localhost (should now log in without asking for a password)

(to send the key to another machine: $ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@node13)
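If ssh localhost still prompts for a password, the usual cause is overly loose permissions on the key files; a quick fix (assuming the default locations used above):

$chmod 700 ~/.ssh

$chmod 600 ~/.ssh/authorized_keys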

3. install jdk

$ sudo add-apt-repository ppa:webupd8team/java

$ sudo apt-get update

$ sudo apt-get install oracle-java8-installer

$ java -version
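Before editing hadoop-env.sh in the next step, it is worth confirming where the installer placed the JDK, since that path becomes JAVA_HOME below (the webupd8team package normally installs to /usr/lib/jvm/java-8-oracle):

$ls /usr/lib/jvm/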

4. install hadoop:

download hadoop-1.2.1-bin.tar.gz;

$tar -zxvf hadoop-1.2.1-bin.tar.gz

$sudo cp -r hadoop-1.2.1 /usr/local/hadoop

$sudo chown -R wubin /usr/local/hadoop

$dir /usr/local/hadoop

$sudo vim $HOME/.bashrc

  go to the bottom:

  export HADOOP_PREFIX=/usr/local/hadoop
  export PATH=$PATH:$HADOOP_PREFIX/bin

$exec bash (reload the shell so the new PATH takes effect)

$echo $PATH

  (the output should now end with /usr/local/hadoop/bin)

$sudo vim /usr/local/hadoop/conf/hadoop-env.sh

  export JAVA_HOME=/usr/lib/jvm/java-8-oracle

  export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true

$sudo vim /usr/local/hadoop/conf/core-site.xml

  <configuration>
    <property>
      <name>fs.default.name</name>
      <value>hdfs://WuBin:10001</value>
    </property>

    <property>
      <name>hadoop.tmp.dir</name>
      <value>/usr/local/hadoop/tmp</value>
    </property>
  </configuration>
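Note that fs.default.name points at the hostname WuBin used throughout this tutorial, so that name must resolve on this machine; you can check what it maps to before starting the daemons:

$getent hosts WuBin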

$sudo vim /usr/local/hadoop/conf/mapred-site.xml

  <configuration>
    <property>
      <name>mapred.job.tracker</name>
      <value>WuBin:10002</value>
    </property>
  </configuration>
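This tutorial leaves hdfs-site.xml at its defaults. On a single node it is common to also set the block replication factor to 1, because the default of 3 cannot be satisfied with only one DataNode (an optional sketch, in the same layout as the files above):

$sudo vim /usr/local/hadoop/conf/hdfs-site.xml

  <configuration>
    <property>
      <name>dfs.replication</name>
      <value>1</value>
    </property>
  </configuration>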

$sudo mkdir /usr/local/hadoop/tmp

$sudo chown wubin /usr/local/hadoop/tmp

5. start hadoop

$hadoop namenode -format

$start-all.sh

$jps

  9792 DataNode
  9971 SecondaryNameNode
  9641 NameNode
  10331 Jps
  10237 TaskTracker
  10079 JobTracker

$dir /usr/local/hadoop/bin
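The same bin directory also holds the matching shutdown script, so the five daemons shown by jps can be stopped cleanly when you are finished:

$stop-all.sh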

Web interfaces:

  localhost:50070 (NameNode status)

  localhost:50030 (JobTracker status)

  Other computers can view the same pages by replacing localhost with this host's name or IP, e.g. WuBin:50070.

6. HDFS commands:

  $hadoop fs -mkdir dirname

  $hadoop fs -mkdir hdfs://NameNode:port/dirname

  $hadoop fs -rmr dirname

  $hadoop fs -moveFromLocal localfilename hdfsfilename

  $hadoop fs -copyToLocal hdfsfilename localfilename

  $hadoop fs -put localfilename hdfsfilename
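A quick end-to-end check of these commands, using a hypothetical local file test.txt and HDFS directory /input (the names are only examples):

  $echo "hello hadoop" > test.txt

  $hadoop fs -mkdir /input

  $hadoop fs -put test.txt /input

  $hadoop fs -ls /input

  $hadoop fs -cat /input/test.txt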

7. Note:

  When you deploy a multi-node cluster you will edit /etc/hosts on the master. Remember to remove this line:

  127.0.0.0 localhost

  Leaving it in place can cause errors that are very hard to track down.
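  For reference, a minimal /etc/hosts on the master of a small cluster might look like the sketch below; the hostnames match the ones used in this tutorial, but the IP addresses are placeholders to replace with your own:

  192.168.1.100 WuBin

  192.168.1.113 node13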

Reference:

[1] Hadoop tutorial: 05 Installing Apache Hadoop Single Node, https://www.youtube.com/channel/UCjZvxgi8ro5VDv7tCqEWwgw.
