Installing Hadoop on Mac OSX Yosemite Tutorial Part 1.

Install HomeBrew
Install Hadoop
Configuring Hadoop
SSH Localhost
Running Hadoop
Download Examples
Good to know

Install HomeBrew

Found here: http://brew.sh/, or simply paste this inside the terminal:

$ ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Install Hadoop

$ brew install hadoop

Hadoop will be installed in the following directory:
/usr/local/Cellar/hadoop
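
A quick sanity check, assuming the 2.6.0 version used throughout this tutorial:

$ ls /usr/local/Cellar/hadoop
> 2.6.0
$ hadoop version
> Hadoop 2.6.0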

Configuring Hadoop

Edit hadoop-env.sh

The file can be located at /usr/local/Cellar/hadoop/2.6.0/libexec/etc/hadoop/hadoop-env.sh, where 2.6.0 is the Hadoop version.

Find the line with

export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"

and change it to

export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true -Djava.security.krb5.realm= -Djava.security.krb5.kdc="

Edit core-site.xml

The file can be located at /usr/local/Cellar/hadoop/2.6.0/libexec/etc/hadoop/core-site.xml

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/Cellar/hadoop/hdfs/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
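
Note that hadoop.tmp.dir points at a directory that may not exist yet. Hadoop will usually create it on demand, but creating it up front avoids permission surprises; a minimal sketch, assuming the path configured above:

$ mkdir -p /usr/local/Cellar/hadoop/hdfs/tmp
$ chmod 755 /usr/local/Cellar/hadoop/hdfs/tmp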

Edit mapred-site.xml

The file can be located at /usr/local/Cellar/hadoop/2.6.0/libexec/etc/hadoop/mapred-site.xml and by default will be blank.
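
In a stock Hadoop 2.6.0 install this file may not exist at all, only mapred-site.xml.template; if so, copy the template first:

$ cd /usr/local/Cellar/hadoop/2.6.0/libexec/etc/hadoop
$ cp mapred-site.xml.template mapred-site.xml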

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9010</value>
  </property>
</configuration>

Edit hdfs-site.xml

The file can be located at /usr/local/Cellar/hadoop/2.6.0/libexec/etc/hadoop/hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

To simplify life, edit your ~/.profile using vim or your favorite editor and add the following two aliases:

alias hstart="/usr/local/Cellar/hadoop/2.6.0/sbin/start-dfs.sh;/usr/local/Cellar/hadoop/2.6.0/sbin/start-yarn.sh"
alias hstop="/usr/local/Cellar/hadoop/2.6.0/sbin/stop-yarn.sh;/usr/local/Cellar/hadoop/2.6.0/sbin/stop-dfs.sh"

and execute

$ source ~/.profile

in the terminal to update.
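
If you would rather not hard-code the version into the aliases, Homebrew keeps a version-independent symlink under /usr/local/opt; a sketch of the same aliases using it (assuming a standard Homebrew layout):

alias hstart="/usr/local/opt/hadoop/sbin/start-dfs.sh;/usr/local/opt/hadoop/sbin/start-yarn.sh"
alias hstop="/usr/local/opt/hadoop/sbin/stop-yarn.sh;/usr/local/opt/hadoop/sbin/stop-dfs.sh"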

Before we can run Hadoop, we first need to format HDFS using

$ hdfs namenode -format
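
Formatting initializes the NameNode storage directory under the hadoop.tmp.dir configured in core-site.xml, and erases any existing HDFS metadata, so only do this on first setup. A rough way to confirm it worked, assuming the default layout under hadoop.tmp.dir (output abbreviated):

$ ls /usr/local/Cellar/hadoop/hdfs/tmp/dfs/name/current
> VERSION fsimage_0000000000000000000 ...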

SSH Localhost

Nothing needs to be done here if you have already generated SSH keys. To verify, just check for the existence of the ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub files. If they do not exist, the keys can be generated using

$ ssh-keygen -t rsa

Enable Remote Login

Open “System Preferences” -> “Sharing” and check “Remote Login”.

Authorize SSH Keys

To allow your system to accept login, we have to make it aware of the keys that will be used:

$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
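
If ssh localhost still prompts for a password after this, sshd is strict about key file permissions; a common fix (standard OpenSSH requirements, nothing Hadoop-specific):

$ chmod 700 ~/.ssh
$ chmod 600 ~/.ssh/authorized_keys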

Let’s try to log in.

$ ssh localhost
> Last login: Fri Mar  6 20:30:53 2015
$ exit

Running Hadoop

Now we can run Hadoop just by typing

$ hstart

and stopping using

$ hstop

Download Examples

To run examples, Hadoop needs to be started.

Hadoop Examples 1.2.1 (Old)
Hadoop Examples 2.6.0 (Current)

Test them out using:

$ hadoop jar <path to the hadoop-examples file> pi 10 100
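
The same jar bundles several other demos, such as wordcount. As an illustrative sketch (the input and output paths here are hypothetical), you could count the words of a file you have uploaded to HDFS:

$ hadoop jar <path to the hadoop-examples file> wordcount book.txt wc-out
$ hdfs dfs -cat wc-out/part-r-00000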

Good to know

We can access the Hadoop web interfaces by connecting to

NameNode (HDFS): http://localhost:50070
Resource Manager: http://localhost:8088
Specific Node Information: http://localhost:8042

The NameNode interface can be used to browse the HDFS filesystem and look at any resulting output files.
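
The same information is also available from the command line, which is handy for scripting; for example:

$ hdfs dfs -ls /               # browse the filesystem root
$ hdfs dfsadmin -report        # DataNode capacity and usage summary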

Errors

To resolve ‘WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable’, see the discussion on Stackoverflow.com; the warning is harmless, as Hadoop simply falls back to its built-in Java classes.

Connection Refused after installing Hadoop

$ hdfs dfs -ls
> 15/03/06 20:13:54 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> ls: Call From spaceship.local/192.168.1.65 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:   http://wiki.apache.org/hadoop/ConnectionRefused

The start-up scripts such as start-all.sh do not provide specifics about why a startup failed; some of the time they won’t even notify you that a startup failed at all. To troubleshoot a service that isn’t functioning, execute it manually:

$ hdfs namenode
> 15/03/06 20:18:31 WARN namenode.FSNamesystem: Encountered exception loading fsimage
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /usr/local/Cellar/hadoop/hdfs/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
> 15/03/06 20:18:31 FATAL namenode.NameNode: Failed to start namenode.

and the problem is a missing or inaccessible NameNode storage directory, which formatting fixes:

$ hadoop namenode -format

To verify the problem is fixed, run

$ hstart
$ hdfs dfs -ls /

If ‘hdfs dfs -ls’ gives you an error

> ls: `.': No such file or directory

then we need to create the default home directory structure Hadoop expects (i.e. /user/<output of whoami>/):

$ whoami
> spaceship
$ hdfs dfs -mkdir -p /user/spaceship
> 15/03/06 20:31:19 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
$ hdfs dfs -ls
> 15/03/06 20:31:23 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
$ hdfs dfs -put book.txt
> 15/03/06 20:32:29 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
$ hdfs dfs -ls
> 15/03/06 20:32:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> Found 1 items
> -rw-r--r--   1 marekbejda supergroup      29578 2015-03-06 20:32 book.txt

JPS and Nothing Works…

It seems certain builds of Java 1.8 (e.g. 1.8.0_40) are missing a critical package, which breaks YARN. Check your logs:

$ jps
> 5935 Jps
$ vim /usr/local/Cellar/hadoop/2.6.0/libexec/logs/yarn-*
> 2015-03-07 16:21:32,934 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain java.lang.NoClassDefFoundError: sun/management/ExtendedPlatformComponent
..
> 2015-03-07 16:21:32,937 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
> 2015-03-07 16:21:32,939 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:

http://mail.openjdk.java.net/pipermail/core-libs-dev/2014-November/029818.html

Either downgrade to Java 1.7 or use an earlier 1.8 build; I’m currently running 1.8.0_20:

$ java -version
> java version "1.8.0_20"
> Java(TM) SE Runtime Environment (build 1.8.0_20-b26)
> Java HotSpot(TM) 64-Bit Server VM (build 25.20-b23, mixed mode)
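
If you have several JDKs installed, OS X's /usr/libexec/java_home helper can pin Hadoop to a specific one; a sketch for selecting a 1.7 JDK in hadoop-env.sh (assuming one is installed):

export JAVA_HOME=$(/usr/libexec/java_home -v 1.7)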

Source: http://amodernstory.com/2014/09/23/installing-hadoop-on-mac-osx-yosemite/#hadoop

HBase (reference: http://freddy.cellcore.org/post/52568231952/hadoop-hbase-on-osx-10-8-mountain-lion)

Downloading HBase

Now that you have successfully set up and launched Hadoop, it’s time to install HBase. As with Hadoop, you have two options for getting HBase: go to the HBase distribution site, choose a mirror close to your location, and download it (then copy it to $HD_HOME), or execute the following commands:
cd ~/Downloads
curl http://apache.websitebeheerjd.nl/hbase/stable/hbase-0.94.8.tar.gz > hbase-0.94.8.tar.gz
mv hbase-0.94.8.tar.gz $HD_HOME/
cd $HD_HOME
tar xvzf hbase-0.94.8.tar.gz
ln -s hbase-0.94.8 hbase

Note: alternatively, a plain Homebrew install saves a lot of trouble:

brew install hbase
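
The rest of this section refers to $HBASE_HOME. If you went the tarball route, it is simply the symlink created above; for a Homebrew install the layout differs, so the commented path below is an assumption (verify with brew info hbase). Something like this can go in ~/.profile:

# tarball install: the directory unpacked under $HD_HOME
export HBASE_HOME=$HD_HOME/hbase
# Homebrew install (layout assumption, check with: brew info hbase)
# export HBASE_HOME=/usr/local/opt/hbase/libexec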

Configuring HBase

Configuring HBase (a very basic instance) is quite easy: you need to modify only two files, located under $HBASE_HOME/conf.

hbase-env.sh

The file hbase-env.sh sets the execution environment for HBase; it works the same way as hadoop-env.sh does for Hadoop. Add the following lines to hbase-env.sh:

export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/CurrentJDK/Home
export HBASE_OPTS="-Djava.net.preferIPv4Stack=true -Djava.security.krb5.realm=OX.AC.UK -Djava.security.krb5.kdc=kdc0.ox.ac.uk:kdc1.ox.ac.uk"

hbase-site.xml

HBase properties are governed by the file hbase-site.xml. The only configuration parameter you need to specify to make HBase work is hbase.rootdir, the HBase root directory. This directory can be either a local path (file://) or an HDFS instance (hdfs://). In this particular case we are pointing HBase at our newly installed HDFS instance. Other properties that can be set in this file are described in the HBase documentation.
HBase requires ZooKeeper to work. By default HBase ships with an embedded ZooKeeper instance, which relieves us of the task of setting one up ourselves. If you want to know more about ZooKeeper, its configuration, and its role in the HBase architecture, check out the ZooKeeper section of the HBase documentation.
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
</configuration>

Running HBase

Now you are ready to launch HBase. To start it, just execute the following command:
$HBASE_HOME/bin/start-hbase.sh
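
You can verify that it came up with jps, which should now list an HMaster process alongside the Hadoop daemons (the PID shown is illustrative and will differ):

$ jps | grep HMaster
> 12100 HMaster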

Test it

To test your HBase installation, launch the HBase shell and play with it (heavily inspired by http://hbase.apache.org/book/quickstart.html). To launch the HBase shell, execute the following command:

$HBASE_HOME/bin/hbase shell

You should be dropped into the HBase interactive interpreter:
HBase Shell; enter 'help' for list of supported commands.
Type "exit" to leave the HBase Shell
Version 0.94.8, r1485407, Wed May 22 20:53:13 UTC 2013
Create a new table and put new values into it:
hbase(main):003:0> create 'test', 'cf'
0 row(s) in 1.2200 seconds
hbase(main):003:0> list 'test'
..
1 row(s) in 0.0550 seconds
hbase(main):004:0> put 'test', 'row1', 'cf:a', 'value1'
0 row(s) in 0.0560 seconds
hbase(main):005:0> put 'test', 'row2', 'cf:b', 'value2'
0 row(s) in 0.0370 seconds
hbase(main):006:0> put 'test', 'row3', 'cf:c', 'value3'
0 row(s) in 0.0450 seconds
Scan the table values:
hbase(main):007:0> scan 'test'
ROW        COLUMN+CELL
row1       column=cf:a, timestamp=1288380727188, value=value1
row2       column=cf:b, timestamp=1288380738440, value=value2
row3       column=cf:c, timestamp=1288380747365, value=value3
3 row(s) in 0.0590 seconds
Get a value by its key:
hbase(main):008:0> get 'test', 'row1'
COLUMN      CELL
cf:a        timestamp=1288380727188, value=value1
1 row(s) in 0.0400 seconds
Disable and drop (delete) the table:
hbase(main):012:0> disable 'test'
0 row(s) in 1.0930 seconds
hbase(main):013:0> drop 'test'
0 row(s) in 0.0770 seconds 
If you could execute those commands successfully, your HBase instance is working properly.

HBase web interfaces

HBase Master web UI: http://localhost:60010/
HBase RegionServer web UI: http://localhost:60030/

Stopping Hbase

$HBASE_HOME/bin/stop-hbase.sh