3. Hadoop Fully Distributed Setup

1. Fully distributed setup

  1. Prepare the configuration directory

    #cd /soft/hadoop/etc/
    #mv hadoop local
    #cp -r local full
    #ln -s full hadoop
    #cd hadoop
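    Because /soft/hadoop/etc/hadoop is now only a symbolic link, several configuration sets (local, full, ...) can coexist and you switch between them by re-pointing the link. A quick check that the link points at full:
    #ls -ld /soft/hadoop/etc/hadoop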
  2. Edit the core-site.xml configuration file

    #vim core-site.xml
    [core-site.xml is configured as follows]
    <?xml version="1.0"?>
    <configuration>
    <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop-1</value>
    </property>
    </configuration>
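    fs.defaultFS carries no explicit port here, so the NameNode RPC address defaults to hadoop-1:8020. Every node must be able to resolve the host names hadoop-1 through hadoop-5, e.g. via /etc/hosts. The entries below are only an illustration: hadoop-1's address is taken from the job log further down, the others are placeholders to replace with your own:
    #vim /etc/hosts
    [example /etc/hosts entries, addresses are placeholders]
    10.31.133.19 hadoop-1
    10.31.133.20 hadoop-2
    10.31.133.21 hadoop-3
    10.31.133.22 hadoop-4
    10.31.133.23 hadoop-5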
  3. Edit the hdfs-site.xml configuration file

    #vim hdfs-site.xml
    [hdfs-site.xml is configured as follows]
    <?xml version="1.0"?>
    <configuration>
    <property>
    <name>dfs.replication</name>
    <value>3</value>
    </property>
    <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop-2:50090</value>
    </property>
    </configuration>
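    dfs.replication=3 means every HDFS block is stored in three copies spread over the four DataNodes, and dfs.namenode.secondary.http-address moves the SecondaryNameNode off the NameNode host onto hadoop-2. A configured value can be read back afterwards, for example:
    #hdfs getconf -confKey dfs.replication
    3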
  4. Edit the mapred-site.xml configuration file

    #cp mapred-site.xml.template mapred-site.xml
    #vim mapred-site.xml
    [mapred-site.xml is configured as follows]
    <?xml version="1.0"?>
    <configuration>
    <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
    </property>
    </configuration>
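    Without mapreduce.framework.name=yarn, MapReduce jobs would run with the local job runner instead of being submitted to the cluster; the file only ships as mapred-site.xml.template, which is why it is copied first.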
  5. Edit the yarn-site.xml configuration file

    #vim yarn-site.xml
    [yarn-site.xml is configured as follows]
    <?xml version="1.0"?>
    <configuration>
    <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop-1</value>
    </property>
    <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
    </property>
    </configuration>
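    yarn.resourcemanager.hostname places the ResourceManager (and its web UI on port 8088) on hadoop-1, and the mapreduce_shuffle auxiliary service is what lets reducers fetch map output from each NodeManager. Once the cluster is up (step 9) the registered NodeManagers can be listed with:
    #yarn node -list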
  6. Edit the slaves configuration file

    #vim slaves
    [slaves]
    hadoop-2
    hadoop-3
    hadoop-4
    hadoop-5
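    The hosts listed in slaves run the DataNode and NodeManager daemons; start-dfs.sh and start-yarn.sh log in to each of them over SSH, so passwordless SSH from hadoop-1 to every node is assumed. If it is not configured yet, a minimal sketch:
    #ssh-keygen -t rsa
    #for h in hadoop-1 hadoop-2 hadoop-3 hadoop-4 hadoop-5; do ssh-copy-id root@$h; done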
  7. Sync the configuration to the other nodes

    #scp -r /soft/hadoop/etc/full hadoop-2:/soft/hadoop/etc/
    #scp -r /soft/hadoop/etc/full hadoop-3:/soft/hadoop/etc/
    #scp -r /soft/hadoop/etc/full hadoop-4:/soft/hadoop/etc/
    #scp -r /soft/hadoop/etc/full hadoop-5:/soft/hadoop/etc/
    #ssh hadoop-2 ln -s /soft/hadoop/etc/full /soft/hadoop/etc/hadoop
    #ssh hadoop-3 ln -s /soft/hadoop/etc/full /soft/hadoop/etc/hadoop
    #ssh hadoop-4 ln -s /soft/hadoop/etc/full /soft/hadoop/etc/hadoop
    #ssh hadoop-5 ln -s /soft/hadoop/etc/full /soft/hadoop/etc/hadoop
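    The four scp/ssh pairs above can also be written as a single loop; this is equivalent and changes nothing on the nodes:
    #for h in hadoop-2 hadoop-3 hadoop-4 hadoop-5; do scp -r /soft/hadoop/etc/full $h:/soft/hadoop/etc/; ssh $h ln -s /soft/hadoop/etc/full /soft/hadoop/etc/hadoop; done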
  8. Format the HDFS distributed file system

    #hadoop namenode -format
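    Formatting is done once only, on the NameNode host (hadoop-1); re-running it later would wipe the HDFS metadata. hadoop namenode -format still works in 2.7.3 but prints a deprecation warning; the current form is:
    #hdfs namenode -format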
  9. Start the services

    [root@hadoop-1 hadoop]# start-all.sh
    This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
    Starting namenodes on [hadoop-1]
    hadoop-1: starting namenode, logging to /soft/hadoop-2.7.3/logs/hadoop-root-namenode-hadoop-1.out
    hadoop-2: starting datanode, logging to /soft/hadoop-2.7.3/logs/hadoop-root-datanode-hadoop-2.out
    hadoop-3: starting datanode, logging to /soft/hadoop-2.7.3/logs/hadoop-root-datanode-hadoop-3.out
    hadoop-4: starting datanode, logging to /soft/hadoop-2.7.3/logs/hadoop-root-datanode-hadoop-4.out
    hadoop-5: starting datanode, logging to /soft/hadoop-2.7.3/logs/hadoop-root-datanode-hadoop-5.out
    Starting secondary namenodes [hadoop-2]
    hadoop-2: starting secondarynamenode, logging to /soft/hadoop-2.7.3/logs/hadoop-root-secondarynamenode-hadoop-2.out
    starting yarn daemons
    starting resourcemanager, logging to /soft/hadoop-2.7.3/logs/yarn-root-resourcemanager-hadoop-1.out
    hadoop-3: starting nodemanager, logging to /soft/hadoop-2.7.3/logs/yarn-root-nodemanager-hadoop-3.out
    hadoop-4: starting nodemanager, logging to /soft/hadoop-2.7.3/logs/yarn-root-nodemanager-hadoop-4.out
    hadoop-2: starting nodemanager, logging to /soft/hadoop-2.7.3/logs/yarn-root-nodemanager-hadoop-2.out
    hadoop-5: starting nodemanager, logging to /soft/hadoop-2.7.3/logs/yarn-root-nodemanager-hadoop-5.out
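    A quick way to confirm that all four DataNodes have registered with the NameNode:
    #hdfs dfsadmin -report | grep 'Live datanodes'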
  10. Check that the services are running

    [root@hadoop-1 hadoop]# jps
    16358 ResourceManager
    12807 NodeManager
    16011 NameNode
    16204 SecondaryNameNode
    16623 Jps
    [jps output collected from every node]
    hadoop-5 | SUCCESS | rc=0 >>
    16993 NodeManager
    16884 DataNode
    17205 Jps
    hadoop-1 | SUCCESS | rc=0 >>
    28520 ResourceManager
    28235 NameNode
    29003 Jps
    hadoop-2 | SUCCESS | rc=0 >>
    17780 Jps
    17349 DataNode
    17529 NodeManager
    17453 SecondaryNameNode
    hadoop-4 | SUCCESS | rc=0 >>
    17105 Jps
    16875 NodeManager
    16766 DataNode
    hadoop-3 | SUCCESS | rc=0 >>
    16769 DataNode
    17121 Jps
    16878 NodeManager
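    With the configuration above the expected distribution is: NameNode and ResourceManager on hadoop-1; SecondaryNameNode plus a DataNode and NodeManager on hadoop-2; a DataNode and NodeManager on hadoop-3, hadoop-4 and hadoop-5. This matches the per-node output shown above.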
  11. Check via the web UI
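
    With the default ports of Hadoop 2.7.3 the web interfaces are:
    http://hadoop-1:50070  (NameNode, HDFS overview)
    http://hadoop-1:8088   (ResourceManager, YARN applications)
    http://hadoop-2:50090  (SecondaryNameNode, as set in hdfs-site.xml)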

2. Fully distributed word count

  1. Run the word count demo that ships with Hadoop

    #mkdir /input
    #cd /input/
    #echo "hello world" > file1.txt
    #echo "hello world" > file2.txt
    #echo "hello world" > file3.txt
    #echo "hello hadoop" > file4.txt
    #echo "hello hadoop" > file5.txt
    #echo "hello mapreduce" > file6.txt
    #echo "hello mapreduce" > file7.txt
    #hdfs dfs -mkdir /input
    #hdfs dfs -ls /
    #hadoop fs -ls /
    #hadoop fs -put /input/* /input
    #hadoop fs -ls /input
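    The put command copies the seven files from the local /input directory into the /input directory in HDFS. To spot-check one of them:
    #hadoop fs -cat /input/file1.txt
    hello world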
  2. Run the word count

    [root@hadoop-1 ~]# hadoop jar /soft/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount /input/ /output
    17/05/14 23:01:07 INFO client.RMProxy: Connecting to ResourceManager at hadoop-1/10.31.133.19:8032
    17/05/14 23:01:09 INFO input.FileInputFormat: Total input paths to process : 7
    17/05/14 23:01:10 INFO mapreduce.JobSubmitter: number of splits:7
    17/05/14 23:01:10 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1494773207391_0001
    17/05/14 23:01:10 INFO impl.YarnClientImpl: Submitted application application_1494773207391_0001
    17/05/14 23:01:11 INFO mapreduce.Job: The url to track the job: http://hadoop-1:8088/proxy/application_1494773207391_0001/
    17/05/14 23:01:11 INFO mapreduce.Job: Running job: job_1494773207391_0001
    17/05/14 23:01:23 INFO mapreduce.Job: Job job_1494773207391_0001 running in uber mode : false
    17/05/14 23:01:23 INFO mapreduce.Job: map 0% reduce 0%
    17/05/14 23:01:56 INFO mapreduce.Job: map 43% reduce 0%
    17/05/14 23:01:57 INFO mapreduce.Job: map 100% reduce 0%
    17/05/14 23:02:04 INFO mapreduce.Job: map 100% reduce 100%
    17/05/14 23:02:05 INFO mapreduce.Job: Job job_1494773207391_0001 completed successfully
    17/05/14 23:02:05 INFO mapreduce.Job: Counters: 50
    File System Counters
    FILE: Number of bytes read=184
    FILE: Number of bytes written=949365
    FILE: Number of read operations=0
    FILE: Number of large read operations=0
    FILE: Number of write operations=0
    HDFS: Number of bytes read=801
    HDFS: Number of bytes written=37
    HDFS: Number of read operations=24
    HDFS: Number of large read operations=0
    HDFS: Number of write operations=2
    Job Counters
    Killed map tasks=1
    Launched map tasks=7
    Launched reduce tasks=1
    Data-local map tasks=7
    Total time spent by all maps in occupied slots (ms)=216289
    Total time spent by all reduces in occupied slots (ms)=4827
    Total time spent by all map tasks (ms)=216289
    Total time spent by all reduce tasks (ms)=4827
    Total vcore-milliseconds taken by all map tasks=216289
    Total vcore-milliseconds taken by all reduce tasks=4827
    Total megabyte-milliseconds taken by all map tasks=221479936
    Total megabyte-milliseconds taken by all reduce tasks=4942848
    Map-Reduce Framework
    Map input records=7
    Map output records=14
    Map output bytes=150
    Map output materialized bytes=220
    Input split bytes=707
    Combine input records=14
    Combine output records=14
    Reduce input groups=4
    Reduce shuffle bytes=220
    Reduce input records=14
    Reduce output records=4
    Spilled Records=28
    Shuffled Maps =7
    Failed Shuffles=0
    Merged Map outputs=7
    GC time elapsed (ms)=3616
    CPU time spent (ms)=3970
    Physical memory (bytes) snapshot=1528823808
    Virtual memory (bytes) snapshot=16635846656
    Total committed heap usage (bytes)=977825792
    Shuffle Errors
    BAD_ID=0
    CONNECTION=0
    IO_ERROR=0
    WRONG_LENGTH=0
    WRONG_MAP=0
    WRONG_REDUCE=0
    File Input Format Counters
    Bytes Read=94
    File Output Format Counters
    Bytes Written=37
  3. View the output

    [root@hadoop-1 ~]# hadoop fs -ls /output
    Found 2 items
    -rw-r--r-- 3 root supergroup 0 2017-05-14 23:02 /output/_SUCCESS
    -rw-r--r-- 3 root supergroup 37 2017-05-14 23:02 /output/part-r-00000
    [root@hadoop-1 ~]# hadoop fs -cat /output/part-r-00000
    hadoop 2
    hello 7
    mapreduce 2
    world 3
    [root@hadoop-1 ~]#
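    The counts line up with the input: "hello" occurs once in each of the seven files, "world" in file1-3, "hadoop" in file4-5 and "mapreduce" in file6-7, which is exactly the 7/3/2/2 totals printed above.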
