Local single-node Hadoop deployment on Windows. The configuration files and scripts were modified as follows; only the key settings and steps are recorded, for reference only.

  • hadoop-2.6.5
  • spark-2.3.3

1. Configuration file core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.data.dir</name>
    <value>file:/D:/02_bigdata/hadoop-2.6.5/data</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>${hadoop.data.dir}</value>
  </property>
  <property>
    <name>hadoop.http.staticuser.user</name>
    <value>${user.name}</value>
  </property>
</configuration>
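
hadoop.data.dir is a user-defined property here; the ${hadoop.data.dir} references expand against it. A quick way to confirm the substitution resolves as intended (a sketch, assuming %HADOOP_HOME%\bin is on PATH; it should print file:/D:/02_bigdata/hadoop-2.6.5/data):

hdfs getconf -confKey hadoop.tmp.dir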

2. Configuration file hdfs-site.xml

<configuration>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>0.0.0.0:50070</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>${hadoop.data.dir}/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>${hadoop.data.dir}/dfs/data</value>
  </property>
</configuration>
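
With a single DataNode, dfs.replication has to stay at 1, otherwise every block is reported under-replicated. After HDFS is up (section 7), the NameNode web UI at http://localhost:50070 shows the live DataNode; the same can be spot-checked from the command line (a sketch):

hdfs getconf -confKey dfs.replication
hdfs dfsadmin -report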

3. Configuration file mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
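
yarn.log.server.url in section 4 points at port 19888, the default web port of the MapReduce JobHistory Server, so nothing more is required here. If you prefer the addresses to be explicit, an optional sketch that merely restates the defaults on localhost:

<property>
  <name>mapreduce.jobhistory.address</name>
  <value>localhost:10020</value>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>localhost:19888</value>
</property>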

4. Configuration file yarn-site.xml

<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>512</value>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>2592000</value>
  </property>
  <property>
    <name>yarn.log.server.url</name>
    <value>http://localhost:19888/jobhistory/logs</value>
  </property>
  <property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>hdfs://localhost:9000/user/merit/yarn-logs/</value>
  </property>
  <property>
    <name>yarn.nodemanager.address</name>
    <value>localhost:8041</value>
  </property>
</configuration>
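
Because yarn.log-aggregation-enable is on and yarn.nodemanager.remote-app-log-dir points into HDFS, the logs of a finished application can be pulled back from the command line with yarn logs (a sketch; substitute a real application ID, e.g. from the ResourceManager web UI at http://localhost:8088):

yarn logs -applicationId application_1655532552208_0001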

5. Environment script hadoop-env.cmd

@rem Set JAVA_HOME
set JAVA_HOME=D:\06_devptools\jdk1_8_0_73
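
A common Windows pitfall: the scripts break when JAVA_HOME contains spaces (e.g. under Program Files). A hedged workaround is the 8.3 short name (the short form below is a placeholder and can differ per machine; check it with dir /x):

@rem set JAVA_HOME=C:\PROGRA~1\Java\jdk1.8.0_73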

6. Environment script yarn-env.cmd

@rem Set per-daemon log file names for the YARN components
set YARN_RESOURCEMANAGER_OPTS=-Dyarn.log.file=YARN-RESOURCEMANAGER.log -Dhadoop.log.file=YARN-RESOURCEMANAGER.log
set HADOOP_NODEMANAGER_OPTS=-Dyarn.log.file=YARN-NODEMANAGER.log -Dhadoop.log.file=YARN-NODEMANAGER.log
set HADOOP_HISTORYSERVER_OPTS=-Dyarn.log.file=YARN-HISTORYSERVER.log -Dhadoop.log.file=YARN-HISTORYSERVER.log

7. Startup script start-dfs.cmd

@rem Set Path
set PATH=%HADOOP_HOME%\bin;%PATH%
start "Apache Hadoop Distribution" hadoop namenode
start "Apache Hadoop Distribution" hadoop datanode

8. Startup script start-yarn.cmd

@rem Set Path
set PATH=%HADOOP_HOME%\bin;%PATH%
@rem start resourceManager
start "Apache Hadoop Distribution" yarn resourcemanager
@rem start nodeManager
start "Apache Hadoop Distribution" yarn nodemanager @rem 修改默认的historyserver启动脚本,将yarn historyserver改为mapred historyserver
@rem start historyserver
start "Apache Hadoop Distribution" mapred historyserver

9. Test: run a MapReduce job on YARN

C:\Windows\system32>D:\02_bigdata\hadoop-2.6.5\bin\hadoop jar D:\02_bigdata\hadoop-2.6.5\share\hadoop\mapreduce\hadoop-mapreduce-examples-2.6.5.jar pi 12 3
Number of Maps = 12
Samples per Map = 3
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Wrote input for Map #10
Wrote input for Map #11
Starting Job
22/06/18 14:09:30 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
22/06/18 14:09:30 INFO input.FileInputFormat: Total input paths to process : 12
22/06/18 14:09:30 INFO mapreduce.JobSubmitter: number of splits:12
22/06/18 14:09:30 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1655532552208_0001
22/06/18 14:09:30 INFO impl.YarnClientImpl: Submitted application application_1655532552208_0001
22/06/18 14:09:30 INFO mapreduce.Job: The url to track the job: http://LAPTOP-TC4A0SCV:8088/proxy/application_1655532552208_0001/
22/06/18 14:09:30 INFO mapreduce.Job: Running job: job_1655532552208_0001
22/06/18 14:09:43 INFO mapreduce.Job: Job job_1655532552208_0001 running in uber mode : false
22/06/18 14:09:43 INFO mapreduce.Job: map 0% reduce 0%
22/06/18 14:09:55 INFO mapreduce.Job: map 8% reduce 0%
22/06/18 14:09:58 INFO mapreduce.Job: map 17% reduce 0%
22/06/18 14:10:02 INFO mapreduce.Job: map 25% reduce 0%
22/06/18 14:10:05 INFO mapreduce.Job: map 42% reduce 0%
22/06/18 14:10:06 INFO mapreduce.Job: map 50% reduce 0%
22/06/18 14:10:08 INFO mapreduce.Job: map 58% reduce 0%
22/06/18 14:10:16 INFO mapreduce.Job: map 58% reduce 19%
22/06/18 14:10:21 INFO mapreduce.Job: map 67% reduce 19%
22/06/18 14:10:23 INFO mapreduce.Job: map 75% reduce 19%
22/06/18 14:10:24 INFO mapreduce.Job: map 75% reduce 25%
22/06/18 14:10:25 INFO mapreduce.Job: map 100% reduce 25%
22/06/18 14:10:27 INFO mapreduce.Job: map 100% reduce 100%
22/06/18 14:10:27 INFO mapreduce.Job: Job job_1655532552208_0001 completed successfully
22/06/18 14:10:27 INFO mapreduce.Job: Counters: 49
    File System Counters
        FILE: Number of bytes read=270
        FILE: Number of bytes written=1413432
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=3182
        HDFS: Number of bytes written=215
        HDFS: Number of read operations=51
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=3
    Job Counters
        Launched map tasks=12
        Launched reduce tasks=1
        Rack-local map tasks=12
        Total time spent by all maps in occupied slots (ms)=321884
        Total time spent by all reduces in occupied slots (ms)=53392
        Total time spent by all map tasks (ms)=160942
        Total time spent by all reduce tasks (ms)=26696
        Total vcore-milliseconds taken by all map tasks=160942
        Total vcore-milliseconds taken by all reduce tasks=26696
        Total megabyte-milliseconds taken by all map tasks=164804608
        Total megabyte-milliseconds taken by all reduce tasks=27336704
    Map-Reduce Framework
        Map input records=12
        Map output records=24
        Map output bytes=216
        Map output materialized bytes=336
        Input split bytes=1766
        Combine input records=0
        Combine output records=0
        Reduce input groups=2
        Reduce shuffle bytes=336
        Reduce input records=24
        Reduce output records=0
        Spilled Records=48
        Shuffled Maps =12
        Failed Shuffles=0
        Merged Map outputs=12
        GC time elapsed (ms)=877
        CPU time spent (ms)=8426
        Physical memory (bytes) snapshot=3624378368
        Virtual memory (bytes) snapshot=4172316672
        Total committed heap usage (bytes)=2562195456
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=1416
    File Output Format Counters
        Bytes Written=97
Job Finished in 57.683 seconds
Estimated value of Pi is 3.44444444444444444444
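
The estimate is crude because the job ran with only 3 samples per map; the example's arguments are pi <nMaps> <nSamplesPerMap>. Raising the second argument tightens the result at the cost of runtime, e.g.:

hadoop jar D:\02_bigdata\hadoop-2.6.5\share\hadoop\mapreduce\hadoop-mapreduce-examples-2.6.5.jar pi 12 1000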

10. Test: run a Spark job on YARN

D:\02_bigdata\spark-2.3.3-bin-hadoop2.6>bin\spark-submit.cmd --master yarn --deploy-mode cluster --class org.apache.spark.examples.SparkPi examples\jars\spark-examples_2.11-2.3.3.jar 122
22/06/18 14:11:15 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
22/06/18 14:11:15 INFO RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
22/06/18 14:11:15 INFO Client: Requesting a new application from cluster with 1 NodeManagers
22/06/18 14:11:15 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
22/06/18 14:11:15 INFO Client: Will allocate AM container, with 1408 MB memory including 384 MB overhead
22/06/18 14:11:15 INFO Client: Setting up container launch context for our AM
22/06/18 14:11:15 INFO Client: Setting up the launch environment for our AM container
22/06/18 14:11:15 INFO Client: Preparing resources for our AM container
22/06/18 14:11:16 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
22/06/18 14:11:20 INFO Client: Uploading resource file:/C:/Users/merit/AppData/Local/Temp/spark-c2946869-0b04-4a44-97b6-b8389f691999/__spark_libs__3819455099437137527.zip -> file:/C:/Users/merit/.sparkStaging/application_1655532552208_0002/__spark_libs__3819455099437137527.zip
22/06/18 14:11:21 INFO Client: Uploading resource file:/D:/02_bigdata/spark-2.3.3-bin-hadoop2.6/examples/jars/spark-examples_2.11-2.3.3.jar -> file:/C:/Users/merit/.sparkStaging/application_1655532552208_0002/spark-examples_2.11-2.3.3.jar
22/06/18 14:11:22 INFO Client: Uploading resource file:/C:/Users/merit/AppData/Local/Temp/spark-c2946869-0b04-4a44-97b6-b8389f691999/__spark_conf__1079735780404125589.zip -> file:/C:/Users/merit/.sparkStaging/application_1655532552208_0002/__spark_conf__.zip
22/06/18 14:11:22 INFO SecurityManager: Changing view acls to: merit
22/06/18 14:11:22 INFO SecurityManager: Changing modify acls to: merit
22/06/18 14:11:22 INFO SecurityManager: Changing view acls groups to:
22/06/18 14:11:22 INFO SecurityManager: Changing modify acls groups to:
22/06/18 14:11:22 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(merit); groups with view permissions: Set(); users with modify permissions: Set(merit); groups with modify permissions: Set()
22/06/18 14:11:22 INFO Client: Submitting application application_1655532552208_0002 to ResourceManager
22/06/18 14:11:22 INFO YarnClientImpl: Submitted application application_1655532552208_0002
22/06/18 14:11:23 INFO Client: Application report for application_1655532552208_0002 (state: ACCEPTED)
22/06/18 14:11:23 INFO Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1655532682804
final status: UNDEFINED
tracking URL: http://LAPTOP-TC4A0SCV:8088/proxy/application_1655532552208_0002/
user: merit
22/06/18 14:11:24 INFO Client: Application report for application_1655532552208_0002 (state: ACCEPTED)
22/06/18 14:11:25 INFO Client: Application report for application_1655532552208_0002 (state: ACCEPTED)
22/06/18 14:11:26 INFO Client: Application report for application_1655532552208_0002 (state: ACCEPTED)
22/06/18 14:11:27 INFO Client: Application report for application_1655532552208_0002 (state: ACCEPTED)
22/06/18 14:11:28 INFO Client: Application report for application_1655532552208_0002 (state: ACCEPTED)
22/06/18 14:11:29 INFO Client: Application report for application_1655532552208_0002 (state: ACCEPTED)
22/06/18 14:11:30 INFO Client: Application report for application_1655532552208_0002 (state: ACCEPTED)
22/06/18 14:11:31 INFO Client: Application report for application_1655532552208_0002 (state: ACCEPTED)
22/06/18 14:11:32 INFO Client: Application report for application_1655532552208_0002 (state: ACCEPTED)
22/06/18 14:11:33 INFO Client: Application report for application_1655532552208_0002 (state: ACCEPTED)
22/06/18 14:11:34 INFO Client: Application report for application_1655532552208_0002 (state: ACCEPTED)
22/06/18 14:11:35 INFO Client: Application report for application_1655532552208_0002 (state: RUNNING)
22/06/18 14:11:35 INFO Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: 191.168.2.78
ApplicationMaster RPC port: 0
queue: default
start time: 1655532682804
final status: UNDEFINED
tracking URL: http://LAPTOP-TC4A0SCV:8088/proxy/application_1655532552208_0002/
user: merit
22/06/18 14:11:36 INFO Client: Application report for application_1655532552208_0002 (state: RUNNING)
22/06/18 14:11:37 INFO Client: Application report for application_1655532552208_0002 (state: RUNNING)
22/06/18 14:11:38 INFO Client: Application report for application_1655532552208_0002 (state: RUNNING)
22/06/18 14:11:39 INFO Client: Application report for application_1655532552208_0002 (state: RUNNING)
22/06/18 14:11:40 INFO Client: Application report for application_1655532552208_0002 (state: RUNNING)
22/06/18 14:11:41 INFO Client: Application report for application_1655532552208_0002 (state: RUNNING)
22/06/18 14:11:42 INFO Client: Application report for application_1655532552208_0002 (state: RUNNING)
22/06/18 14:11:44 INFO Client: Application report for application_1655532552208_0002 (state: RUNNING)
22/06/18 14:11:45 INFO Client: Application report for application_1655532552208_0002 (state: RUNNING)
22/06/18 14:11:46 INFO Client: Application report for application_1655532552208_0002 (state: RUNNING)
22/06/18 14:11:47 INFO Client: Application report for application_1655532552208_0002 (state: RUNNING)
22/06/18 14:11:48 INFO Client: Application report for application_1655532552208_0002 (state: RUNNING)
22/06/18 14:11:49 INFO Client: Application report for application_1655532552208_0002 (state: FINISHED)
22/06/18 14:11:49 INFO Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: 191.168.2.78
ApplicationMaster RPC port: 0
queue: default
start time: 1655532682804
final status: SUCCEEDED
tracking URL: http://LAPTOP-TC4A0SCV:8088/proxy/application_1655532552208_0002/
user: merit
22/06/18 14:11:49 INFO ShutdownHookManager: Shutdown hook called
22/06/18 14:11:49 INFO ShutdownHookManager: Deleting directory C:\Users\merit\AppData\Local\Temp\spark-0282c910-523b-495d-bae8-f42d4559dac2
22/06/18 14:11:49 INFO ShutdownHookManager: Deleting directory C:\Users\merit\AppData\Local\Temp\spark-c2946869-0b04-4a44-97b6-b8389f691999
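
The "Neither spark.yarn.jars nor spark.yarn.archive is set" warning means every submit re-uploads the whole Spark library zip to the staging directory. To avoid that, stage the jars in HDFS once and point spark.yarn.jars at them in conf\spark-defaults.conf (a sketch; the /spark/jars path is an arbitrary choice):

hdfs dfs -mkdir -p /spark/jars
hdfs dfs -put D:\02_bigdata\spark-2.3.3-bin-hadoop2.6\jars\* /spark/jars/

spark-defaults.conf:
spark.yarn.jars hdfs://localhost:9000/spark/jars/*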
