Much of this material has been covered in earlier posts in this series and is not repeated here; for details, see https://www.cnblogs.com/ratels/p/10970905.html

Create and edit the configuration file conf/spark.conf:

cp conf/spark.conf.template conf/spark.conf

Following https://github.com/Intel-bigdata/HiBench/blob/master/docs/run-sparkbench.md, set the properties to the values below:

   # Spark home
   hibench.spark.home      /opt/cloudera/parcels/CDH--.cdh5./lib/spark

   # Spark master
   #   standalone mode: spark://xxx:7077
   #   YARN mode: yarn-client
   hibench.spark.master    yarn-client
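
Note that the Spark home path above is truncated in this transcript; on an actual CDH cluster the parcel directory carries the full version string. A filled-in example, assuming CDH 5.14.2 (the exact parcel suffix varies, so check /opt/cloudera/parcels on your nodes):

   # Spark home -- parcel directory name below is illustrative for CDH 5.14.2
   hibench.spark.home      /opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/spark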

Run the prepare script:

 bin/workloads/micro/wordcount/prepare/prepare.sh

Console output:

[root@node1 prepare]# ./prepare.sh
patching args=
Parsing conf: /home/cf/app/HiBench-master/conf/hadoop.conf
Parsing conf: /home/cf/app/HiBench-master/conf/hibench.conf
Parsing conf: /home/cf/app/HiBench-master/conf/spark.conf
Parsing conf: /home/cf/app/HiBench-master/conf/workloads/micro/wordcount.conf
probe -.cdh5./lib/hadoop/../../jars/hadoop-mapreduce-client-jobclient--cdh5.14.2-tests.jar
start HadoopPrepareWordcount bench
hdfs -.cdh5./bin/hadoop --config /etc/hadoop/conf.cloudera.yarn fs -rm -r -skipTrash hdfs://node1:8020/HiBench/Wordcount/Input
Deleted hdfs://node1:8020/HiBench/Wordcount/Input
Submit MapReduce Job: /opt/cloudera/parcels/CDH--.cdh5./bin/hadoop --config /etc/hadoop/conf.cloudera.yarn jar /opt/cloudera/parcels/CDH--.cdh5./lib/hadoop/../../jars/hadoop-mapreduce-examples--cdh5. -D mapreduce.randomtextwriter.bytespermap= -D mapreduce.job.maps= -D mapreduce.job.reduces= hdfs://node1:8020/HiBench/Wordcount/Input
The job took  seconds.
finish HadoopPrepareWordcount bench
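
Before launching the Spark workload, you can confirm that the input data landed in HDFS; this is a plain HDFS command, not part of HiBench:

   hadoop fs -du -s -h hdfs://node1:8020/HiBench/Wordcount/Input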

Then run the Spark workload script:

bin/workloads/micro/wordcount/spark/run.sh

Console output:

[root@node1 spark]# ./run.sh
patching args=
Parsing conf: /home/cf/app/HiBench-master/conf/hadoop.conf
Parsing conf: /home/cf/app/HiBench-master/conf/hibench.conf
Parsing conf: /home/cf/app/HiBench-master/conf/spark.conf
Parsing conf: /home/cf/app/HiBench-master/conf/workloads/micro/wordcount.conf
probe -.cdh5./lib/hadoop/../../jars/hadoop-mapreduce-client-jobclient--cdh5.14.2-tests.jar
start ScalaSparkWordcount bench
hdfs -.cdh5./bin/hadoop --config /etc/hadoop/conf.cloudera.yarn fs -rm -r -skipTrash hdfs://node1:8020/HiBench/Wordcount/Output
Deleted hdfs://node1:8020/HiBench/Wordcount/Output
hdfs -.cdh5./bin/hadoop --config /etc/hadoop/conf.cloudera.yarn fs -du -s hdfs://node1:8020/HiBench/Wordcount/Input
Export env: SPARKBENCH_PROPERTIES_FILES=/home/cf/app/HiBench-master/report/wordcount/spark/conf/sparkbench/sparkbench.conf
Export env: HADOOP_CONF_DIR=/etc/hadoop/conf.cloudera.yarn
Submit Spark job: /opt/cloudera/parcels/CDH--.cdh5./lib/spark/bin/spark-submit  --properties- --executor-cores  --executor-memory 4g /home/cf/app/HiBench-master/sparkbench/assembly/target/sparkbench-assembly-7.1-SNAPSHOT-dist.jar hdfs://node1:8020/HiBench/Wordcount/Input hdfs://node1:8020/HiBench/Wordcount/Output
// :: INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
finish ScalaSparkWordcount bench
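
The workload itself is the classic word count. Below is a minimal Scala sketch of what ScalaSparkWordcount essentially computes; it is not HiBench's actual source, and the paths simply mirror the HDFS locations used above:

   import org.apache.spark.{SparkConf, SparkContext}

   object WordCountSketch {
     def main(args: Array[String]): Unit = {
       // Application name only; the master (e.g. yarn-client) comes from spark-submit
       val sc = new SparkContext(new SparkConf().setAppName("ScalaWordCount"))

       // Read the RandomTextWriter output generated by prepare.sh
       val lines = sc.textFile("hdfs://node1:8020/HiBench/Wordcount/Input")

       // Split lines into words, count each word, and write the result back to HDFS
       lines.flatMap(_.split("\\s+"))
         .map(word => (word, 1))
         .reduceByKey(_ + _)
         .saveAsTextFile("hdfs://node1:8020/HiBench/Wordcount/Output")

       sc.stop()
     }
   }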

View report/hibench.report. Throughput(bytes/s) is Input_data_size divided by Duration(s), and Throughput/node is that value divided by the number of cluster nodes:

Type         Date       Time     Input_data_size      Duration(s)          Throughput(bytes/s)  Throughput/node
HadoopWordcount -- ::
ScalaSparkWordcount -- ::                                                   

Several files are produced under report/wordcount/spark: monitor.log is the raw monitoring log; bench.log contains the scheduler.DAGScheduler and scheduler.TaskSetManager messages; monitor.html visualizes the system's performance metrics; and conf/wordcount.conf, conf/sparkbench/spark.conf, and conf/sparkbench/sparkbench.conf capture the configuration used for this run.

monitor.html includes charts such as the Memory usage heatmap.

According to the official documentation (https://github.com/Intel-bigdata/HiBench/blob/master/docs/run-sparkbench.md), you can also change hibench.scale.profile to adjust the scale of the test data, change hibench.default.map.parallelism and hibench.default.shuffle.parallelism to tune parallelism, and change hibench.yarn.executor.num, hibench.yarn.executor.cores, spark.executor.memory, and spark.driver.memory to control the number of Spark executors, the cores per executor, executor memory, and driver memory.
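
As a sketch, the relevant entries might look like the following; the property names come from the HiBench documentation, while the values are purely illustrative and should be sized to your cluster:

   # conf/hibench.conf -- values below are illustrative
   hibench.scale.profile                  large
   hibench.default.map.parallelism        8
   hibench.default.shuffle.parallelism    8

   # conf/spark.conf -- values below are illustrative
   hibench.yarn.executor.num      4
   hibench.yarn.executor.cores    4
   spark.executor.memory          4g
   spark.driver.memory            4g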
