Spark HelloWorld program (Scala version)
Running in local mode requires no Spark installation; just add the relevant JAR dependencies:
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.11</artifactId>
<version>2.2.0</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_2.11</artifactId>
<version>2.2.0</version>
</dependency>
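If the project is built with sbt instead of Maven, an equivalent declaration would look roughly like the sketch below (assuming scalaVersion is set to a 2.11.x release so that %% resolves to the _2.11 artifacts, matching the Maven coordinates above):

// build.sbt — sketch, same versions as the Maven dependencies above
scalaVersion := "2.11.12"
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "2.2.0",
  "org.apache.spark" %% "spark-sql"  % "2.2.0"
)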
Create the SparkSession:
val sparkUrl = "local"
val conf = new SparkConf()
//.setJars(Seq("/home/panteng/IdeaProjects/sparkscala/target/spark-scala.jar"))
.set("fs.hdfs.impl.disable.cache", "true")
.set("spark.executor.memory", "8g") val spark = SparkSession
.builder()
.appName("Spark SQL basic example")
.config(conf)
.config("spark.some.config.option", "some-value")
.master(sparkUrl)
.getOrCreate()
Load a local file:
val parquetFileDF = spark.read.parquet("/home/panteng/下载/000001_0")
//spark.read.parquet("hdfs://10.38.164.80:9000/user/root/000001_0")
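To sanity-check that the Parquet file loaded correctly, a couple of standard DataFrame calls are handy (nothing here is specific to this project):

parquetFileDF.printSchema()              // column names and types read from the Parquet footer
parquetFileDF.show(5, truncate = false)  // first few rows, untruncated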
File operations (register a temp view and query it with SQL):
parquetFileDF.createOrReplaceTempView("parquetFile")
val descDF = spark.sql("SELECT substring(description,0,3) as pre ,description FROM parquetFile LIMIT 100000")
val diffDesc = descDF.distinct().sort("description")
diffDesc.createOrReplaceTempView("pre_desc")
val zhaoshang = spark.sql("select * from pre_desc")
zhaoshang.printSchema()
Iterate over the rows:
// Note: clustering mutates driver-side state (textPre, textList, regexList); this only works
// reliably in local mode, where the foreach closure runs in the driver JVM.
zhaoshang.foreach(row => clustering(row))
val regexRdd = spark.sparkContext.parallelize(regexList)
regexRdd.repartition(1).saveAsTextFile("/home/panteng/下载/temp6")
spark.stop()
Other helper functions:
def clustering(row: Row): String = {
  try {
    var tempRegex = new Regex("null")
    if (textPre.equals(row.getAs[String]("pre"))) {
      textList = row.getAs[String]("description").replaceAll("\\d", "0") :: textList
      return "continue"
    } else {
      if (textList.size > 2) {
        tempRegex = ScalaClient.getRegex(textList)
        regexList = tempRegex :: regexList
      }
      if (row.getAs[String]("pre") != null && row.getAs[String]("description") != null) {
        textPre = row.getAs[String]("pre")
        textList = textList.dropRight(textList.size)
        textList = row.getAs[String]("description") :: textList
      }
      return "ok - " + tempRegex.toString()
    }
  } catch {
    case e: Exception => println("clustering error: " + e)
  }
  return "error"
}
package scala.learn

import top.letsgogo.rpc.ThriftProxy

import scala.util.matching.Regex

object ScalaClient {
  def main(args: Array[String]): Unit = {
    val client = ThriftProxy.client
    val seqList = List("您尾号9081的招行账户入账人民币689.00元",
      "您尾号1234的招行一卡通支出人民币11.00元",
      "您尾号2345的招行一卡通支出人民币110.00元",
      "您尾号5432的招行一卡通支出人民币200.00元",
      "您尾号5436的招行一卡通入账人民币142.00元")
    var words: List[String] = List()
    for (seq <- seqList) {
      val list = client.splitSentence(seq)
      for (wordIndex <- 0 until list.size()) {
        words = list.get(wordIndex) :: words
      }
    }
    val wordlist = words.map(word => (word, 1))
    // Approach 1: groupBy first, then map
    var genealWords: List[String] = List()
    wordlist.groupBy(_._1).map {
      case (word, list) => (word, list.size)
    }.foreach((row) => {
      (if (row._2 >= seqList.size) genealWords = row._1 :: genealWords)
    })
    val list = client.splitSentence("您尾号1234的招行一卡通支出人民币200.00元")
    val regexSeq: StringBuilder = new StringBuilder
    val specialChar = List("[", "]", "(", ")")
    for (wordIndex <- 0 until list.size()) {
      var word = list.get(wordIndex)
      if (genealWords.contains(word) && !("*".equals(word))) {
        if (specialChar.contains(word.mkString(""))) {
          word = "\\" + word
        }
        regexSeq.append(word)
      } else {
        regexSeq.append("(.*)")
      }
    }
    println(regexSeq)
    val regex = new Regex(regexSeq.mkString)
    for (seq <- seqList) {
      println(regex.findAllIn(seq).isEmpty)
    }
  }

  def getRegex(seqList: List[String]) = {
    val client = ThriftProxy.client
    var words: List[String] = List()
    for (seq <- seqList) {
      val list = client.splitSentence(seq)
      for (wordIndex <- 0 until list.size()) {
        words = list.get(wordIndex) :: words
      }
    }
    val wordlist = words.map(word => (word, 1))
    // Approach 1: groupBy first, then map
    var genealWords: List[String] = List()
    wordlist.groupBy(_._1).map {
      case (word, list) => (word, list.size)
    }.foreach((row) => {
      (if (row._2 >= seqList.size) genealWords = row._1 :: genealWords)
    })
    val list = client.splitSentence(seqList(0))
    val regexSeq: StringBuilder = new StringBuilder
    val specialChar = List("[", "]", "(", ")")
    for (wordIndex <- 0 until list.size()) {
      var word = list.get(wordIndex)
      if (genealWords.contains(word) && !("*".equals(word))) {
        if (specialChar.contains(word.mkString(""))) {
          word = "\\" + word
        }
        regexSeq.append(word)
      } else {
        // avoid appending consecutive "(.*)" groups
        if (regexSeq.size > 4) {
          val endStr = regexSeq.substring(regexSeq.size - 4, regexSeq.size)
          if (!"(.*)".equals(endStr)) {
            regexSeq.append("(.*)")
          }
        } else {
          regexSeq.append("(.*)")
        }
      }
    }
    println(regexSeq + " " + seqList.size)
    val regex = new Regex(regexSeq.mkString.replaceAll("0+", "\\\\d+"))
    //for (seq <- seqList) {
    //  println(regex.findAllIn(seq).isEmpty)
    //}
    regex
  }
}
The code above performs batch regex extraction from the data.
Overwrite the output directory:
spark.hadoop.validateOutputSpecs false
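This key can be passed with --conf on spark-submit, or set on the SparkConf used earlier; a one-line sketch using the key shown above:

conf.set("spark.hadoop.validateOutputSpecs", "false") // skip the "output path already exists" check when overwriting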
When running map on a Dataset, an Encoder must be defined, otherwise compilation fails. For some types DataTypes does not provide one, so the only option is to convert to an RDD, do the map there, and then convert the RDD back to a DataFrame.
val schema = StructType(Seq(
StructField("pre", StringType),
StructField("description", StringType)
))
val encoder = RowEncoder(schema)
val replaceRdd = diffDesc.map(row => myReplace(row))(encoder).sort("description")
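When no suitable encoder exists, the RDD round-trip mentioned above looks roughly like the sketch below; myReplace and schema are the names already used in this example:

val replacedRdd = diffDesc.rdd.map(row => myReplace(row))       // plain RDD[Row], no encoder needed
val replacedDF = spark.createDataFrame(replacedRdd, schema)     // back to a DataFrame with the explicit schema
  .sort("description")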
Submit the job:
./spark-2.2.0-bin-hadoop2.7/bin/spark-submit --name panteng --num-executors 100 --executor-cores 4 ./spark-scala.jar spark://dommain:7077
Suppress some of the logging:
// Logger.getLogger("org.apache.spark").setLevel(Level.ERROR)
// Logger.getLogger("org.eclipse.jetty.server").setLevel(Level.OFF)
// spark.sparkContext.setLogLevel("WARN")
Common configuration:
spark-submit --java 8 \
--cluster xxx --master yarn-cluster \
--class xx.xx.xx.xx.Xxx \
--queue default \
--conf spark.yarn.appMasterEnv.JAVA_HOME=/opt/soft/jdk1.8.0 \
--conf spark.executorEnv.JAVA_HOME=/opt/soft/jdk1.8.0 \
--conf spark.yarn.user.classpath.first=true \
--num-executors 128 \
--conf spark.yarn.job.owners=panteng \
--conf spark.executor.memory=10G \
--conf spark.dynamicAllocation.enabled=true \
--conf spark.shuffle.service.enabled=true \
--conf spark.dynamicAllocation.minExecutors=2 \
--conf spark.yarn.executor.memoryOverhead=4000 \
--conf spark.yarn.driver.memoryOverhead=6000 \
--conf spark.driver.memory=10G \
--conf spark.driver.maxResultSize=4G \
--conf spark.rpc.message.maxSize=512 \
--driver-class-path hdfs://c3prc-hadoop/tmp/u_panteng/lda-lib/guava-14.0.1.jar \
xx-1.0-SNAPSHOT.jar parm1 parm2