In local mode there is no need to install Spark; adding the relevant JAR dependencies is enough (the _2.11 suffix must match the project's Scala version):

    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.11</artifactId>
        <version>2.2.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.11</artifactId>
        <version>2.2.0</version>
    </dependency>

Create the Spark session:

    import org.apache.spark.SparkConf
    import org.apache.spark.sql.SparkSession

    val sparkUrl = "local"
    val conf = new SparkConf()
      //.setJars(Seq("/home/panteng/IdeaProjects/sparkscala/target/spark-scala.jar"))
      // get a fresh HDFS FileSystem instance instead of a cached one
      .set("fs.hdfs.impl.disable.cache", "true")
      .set("spark.executor.memory", "8g")
    val spark = SparkSession
      .builder()
      .appName("Spark SQL basic example")
      .config(conf)
      .config("spark.some.config.option", "some-value")
      .master(sparkUrl)
      .getOrCreate()

Load a local file:

    val parquetFileDF = spark.read.parquet("/home/panteng/下载/000001_0")
    //spark.read.parquet("hdfs://10.38.164.80:9000/user/root/000001_0")

Query the file: register a temp view, then take the first three characters of description as a grouping prefix:

    parquetFileDF.createOrReplaceTempView("parquetFile")

    val descDF = spark.sql("SELECT substring(description,0,3) as pre ,description FROM parquetFile LIMIT 100000")
    val diffDesc = descDF.distinct().sort("description")
    diffDesc.createOrReplaceTempView("pre_desc")
    val zhaoshang = spark.sql("select * from pre_desc")
    zhaoshang.printSchema()

Iterate and process (note: clustering mutates driver-side state, which only works because local mode runs tasks in the driver JVM):

    zhaoshang.foreach(row => clustering(row))
    val regexRdd = spark.sparkContext.parallelize(regexList)
    // repartition(1) writes a single output file
    regexRdd.repartition(1).saveAsTextFile("/home/panteng/下载/temp6")
    spark.stop()
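On a real cluster the same traversal would need to avoid executor-side mutation; one minimal sketch (an assumption, not the author's code), valid when the distinct rows fit in driver memory:

    // Cluster-safe variant: collect rows to the driver before mutating state
    zhaoshang.collect().foreach(row => clustering(row))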

Supporting functions:

    // Groups consecutive rows sharing the same 3-char prefix; when the prefix
    // changes, derives a regex from the accumulated descriptions.
    // Relies on the mutable fields textPre, textList and regexList (see below).
    def clustering(row: Row): String = {
      try {
        var tempRegex = new Regex("null")
        if (textPre.equals(row.getAs[String]("pre"))) {
          // Same prefix: normalize digits to 0 and keep accumulating
          textList = row.getAs[String]("description").replaceAll("\\d", "0") :: textList
          return "continue"
        } else {
          // Prefix changed: derive a regex if the group is big enough
          if (textList.size > 2) {
            tempRegex = ScalaClient.getRegex(textList)
            regexList = tempRegex :: regexList
          }
          // Start a new group with the current row
          if (row.getAs[String]("pre") != null && row.getAs[String]("description") != null) {
            textPre = row.getAs[String]("pre")
            textList = textList.dropRight(textList.size)
            textList = row.getAs[String]("description") :: textList
          }
          return "ok - " + tempRegex.toString()
        }
      } catch {
        case e: Exception => println("clustering failed: " + e)
      }
      "error"
    }
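clustering references three mutable fields defined elsewhere; plausible declarations (assumed, not shown in the original) look like:

    import scala.util.matching.Regex

    // Assumed driver-side state used by clustering (not in the original snippet)
    var textPre: String = ""             // prefix of the group being accumulated
    var textList: List[String] = List()  // digit-normalized descriptions in the group
    var regexList: List[Regex] = List()  // regexes derived so far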
    package scala.learn

    import top.letsgogo.rpc.ThriftProxy

    import scala.util.matching.Regex

    object ScalaClient {
      def main(args: Array[String]): Unit = {
        val client = ThriftProxy.client
        val seqList = List("您尾号9081的招行账户入账人民币689.00元",
          "您尾号1234的招行一卡通支出人民币11.00元",
          "您尾号2345的招行一卡通支出人民币110.00元",
          "您尾号5432的招行一卡通支出人民币200.00元",
          "您尾号5436的招行一卡通入账人民币142.00元")
        // Segment every sentence and collect all tokens
        var words: List[String] = List()
        for (seq <- seqList) {
          val list = client.splitSentence(seq)
          for (wordIndex <- 0 until list.size()) {
            words = list.get(wordIndex) :: words
          }
        }
        val wordlist = words.map(word => (word, 1))
        // Approach 1: groupBy first, then map; a token occurring in every
        // sentence is treated as a fixed part of the pattern
        var generalWords: List[String] = List()
        wordlist.groupBy(_._1).map {
          case (word, list) => (word, list.size)
        }.foreach(row =>
          if (row._2 >= seqList.size) generalWords = row._1 :: generalWords
        )
        val list = client.splitSentence("您尾号1234的招行一卡通支出人民币200.00元")
        val regexSeq: StringBuilder = new StringBuilder
        val specialChar = List("[", "]", "(", ")")
        for (wordIndex <- 0 until list.size()) {
          var word = list.get(wordIndex)
          if (generalWords.contains(word) && !("*".equals(word))) {
            // Escape regex metacharacters before appending fixed tokens
            if (specialChar.contains(word)) {
              word = "\\" + word
            }
            regexSeq.append(word)
          } else {
            // Variable token: replace with a capture group
            regexSeq.append("(.*)")
          }
        }
        println(regexSeq)
        val regex = new Regex(regexSeq.mkString)
        for (seq <- seqList) {
          // false means the regex matched the sentence
          println(regex.findAllIn(seq).isEmpty)
        }
      }

      def getRegex(seqList: List[String]): Regex = {
        val client = ThriftProxy.client
        var words: List[String] = List()
        for (seq <- seqList) {
          val list = client.splitSentence(seq)
          for (wordIndex <- 0 until list.size()) {
            words = list.get(wordIndex) :: words
          }
        }
        val wordlist = words.map(word => (word, 1))
        // Tokens that occur in every sentence are kept verbatim
        var generalWords: List[String] = List()
        wordlist.groupBy(_._1).map {
          case (word, list) => (word, list.size)
        }.foreach(row =>
          if (row._2 >= seqList.size) generalWords = row._1 :: generalWords
        )
        val list = client.splitSentence(seqList(0))
        val regexSeq: StringBuilder = new StringBuilder
        val specialChar = List("[", "]", "(", ")")
        for (wordIndex <- 0 until list.size()) {
          var word = list.get(wordIndex)
          if (generalWords.contains(word) && !("*".equals(word))) {
            if (specialChar.contains(word)) {
              word = "\\" + word
            }
            regexSeq.append(word)
          } else {
            // Avoid emitting two consecutive (.*) groups
            if (regexSeq.size > 4) {
              val endStr = regexSeq.substring(regexSeq.size - 4, regexSeq.size)
              if (!"(.*)".equals(endStr)) {
                regexSeq.append("(.*)")
              }
            } else {
              regexSeq.append("(.*)")
            }
          }
        }
        println(regexSeq + " " + seqList.size)
        // Digits were normalized to 0 upstream, so runs of 0 become \d+
        val regex = new Regex(regexSeq.mkString.replaceAll("0+", "\\\\d+"))
        //for (seq <- seqList) {
        //  println(regex.findAllIn(seq).isEmpty)
        //}
        regex
      }
    }
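A minimal usage sketch for getRegex, assuming the ThriftProxy segmentation service is reachable; inputs are digit-normalized first, mirroring what clustering does:

    // Derive one regex from a group of similar SMS texts, then test it
    val texts = List(
      "您尾号1234的招行一卡通支出人民币11.00元",
      "您尾号2345的招行一卡通支出人民币110.00元",
      "您尾号5432的招行一卡通支出人民币200.00元"
    ).map(_.replaceAll("\\d", "0"))
    val regex = ScalaClient.getRegex(texts)
    println(regex.findFirstIn("您尾号5436的招行一卡通支出人民币142.00元").isDefined)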

This is how regexes are extracted from data in batches.

Allowing the output directory to be overwritten:

    spark.hadoop.validateOutputSpecs false
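The same setting can be applied programmatically when building the session; a minimal sketch, mirroring the local-mode setup above:

    // Allow overwriting an existing output directory (e.g. for saveAsTextFile)
    val spark = SparkSession.builder()
      .config("spark.hadoop.validateOutputSpecs", "false")
      .master("local")
      .getOrCreate()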

When running map on a Dataset, an encoder must be defined, otherwise compilation fails. For some types, however, DataTypes provides nothing suitable; the only way out is to convert to an RDD, do the map there, and then turn the RDD back into a DataFrame (see the sketch after the code below).

    import org.apache.spark.sql.catalyst.encoders.RowEncoder
    import org.apache.spark.sql.types.{StringType, StructField, StructType}

    val schema = StructType(Seq(
      StructField("pre", StringType),
      StructField("description", StringType)
    ))
    val encoder = RowEncoder(schema)
    val replaceRdd = diffDesc.map(row => myReplace(row))(encoder).sort("description")
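When no encoder is available at all, a sketch of the RDD round trip, reusing the schema above (myReplace is the author's row transform):

    // Alternative: map over the underlying RDD, then rebuild a DataFrame
    val replacedRdd = diffDesc.rdd.map(row => myReplace(row))
    val replacedDF = spark.createDataFrame(replacedRdd, schema)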
Job submission:

    ./spark-2.2.0-bin-hadoop2.7/bin/spark-submit --name panteng --num-executors 100 --executor-cores 4 ./spark-scala.jar spark://dommain:7077

Suppressing part of the log output:
    // Logger.getLogger("org.apache.spark").setLevel(Level.ERROR)
    // Logger.getLogger("org.eclipse.jetty.server").setLevel(Level.OFF)
    // spark.sparkContext.setLogLevel("WARN")
 
Commonly used configuration (note: --java and --cluster are not standard spark-submit flags; they come from a site-specific submit wrapper):

    spark-submit --java 8 \
      --cluster xxx --master yarn-cluster \
      --class xx.xx.xx.xx.Xxx \
      --queue default \
      --conf spark.yarn.appMasterEnv.JAVA_HOME=/opt/soft/jdk1.8.0 \
      --conf spark.executorEnv.JAVA_HOME=/opt/soft/jdk1.8.0 \
      --conf spark.yarn.user.classpath.first=true \
      --num-executors 128 \
      --conf spark.yarn.job.owners=panteng \
      --conf spark.executor.memory=10G \
      --conf spark.dynamicAllocation.enabled=true \
      --conf spark.shuffle.service.enabled=true \
      --conf spark.dynamicAllocation.minExecutors=2 \
      --conf spark.yarn.executor.memoryOverhead=4000 \
      --conf spark.yarn.driver.memoryOverhead=6000 \
      --conf spark.driver.memory=10G \
      --conf spark.driver.maxResultSize=4G \
      --conf spark.rpc.message.maxSize=512 \
      --driver-class-path hdfs://c3prc-hadoop/tmp/u_panteng/lda-lib/guava-14.0.1.jar \
      xx-1.0-SNAPSHOT.jar param1 param2
