Environment used in this post

CentOS server
Jupyter with the spylon-kernel Scala kernel
spark-2.4.0
scala-2.11.12
hadoop-2.6.0

What this post covers

  • Reading Hive table data with Spark: querying the table directly with SQL, reading it through its underlying HDFS files, and reading partitioned tables.
  • Initializing the SparkSession from a Jupyter cell.
  • A complete example of extracting HDFS files with Spark at the end of the post.

Jupyter configuration

  • In a Jupyter cell we can initialize the Spark session with whatever settings we need, as in the example below.
%%init_spark
launcher.master = "local[*]"
launcher.conf.spark.app.name = "BDP-xw"
launcher.conf.spark.driver.cores = 2
launcher.conf.spark.num_executors = 3
launcher.conf.spark.executor.cores = 4
launcher.conf.spark.driver.memory = '4g'
launcher.conf.spark.executor.memory = '4g'
// launcher.conf.spark.serializer = "org.apache.spark.serializer.KryoSerializer"
// launcher.conf.spark.kryoserializer.buffer.max = '4g'
import org.apache.spark.sql.SparkSession
var NumExecutors = spark.conf.getOption("spark.num_executors").repr
var ExecutorMemory = spark.conf.getOption("spark.executor.memory").repr
var AppName = spark.conf.getOption("spark.app.name").repr
var max_buffer = spark.conf.getOption("spark.kryoserializer.buffer.max").repr
println(f"Config as follows: \nNumExecutors: $NumExecutors, \nAppName: $AppName,\nmax_buffer:$max_buffer")

  • We can then inspect the configuration that the initialized SparkSession actually picked up, as sketched below.
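A minimal sketch (assuming spylon-kernel has already created spark and sc, as it does by default) for dumping the effective configuration of the session initialized above:

// Print every configuration entry the running session actually picked up
spark.conf.getAll.toSeq.sortBy(_._1).foreach { case (k, v) => println(s"$k = $v") }
// Individual entries can be read back as well, e.g. the executor memory set in %%init_spark
println(spark.conf.getOption("spark.executor.memory").getOrElse("<not set>"))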

Fetching data from Hive

Directly with Spark SQL
import org.apache.spark.sql.SparkSession
import spark.sql  // spark is the SparkSession provided by spylon-kernel

val sql_1 = """select * from tbs limit 4 """
val df = sql(sql_1)
df.show(5, false)

Fetching data through HDFS
object LoadingData_from_hdfs_base extends mylog{// with Logging
...
def main(args: Array[String]=Array("tb1", "3", "\001", "cols", "")): Unit = {
if (args.length < 2) {
println("Usage: LoadingData_from_hdfs <tb_name, parts. sep_line, cols, paths>")
System.err.println("Usage: LoadingData_from_hdfs <tb_name, parts, sep_line, cols, paths>")
System.exit(1)
}
log.warn("开始啦调度")
val tb_name = args(0)
val parts = args(1)
val sep_line = args(2)
val select_col = args(3)
val save_paths = args(4)
val select_cols = select_col.split("#").toSeq
log.warn(s"Loading cols are : \n $select_cols")
val gb_sql = s"DESCRIBE FORMATTED ${tb_name}"
val gb_desc = sql(gb_sql)
val hdfs_address = gb_desc.filter($"col_name".contains("Location")).take(1)(0).getString(1)
val hdfs_address_cha = s"$hdfs_address/*/"
val Cs = new DataProcess_base(spark)
val tb_desc = Cs.get_table_desc(tb_name)
val raw_data = Cs.get_hdfs_data(hdfs_address)
val len1 = raw_data.map(item => item.split(sep_line)).first.length
val names = tb_desc.filter(!$"col_name".contains("#")).dropDuplicates(Seq("col_name")).sort("id").select("col_name").take(len1).map(_(0)).toSeq.map(_.toString)
val schema1 = StructType(names.map(fieldName => StructField(fieldName, StringType)))
val rawRDD = raw_data.map(_.split(sep_line).map(_.toString)).map(p => Row(p: _*)).filter(_.length == len1)
val df_data = spark.createDataFrame(rawRDD, schema1)//.filter("custommsgtype = '1'")
val df_desc = select_cols.toDF.join(tb_desc, $"value"===$"col_name", "left")
val df_gb_result = df_data.select(select_cols.map(df_data.col(_)): _*)//.limit(100)
df_gb_result.show(5, false)
...
// spark.stop()
}
}
val cols = "area_name#city_name#province_name"
val tb_name = "tb1"
val sep_line = "\u0001"
// Run it
LoadingData_from_hdfs_base.main(Array(tb_name, "4", sep_line, cols, ""))

Checking whether a path is a directory

  • Method 1
def pathIsExist(spark: SparkSession, path: String): Boolean = {
val filePath = new org.apache.hadoop.fs.Path( path )
val fileSystem = filePath.getFileSystem( spark.sparkContext.hadoopConfiguration )
fileSystem.exists( filePath )
}

pathIsExist(spark, hdfs_address)
// Result (note this only checks that the path exists; see the sketch after Method 2 for a real directory check):
// pathIsExist: (spark: org.apache.spark.sql.SparkSession, path: String)Boolean
// res4: Boolean = true
  • Method 2 (java.io.File checks the local filesystem, not HDFS)
import java.io.File
val d = new File("/usr/local/xw")
d.isDirectory // Result:
// d: java.io.File = /usr/local/xw
// res3: Boolean = true
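Since Method 1 only tests existence and Method 2 only looks at the local filesystem, a minimal sketch (not from the original post) that answers "is this HDFS path a directory" with Hadoop's FileStatus could look like this:

def pathIsDir(spark: SparkSession, path: String): Boolean = {
  val filePath = new org.apache.hadoop.fs.Path(path)
  val fs = filePath.getFileSystem(spark.sparkContext.hadoopConfiguration)
  // true only when the path exists and is a directory
  fs.exists(filePath) && fs.getFileStatus(filePath).isDirectory
}
// pathIsDir(spark, hdfs_address)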

Reading the source data of a partitioned table

  • For partitioned tables, first check whether the raw files on HDFS actually contain the values of the partition columns:

    • If the partition columns are present in the raw HDFS files, the data can be read directly through HDFS as shown above.
    • If the raw files do not contain the partition columns, the values have to be recovered another way, e.g. from the directory names (see the sketch after this list).
    • A complete HDFS-loading script is given at the end of this post.
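A minimal, self-contained sketch of that second case (the paths and column names are hypothetical): when a partition value such as etl_date only appears in the directory name, it can be recovered from the path column that the loaders below attach to each row.

import org.apache.spark.sql.Row
import org.apache.spark.sql.functions.regexp_extract
import org.apache.spark.sql.types.{StructType, StructField, StringType}

// Hypothetical rows that already carry a "path" column, as the loaders below produce
val schema = StructType(Seq(StructField("city", StringType), StructField("path", StringType)))
val rows = Seq(
  Row("shanghai", "hdfs:///warehouse/tb1/etl_date=20200101/part-00000"),
  Row("beijing",  "hdfs:///warehouse/tb1/etl_date=20200102/part-00000"))
val df_demo = spark.createDataFrame(spark.sparkContext.parallelize(rows), schema)
// Pull the partition value out of the directory name
df_demo.withColumn("etl_date", regexp_extract(df_demo("path"), "etl_date=([^/]+)", 1)).show(false)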
Reading a single file
object LoadingData_from_hdfs_onefile_with_path extends mylog{

    def main(args: Array[String]=Array("tb_name", "hdfs:/", "3","\n", "\001", "cols", "")): Unit = {
...
val hdfs_address = args(1)
val len1 = raw_data.map(item => item.split(sep_line)).first.length
val rawRDD = raw_data.flatMap(line => line.split(sep_text)).map(word => (word.split(sep_line):+hdfs_address)).map(p => Row(p: _*))
rawRDD.take(2).foreach(println)
val names = tb_desc.filter(!$"col_name".contains("#")).dropDuplicates(Seq("col_name")).sort("id").select("col_name").take(len1).map(_(0)).toSeq.map(_.toString)
import org.apache.spark.sql.types.StructType
val schema1 = StructType(names.map(fieldName => StructField(fieldName, StringType)))
val new_schema1 = schema1.add(StructField("path", StringType))
val df_data = spark.createDataFrame(rawRDD, new_schema1)
val df_desc = select_cols.toDF.join(tb_desc, $"value"===$"col_name", "left")
// df_desc.show(false)
val df_gb_result = df_data.select(select_cols.map(df_data.col(_)): _*)//.limit(100)
df_gb_result.show(5, false)
...
// spark.stop()
}
}
val file1 = "hdfs:file1.csv"
val tb_name = "tb_name"
val sep_text = "\n"
val sep_line = "\001"
val cols = "city#province#etl_date#path"
// Run it
LoadingData_from_hdfs_onefile_with_path.main(Array(tb_name, file1, "4", sep_line, sep_text, cols, ""))

Reading multiple files: attempt 1
object LoadingData_from_hdfs_wholetext_with_path extends mylog{// with Logging
...
def main(args: Array[String]=Array("tb1", "hdfs:/", "3","\n", "\001", "cols", "")): Unit = {
...
val tb_name = args(0)
val hdfs_address = args(1)
val parts = args(2)
val sep_line = args(3)
val sep_text = args(4)
val select_col = args(5)
val save_paths = args(6)
val select_cols = select_col.split("#").toSeq
val Cs = new DataProcess_get_data(spark)
val tb_desc = Cs.get_table_desc(tb_name)
val rddWhole = spark.sparkContext.wholeTextFiles(s"$hdfs_address", 10)
rddWhole.foreach(f=>{
println(f._1+"数据量是=>"+f._2.split("\n").length)
})
val files = rddWhole.collect
val len1 = files.flatMap(item => item._2.split(sep_text)).take(1).flatMap(items=>items.split(sep_line)).length
val names = tb_desc.filter(!$"col_name".contains("#")).dropDuplicates(Seq("col_name")).sort("id").select("col_name").take(len1).map(_(0)).toSeq.map(_.toString)
import org.apache.spark.sql.types.StructType
// Parse the wholeTextFiles results and convert them into a DataFrame
val wordCount = files.map(f=>f._2.split(sep_text).map(g=>g.split(sep_line):+f._1.split("/").takeRight(1)(0))).flatMap(h=>h).map(p => Row(p: _*))
val schema1 = StructType(names.map(fieldName => StructField(fieldName, StringType)))
val new_schema1 = schema1.add(StructField("path", StringType))
val rawRDD = sc.parallelize(wordCount)
val df_data = spark.createDataFrame(rawRDD, new_schema1)
val df_desc = select_cols.toDF.join(tb_desc, $"value"===$"col_name", "left")
//df_desc.show(false)
val df_gb_result = df_data.select(select_cols.map(df_data.col(_)): _*)
df_gb_result.show(5, false)
println("生成的dataframe,依path列groupby的结果如下")
df_gb_result.groupBy("path").count().show(false)
...
// spark.stop()
}
}
val file1 = "hdfs:file1_1[01].csv"
val tb_name = "tb_name"
val sep_text = "\n"
val sep_line = "\001"
val cols = "city#province#etl_date#path"
// Run it
LoadingData_from_hdfs_wholetext_with_path.main(Array(tb_name, file1, "4", sep_line, sep_text, cols, ""))

Technique: reading multiple files while keeping the file name as a column
  • What the code below does

    • split an Array[(String, String)] into one row per (String, String) pair;
    • split the second element of each pair into rows on the \n separator and into columns on the \? separator;
    • append the first element of each pair to every row from the previous step, which shows up as an extra column in the DataFrame.
  • Business scenario

    • Reading several files in one go while still being able to tell, in the merged data set, which file each row came from.
// Test case: turn the kind of result wholeTextFiles returns into a DataFrame
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StructType, StructField, StringType}
val test1 = Array(("abasdfsdf", "a?b?c?d\nc?d?d?e"), ("sdfasdf", "b?d?a?e\nc?d?e?f"))
val test2 = test1.map(line=>line._2.split("\n").map(line1=>line1.split("\\?"):+line._1)).flatMap(line2=>line2).map(p => Row(p: _*))
val cols = "cn1#cn2#cn3#cn4#path"
val names = cols.split("#")
val schema1 = StructType(names.map(fieldName => StructField(fieldName, StringType)))
val rawRDD = sc.parallelize(test2)
val df_data = spark.createDataFrame(rawRDD, schema1)
df_data.show(4, false)
test1

Reading multiple files with a for loop
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{DataFrame, Row}
import org.apache.spark.sql.SparkSession
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.functions.monotonically_increasing_id
import org.apache.log4j.{Level, Logger}
import org.apache.spark.sql.types.{StructType, StructField, StringType, IntegerType}
import org.apache.hadoop.fs.{FileSystem, Path}

Logger.getLogger("org").setLevel(Level.WARN)
// val log = Logger.getLogger(this.getClass)
@transient lazy val log: Logger = Logger.getLogger(this.getClass)

class DataProcess_get_data_byfor (ss: SparkSession) extends java.io.Serializable{
import ss.implicits._
import ss.sql
import org.apache.spark.sql.types.DataTypes
...
def union_dataframe(df_1:RDD[String], df_2:RDD[String]):RDD[String] ={
val count1 = df_1.map(item=>item.split(sep_line)).take(1)(0).length
val count2 = df_2.map(item=>item.split(sep_line)).take(1)(0).length
val name2 = df_2.name.split("/").takeRight(1)(0)
val arr2 = df_2.map(item=>item.split(sep_line):+name2).map(p => Row(p: _*))
println(s"运行到这儿了")
var name1 = ""
var arr1 = ss.sparkContext.makeRDD(List().map(p => Row(p: _*)))
// var arr1 = Array[org.apache.spark.sql.Row]
if (count1 == count2){
name1 = df_1.name.split("/").takeRight(1)(0)
arr1 = df_1.map(item=>item.split(sep_line):+name1).map(p => Row(p: _*))
// arr1.foreach(f=>print(s"arr1: $f" + f.length + "\n"))
println(s"Reached here, counts match: $count1~$count2 $name1/$name2")
arr1
}
else{
println(s"运行到这儿了不相等哈?$count1~$count2 $name1/$name2")
arr1 = df_1.map(item=>item.split(sep_line)).map(p => Row(p: _*))
}
var rawRDD = arr1.union(arr2)
// arr3.foreach(f=>print(s"$f" + f.length + "\n"))
// var rawRDD = sc.parallelize(arr3)
var sourceRdd = rawRDD.map(_.mkString(sep_line))
// var count31 = arr1.take(1)(0).length
// var count32 = arr2.take(1)(0).length
// var count3 = sourceRdd.map(item=>item.split(sep_line)).take(1)(0).length
// var nums = sourceRdd.count
// print(s"arr1: $count31、arr2: $count32、arr3: $count3, 数据量为:$nums")
sourceRdd
}
}
object LoadingData_from_hdfs_text_with_path_byfor extends mylog{// with Logging
...
def main(args: Array[String]=Array("tb1", "hdfs:/", "3","\n", "\001", "cols","data1", "test", "")): Unit = {
...
val hdfs_address = args(1)
...
val pattern = args(6)
val pattern_no = args(7)
val select_cols = select_col.split("#").toSeq
log.warn(s"Loading cols are : \n $select_cols")
val files = FileSystem.get(spark.sparkContext.hadoopConfiguration).listStatus(new Path(s"$hdfs_address"))
val files_name = files.toList.map(f=> f.getPath.getName)
val file_filter = files_name.filter(_.contains(pattern)).filterNot(_.contains(pattern_no))
val df_1 = file_filter.map(item=> sc.textFile(s"$hdfs_address$item"))
df_1.foreach(f=>{
println(f + "数据量是" + f.count)
})
val df2 = df_1.reduce(_ union _)
println("合并后的数据量是" + df2.count)
val Cs = new DataProcess_get_data_byfor(spark)
...
// Merge the RDDs read in the loop
val result = df_1.reduce((a, b) => Cs.union_dataframe(a, b))
val result2 = result.map(item=>item.split(sep_line)).map(p => Row(p: _*))
val df_data = spark.createDataFrame(result2, new_schema1)
val df_desc = select_cols.toDF.join(tb_desc, $"value"===$"col_name", "left")
println("\n")
//df_desc.show(false)
val df_gb_result = df_data.select(select_cols.map(df_data.col(_)): _*)
df_gb_result.show(5, false)
println("生成的dataframe,依path列groupby的结果如下")
df_gb_result.groupBy("path").count().show(false)
...
// spark.stop()
}
}
val path1 = "hdfs:202001/"
val tb_name = "tb_name"
val sep_text = "\n"
val sep_line = "\001"
val cols = "city#province#etl_date#path"
val pattern = "result_copy_1"
val pattern_no = "1.csv"
// val file_filter = List("file1_10.csv", "file_12.csv", "file_11.csv")
// Run it
LoadingData_from_hdfs_text_with_path_byfor.main(Array(tb_name, path1, "4", sep_line, sep_text, cols, pattern, pattern_no, ""))
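As a side note, for plain delimited text files Spark's DataFrame reader can do much of the above in a single call: it accepts several paths at once, and the built-in input_file_name() function tags every row with its source file. A minimal sketch (the paths below are assumptions, not from the original post):

import org.apache.spark.sql.functions.input_file_name

// Hypothetical file list; \u0001 is the field separator used throughout this post
val csv_paths = Seq("hdfs:///data/202001/file1_10.csv", "hdfs:///data/202001/file1_11.csv")
val df_csv = spark.read
  .option("sep", "\u0001")
  .csv(csv_paths: _*)                    // columns come back as _c0, _c1, ... unless a schema is given
  .withColumn("path", input_file_name()) // full source path of each row
df_csv.groupBy("path").count().show(false)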

Complete runnable example

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{DataFrame, Row}
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.monotonically_increasing_id
import org.apache.log4j.{Level, Logger}
import org.apache.spark.sql.types.{StructType, StructField, StringType, IntegerType}
import org.apache.spark.{SparkConf, SparkContext}  // SparkConf is used below when building the session

Logger.getLogger("org").setLevel(Level.WARN)
val log = Logger.getLogger(this.getClass)

class DataProcess_base (ss: SparkSession) extends java.io.Serializable{
import ss.implicits._
import ss.sql
import org.apache.spark.sql.types.DataTypes

def get_table_desc(tb_name:String="tb"):DataFrame ={
val gb_sql = s"desc ${tb_name}"
val gb_desc = sql(gb_sql)
val names = gb_desc.filter(!$"col_name".contains("#")).withColumn("id", monotonically_increasing_id())
names
}

def get_hdfs_data(hdfs_address:String="hdfs:"):RDD[String]={
val gb_data = ss.sparkContext.textFile(hdfs_address)
gb_data.cache()
val counts1 = gb_data.count
println(f"the rows of origin hdfs data is $counts1%-1d")
gb_data
}
}
object LoadingData_from_hdfs_base extends mylog{// with Logging
Logger.getLogger("org").setLevel(Level.WARN)
val conf = new SparkConf()
conf.setMaster("yarn") conf.setAppName("LoadingData_From_hdfs")
conf.set("spark.home", System.getenv("SPARK_HOME"))
val spark = SparkSession.builder().config(conf).enableHiveSupport().getOrCreate()
import spark.implicits._
import spark.sql
var UIAddress = spark.conf.getOption("spark.driver.appUIAddress").repr
var yarnserver = spark.conf.getOption("spark.org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.param.PROXY_URI_BASES").repr
println(f"Config as follows: \nUIAddress: $UIAddress, \nyarnserver: $yarnserver") def main(args: Array[String]=Array("tb1", "3", "\001", "cols", "")): Unit = {
if (args.length < 2) {
println("Usage: LoadingData_from_hdfs <tb_name, parts. sep_line, cols, paths>")
System.err.println("Usage: LoadingData_from_hdfs <tb_name, parts, sep_line, cols, paths>")
System.exit(1)
}
log.warn("开始啦调度")
val tb_name = args(0)
val parts = args(1)
val sep_line = args(2)
val select_col = args(3)
val save_paths = args(4)
val select_cols = select_col.split("#").toSeq
log.warn(s"Loading cols are : \n $select_cols")
val gb_sql = s"DESCRIBE FORMATTED ${tb_name}"
val gb_desc = sql(gb_sql)
val hdfs_address = gb_desc.filter($"col_name".contains("Location")).take(1)(0).getString(1)
println(s"tbname路径是$hdfs_address")
val hdfs_address_cha = s"$hdfs_address/*/"
val Cs = new DataProcess_base(spark)
val tb_desc = Cs.get_table_desc(tb_name)
val raw_data = Cs.get_hdfs_data(hdfs_address)
val len1 = raw_data.map(item => item.split(sep_line)).first.length
val names = tb_desc.filter(!$"col_name".contains("#")).dropDuplicates(Seq("col_name")).sort("id").select("col_name").take(len1).map(_(0)).toSeq.map(_.toString)
val schema1 = StructType(names.map(fieldName => StructField(fieldName, StringType)))
val rawRDD = raw_data.map(_.split(sep_line).map(_.toString)).map(p => Row(p: _*)).filter(_.length == len1)
val df_data = spark.createDataFrame(rawRDD, schema1)//.filter("custommsgtype = '1'")
val df_desc = select_cols.toDF.join(tb_desc, $"value"===$"col_name", "left")
val df_gb_result = df_data.select(select_cols.map(df_data.col(_)): _*)//.limit(100)
df_gb_result.show(5, false)
println("生成的dataframe,依path列groupby的结果如下")
// val part = parts.toInt
// df_gb_result.repartition(part).write.mode("overwrite").option("header","true").option("sep","#").csv(save_paths)
// log.warn(f"the rows of origin data compare to mysql results is $ncounts1%-1d VS $ncounts3%-4d")
// spark.stop()
}
}
val cols = "area_name#city_name#province_name"
val tb_name = "tb1"
val sep_line = "\u0001"
// Run it
LoadingData_from_hdfs_base.main(Array(tb_name, "4", sep_line, cols, ""))
