SparkSQL: Using Hive as a Data Source
According to the official documentation, copy hive-site.xml, core-site.xml, and hdfs-site.xml into Spark's conf directory, and make sure MySQL (which backs the Hive metastore in this setup) is running.
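As a sketch, the setup step can look like the following; the Hive, Hadoop, and Spark install paths are assumptions, so adjust them for your environment:

```shell
# Hypothetical install locations -- substitute your own.
HIVE_HOME=/opt/module/hive
HADOOP_HOME=/opt/module/hadoop
SPARK_HOME=/opt/module/spark

# Let Spark see the Hive metastore and HDFS configuration
cp "$HIVE_HOME/conf/hive-site.xml" "$SPARK_HOME/conf/"
cp "$HADOOP_HOME/etc/hadoop/core-site.xml" "$HADOOP_HOME/etc/hadoop/hdfs-site.xml" "$SPARK_HOME/conf/"

# The metastore here is backed by MySQL, so it must be up
systemctl status mysqld
```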
java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class Demo {
    private static SparkSession session = SparkSession.builder()
            .appName("demo")
            .enableHiveSupport()
            .config("spark.sql.warehouse.dir", "/user/hive/warehouse")
            .getOrCreate();

    public static void main(String[] args) {
        session.sql("drop table if exists students_info");
        session.sql("create table if not exists students_info(name string, age int) "
                + "row format delimited fields terminated by '\t'");
        // Load the data into the student info table
        session.sql("load data local inpath '/opt/module/spark-test/data/student_infos.txt' into table default.students_info");

        session.sql("drop table if exists students_score");
        session.sql("create table if not exists students_score(name string, score int) "
                + "row format delimited fields terminated by '\t'");
        // Load the data into the student score table
        session.sql("load data local inpath '/opt/module/spark-test/data/student_scores.txt' into table default.students_score");

        // Join the two tables, keeping students with a score above 80
        Dataset<Row> dataset = session.sql(
                "select s1.name, s1.age, s2.score from students_info s1 "
                        + "join students_score s2 on s1.name = s2.name where s2.score > 80");

        // Save the result set to a Hive table
        session.sql("drop table if exists students_result");
        dataset.write().saveAsTable("students_result");

        // Read the Hive table back into a dataset to verify the save
        Dataset<Row> table = session.table("students_result");
        table.show();
        session.stop();
    }
}
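The two load statements assume tab-separated text files on the local filesystem. The rows below are hypothetical samples showing the expected layout (fields separated by a single tab):

```text
student_infos.txt (name<TAB>age):
zhangsan	18
lisi	17

student_scores.txt (name<TAB>score):
zhangsan	95
lisi	78
```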
scala
import org.apache.spark.sql.SparkSession

object Demo {
  def main(args: Array[String]): Unit = {
    val session = SparkSession.builder().appName("demo").enableHiveSupport()
      .config("spark.sql.warehouse.dir", "/user/hive/warehouse").getOrCreate()
    session.sql("drop table if exists students_info")
    session.sql("create table if not exists students_info(name string, age int) row format delimited fields terminated by '\t'")
    session.sql("load data local inpath '/opt/module/spark-test/data/student_infos.txt' into table default.students_info")
    session.sql("drop table if exists students_score")
    session.sql("create table if not exists students_score(name string, score int) row format delimited fields terminated by '\t'")
    session.sql("load data local inpath '/opt/module/spark-test/data/student_scores.txt' into table default.students_score")
    // Join the two tables and save the result to Hive
    session.sql("drop table if exists students_result")
    session.sql("select s1.name, s1.age, s2.score from students_info s1 join students_score s2 on s1.name = s2.name where s2.score > 90")
      .write.saveAsTable("students_result")
    // Read the table back to verify the data was saved
    val df = session.table("students_result")
    df.show()
    session.stop()
  }
}