Chapter 1: Introduction to Data Analysis with Spark

The components of Spark

  Spark Core (basic functionality): contains the basic functionality of Spark. Spark Core is also home to the API that defines RDDs (resilient distributed datasets).

  Spark SQL (structured data): the package for working with structured data. It allows querying data via SQL as well as the Apache Hive variant of SQL, and it supports many sources of data, including Hive tables, Parquet, and JSON. It also allows developers to intermix SQL queries with the programmatic data manipulation supported by RDDs in Python, Java, and Scala (see the sketch after this list).

  Spark Streaming (real-time): enables processing of live streams of data.

  MLlib (machine learning): a library of common machine learning functionality.

  GraphX (graph processing): a library for manipulating graphs.
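
As a quick illustration of mixing SQL with RDD code, here is a minimal sketch in Python (it assumes Spark 1.4 or later, where sqlContext.read.json is available, and a hypothetical people.json input file):

from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext("local", "Spark SQL sketch")
sqlContext = SQLContext(sc)

# load structured data, query it with SQL, then drop back to the RDD API
people = sqlContext.read.json("people.json")   # hypothetical input file
people.registerTempTable("people")
adults = sqlContext.sql("SELECT name FROM people WHERE age >= 18")
print(adults.rdd.map(lambda row: row.name).collect())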

A Brief History of Spark

  Spark is an open source project that has been built and is maintained by a thriving and diverse community of developers.

Chapter 2: Downloading Spark and Getting Started

  This chapter walks through the process of downloading and running Spark in local mode on a single computer.

  You don't need to master Scala, Java, or Python first. Spark itself is written in Scala and runs on the Java Virtual Machine (JVM). To run Spark on either your laptop or a cluster, all you need is an installation of Java 6 or newer. If you wish to use the Python API you will also need a Python interpreter (version 2.6 or newer). Spark does not yet work with Python 3.

Downloading Spark: select the "Pre-built for Hadoop 2.4 and later" package.

Tips:

Windows users may run into issues installing Spark. You can use a zip/tar extraction tool to untar the .tar file. Note: install Spark in a directory with no spaces in its path (e.g. C:\spark).

After you untar the file you will get a new directory with the same name but without the final .tar suffix.

Note:

Most of this book includes code in all of Spark’s languages, but interactive shells are
available only in Python and Scala. Because a shell is very useful for learning the API, we recommend using one of these languages for these examples even if you are a Java
developer. The API is similar in every language.

Change into the Spark directory and type bin\pyspark; you will see the Spark logo and a Python shell prompt.

Introduction to Core Spark Concepts

Driver program

  |----your application

  |----distributed datasets that you defined

  Usually we apply many operations on these datasets.

  *** In the preceding example, the driver program was the Spark shell itself, and you could type in the operations that you wanted to run.

  *** The driver program accesses Spark through a SparkContext object, which represents a connection to a computing cluster. In pyspark the SparkContext is automatically created for you as the variable sc, and you can print information about this object by typing "sc". Note that there is a SparkContext class in each of the three language APIs: Java, Python, and Scala.

Once you have a SparkContext, you can use it to build RDDs, which support many operations such as count() and first(). To run these operations, the driver program typically manages a number of nodes called executors. When you call an operation such as count() on a cluster, different machines might count lines in different ranges of the file. Because we ran the Spark shell locally, it executed all of its work on a single machine.
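
For example, here is a minimal sketch of that first shell session (run inside the pyspark shell, where sc already exists and README.md is the file shipped in the Spark directory):

lines = sc.textFile("README.md")   # create an RDD of the lines in the file
lines.count()                      # number of lines in this RDD
lines.first()                      # first line of this RDD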

Passing Functions to Spark

  Look at the following example in Python:

lines = sc.textFile("README.md")
pythonLines = lines.filter(lambda line: "Python" in line)
pythonLines.first()

If you are unfamiliar with the lambda syntax, it is a shorthand way to define a function inline in Python or Scala. You can also define the function separately and then pass its name to Spark, like this:

def hasPython(line):
    # return True if this line contains the word "Python"
    return "Python" in line

pythonLines = lines.filter(hasPython)

Of course, you can also write this in Java, but there functions are defined as classes implementing the Function interface:

JavaRDD<String> pythonLines = lines.filter(new Function<String, Boolean>() {
    public Boolean call(String line) {
        return line.contains("Python");
    }
});

Note that Java 8 supports lambda expressions, so this can be written more concisely.

Spark automatically takes your function (e.g. line.contains("Python")) and ships it to executor nodes. Thus, you can write code in a single driver program and automatically have parts of it run on multiple nodes.

 Standalone Applications

Apart from running interactively, Spark can be linked into standalone applications in Java, Python, or Scala. The main difference from using it in the shell is that you need to initialize your own SparkContext; after that, the API is the same. Remember, in the shell the SparkContext is created automatically for you as "sc", so you can use it directly.
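
For example, here is a minimal sketch of a standalone Python script that creates its own SparkContext (the application name "My App" and the local master URL are arbitrary choices for illustration):

from pyspark import SparkConf, SparkContext

conf = SparkConf().setMaster("local").setAppName("My App")   # run locally, single-threaded
sc = SparkContext(conf=conf)

lines = sc.textFile("README.md")   # from here on the API is the same as in the shell
print(lines.filter(lambda line: "Python" in line).count())

sc.stop()   # shut Spark down when the application is finished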

The process of linking to Spark varies by language. In Java and Scala, you give your application a Maven dependency on the spark-core artifact. Maven is a popular package management tool for JVM-based languages that lets you link to libraries in public repositories. You can use Maven itself to build your project, or use other tools that can talk to the Maven repositories, including Scala's sbt or Gradle. Popular IDEs like Eclipse also allow you to directly add a Maven dependency to a project.

In Python, you simply write applications as Python scripts, but you must run them using the bin/spark-submit script included in Spark. The spark-submit script includes the Spark dependencies for us in Python, and it sets up the environment for Spark's Python API to function. Simply run your script like this:

bin/spark-submit my_script.py

(Note that you will have to use backslashes instead of forward slashes on Windows.)
