This uses sequenceiq/spark:1.6.0, the top-ranked Spark image on Docker Hub.

Steps:

Pull the image:

[root@localhost home]# docker pull sequenceiq/spark:1.6.0
Trying to pull repository docker.io/sequenceiq/spark ...

Start the container:

[root@localhost home]# docker image ls
REPOSITORY                           TAG     IMAGE ID       CREATED       SIZE
docker.io/sequenceiq/spark           1.6.0   40a687b3cdcc   2 years ago   2.88 GB
docker.io/sequenceiq/hadoop-docker   2.6.0   140b265bd62a   2 years ago   1.62 GB
[root@localhost home]# docker run -dit -p 8088:8088 -p 8042:8042 -p 4040:4040 -h sandbox sequenceiq/spark:1.6.0 bash
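The three port numbers here are my assumption, following the sequenceiq/docker-spark README: 8088 is the YARN ResourceManager UI, 8042 the NodeManager UI, and 4040 the Spark application UI. Once the container is up, you can double-check what Docker actually published:

[root@localhost home]# docker port <container-id>   # e.g. 75e3d67806bc, from docker ps below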

Enter the container:

[root@localhost home]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
75e3d67806bc   sequenceiq/spark:1.6.0   "/etc/bootstrap.sh..."   seconds ago   Up seconds   ...   thirsty_gates
[root@localhost home]# docker exec -it 75e3d67806bc /bin/bash
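Before launching anything, it's worth a quick sanity check that the Hadoop daemons actually came up — this matters for the failure later on. A minimal check, assuming the Hadoop binaries are on the PATH as in the stock image (otherwise prefix with $HADOOP_PREFIX/bin/):

bash-4.1# jps
# expect NameNode, DataNode, ResourceManager and NodeManager in the list
bash-4.1# hdfs dfsadmin -report
# prints HDFS capacity; fails fast with "Connection refused" if the NameNode is down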

Spark:

YARN-client (single-node) mode

In YARN-client mode, the driver program runs in the client process, and the application master is used only to request resources from YARN.
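Besides the interactive shell below, the docker-spark README uses the same client mode for batch submission — a sketch with the SparkPi example bundled in the image (the jar path assumes the stock Spark 1.6.0 / Hadoop 2.6.0 build):

bash-4.1# spark-submit --class org.apache.spark.examples.SparkPi \
    --master yarn-client \
    --driver-memory 1g \
    --executor-memory 1g \
    --executor-cores 1 \
    $SPARK_HOME/lib/spark-examples-1.6.0-hadoop2.6.0.jar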

bash-4.1# spark-shell --master yarn-client --driver-memory 1g --executor-memory 1g --executor-cores 1
// :: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
// :: INFO spark.SecurityManager: Changing view acls to: root
// :: INFO spark.SecurityManager: Changing modify acls to: root
// :: INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
// :: INFO spark.HttpServer: Starting HTTP Server
// :: INFO server.Server: jetty-.y.z-SNAPSHOT
// :: INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:
// :: INFO util.Utils: Successfully started service 'HTTP class server' on port .
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.6.0
      /_/

Using Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_51)
Type in expressions to have them evaluated.
Type :help for more information.
// :: INFO spark.SparkContext: Running Spark version 1.6.0
// :: INFO spark.SecurityManager: Changing view acls to: root
// :: INFO spark.SecurityManager: Changing modify acls to: root
// :: INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
// :: INFO util.Utils: Successfully started service 'sparkDriver' on port .
// :: INFO slf4j.Slf4jLogger: Slf4jLogger started
// :: INFO Remoting: Starting remoting
// :: INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@172.17.0.2:32811]
// :: INFO util.Utils: Successfully started service 'sparkDriverActorSystem' on port .
// :: INFO spark.SparkEnv: Registering MapOutputTracker
// :: INFO spark.SparkEnv: Registering BlockManagerMaster
// :: INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-8c30cc1c-dfea-4ebf-94b9-c45ff3a1b849
// :: INFO storage.MemoryStore: MemoryStore started with capacity 517.4 MB
// :: INFO spark.SparkEnv: Registering OutputCommitCoordinator
// :: INFO server.Server: jetty-.y.z-SNAPSHOT
// :: INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:
// :: INFO util.Utils: Successfully started service 'SparkUI' on port .
// :: INFO ui.SparkUI: Started SparkUI at http://172.17.0.2:4040
// :: INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:
// :: INFO yarn.Client: Requesting a new application from cluster with NodeManagers
// :: INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster ( MB per container)
// :: INFO yarn.Client: Will allocate AM container, with MB memory including MB overhead
// :: INFO yarn.Client: Setting up container launch context for our AM
// :: INFO yarn.Client: Setting up the launch environment for our AM container
// :: INFO yarn.Client: Preparing resources for our AM container
// :: WARN yarn.Client: Failed to cleanup staging dir .sparkStaging/application_1534228565880_0001
java.net.ConnectException: Call From sandbox/172.17.0.2 to sandbox: failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:)
at java.lang.reflect.Constructor.newInstance(Constructor.java:)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:)
at org.apache.hadoop.ipc.Client.call(Client.java:)
at org.apache.hadoop.ipc.Client.call(Client.java:)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:)
at com.sun.proxy.$Proxy21.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:)
at com.sun.proxy.$Proxy22.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:)
at org.apache.hadoop.hdfs.DistributedFileSystem$.doCall(DistributedFileSystem.java:)
at org.apache.hadoop.hdfs.DistributedFileSystem$.doCall(DistributedFileSystem.java:)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:)
at org.apache.spark.deploy.yarn.Client.cleanupStagingDir(Client.scala:)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:)
at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:)
at $line3.$read$$iwC$$iwC.<init>(<console>:)
at $line3.$read$$iwC.<init>(<console>:)
at $line3.$read.<init>(<console>:)
at $line3.$read$.<init>(<console>:)
at $line3.$read$.<clinit>(<console>)
at $line3.$eval$.<init>(<console>:)
at $line3.$eval$.<clinit>(<console>)
at $line3.$eval.$print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$(SparkIMain.scala:)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:)
at org.apache.spark.repl.SparkILoop.reallyInterpret$(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$.apply(SparkILoopInit.scala:)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$.apply(SparkILoopInit.scala:)
at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:)
at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$$$anonfun$apply$mcZ$sp$.apply$mcV$sp(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:)
at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:)
at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$.apply$mcZ$sp(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$.apply(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$.apply(SparkILoop.scala:)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:)
at org.apache.spark.repl.Main$.main(Main.scala:)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:)
at org.apache.hadoop.ipc.Client$Connection.access$(Client.java:)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:)
at org.apache.hadoop.ipc.Client.call(Client.java:)
... more
// :: ERROR spark.SparkContext: Error initializing SparkContext.
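The telling part of the trace is the ConnectException thrown from DFSClient.getFileInfo: the shell dies while staging resources into HDFS, meaning it can't reach the NameNode on sandbox at all — YARN never even receives the application. Two quick checks from inside the container (same PATH assumption as above):

bash-4.1# jps | grep -i namenode
# no output => the NameNode process never started
bash-4.1# hdfs dfs -ls /
# reproduces the same "Connection refused" immediately if HDFS is unreachable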

Damn it — it actually failed. I retried a few times and it never worked, and I have no idea how everyone else online pulled this off with the exact same steps.

So I cloned the docker-spark repo from GitHub and rebuilt the image with docker build, but it still errors out. The error message:

DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it. safemode: Call From 70b4a57bb473/172.17.0.2 to 70b4a57bb473: failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused

This looks like Hadoop's safemode never being switched off — yet the Dockerfile already runs the command to disable it, so I can't tell where it goes wrong. Strictly speaking, though, the Connection refused means the safemode command couldn't reach the NameNode at all, which points at the NameNode not being up rather than at safemode itself.
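A hypothetical recovery path, assuming the image's standard /etc/bootstrap.sh entrypoint (visible in the docker ps output above) and HADOOP_PREFIX pointing at the Hadoop install:

# (re)start the HDFS/YARN daemons and drop into a shell
bash-4.1# /etc/bootstrap.sh -bash

# once the NameNode answers, leave safemode with the non-deprecated command
bash-4.1# $HADOOP_PREFIX/bin/hdfs dfsadmin -safemode leave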

References:

https://www.jianshu.com/p/4801bb7ab9e0

https://www.cnblogs.com/ybst/p/9050660.html

https://github.com/sequenceiq/docker-spark

https://blog.csdn.net/farawayzheng_necas/article/details/54341036

https://blog.csdn.net/yeasy/article/details/48654965

https://blog.csdn.net/hanss2/article/details/78505446

http://wgliang.github.io/pages/spark-on-docker.html
