1. Fully distributed setup: ./bin/run-example streaming.NetworkWordCount localhost 9999 does not run correctly:

  [hadoop@slaver1 spark-1.5.1-bin-hadoop2.4]$ ./bin/run-example streaming.NetworkWordCount slaver1 9999
18/04/23 04:11:20 INFO SparkContext: Running Spark version 1.5.1
18/04/23 04:11:20 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/04/23 04:11:21 WARN SparkConf:
SPARK_WORKER_INSTANCES was detected (set to '1').
This is deprecated in Spark 1.0+.

Please instead use:
 - ./spark-submit with --num-executors to specify the number of executors
 - Or set SPARK_EXECUTOR_INSTANCES
 - spark.executor.instances to configure the number of instances in the spark config.

18/04/23 04:11:21 INFO SecurityManager: Changing view acls to: hadoop
18/04/23 04:11:21 INFO SecurityManager: Changing modify acls to: hadoop
18/04/23 04:11:21 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); users with modify permissions: Set(hadoop)
18/04/23 04:11:22 INFO Slf4jLogger: Slf4jLogger started
18/04/23 04:11:22 INFO Remoting: Starting remoting
18/04/23 04:11:23 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.19.131:48823]
18/04/23 04:11:23 INFO Utils: Successfully started service 'sparkDriver' on port 48823.
18/04/23 04:11:23 INFO SparkEnv: Registering MapOutputTracker
18/04/23 04:11:23 INFO SparkEnv: Registering BlockManagerMaster
18/04/23 04:11:23 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-a48ee060-c8e8-4089-92b5-180cd8081890
18/04/23 04:11:23 INFO MemoryStore: MemoryStore started with capacity 534.5 MB
18/04/23 04:11:23 INFO HttpFileServer: HTTP File server directory is /tmp/spark-b6aec1ea-9a70-4814-b1f8-e752c27b9cee/httpd-fed4eaa6-5aec-4656-8d38-984997030d43
18/04/23 04:11:23 INFO HttpServer: Starting HTTP Server
18/04/23 04:11:23 INFO Utils: Successfully started service 'HTTP file server' on port 55775.
18/04/23 04:11:23 INFO SparkEnv: Registering OutputCommitCoordinator
18/04/23 04:11:24 INFO Utils: Successfully started service 'SparkUI' on port 4040.
18/04/23 04:11:24 INFO SparkUI: Started SparkUI at http://192.168.19.131:4040
18/04/23 04:11:24 INFO SparkContext: Added JAR file:/home/hadoop/soft/spark-1.5.1-bin-hadoop2.4/lib/spark-examples-1.5.1-hadoop2.4.0.jar at http://192.168.19.131:55775/jars/spark-examples-1.5.1-hadoop2.4.0.jar with timestamp 1524471084606
18/04/23 04:11:24 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
18/04/23 04:11:24 INFO Executor: Starting executor ID driver on host localhost
18/04/23 04:11:25 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 36228.
18/04/23 04:11:25 INFO NettyBlockTransferService: Server created on 36228
18/04/23 04:11:25 INFO BlockManagerMaster: Trying to register BlockManager
18/04/23 04:11:25 INFO BlockManagerMasterEndpoint: Registering block manager localhost:36228 with 534.5 MB RAM, BlockManagerId(driver, localhost, 36228)
18/04/23 04:11:25 INFO BlockManagerMaster: Registered BlockManager
18/04/23 04:11:26 INFO EventLoggingListener: Logging events to hdfs://slaver1:9000/spark/history/local-1524471084719.snappy
18/04/23 04:11:28 INFO ReceiverTracker: Starting 1 receivers
18/04/23 04:11:28 INFO ReceiverTracker: ReceiverTracker started
18/04/23 04:11:28 INFO ForEachDStream: metadataCleanupDelay = -1
18/04/23 04:11:28 INFO ShuffledDStream: metadataCleanupDelay = -1
18/04/23 04:11:28 INFO MappedDStream: metadataCleanupDelay = -1
18/04/23 04:11:28 INFO FlatMappedDStream: metadataCleanupDelay = -1
18/04/23 04:11:28 INFO SocketInputDStream: metadataCleanupDelay = -1
18/04/23 04:11:28 INFO SocketInputDStream: Slide time = 1000 ms
18/04/23 04:11:28 INFO SocketInputDStream: Storage level = StorageLevel(false, false, false, false, 1)
18/04/23 04:11:28 INFO SocketInputDStream: Checkpoint interval = null
18/04/23 04:11:28 INFO SocketInputDStream: Remember duration = 1000 ms
18/04/23 04:11:28 INFO SocketInputDStream: Initialized and validated org.apache.spark.streaming.dstream.SocketInputDStream@29e21cd6
18/04/23 04:11:28 INFO FlatMappedDStream: Slide time = 1000 ms
18/04/23 04:11:28 INFO FlatMappedDStream: Storage level = StorageLevel(false, false, false, false, 1)
18/04/23 04:11:28 INFO FlatMappedDStream: Checkpoint interval = null
18/04/23 04:11:28 INFO FlatMappedDStream: Remember duration = 1000 ms
18/04/23 04:11:28 INFO FlatMappedDStream: Initialized and validated org.apache.spark.streaming.dstream.FlatMappedDStream@3bd33b15
18/04/23 04:11:28 INFO MappedDStream: Slide time = 1000 ms
18/04/23 04:11:28 INFO MappedDStream: Storage level = StorageLevel(false, false, false, false, 1)
18/04/23 04:11:28 INFO MappedDStream: Checkpoint interval = null
18/04/23 04:11:28 INFO MappedDStream: Remember duration = 1000 ms
18/04/23 04:11:28 INFO MappedDStream: Initialized and validated org.apache.spark.streaming.dstream.MappedDStream@28cbfe62
18/04/23 04:11:28 INFO ShuffledDStream: Slide time = 1000 ms
18/04/23 04:11:28 INFO ShuffledDStream: Storage level = StorageLevel(false, false, false, false, 1)
18/04/23 04:11:28 INFO ShuffledDStream: Checkpoint interval = null
18/04/23 04:11:28 INFO ShuffledDStream: Remember duration = 1000 ms
18/04/23 04:11:28 INFO ShuffledDStream: Initialized and validated org.apache.spark.streaming.dstream.ShuffledDStream@68a9e8da
18/04/23 04:11:28 INFO ForEachDStream: Slide time = 1000 ms
18/04/23 04:11:28 INFO ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1)
18/04/23 04:11:28 INFO ForEachDStream: Checkpoint interval = null
18/04/23 04:11:28 INFO ForEachDStream: Remember duration = 1000 ms
18/04/23 04:11:28 INFO ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@6af675e4
18/04/23 04:11:28 INFO RecurringTimer: Started timer for JobGenerator at time 1524471089000
18/04/23 04:11:28 INFO JobGenerator: Started JobGenerator at 1524471089000 ms
18/04/23 04:11:28 INFO JobScheduler: Started JobScheduler
18/04/23 04:11:28 INFO StreamingContext: StreamingContext started
18/04/23 04:11:28 INFO ReceiverTracker: Receiver 0 started
18/04/23 04:11:29 INFO DAGScheduler: Got job 0 (start at NetworkWordCount.scala:57) with 1 output partitions
18/04/23 04:11:29 INFO DAGScheduler: Final stage: ResultStage 0(start at NetworkWordCount.scala:57)
18/04/23 04:11:29 INFO DAGScheduler: Parents of final stage: List()
18/04/23 04:11:29 INFO DAGScheduler: Missing parents: List()
18/04/23 04:11:29 INFO DAGScheduler: Submitting ResultStage 0 (Receiver 0 ParallelCollectionRDD[0] at makeRDD at ReceiverTracker.scala:554), which has no missing parents
18/04/23 04:11:29 INFO JobScheduler: Added jobs for time 1524471089000 ms
18/04/23 04:11:29 INFO JobScheduler: Starting job streaming job 1524471089000 ms.0 from job set of time 1524471089000 ms
18/04/23 04:11:29 INFO SparkContext: Starting job: print at NetworkWordCount.scala:56
18/04/23 04:11:29 INFO MemoryStore: ensureFreeSpace(55336) called with curMem=0, maxMem=560497950
18/04/23 04:11:29 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 54.0 KB, free 534.5 MB)
18/04/23 04:11:29 INFO MemoryStore: ensureFreeSpace(18523) called with curMem=55336, maxMem=560497950
18/04/23 04:11:29 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 18.1 KB, free 534.5 MB)
18/04/23 04:11:29 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:36228 (size: 18.1 KB, free: 534.5 MB)
18/04/23 04:11:29 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:861
18/04/23 04:11:29 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (Receiver 0 ParallelCollectionRDD[0] at makeRDD at ReceiverTracker.scala:554)
18/04/23 04:11:29 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
18/04/23 04:11:30 INFO JobScheduler: Added jobs for time 1524471090000 ms
18/04/23 04:11:30 INFO DAGScheduler: Registering RDD 3 (map at NetworkWordCount.scala:55)
18/04/23 04:11:30 INFO DAGScheduler: Got job 1 (print at NetworkWordCount.scala:56) with 1 output partitions
18/04/23 04:11:30 INFO DAGScheduler: Final stage: ResultStage 2(print at NetworkWordCount.scala:56)
18/04/23 04:11:30 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 1)
18/04/23 04:11:30 INFO DAGScheduler: Missing parents: List()
18/04/23 04:11:30 INFO DAGScheduler: Submitting ResultStage 2 (ShuffledRDD[4] at reduceByKey at NetworkWordCount.scala:55), which has no missing parents
18/04/23 04:11:30 INFO MemoryStore: ensureFreeSpace(2360) called with curMem=73859, maxMem=560497950
18/04/23 04:11:30 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 2.3 KB, free 534.5 MB)
18/04/23 04:11:30 INFO MemoryStore: ensureFreeSpace(1440) called with curMem=76219, maxMem=560497950
18/04/23 04:11:30 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 1440.0 B, free 534.5 MB)
18/04/23 04:11:30 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on localhost:36228 (size: 1440.0 B, free: 534.5 MB)
18/04/23 04:11:30 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:861
18/04/23 04:11:30 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 2 (ShuffledRDD[4] at reduceByKey at NetworkWordCount.scala:55)
18/04/23 04:11:30 INFO TaskSchedulerImpl: Adding task set 2.0 with 1 tasks
18/04/23 04:11:30 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, ANY, 2721 bytes)
18/04/23 04:11:30 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
18/04/23 04:11:30 INFO Executor: Fetching http://192.168.19.131:55775/jars/spark-examples-1.5.1-hadoop2.4.0.jar with timestamp 1524471084606
18/04/23 04:11:30 INFO Utils: Fetching http://192.168.19.131:55775/jars/spark-examples-1.5.1-hadoop2.4.0.jar to /tmp/spark-b6aec1ea-9a70-4814-b1f8-e752c27b9cee/userFiles-63aba9cd-16f7-4843-81b6-ec48b9b6df67/fetchFileTemp3110460484590096544.tmp
18/04/23 04:11:31 INFO JobScheduler: Added jobs for time 1524471091000 ms
18/04/23 04:11:31 INFO Executor: Adding file:/tmp/spark-b6aec1ea-9a70-4814-b1f8-e752c27b9cee/userFiles-63aba9cd-16f7-4843-81b6-ec48b9b6df67/spark-examples-1.5.1-hadoop2.4.0.jar to class loader
18/04/23 04:11:31 INFO RecurringTimer: Started timer for BlockGenerator at time 1524471091600
18/04/23 04:11:31 INFO BlockGenerator: Started BlockGenerator
18/04/23 04:11:31 INFO BlockGenerator: Started block pushing thread
18/04/23 04:11:31 INFO ReceiverTracker: Registered receiver for stream 0 from 192.168.19.131:48823
18/04/23 04:11:31 INFO ReceiverSupervisorImpl: Starting receiver
18/04/23 04:11:31 INFO ReceiverSupervisorImpl: Called receiver onStart
18/04/23 04:11:31 INFO ReceiverSupervisorImpl: Waiting for receiver to be stopped
18/04/23 04:11:31 INFO SocketReceiver: Connecting to slaver1:9999
18/04/23 04:11:31 INFO SocketReceiver: Connected to slaver1:9999
18/04/23 04:11:32 INFO JobScheduler: Added jobs for time 1524471092000 ms
18/04/23 04:11:33 INFO JobScheduler: Added jobs for time 1524471093000 ms
18/04/23 04:11:34 INFO JobScheduler: Added jobs for time 1524471094000 ms
18/04/23 04:11:35 INFO JobScheduler: Added jobs for time 1524471095000 ms
18/04/23 04:11:36 INFO JobScheduler: Added jobs for time 1524471096000 ms
18/04/23 04:11:37 INFO JobScheduler: Added jobs for time 1524471097000 ms
18/04/23 04:11:38 INFO JobScheduler: Added jobs for time 1524471098000 ms
18/04/23 04:11:39 INFO JobScheduler: Added jobs for time 1524471099000 ms
18/04/23 04:11:40 INFO JobScheduler: Added jobs for time 1524471100000 ms
18/04/23 04:11:41 INFO JobScheduler: Added jobs for time 1524471101000 ms
18/04/23 04:11:42 INFO JobScheduler: Added jobs for time 1524471102000 ms
18/04/23 04:11:43 INFO JobScheduler: Added jobs for time 1524471103000 ms
18/04/23 04:11:44 INFO JobScheduler: Added jobs for time 1524471104000 ms
18/04/23 04:11:45 INFO JobScheduler: Added jobs for time 1524471105000 ms
18/04/23 04:11:46 INFO JobScheduler: Added jobs for time 1524471106000 ms
18/04/23 04:11:47 INFO JobScheduler: Added jobs for time 1524471107000 ms
18/04/23 04:11:48 INFO JobScheduler: Added jobs for time 1524471108000 ms
18/04/23 04:11:49 INFO JobScheduler: Added jobs for time 1524471109000 ms
18/04/23 04:11:50 INFO JobScheduler: Added jobs for time 1524471110000 ms
18/04/23 04:11:51 INFO JobScheduler: Added jobs for time 1524471111000 ms
18/04/23 04:11:52 INFO JobScheduler: Added jobs for time 1524471112000 ms
18/04/23 04:11:53 INFO JobScheduler: Added jobs for time 1524471113000 ms

2. The startup log is shown above; now for the problem. The job was started in another window with [hadoop@slaver1 spark-1.5.1-bin-hadoop2.4]$ ./bin/run-example streaming.NetworkWordCount 192.168.19.131 9999 (the two-terminal setup is recapped after the log below). When text such as hello world hello world hadoop world spark world flume world hello world is typed into the nc -lk 9999 window, the job window keeps printing output like the following, but never any word counts:

18/04/23 04:12:25 INFO MemoryStore: ensureFreeSpace(79) called with curMem=77659, maxMem=560497950
18/04/23 04:12:25 INFO MemoryStore: Block input-0-1524471145600 stored as bytes in memory (estimated size 79.0 B, free 534.5 MB)
18/04/23 04:12:25 INFO BlockManagerInfo: Added input-0-1524471145600 in memory on localhost:36228 (size: 79.0 B, free: 534.5 MB)
18/04/23 04:12:25 INFO BlockGenerator: Pushed block input-0-
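
For reference, a sketch of the two-terminal setup used above (the install path is the one visible in the startup log):

    # Terminal 1: a netcat server that feeds typed lines into port 9999
    nc -lk 9999

    # Terminal 2: the streaming word count, reading from that socket
    cd /home/hadoop/soft/spark-1.5.1-bin-hadoop2.4
    ./bin/run-example streaming.NetworkWordCount slaver1 9999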

3. Solution: give the virtual machine more than one CPU core and run again, and the word counts show up. With the VM set to 2 cores it already runs quickly. At first I had 1 GB of memory; raising it to 2 GB did not solve anything, but changing the VM to 2 cores fixed the problem. The underlying cause is that a receiver-based Spark Streaming application needs more cores than it has receivers: the socket receiver runs as a long-lived task that permanently occupies one core, so on a single core the batches are only queued ("Added jobs for time ...") and never processed, and nothing is ever printed.
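
The core count matters because bin/run-example submits with master local[*] by default in the Spark 1.x scripts, so the example gets exactly as many task threads as the VM exposes. Below is a minimal standalone sketch of the same word count, close to but not verbatim the shipped NetworkWordCount.scala (the object name and hard-coded host/port are mine); the key point is local[2], which reserves one thread for the receiver and one for processing:

    import org.apache.spark.SparkConf
    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object LocalNetworkWordCount {
      def main(args: Array[String]): Unit = {
        // local[2]: one thread for the long-running socket receiver task,
        // at least one more for the 1-second batches. With local[1] the
        // receiver takes the only slot and no batch ever executes.
        val conf = new SparkConf()
          .setMaster("local[2]")
          .setAppName("NetworkWordCount")
        val ssc = new StreamingContext(conf, Seconds(1))

        // Lines arriving from the nc -lk 9999 server.
        val lines = ssc.socketTextStream("slaver1", 9999,
          StorageLevel.MEMORY_AND_DISK_SER)

        // Split into words, count each word within the batch, print the counts.
        val wordCounts = lines.flatMap(_.split(" ")).map(w => (w, 1)).reduceByKey(_ + _)
        wordCounts.print()

        ssc.start()
        ssc.awaitTermination()
      }
    }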

Running it again, the result looks like this:

INFO JobScheduler: Added jobs for time 1524468752000 ms
INFO MemoryStore: Block input-0-1524469143000 stored as bytes in memory
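
Two quick checks, neither taken from the original post: confirm the guest OS actually sees the added core, and, as an alternative to resizing the VM, force two local task threads through the MASTER variable that the Spark 1.x run-example script consults (it defaults to local[*], which on a single-core box means local[1]):

    # Count the cores visible to the guest OS; it should report 2 after the change.
    grep -c processor /proc/cpuinfo

    # Force two local task threads regardless of the physical core count.
    MASTER=local[2] ./bin/run-example streaming.NetworkWordCount slaver1 9999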
