[Spark][Kafka] Experimenting with Kafka topic creation and deletion
For installing ZooKeeper and Kafka, refer to:
http://www.cnblogs.com/caoguo/p/5958608.html
Following that guide, I installed Kafka in my personal pseudo-distributed environment.
Confirm that ZooKeeper is running:
$ service zookeeper-server status
zookeeper-server is running
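As a quick cross-check (a sketch, assuming the default client port 2181 and that nc is installed), ZooKeeper's four-letter command can also be used:
$ echo ruok | nc localhost 2181     # a healthy server replies "imok"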
Start Kafka:
[training@localhost ~]$ /etc/init.d/kafka start
Starting Kafka:/sbin/runuser: cannot set groups: Operation not permitted
done.
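To confirm the broker actually came up and registered itself in ZooKeeper (a sketch; the broker id and ZooKeeper chroot are assumed to be the defaults):
$ jps | grep -i kafka                                         # the Kafka broker JVM should be listed
$ zookeeper-client -server localhost:2181 ls /brokers/ids     # should show the registered broker id, e.g. [0]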
Create a topic:
# ./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test333
Created topic "test333".
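The same script's --describe option shows the partition and replica assignment of the new topic (flags as in Kafka 0.10.x):
# ./kafka-topics.sh --describe --zookeeper localhost:2181 --topic test333    # prints leader, replicas and ISR per partition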
Verify the topic exists:
[root@localhost bin]# ./kafka-topics.sh --list --zookeeper localhost:2181
test333
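Optionally, a quick end-to-end check with the console producer and consumer (a sketch, assuming the broker listens on the default port 9092):
# ./kafka-console-producer.sh --broker-list localhost:9092 --topic test333
# ./kafka-console-consumer.sh --zookeeper localhost:2181 --topic test333 --from-beginning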
Delete the topic:
# ./kafka-topics.sh --delete --zookeeper localhost:2181 --topic test333
Topic test333 is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.
List the topics again:
# ./kafka-topics.sh --list --zookeeper localhost:2181
test333 - marked for deletion
Because delete.topic.enable=true was not set in the Kafka configuration file, the topic was only marked for deletion ("marked for deletion") instead of being removed. Even after setting delete.topic.enable=true, the topic still was not actually deleted.
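For reference, the setting goes into the broker's server.properties and only takes effect after a broker restart (a sketch; the config path is assumed from this installation under /usr/local/kafka_2.11-0.10.0.1):
# echo "delete.topic.enable=true" >> /usr/local/kafka_2.11-0.10.0.1/config/server.properties
# /etc/init.d/kafka restart    # or stop and start if the init script has no restart target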
Delete it permanently:
Run rmr /brokers/topics/test333 in the ZooKeeper client:
# zookeeper-client
Connecting to localhost:2181
2017-10-15 17:40:54,251 [myid:] - INFO [main:Environment@100] - Client environment:zookeeper.version=3.4.5-cdh5.7.0--1, built on 03/23/2016 18:30 GMT
2017-10-15 17:40:54,311 [myid:] - INFO [main:Environment@100] - Client environment:host.name=localhost
2017-10-15 17:40:54,312 [myid:] - INFO [main:Environment@100] - Client environment:java.version=1.8.0_60
2017-10-15 17:40:54,359 [myid:] - INFO [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
2017-10-15 17:40:54,360 [myid:] - INFO [main:Environment@100] - Client environment:java.home=/usr/java/jdk1.8.0_60/jre
2017-10-15 17:40:54,360 [myid:] - INFO [main:Environment@100] - Client environment:java.class.path=/usr/lib/zookeeper/bin/../build/classes:/usr/lib/zookeeper/bin/../build/lib/*.jar:/usr/lib/zookeeper/bin/../lib/slf4j-log4j12.jar:/usr/lib/zookeeper/bin/../lib/slf4j-log4j12-1.7.5.jar:/usr/lib/zookeeper/bin/../lib/slf4j-api-1.7.5.jar:/usr/lib/zookeeper/bin/../lib/netty-3.2.2.Final.jar:/usr/lib/zookeeper/bin/../lib/log4j-1.2.16.jar:/usr/lib/zookeeper/bin/../lib/jline-2.11.jar:/usr/lib/zookeeper/bin/../zookeeper-3.4.5-cdh5.7.0.jar:/usr/lib/zookeeper/bin/../src/java/lib/*.jar:/etc/zookeeper/conf::/etc/zookeeper/conf:/usr/lib/zookeeper/zookeeper.jar:/usr/lib/zookeeper/zookeeper-3.4.5-cdh5.7.0.jar:/usr/lib/zookeeper/lib/slf4j-log4j12.jar:/usr/lib/zookeeper/lib/log4j-1.2.16.jar:/usr/lib/zookeeper/lib/slf4j-log4j12-1.7.5.jar:/usr/lib/zookeeper/lib/slf4j-api-1.7.5.jar:/usr/lib/zookeeper/lib/jline-2.11.jar:/usr/lib/zookeeper/lib/netty-3.2.2.Final.jar
2017-10-15 17:40:54,365 [myid:] - INFO [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2017-10-15 17:40:54,365 [myid:] - INFO [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
2017-10-15 17:40:54,373 [myid:] - INFO [main:Environment@100] - Client environment:java.compiler=<NA>
2017-10-15 17:40:54,374 [myid:] - INFO [main:Environment@100] - Client environment:os.name=Linux
2017-10-15 17:40:54,374 [myid:] - INFO [main:Environment@100] - Client environment:os.arch=amd64
2017-10-15 17:40:54,376 [myid:] - INFO [main:Environment@100] - Client environment:os.version=2.6.32-504.30.3.el6.x86_64
2017-10-15 17:40:54,376 [myid:] - INFO [main:Environment@100] - Client environment:user.name=root
2017-10-15 17:40:54,394 [myid:] - INFO [main:Environment@100] - Client environment:user.home=/root
2017-10-15 17:40:54,395 [myid:] - INFO [main:Environment@100] - Client environment:user.dir=/usr/local/kafka_2.11-0.10.0.1/bin
2017-10-15 17:40:54,431 [myid:] - INFO [main:ZooKeeper@438] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@1a86f2f1
Welcome to ZooKeeper!
2017-10-15 17:40:54,709 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
2017-10-15 17:40:55,962 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:54572, server: localhost/127.0.0.1:2181
2017-10-15 17:40:56,140 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1235] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x15f1e700b330016, negotiated timeout = 30000
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] ls /admin/delete_topics
[test333]
[zk: localhost:2181(CONNECTED) 1] ls /brokers/topics
[zk: localhost:2181(CONNECTED) 3] rmr /brokers/topics/test333
[zk: localhost:2181(CONNECTED) 4]
[zk: localhost:2181(CONNECTED) 4] ls /brokers/topics
[]
[zk: localhost:2181(CONNECTED) 5] quit
Quitting...
2017-10-15 17:56:07,144 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@512] - EventThread shut down
2017-10-15 17:56:07,146 [myid:] - INFO [main:ZooKeeper@684] - Session: 0x15f1e700b330016 closed
#
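Removing the znode under /brokers/topics clears the topic from the broker metadata. For a complete manual cleanup, the pending-deletion marker and the on-disk log segments can also be removed (a hedged sketch, assuming the default log.dirs of /tmp/kafka-logs):
# zookeeper-client -server localhost:2181 rmr /admin/delete_topics/test333   # drop the pending-deletion marker
# rm -rf /tmp/kafka-logs/test333-*                                           # remove the topic's partition directories on the broker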