Kafka quickstart
http://kafka.apache.org/
http://kafka.apache.org/downloads

> cd /root/kafuka/kafka_2.12-0.11.0.0
# start ZooKeeper, then the Kafka broker, in the background
> nohup bin/zookeeper-server-start.sh config/zookeeper.properties &
> nohup bin/kafka-server-start.sh config/server.properties &
# create and list a topic
> bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
> bin/kafka-topics.sh --list --zookeeper localhost:2181
# producer
> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
# consumer
> bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
Quickstart
This tutorial assumes you are starting fresh and have no existing Kafka or ZooKeeper data. Since Kafka console scripts are different for Unix-based and Windows platforms, on Windows platforms use bin\windows\ instead of bin/, and change the script extension to .bat.
Step 1: Download the code
Download the 0.11.0.0 release and un-tar it.
> tar -xzf kafka_2.11-0.11.0.0.tgz
> cd kafka_2.11-0.11.0.0
Step 2: Start the server
Kafka uses ZooKeeper so you need to first start a ZooKeeper server if you don't already have one. You can use the convenience script packaged with Kafka to get a quick-and-dirty single-node ZooKeeper instance.
> bin/zookeeper-server-start.sh config/zookeeper.properties
[2013-04-22 15:01:37,495] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
...
Now start the Kafka server:
> bin/kafka-server-start.sh config/server.properties
[2013-04-22 15:01:47,028] INFO Verifying properties (kafka.utils.VerifiableProperties)
[2013-04-22 15:01:47,051] INFO Property socket.send.buffer.bytes is overridden to 1048576 (kafka.utils.VerifiableProperties)
...
Step 3: Create a topic
Let's create a topic named "test" with a single partition and only one replica:
> bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
We can now see that topic if we run the list topic command:
> bin/kafka-topics.sh --list --zookeeper localhost:2181
test
Alternatively, instead of manually creating topics you can also configure your brokers to auto-create topics when a non-existent topic is published to.
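If you want to try auto-creation, the broker setting involved is auto.create.topics.enable, which defaults to true. A minimal sketch of checking and setting it, assuming the stock config layout (the broker must be restarted for the change to take effect):

# if the property is absent from the file, the broker default (true) applies
> grep auto.create.topics.enable config/server.properties
# enable it explicitly, then restart the broker
> echo "auto.create.topics.enable=true" >> config/server.properties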
Step 4: Send some messages
Kafka comes with a command line client that will take input from a file or from standard input and send it out as messages to the Kafka cluster. By default, each line will be sent as a separate message.
Run the producer and then type a few messages into the console to send to the server.
> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
This is a message
This is another message
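Because the producer reads standard input, you can equally pipe a file through it instead of typing. A quick sketch, where messages.txt is a hypothetical file with one message per line:

# hypothetical input file, one message per line
> printf 'first message\nsecond message\n' > messages.txt
> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test < messages.txt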
Step 5: Start a consumer
Kafka also has a command line consumer that will dump out messages to standard output.
> bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
This is a message
This is another message
If you have each of the above commands running in a different terminal then you should now be able to type messages into the producer terminal and see them appear in the consumer terminal.
All of the command line tools have additional options; running the command with no arguments will display usage information documenting them in more detail.
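For example, running the topics tool with no arguments prints its option list (output abridged here; the exact text varies by version):

> bin/kafka-topics.sh
Create, delete, describe, or change a topic.
...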
Step 6: Setting up a multi-broker cluster
So far we have been running against a single broker, but that's no fun. For Kafka, a single broker is just a cluster of size one, so nothing much changes other than starting a few more broker instances. But just to get a feel for it, let's expand our cluster to three nodes (still all on our local machine).
First we make a config file for each of the brokers (on Windows use the copy command instead):
> cp config/server.properties config/server-1.properties
> cp config/server.properties config/server-2.properties
Now edit these new files and set the following properties:
config/server-1.properties:
    broker.id=1
    listeners=PLAINTEXT://:9093
    log.dir=/tmp/kafka-logs-1

config/server-2.properties:
    broker.id=2
    listeners=PLAINTEXT://:9094
    log.dir=/tmp/kafka-logs-2
The broker.id property is the unique and permanent name of each node in the cluster. We have to override the port and log directory only because we are running these all on the same machine and we want to keep the brokers from all trying to register on the same port or overwrite each other's data.
We already have ZooKeeper and our single node started, so we just need to start the two new nodes:
> bin/kafka-server-start.sh config/server-1.properties &
...
> bin/kafka-server-start.sh config/server-2.properties &
...
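Optionally, you can confirm that all three brokers registered themselves in ZooKeeper using the bundled shell. A sketch (the command is piped in, since the shell is interactive); the output should end with something along the lines of [0, 1, 2]:

# list the broker ids registered under /brokers/ids
> echo "ls /brokers/ids" | bin/zookeeper-shell.sh localhost:2181
...
[0, 1, 2]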
Now create a new topic with a replication factor of three:
> bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic
Okay, but now that we have a cluster, how can we know which broker is doing what? To see that, run the "describe topics" command:
> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
Topic:my-replicated-topic   PartitionCount:1    ReplicationFactor:3 Configs:
    Topic: my-replicated-topic  Partition: 0    Leader: 1   Replicas: 1,2,0 Isr: 1,2,0
Here is an explanation of the output. The first line gives a summary of all the partitions; each additional line gives information about one partition. Since we have only one partition for this topic, there is only one such line.
- "leader" is the node responsible for all reads and writes for the given partition. Each node will be the leader for a randomly selected portion of the partitions.
- "replicas" is the list of nodes that replicate the log for this partition regardless of whether they are the leader or even if they are currently alive.
- "isr" is the set of "in-sync" replicas. This is the subset of the replicas list that is currently alive and caught-up to the leader.
Note that in my example node 1 is the leader for the only partition of the topic.
We can run the same command on the original topic we created to see where it is:
> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test
Topic:test  PartitionCount:1    ReplicationFactor:1 Configs:
    Topic: test Partition: 0    Leader: 0   Replicas: 0 Isr: 0
So there is no surprise there—the original topic has no replicas and is on server 0, the only server in our cluster when we created it.
Let's publish a few messages to our new topic:
> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-replicated-topic
...
my test message 1
my test message 2
^C
Now let's consume these messages:
> bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic my-replicated-topic
...
my test message 1
my test message 2
^C
Now let's test out fault-tolerance. Broker 1 was acting as the leader so let's kill it:
> ps aux | grep server-1.properties
7564 ttys002    0:15.91 /System/Library/Frameworks/JavaVM.framework/Versions/1.8/Home/bin/java...
> kill -9 7564
On Windows use:
> wmic process get processid,caption,commandline | find "java.exe" | find "server-1.properties"
java.exe    java  -Xmx1G -Xms1G -server -XX:+UseG1GC ... build\libs\kafka_2.11-0.11.0.0.jar" kafka.Kafka config\server-1.properties    644
> taskkill /pid 644 /f
Leadership has switched to one of the followers, and node 1 is no longer in the in-sync replica set:
> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
Topic:my-replicated-topic   PartitionCount:1    ReplicationFactor:3 Configs:
    Topic: my-replicated-topic  Partition: 0    Leader: 2   Replicas: 1,2,0 Isr: 2,0
But the messages are still available for consumption even though the leader that took the writes originally is down:
> bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic my-replicated-topic
...
my test message 1
my test message 2
^C
Step 7: Use Kafka Connect to import/export data
Writing data from the console and writing it back to the console is a convenient place to start, but you'll probably want to use data from other sources or export data from Kafka to other systems. For many systems, instead of writing custom integration code you can use Kafka Connect to import or export data.
Kafka Connect is a tool included with Kafka that imports and exports data to Kafka. It is an extensible tool that runs connectors, which implement the custom logic for interacting with an external system. In this quickstart we'll see how to run Kafka Connect with simple connectors that import data from a file to a Kafka topic and export data from a Kafka topic to a file.
First, we'll start by creating some seed data to test with:
> echo -e "foo\nbar" > test.txt
Next, we'll start two connectors running in standalone mode, which means they run in a single, local, dedicated process. We provide three configuration files as parameters. The first is always the configuration for the Kafka Connect process, containing common configuration such as the Kafka brokers to connect to and the serialization format for data. The remaining configuration files each specify a connector to create. These files include a unique connector name, the connector class to instantiate, and any other configuration required by the connector.
> bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-source.properties config/connect-file-sink.properties
These sample configuration files, included with Kafka, use the default local cluster configuration you started earlier and create two connectors: the first is a source connector that reads lines from an input file and produces each to a Kafka topic and the second is a sink connector that reads messages from a Kafka topic and produces each as a line in an output file.
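For reference, the bundled source connector config looks roughly like this (contents may differ slightly between releases):

> cat config/connect-file-source.properties
name=local-file-source
connector.class=FileStreamSource
tasks.max=1
file=test.txt
topic=connect-test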
During startup you'll see a number of log messages, including some indicating that the connectors are being instantiated. Once the Kafka Connect process has started, the source connector should start reading lines from test.txt and producing them to the topic connect-test, and the sink connector should start reading messages from the topic connect-test and writing them to the file test.sink.txt. We can verify the data has been delivered through the entire pipeline by examining the contents of the output file:
> cat test.sink.txt
foo
bar
Note that the data is being stored in the Kafka topic connect-test, so we can also run a console consumer to see the data in the topic (or use custom consumer code to process it):
> bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic connect-test --from-beginning
{"schema":{"type":"string","optional":false},"payload":"foo"}
{"schema":{"type":"string","optional":false},"payload":"bar"}
...
The connectors continue to process data, so we can add data to the file and see it move through the pipeline:
> echo "Another line" >> test.txt
You should see the line appear in the console consumer output and in the sink file.
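One simple way to watch that happen, assuming the same file names as above:

# leave this running while appending lines to test.txt
> tail -f test.sink.txt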
Step 8: Use Kafka Streams to process data
Kafka Streams is a client library for building mission-critical real-time applications and microservices, where the input and/or output data is stored in Kafka clusters. Kafka Streams combines the simplicity of writing and deploying standard Java and Scala applications on the client side with the benefits of Kafka's server-side cluster technology to make these applications highly scalable, elastic, fault-tolerant, distributed, and much more. This quickstart example will demonstrate how to run a streaming application coded in this library.
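The full Streams walkthrough lives in its own quickstart, but as a pointer: the distribution ships a WordCount example that can be launched with the bundled class runner. A sketch, assuming the examples are packaged as in the standard 0.11 distribution; the demo expects the input and output topics described in the Streams quickstart to have been created first:

# run the bundled WordCount streaming demo against the local cluster
> bin/kafka-run-class.sh org.apache.kafka.streams.examples.wordcount.WordCountDemo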