Apache Kafka Distributed Message Queue Middleware: Installation and Configuration (reprint)
This article walks through the setup from one ZooKeeper plus one Kafka broker up to a cluster of three ZooKeepers plus two Kafka brokers.
Kafka depends on ZooKeeper, so first download both ZooKeeper and Kafka:
- $ wget http://mirrors.hust.edu.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
- $ gzip -d zookeeper-3.4.6.tar.gz
- $ tar -xvf zookeeper-3.4.6.tar
- $ wget http://apache.fayea.com/apache-mirror/kafka/0.8.1.1/kafka_2.8.0-0.8.1.1.tgz
- $ gtar xvzf kafka_2.8.0-0.8.1.1.tgz
On CentOS, local experiments may run into puzzling problems, usually because the hostname cannot be resolved correctly. To head this off, first check the local hostname:
- $ hostname
- HOME
Then add a local resolution entry to /etc/hosts:
- 127.0.0.1 HOME
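As a quick, optional sanity check (HOME is just the example hostname from above), confirm the new entry actually resolves:
- $ getent hosts HOME
- 127.0.0.1       HOME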
Configuration: one ZooKeeper + one Kafka broker
Rename zoo_sample.cfg under zookeeper/conf/ to zoo.cfg; ZooKeeper reads this file by default. The defaults are fine for now, so no edits are needed.
- $mv zookeeper-3.4.6/conf/zoo_sample.cfg zookeeper-3.4.6/conf/zoo.cfg
Start the ZooKeeper service. Once started successfully, ZooKeeper listens on port 2181.
- $ zookeeper-3.4.6/bin/zkServer.sh start
- JMX enabled by default
- Using config: /home/wj/event/zookeeper-3.4.6/bin/../conf/zoo.cfg
- Starting zookeeper ... STARTED
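To confirm ZooKeeper is actually answering on 2181, the built-in four-letter-word command ruok can be used (supported in 3.4.x); a healthy server replies imok:
- $ echo ruok | nc localhost 2181
- imok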
Start the Kafka service. Once started successfully, Kafka listens on port 9092.
- $ kafka_2.8.0-0.8.1.1/bin/kafka-server-start.sh kafka_2.8.0-0.8.1.1/config/server.properties
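As an optional check that the broker came up and registered itself in ZooKeeper (the stock server.properties uses broker.id=0), list /brokers/ids with the ZooKeeper CLI:
- $ zookeeper-3.4.6/bin/zkCli.sh -server localhost:2181
- [zk: localhost:2181(CONNECTED) 0] ls /brokers/ids
- [0]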
Now run a quick test.
- # Connect to ZooKeeper and create a topic named test; replication-factor and partitions are explained later, so set both to 1 for now
- $ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
- Created topic "test".
- # List the topics that have been created
- $ bin/kafka-topics.sh --list --zookeeper localhost:2181
- test
- # Show this topic's attributes
- $ bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test
- Topic:test PartitionCount:1 ReplicationFactor:1 Configs:
- Topic: test Partition: 0 Leader: 0 Replicas: 0 Isr: 0
- # Have a producer connect to the Kafka broker and publish a message
- $ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
- SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
- SLF4J: Defaulting to no-operation (NOP) logger implementation
- SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
- Hello World
The consumer connects to ZooKeeper and fetches the message:
- $ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
- SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
- SLF4J: Defaulting to no-operation (NOP) logger implementation
- SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
- Hello World
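The console producer reads from stdin, so a message can also be piped in non-interactively, which is convenient for scripted smoke tests (same topic and broker address as above):
- $ echo "Hello World again" | kafka_2.8.0-0.8.1.1/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test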
Configuration: one ZooKeeper + two Kafka brokers
To avoid a single point of failure on the Kafka broker side, run more than one broker. First copy config/server.properties in the Kafka directory to config/server-2.properties, then edit it as annotated below.
- # The id of the broker. This must be set to a unique integer for each broker.
- broker.id=1 # Each Kafka broker must have a unique ID
- ############################# Socket Server Settings #############################
- # The port the socket server listens on
- port=19092 # Several brokers run on the same machine here, so different ports are used to tell them apart
- # Hostname the broker will bind to. If not set, the server will bind to all interfaces
- #host.name=localhost # With multiple network interfaces, different brokers can also be bound to different interfaces
- ############################# Log Basics #############################
- # A comma separated list of directories under which to store log files
- log.dirs=/tmp/kafka-logs-2 # Brokers sharing one machine must use different log directories to avoid conflicts
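A minimal sketch of generating server-2.properties with sed instead of editing by hand, assuming the stock 0.8.1.1 defaults (broker.id=0, port=9092, log.dirs=/tmp/kafka-logs):
- $ cp kafka_2.8.0-0.8.1.1/config/server.properties kafka_2.8.0-0.8.1.1/config/server-2.properties
- $ sed -i 's/^broker.id=0/broker.id=1/; s/^port=9092/port=19092/; s|^log.dirs=.*|log.dirs=/tmp/kafka-logs-2|' kafka_2.8.0-0.8.1.1/config/server-2.properties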
Now start this second broker with the newly created configuration file.
- $ kafka_2.8.0-0.8.1.1/bin/kafka-server-start.sh kafka_2.8.0-0.8.1.1/config/server-2.properties
Next create a new topic. replication-factor specifies on how many brokers the topic should be stored; here it is set to 2, meaning the topic is kept on both brokers.
- $ kafka_2.8.0-0.8.1.1/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 2 --partitions 1 --topic test2
Then inspect this topic's attributes.
- $ kafka_2.8.0-0.8.1.1/bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test2
- Topic:test2 PartitionCount:1 ReplicationFactor:2 Configs:
- Topic: test2 Partition: 0 Leader: 0 Replicas: 0,1 Isr: 0,1
- Leader: when multiple brokers hold the same topic, only one broker at a time handles reads and writes for a given partition; the others act as live replicas. The broker responsible for reads and writes is the leader.
- Replicas: partition 0 of this topic is stored on brokers 0 and 1.
- Isr: the brokers that are currently in sync; Isr is a subset of Replicas.
Publish a new message to the test2 topic to verify that everything works.
- $ kafka_2.8.0-0.8.1.1/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test2
- SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
- SLF4J: Defaulting to no-operation (NOP) logger implementation
- SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
- HHH
- $ kafka_2.8.0-0.8.1.1/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic test2
- SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
- SLF4J: Defaulting to no-operation (NOP) logger implementation
- SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
- HHH
Now kill the first broker to simulate a crash of that node.
- $ ps aux | grep server.properties
- user 2620 1.5 5.6 2082704 192424 pts/1 Sl+ 08:57 0:25 java
- $ kill 2620
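Equivalently, the broker's PID can be found with pgrep instead of ps | grep (a sketch; the pattern assumes broker 0 was started from config/server.properties as shown earlier):
- $ pgrep -f 'kafka\.Kafka.*server\.properties'
- 2620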
Query the topic's attributes again: the leader has switched, and broker 0 has disappeared from the Isr.
- $ kafka_2.8.0-0.8.1.1/bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test2
- Topic:test2 PartitionCount:1 ReplicationFactor:2 Configs:
- Topic: test2 Partition: 0 Leader: 1 Replicas: 0,1 Isr: 1
- # Publish another message to test2 through broker 1
- $ kafka_2.8.0-0.8.1.1/bin/kafka-console-producer.sh --broker-list localhost:19092 --topic test2
- SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
- SLF4J: Defaulting to no-operation (NOP) logger implementation
- SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
- Another message
- # Consuming again still works fine
- $ kafka_2.8.0-0.8.1.1/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic test2
- SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
- SLF4J: Defaulting to no-operation (NOP) logger implementation
- SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
- HHH
- Another message
Configuration: three ZooKeepers + two Kafka brokers
Likewise, ZooKeeper itself needs to run as a cluster to avoid a single point of failure. Because ZooKeeper re-elects a leader by majority vote after a node fails, at least three ZooKeeper servers are required to form a cluster, and an odd number (rather than an even one) is preferable.
The following demonstrates the simplest single-machine ZooKeeper cluster; for details see http://myjeeva.com/zookeeper-cluster-setup.html
- #!/bin/sh
- # Download
- wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
- gzip -d zookeeper-3.4.6.tar.gz
- tar xvf zookeeper-3.4.6.tar
- # Rename zoo_sample.cfg to zoo.cfg
- mv zookeeper-3.4.6/conf/zoo_sample.cfg zookeeper-3.4.6/conf/zoo.cfg
- # Create a directory for the cluster
- sudo mkdir /usr/zookeeper-cluster
- sudo chown -R jerry:jerry /usr/zookeeper-cluster
- # Three subdirectories, one per ZooKeeper instance
- mkdir /usr/zookeeper-cluster/server1
- mkdir /usr/zookeeper-cluster/server2
- mkdir /usr/zookeeper-cluster/server3
- # Three directories for each instance's data files
- mkdir /usr/zookeeper-cluster/data
- mkdir /usr/zookeeper-cluster/data/server1
- mkdir /usr/zookeeper-cluster/data/server2
- mkdir /usr/zookeeper-cluster/data/server3
- # Three directories for each instance's log files
- mkdir /usr/zookeeper-cluster/log
- mkdir /usr/zookeeper-cluster/log/server1
- mkdir /usr/zookeeper-cluster/log/server2
- mkdir /usr/zookeeper-cluster/log/server3
- # In each data directory, create a myid file containing a unique server ID; it is referenced by the configuration below
- echo '1' > /usr/zookeeper-cluster/data/server1/myid
- echo '2' > /usr/zookeeper-cluster/data/server2/myid
- echo '3' > /usr/zookeeper-cluster/data/server3/myid
- # Make three copies of ZooKeeper
- cp -rf zookeeper-3.4.6/* /usr/zookeeper-cluster/server1
- cp -rf zookeeper-3.4.6/* /usr/zookeeper-cluster/server2
- cp -rf zookeeper-3.4.6/* /usr/zookeeper-cluster/server3
Then edit each ZooKeeper instance's zoo.cfg.
Point dataDir and dataLogDir at each instance's own directories, and make sure clientPort does not clash with the other instances (all three run on one machine in this demo).
Finally, append the following lines:
- server.1=0.0.0.0:2888:3888
- server.2=0.0.0.0:12888:13888
- server.3=0.0.0.0:22888:23888
server.X=IP:port1:port2
X is the server ID written in the myid file in that instance's data directory.
IP is the address the instance binds to; since this is a local demo, all three bind locally (0.0.0.0).
port1 is the quorum port.
port2 is the leader-election port.
Because all three instances run on one machine, each must use different ports to avoid conflicts.
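A minimal sketch of scripting those per-instance edits, assuming the directory layout created above and the client ports 2181/12181/22181 chosen for this demo:
- for i in 1 2 3; do
-   cfg=/usr/zookeeper-cluster/server$i/conf/zoo.cfg
-   sed -i "s|^dataDir=.*|dataDir=/usr/zookeeper-cluster/data/server$i|" $cfg
-   echo "dataLogDir=/usr/zookeeper-cluster/log/server$i" >> $cfg
-   sed -i "s|^clientPort=.*|clientPort=$(( (i-1)*10000 + 2181 ))|" $cfg
-   printf 'server.1=0.0.0.0:2888:3888\nserver.2=0.0.0.0:12888:13888\nserver.3=0.0.0.0:22888:23888\n' >> $cfg
- done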
The modified files are shown below.
/usr/zookeeper-cluster/server1/conf/zoo.cfg
- # The number of milliseconds of each tick
- tickTime=2000
- # The number of ticks that the initial
- # synchronization phase can take
- initLimit=10
- # The number of ticks that can pass between
- # sending a request and getting an acknowledgement
- syncLimit=5
- # the directory where the snapshot is stored.
- # do not use /tmp for storage, /tmp here is just
- # example sakes.
- dataDir=/usr/zookeeper-cluster/data/server1
- dataLogDir=/usr/zookeeper-cluster/log/server1
- # the port at which the clients will connect
- clientPort=2181
- # the maximum number of client connections.
- # increase this if you need to handle more clients
- #maxClientCnxns=60
- #
- # Be sure to read the maintenance section of the
- # administrator guide before turning on autopurge.
- #
- # http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
- #
- # The number of snapshots to retain in dataDir
- #autopurge.snapRetainCount=3
- # Purge task interval in hours
- # Set to "0" to disable auto purge feature
- #autopurge.purgeInterval=1
- server.1=0.0.0.0:2888:3888
- server.2=0.0.0.0:12888:13888
- server.3=0.0.0.0:22888:23888
/usr/zookeeper-cluster/server2/conf/zoo.cfg
- # The number of milliseconds of each tick
- tickTime=2000
- # The number of ticks that the initial
- # synchronization phase can take
- initLimit=10
- # The number of ticks that can pass between
- # sending a request and getting an acknowledgement
- syncLimit=5
- # the directory where the snapshot is stored.
- # do not use /tmp for storage, /tmp here is just
- # example sakes.
- dataDir=/usr/zookeeper-cluster/data/server2
- dataLogDir=/usr/zookeeper-cluster/log/server2
- # the port at which the clients will connect
- clientPort=12181
- # the maximum number of client connections.
- # increase this if you need to handle more clients
- #maxClientCnxns=60
- #
- # Be sure to read the maintenance section of the
- # administrator guide before turning on autopurge.
- #
- # http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
- #
- # The number of snapshots to retain in dataDir
- #autopurge.snapRetainCount=3
- # Purge task interval in hours
- # Set to "0" to disable auto purge feature
- #autopurge.purgeInterval=1
- server.1=0.0.0.0:2888:3888
- server.2=0.0.0.0:12888:13888
- server.3=0.0.0.0:22888:23888
/usr/zookeeper-cluster/server3/conf/zoo.cfg
- # The number of milliseconds of each tick
- tickTime=2000
- # The number of ticks that the initial
- # synchronization phase can take
- initLimit=10
- # The number of ticks that can pass between
- # sending a request and getting an acknowledgement
- syncLimit=5
- # the directory where the snapshot is stored.
- # do not use /tmp for storage, /tmp here is just
- # example sakes.
- dataDir=/usr/zookeeper-cluster/data/server3
- dataLogDir=/usr/zookeeper-cluster/log/server3
- # the port at which the clients will connect
- clientPort=22181
- # the maximum number of client connections.
- # increase this if you need to handle more clients
- #maxClientCnxns=60
- #
- # Be sure to read the maintenance section of the
- # administrator guide before turning on autopurge.
- #
- # http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
- #
- # The number of snapshots to retain in dataDir
- #autopurge.snapRetainCount=3
- # Purge task interval in hours
- # Set to "0" to disable auto purge feature
- #autopurge.purgeInterval=1
- server.1=0.0.0.0:2888:3888
- server.2=0.0.0.0:12888:13888
- server.3=0.0.0.0:22888:23888
Then start the three ZooKeeper instances one after another.
- $ /usr/zookeeper-cluster/server1/bin/zkServer.sh start
- JMX enabled by default
- Using config: /usr/zookeeper-cluster/server1/bin/../conf/zoo.cfg
- Starting zookeeper ... STARTED
- $ /usr/zookeeper-cluster/server2/bin/zkServer.sh start
- JMX enabled by default
- Using config: /usr/zookeeper-cluster/server2/bin/../conf/zoo.cfg
- Starting zookeeper ... STARTED
- $ /usr/zookeeper-cluster/server3/bin/zkServer.sh start
- JMX enabled by default
- Using config: /usr/zookeeper-cluster/server3/bin/../conf/zoo.cfg
- Starting zookeeper ... STARTED
After startup, check each instance's status. Below, server2 has been elected leader while the other two are followers.
- $ /usr/zookeeper-cluster/server1/bin/zkServer.sh status
- JMX enabled by default
- Using config: /usr/zookeeper-cluster/server1/bin/../conf/zoo.cfg
- Mode: follower
- $ /usr/zookeeper-cluster/server2/bin/zkServer.sh status
- JMX enabled by default
- Using config: /usr/zookeeper-cluster/server2/bin/../conf/zoo.cfg
- Mode: leader
- $ /usr/zookeeper-cluster/server3/bin/zkServer.sh status
- JMX enabled by default
- Using config: /usr/zookeeper-cluster/server3/bin/../conf/zoo.cfg
- Mode: follower
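The same check can be done without the control scripts by sending ZooKeeper's stat four-letter-word command to each client port (ports as configured above):
- $ for p in 2181 12181 22181; do echo stat | nc localhost $p | grep Mode; done
- Mode: follower
- Mode: leader
- Mode: follower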
Next, edit Kafka's configuration files kafka_2.8.0-0.8.1.1/config/server.properties and kafka_2.8.0-0.8.1.1/config/server-2.properties, adding all three ZooKeeper addresses to zookeeper.connect, as follows:
- zookeeper.connect=localhost:2181,localhost:12181,localhost:22181
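A small sed sketch for applying that change to both files at once, assuming they still contain the stock line zookeeper.connect=localhost:2181:
- $ sed -i 's/^zookeeper.connect=.*/zookeeper.connect=localhost:2181,localhost:12181,localhost:22181/' kafka_2.8.0-0.8.1.1/config/server.properties kafka_2.8.0-0.8.1.1/config/server-2.properties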
Start the two Kafka brokers.
- $ kafka_2.8.0-0.8.1.1/bin/kafka-server-start.sh kafka_2.8.0-0.8.1.1/config/server.properties
- $ kafka_2.8.0-0.8.1.1/bin/kafka-server-start.sh kafka_2.8.0-0.8.1.1/config/server-2.properties
Now verify by consuming an existing test topic (test3 here) through each of the three ZooKeeper addresses:
- $ kafka_2.8.0-0.8.1.1/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test3 --from-beginning
- SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
- SLF4J: Defaulting to no-operation (NOP) logger implementation
- SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
- fhsjdfhdsa
- fjdsljfdsadsfdas
- $ kafka_2.8.0-0.8.1.1/bin/kafka-console-consumer.sh --zookeeper localhost:12181 --topic test3 --from-beginning
- SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
- SLF4J: Defaulting to no-operation (NOP) logger implementation
- SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
- fhsjdfhdsa
- fjdsljfdsadsfdas
- $ kafka_2.8.0-0.8.1.1/bin/kafka-console-consumer.sh --zookeeper localhost:22181 --topic test3 --from-beginning
- SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
- SLF4J: Defaulting to no-operation (NOP) logger implementation
- SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
- fhsjdfhdsa
- fjdsljfdsadsfdas
Now simulate the leader going down by killing the server2 ZooKeeper instance directly.
- $ ps aux | grep server2
- user 2493 1.0 1.8 1661116 53792 pts/0 Sl 14:46 0:02 java
- $ kill 2493
Query each ZooKeeper's status again; the leader has changed.
- $ /usr/zookeeper-cluster/server3/bin/zkServer.sh status
- JMX enabled by default
- Using config: /usr/zookeeper-cluster/server3/bin/../conf/zoo.cfg
- Mode: leader
- $ /usr/zookeeper-cluster/server1/bin/zkServer.sh status
- JMX enabled by default
- Using config: /usr/zookeeper-cluster/server1/bin/../conf/zoo.cfg
- Mode: follower
- $ /usr/zookeeper-cluster/server2/bin/zkServer.sh status
- JMX enabled by default
- Using config: /usr/zookeeper-cluster/server2/bin/../conf/zoo.cfg
- Error contacting service. It is probably not running.
Verify again: the Kafka cluster still works correctly, and new topics can still be created through any of the surviving ZooKeeper addresses.
- $ kafka_2.8.0-0.8.1.1/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test3 --from-beginning
- SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
- SLF4J: Defaulting to no-operation (NOP) logger implementation
- SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
- fhsjdfhdsa
- fjdsljfdsadsfdas
- $ kafka_2.8.0-0.8.1.1/bin/kafka-console-consumer.sh --zookeeper localhost:22181 --topic test3 --from-beginning
- SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
- SLF4J: Defaulting to no-operation (NOP) logger implementation
- SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
- fhsjdfhdsa
- fjdsljfdsadsfdas
- $ kafka_2.8.0-0.8.1.1/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 2 --partitions 2 --topic test5
- Created topic "test5".
- $ kafka_2.8.0-0.8.1.1/bin/kafka-topics.sh --create --zookeeper localhost:22181 --replication-factor 2 --partitions 2 --topic test6
- Created topic "test6".
Reprinted. Original article: http://blog.csdn.net/wangjia184/article/details/37921183