1. Install Zookeeper

Reference: Zookeeper download, installation, and startup

Zookeeper cluster setup: a single-machine pseudo-distributed cluster

2. Download Kafka

Go to http://kafka.apache.org/downloads

This guide uses kafka_2.11-1.0.1.tgz (Kafka 1.0.1 built for Scala 2.11).
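If you prefer to fetch it from the command line, older releases are kept on the Apache archive; a sketch (verify the exact URL on the downloads page, since mirror paths change):

wget https://archive.apache.org/dist/kafka/1.0.1/kafka_2.11-1.0.1.tgz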

3. Kafka Directory Layout

Extract it under /usr/local: tar -zxvf kafka_2.11-1.0.1.tgz

/bin: executable scripts for operating Kafka

/config: configuration files

/libs: dependency libraries

/logs: log output. Kafka splits its server-side logs into server, request, state, log-cleaner, and controller logs.

Create a directory for message data: cd /usr/local/kafka_2.11-1.0.1 && mkdir kafkaLogs

4. Configuration

4.1 Configure Zookeeper

(See the Zookeeper references in Section 1; the ensemble used below runs on 192.168.1.3, 192.168.1.5, and 192.168.1.9.)

4.2 Configure Kafka

Edit config/server.properties:

# Broker id, unique within the cluster; the other two machines use 1 and 2.
broker.id=0
# The interface Kafka binds to. Use this machine's internal IP here, otherwise binding the port fails.
# The other two machines use 192.168.1.5 and 192.168.1.9.
host.name=192.168.1.3
# The address registered in Zookeeper and advertised to clients; 118.212.149.51 is the public IP of 192.168.1.3.
# Without this, a Kafka deployed on a public-facing server cannot be reached from outside.
advertised.listeners=PLAINTEXT://118.212.149.51:9092
# IPs and ports of the Zookeeper ensemble.
zookeeper.connect=192.168.1.3:2181,192.168.1.5:2181,192.168.1.9:2181
# Message data directory: the kafkaLogs directory created above.
log.dirs=/usr/local/kafka_2.11-1.0.1/kafkaLogs
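On the other two machines only a few lines change; a minimal sketch of the differences (assuming the same install path, and that each machine's advertised.listeners points at its own public address):

# on 192.168.1.5
broker.id=1
host.name=192.168.1.5

# on 192.168.1.9
broker.id=2
host.name=192.168.1.9

# zookeeper.connect and log.dirs are the same on every broker.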


5. Start Kafka

5.1 Start the broker

From the bin directory:

./kafka-server-start.sh ../config/server.properties

If startup fails with an out-of-memory error: the default heap is 1 GB, which may be too large for a small machine. Edit bin/kafka-server-start.sh and change

export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"

to

export KAFKA_HEAP_OPTS="-Xmx256M -Xms128M"

This starts a single broker. For a Kafka cluster, start a broker on each machine the same way.
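To keep the broker running after the shell session ends, the start script also accepts a -daemon flag (output then goes to the broker's log files rather than the console):

./kafka-server-start.sh -daemon ../config/server.properties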

5.2 Check the broker ID in Zookeeper

Log in to Zookeeper:

./zkCli.sh -server 127.0.0.1:2181

View the broker's registration (the 0 here is the broker id configured above):

get /brokers/ids/0
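To list every broker currently registered (a quick health check for the cluster), list the children of that node; with all three brokers up you should see something like:

ls /brokers/ids
[0, 1, 2]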

6. Create a Topic

6.1 Create topic test1

./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test1

List all topics:

./kafka-topics.sh --list --zookeeper localhost:2181
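On the three-broker cluster above you can also spread a topic across brokers; for example, a hypothetical topic test3 with three partitions, each replicated three times (the replication factor cannot exceed the number of live brokers):

./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 3 --topic test3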

7. View Topic Details

./kafka-topics.sh --describe --zookeeper localhost:2181 --topic test1

The first line is a summary of the topic: its name (Topic), partition count (PartitionCount), replica count (ReplicationFactor), and configuration overrides (Config).

Each following line describes one partition of test1: the topic name (Topic), the partition number (Partition), the broker acting as leader for that partition (Leader), the brokers holding replicas (Replicas), and the Isr list (in-sync replicas). Think of the ISR as the bench of substitutes: not every broker qualifies. A broker must hold a replica of the partition, and that replica must be caught up with the leader.
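For the single-partition, single-replica test1 created above, the output should look roughly like this (Leader, Replicas, and Isr all name broker 0):

Topic:test1	PartitionCount:1	ReplicationFactor:1	Configs:
	Topic: test1	Partition: 0	Leader: 0	Replicas: 0	Isr: 0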

8. Produce Messages

./kafka-console-producer.sh --broker-list PLAINTEXT://47.xx.47.120:9092 --topic testTopic1

testTopic1 is the topic name.
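The console producer then shows a > prompt; each line you type is sent to the topic as one message:

> hello kafka
> another message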

9. Consume Messages

./kafka-console-consumer.sh --bootstrap-server PLAINTEXT://47.xx.xx.120:9092 --topic testTopic1 --from-beginning

10. Java Integration with Kafka

10.1 Add the dependencies

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>1.0.1</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>1.0.1</version>
</dependency>


10.2 Producer

import java.util.Properties;

import org.apache.kafka.clients.producer.*;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.log4j.Logger;

public class Producer {

    static Logger log = Logger.getLogger(Producer.class);

    private static final String TOPIC = "test";
    private static final String BROKER_LIST = "47.xx.xx.120:9092";
    private static KafkaProducer<String, String> producer = null;

    /* Initialize the producer */
    static {
        Properties configs = initConfig();
        producer = new KafkaProducer<String, String>(configs);
    }

    /* Initialize the configuration */
    private static Properties initConfig() {
        Properties properties = new Properties();
        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, BROKER_LIST);
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        return properties;
    }

    public static void main(String[] args) throws InterruptedException {
        // The record to send
        ProducerRecord<String, String> record = null;
        for (int i = 0; i < 5; i++) {
            record = new ProducerRecord<String, String>(TOPIC, "product value" + (int) (10 * Math.random()));
            // Send asynchronously; the callback fires when the broker acknowledges
            producer.send(record, new Callback() {
                @Override
                public void onCompletion(RecordMetadata recordMetadata, Exception e) {
                    if (null != e) {
                        log.error("send error: " + e.getMessage());
                    } else {
                        System.out.println(String.format("offset:%s,partition:%s", recordMetadata.offset(), recordMetadata.partition()));
                    }
                }
            });
        }
        producer.close();
    }
}
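Note that send() also returns a Future, so a blocking send is possible when the result is needed inline. A minimal sketch reusing the TOPIC and producer fields above (sendSync is a hypothetical helper, not part of the original class):

// Synchronous variant: block on the Future returned by send().
// A failure surfaces as an ExecutionException instead of going through a callback.
private static void sendSync(String value) throws Exception {
    ProducerRecord<String, String> record = new ProducerRecord<String, String>(TOPIC, value);
    RecordMetadata meta = producer.send(record).get();
    System.out.println(String.format("offset:%s,partition:%s", meta.offset(), meta.partition()));
}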

Start the producer; the console prints:

SLF4J: Actual binding is of type [ch.qos.logback.classic.util.ContextSelectorStaticBinder]
2019-08-01 13:52:22.494 INFO --- [ main] o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
acks = 1
batch.size = 16384
bootstrap.servers = [47.xx.xx.120:9092]
buffer.memory = 33554432
client.id =
compression.type = none
connections.max.idle.ms = 540000
enable.idempotence = false
interceptor.classes = null
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 0
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
2019-08-01 13:52:22.609 INFO --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version : 1.0.1
2019-08-01 13:52:22.609 INFO --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId : c0518aa65f25317e
2019-08-01 13:52:23.174 INFO --- [ main] o.a.k.clients.producer.KafkaProducer : [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.
offset:15,partition:0
offset:16,partition:0
offset:17,partition:0
offset:18,partition:0
offset:19,partition:0

Process finished with exit code 0


10.3 Consumer

import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.log4j.Logger;

public class Consumer {

    static Logger log = Logger.getLogger(Consumer.class);

    private static final String TOPIC = "test";
    private static final String BROKER_LIST = "47.xx.xx.120:9092";
    private static KafkaConsumer<String, String> consumer = null;

    /* Initialize the consumer and subscribe to the topic */
    static {
        Properties configs = initConfig();
        consumer = new KafkaConsumer<String, String>(configs);
        consumer.subscribe(Arrays.asList(TOPIC));
    }

    /* Initialize the configuration */
    private static Properties initConfig() {
        Properties properties = new Properties();
        properties.put("bootstrap.servers", BROKER_LIST);
        properties.put("group.id", "0");
        properties.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        properties.setProperty("enable.auto.commit", "true");
        properties.setProperty("auto.offset.reset", "earliest");
        return properties;
    }

    public static void main(String[] args) {
        while (true) {
            // Wait up to 10 ms for new records
            ConsumerRecords<String, String> records = consumer.poll(10);
            for (ConsumerRecord<String, String> record : records) {
                log.info(record);
            }
        }
    }
}
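Here enable.auto.commit is "true", so offsets are committed in the background every auto.commit.interval.ms (5000 ms by default, as the log below shows). If you would rather commit only after records have been processed, a sketch of the poll loop with manual commits (this assumes enable.auto.commit is set to "false" in initConfig):

while (true) {
    ConsumerRecords<String, String> records = consumer.poll(10);
    for (ConsumerRecord<String, String> record : records) {
        log.info(record);
    }
    // Commit the offsets returned by the last poll only after the batch
    // is processed; a crash before this line replays the batch instead of losing it.
    consumer.commitSync();
}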

After running the consumer, the console prints:

SLF4J: Actual binding is of type [ch.qos.logback.classic.util.ContextSelectorStaticBinder]
2019-08-01 13:52:43.837 INFO --- [ main] o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
auto.commit.interval.ms = 5000
auto.offset.reset = earliest
bootstrap.servers = [47.xx.xx.120:9092]
check.crcs = true
client.id =
connections.max.idle.ms = 540000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = 0
heartbeat.interval.ms = 3000
interceptor.classes = null
internal.leave.group.on.close = true
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 305000
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
2019-08-01 13:52:43.972 INFO --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version : 1.0.1
2019-08-01 13:52:43.972 INFO --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId : c0518aa65f25317e
2019-08-01 13:52:44.350 INFO --- [ main] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-1, groupId=0] Discovered group coordinator 47.xx.xx.120:9092 (id: 2147483647 rack: null)
2019-08-01 13:52:44.356 INFO --- [ main] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-1, groupId=0] Revoking previously assigned partitions []
2019-08-01 13:52:44.356 INFO --- [ main] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-1, groupId=0] (Re-)joining group
2019-08-01 13:52:44.729 INFO --- [ main] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-1, groupId=0] Successfully joined group with generation 3
2019-08-01 13:52:44.730 INFO --- [ main] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-1, groupId=0] Setting newly assigned partitions [test-0]
2019-08-01 13:52:45.033 INFO --- [ main] c.e.filesmanager.utils.kafka.Producer : ConsumerRecord(topic = test, partition = 0, offset = 15, CreateTime = 1564638743167, serialized key size = -1, serialized value size = 14, headers = RecordHeaders(headers = [], isReadOnly = false), key = null, value = product value6)
2019-08-01 13:52:45.033 INFO --- [ main] c.e.filesmanager.utils.kafka.Producer : ConsumerRecord(topic = test, partition = 0, offset = 16, CreateTime = 1564638743174, serialized key size = -1, serialized value size = 14, headers = RecordHeaders(headers = [], isReadOnly = false), key = null, value = product value7)
2019-08-01 13:52:45.033 INFO --- [ main] c.e.filesmanager.utils.kafka.Producer : ConsumerRecord(topic = test, partition = 0, offset = 17, CreateTime = 1564638743174, serialized key size = -1, serialized value size = 14, headers = RecordHeaders(headers = [], isReadOnly = false), key = null, value = product value4)
2019-08-01 13:52:45.033 INFO --- [ main] c.e.filesmanager.utils.kafka.Producer : ConsumerRecord(topic = test, partition = 0, offset = 18, CreateTime = 1564638743174, serialized key size = -1, serialized value size = 14, headers = RecordHeaders(headers = [], isReadOnly = false), key = null, value = product value4)
2019-08-01 13:52:45.033 INFO --- [ main] c.e.filesmanager.utils.kafka.Producer : ConsumerRecord(topic = test, partition = 0, offset = 19, CreateTime = 1564638743174, serialized key size = -1, serialized value size = 14, headers = RecordHeaders(headers = [], isReadOnly = false), key = null, value = product value3)

Reference: Kafka introductory tutorial and Java client usage
