Getting Started with Kafka -- Installation and Basic Usage
1. Install ZooKeeper
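The original post does not spell out the ZooKeeper installation itself. As a minimal sketch, ZooKeeper can be fetched and unpacked on each of the three machines roughly like this (the 3.4.13 version and the archive URL are assumptions, not from the original post):

cd /usr/local
wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.13/zookeeper-3.4.13.tar.gz
tar -zxvf zookeeper-3.4.13.tar.gz

The ensemble configuration is sketched under section 4 below.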
2. Download Kafka
Go to http://kafka.apache.org/downloads.
The version used here is kafka_2.11-1.0.1.tgz.
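If you prefer the command line, the same release can be fetched directly; the archive URL below is an assumption (any Apache mirror that still carries 1.0.1 also works):

wget https://archive.apache.org/dist/kafka/1.0.1/kafka_2.11-1.0.1.tgz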
3. Kafka Directory Layout
Extract it under /usr/local: tar -zxvf kafka_2.11-1.0.1.tgz
/bin: executable scripts for operating Kafka
/config: configuration files
/libs: dependency libraries
/logs: log directory. Kafka splits its server-side logs into server, request, state, log-cleaner, and controller logs.
Create a directory for the data logs: cd /usr/local/kafka_2.11-1.0.1 && mkdir kafkaLogs
4. Configuration
1) Configure ZooKeeper
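The original post defers this step to a separate ZooKeeper cluster tutorial. A minimal zoo.cfg sketch for the three machines used below (192.168.1.3, 192.168.1.5, 192.168.1.9) could look like this; dataDir and the 2888/3888 ports are assumptions:

# conf/zoo.cfg -- identical on all three machines
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper/data
clientPort=2181
server.1=192.168.1.3:2888:3888
server.2=192.168.1.5:2888:3888
server.3=192.168.1.9:2888:3888

Each machine also needs a myid file under dataDir containing its own server number (1, 2 or 3); the ensemble is then started on every node with ./bin/zkServer.sh start.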
2) Configure Kafka
Edit config/server.properties:
# broker id; it must be unique for every machine in the cluster. The other two brokers use 1 and 2.
broker.id=0
# the interface Kafka binds to; use this machine's private IP here, otherwise binding the port fails
# the other two machines use 192.168.1.5 and 192.168.1.9
host.name=192.168.1.3
# the ip:port registered in ZooKeeper and exposed to clients; 118.212.149.51 is the public IP of 192.168.1.3
# without this, a Kafka broker deployed on a public server cannot be reached from a local client
advertised.listeners=PLAINTEXT://118.212.149.51:9092
# ip:port list of the ZooKeeper ensemble
zookeeper.connect=192.168.1.3:2181,192.168.1.5:2181,192.168.1.9:2181
# data log directory, created above
log.dirs=/usr/local/kafka_2.11-1.0.1/kafkaLogs
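Note that host.name is deprecated in Kafka 1.0 in favor of the listeners property. An equivalent sketch for this first broker, binding the same private IP assumed above, would be:

listeners=PLAINTEXT://192.168.1.3:9092
advertised.listeners=PLAINTEXT://118.212.149.51:9092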
5. Start Kafka
1) Start Kafka
From the bin directory:
./kafka-server-start.sh ../config/server.properties
If startup fails because there is not enough memory: the default heap is 1 GB, so on a machine with little RAM, lower it.

In bin/kafka-server-start.sh, change
export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
to
export KAFKA_HEAP_OPTS="-Xmx256M -Xms128M"
This starts a single Kafka broker. For a Kafka cluster, start each broker separately.
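To keep a broker running after the shell session ends, the start script also accepts a -daemon flag; jps (from the JDK) then shows whether the Kafka process is alive. A sketch:

./kafka-server-start.sh -daemon ../config/server.properties
jps | grep Kafka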
2) Check the broker id
Log in to the ZooKeeper CLI:
./zkCli.sh -server 127.0.0.1:2181

List the registered brokers with ls /brokers/ids; the 0 returned there is the broker id.
View the broker's details:
get /brokers/ids/0

6. Create a Topic
1) Create topic test1
./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test1

List all topics:
./kafka-topics.sh --list --zookeeper localhost:2181

7. View Topic Details
./kafka-topics.sh --describe --zookeeper localhost:2181 --topic test1
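For the single-partition, single-replica topic test1 created above, the output has roughly this shape (illustrative, not captured from the original environment):

Topic:test1	PartitionCount:1	ReplicationFactor:1	Configs:
	Topic: test1	Partition: 0	Leader: 0	Replicas: 0	Isr: 0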

The first line is a summary of the topic: the topic name (Topic), the number of partitions (PartitionCount), the replication factor (ReplicationFactor), and any per-topic configuration (Config).
The following lines list every partition of test1, giving the topic name (Topic), the partition number (Partition), the broker that is currently the partition's leader (Leader), the brokers holding replicas (Replicas),
and the Isr list (in-sync replicas). Informally, the ISR is the substitutes' bench: not every broker can be a substitute. The broker must hold a replica of the partition, and that replica must also be sufficiently caught up with the leader.
8. Produce Messages
./kafka-console-producer.sh --broker-list PLAINTEXT://47.xx.47.120:9092 --topic testTopic1

testTopic1 is the topic name.
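Once the console producer is running, every line typed at its > prompt is sent to the topic as one message, for example (the messages themselves are illustrative):

>hello kafka
>a second message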
9. Consume Messages
./kafka-console-consumer.sh --bootstrap-server PLAINTEXT://47.xx.xx.120:9092 --topic testTopic1 --from-beginning
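To see which consumer groups exist and how far behind they are, the kafka-consumer-groups.sh tool in the same bin directory can be used; a sketch (replace <group-id> with the group you want to inspect):

./kafka-consumer-groups.sh --bootstrap-server 47.xx.xx.120:9092 --list
./kafka-consumer-groups.sh --bootstrap-server 47.xx.xx.120:9092 --describe --group <group-id>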

10. Integrating Kafka with Java
1) Add the dependencies
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>1.0.1</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>1.0.1</version>
</dependency>
2) Producer
import java.util.Properties;

import org.apache.kafka.clients.producer.*;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.log4j.Logger;

public class Producer {

    static Logger log = Logger.getLogger(Producer.class);

    private static final String TOPIC = "test";
    private static final String BROKER_LIST = "47.xx.xx.120:9092";
    private static KafkaProducer<String, String> producer = null;

    /*
     * Initialize the producer
     */
    static {
        Properties configs = initConfig();
        producer = new KafkaProducer<String, String>(configs);
    }

    /*
     * Initialize the configuration
     */
    private static Properties initConfig() {
        Properties properties = new Properties();
        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, BROKER_LIST);
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        return properties;
    }

    public static void main(String[] args) throws InterruptedException {
        // the message record to send
        ProducerRecord<String, String> record = null;
        for (int i = 0; i < 5; i++) {
            record = new ProducerRecord<String, String>(TOPIC, "product value" + (int) (10 * (Math.random())));
            // send asynchronously; the callback runs once the send completes
            producer.send(record, new Callback() {
                @Override
                public void onCompletion(RecordMetadata recordMetadata, Exception e) {
                    if (null != e) {
                        log.error("send error: " + e.getMessage());
                    } else {
                        System.out.println(String.format("offset:%s,partition:%s", recordMetadata.offset(), recordMetadata.partition()));
                    }
                }
            });
        }
        producer.close();
    }
}
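send() above is asynchronous and reports the result through the callback. If you need to block until the broker has acknowledged a record, you can wait on the Future it returns; a sketch (exception handling omitted, not part of the original code):

// blocks until the send completes or fails
RecordMetadata metadata = producer.send(record).get();
System.out.println("offset:" + metadata.offset() + ",partition:" + metadata.partition());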
Start the producer; the console output is:
SLF4J: Actual binding is of type [ch.qos.logback.classic.util.ContextSelectorStaticBinder]
2019-08-01 13:52:22.494 INFO --- [ main] o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
acks = 1
batch.size = 16384
bootstrap.servers = [47.xx.xx.120:9092]
buffer.memory = 33554432
client.id =
compression.type = none
connections.max.idle.ms = 540000
enable.idempotence = false
interceptor.classes = null
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 0
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
2019-08-01 13:52:22.609 INFO --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version : 1.0.1
2019-08-01 13:52:22.609 INFO --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId : c0518aa65f25317e
2019-08-01 13:52:23.174 INFO --- [ main] o.a.k.clients.producer.KafkaProducer : [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.
offset:15,partition:0
offset:16,partition:0
offset:17,partition:0
offset:18,partition:0
offset:19,partition:0

Process finished with exit code 0
3) Consumer
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.log4j.Logger;

public class Consumer {

    static Logger log = Logger.getLogger(Consumer.class);

    private static final String TOPIC = "test";
    private static final String BROKER_LIST = "47.xx.xx.120:9092";
    private static KafkaConsumer<String, String> consumer = null;

    static {
        Properties configs = initConfig();
        consumer = new KafkaConsumer<String, String>(configs);
        consumer.subscribe(Arrays.asList(TOPIC));
    }

    private static Properties initConfig() {
        Properties properties = new Properties();
        properties.put("bootstrap.servers", BROKER_LIST);
        properties.put("group.id", "0");
        properties.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        properties.setProperty("enable.auto.commit", "true");
        properties.setProperty("auto.offset.reset", "earliest");
        return properties;
    }

    public static void main(String[] args) {
        // poll in a loop; each poll returns the records fetched since the previous poll
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(10);
            for (ConsumerRecord<String, String> record : records) {
                log.info(record);
            }
        }
    }
}
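With enable.auto.commit=true as configured above, offsets are committed automatically every auto.commit.interval.ms (5 s by default). If offsets should only be committed after the records have really been processed, a common variation (a sketch, not part of the original post) is to disable auto-commit and commit manually:

// in initConfig():
properties.setProperty("enable.auto.commit", "false");

// in the poll loop, after processing the returned batch:
consumer.commitSync(); // commits the offsets of the records returned by the last poll()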
After running it, the console output is:
SLF4J: Actual binding is of type [ch.qos.logback.classic.util.ContextSelectorStaticBinder]
2019-08-01 13:52:43.837 INFO --- [ main] o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
auto.commit.interval.ms = 5000
auto.offset.reset = earliest
bootstrap.servers = [47.xx.xx.120:9092]
check.crcs = true
client.id =
connections.max.idle.ms = 540000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = 0
heartbeat.interval.ms = 3000
interceptor.classes = null
internal.leave.group.on.close = true
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 305000
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
2019-08-01 13:52:43.972 INFO --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version : 1.0.1
2019-08-01 13:52:43.972 INFO --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId : c0518aa65f25317e
2019-08-01 13:52:44.350 INFO --- [ main] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-1, groupId=0] Discovered group coordinator 47.xx.xx.120:9092 (id: 2147483647 rack: null)
2019-08-01 13:52:44.356 INFO --- [ main] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-1, groupId=0] Revoking previously assigned partitions []
2019-08-01 13:52:44.356 INFO --- [ main] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-1, groupId=0] (Re-)joining group
2019-08-01 13:52:44.729 INFO --- [ main] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-1, groupId=0] Successfully joined group with generation 3
2019-08-01 13:52:44.730 INFO --- [ main] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-1, groupId=0] Setting newly assigned partitions [test-0]
2019-08-01 13:52:45.033 INFO --- [ main] c.e.filesmanager.utils.kafka.Producer : ConsumerRecord(topic = test, partition = 0, offset = 15, CreateTime = 1564638743167, serialized key size = -1, serialized value size = 14, headers = RecordHeaders(headers = [], isReadOnly = false), key = null, value = product value6)
2019-08-01 13:52:45.033 INFO --- [ main] c.e.filesmanager.utils.kafka.Producer : ConsumerRecord(topic = test, partition = 0, offset = 16, CreateTime = 1564638743174, serialized key size = -1, serialized value size = 14, headers = RecordHeaders(headers = [], isReadOnly = false), key = null, value = product value7)
2019-08-01 13:52:45.033 INFO --- [ main] c.e.filesmanager.utils.kafka.Producer : ConsumerRecord(topic = test, partition = 0, offset = 17, CreateTime = 1564638743174, serialized key size = -1, serialized value size = 14, headers = RecordHeaders(headers = [], isReadOnly = false), key = null, value = product value4)
2019-08-01 13:52:45.033 INFO --- [ main] c.e.filesmanager.utils.kafka.Producer : ConsumerRecord(topic = test, partition = 0, offset = 18, CreateTime = 1564638743174, serialized key size = -1, serialized value size = 14, headers = RecordHeaders(headers = [], isReadOnly = false), key = null, value = product value4)
2019-08-01 13:52:45.033 INFO --- [ main] c.e.filesmanager.utils.kafka.Producer : ConsumerRecord(topic = test, partition = 0, offset = 19, CreateTime = 1564638743174, serialized key size = -1, serialized value size = 14, headers = RecordHeaders(headers = [], isReadOnly = false), key = null, value = product value3)