1. Background

When doing real-time computation with Flink, we often read data from Kafka and write results back to Kafka. With a topic that has many partitions — for example, at petabyte-scale data volumes, where the Kafka cluster has a dozen or more brokers for performance and a single topic may have dozens of partitions or more — you may notice that Flink does not write to all of the partitions, only to a subset of them. The problem is easy to miss, but it hurts Flink's write throughput, piles data onto a few partitions, and in the worst case fills up the disk that hosts a hot partition (a quick way to check per-partition growth is sketched below). To understand why this happens, let's walk through the source code Flink uses to write to Kafka, mainly the FlinkKafkaProducer09 class.
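
One way to confirm the symptom is to print the end offset of every partition of the target topic and watch which ones grow. The following is a minimal sketch using the plain Kafka consumer API, assuming a client version that has KafkaConsumer#endOffsets (0.10.1 or later); the broker address and topic name are placeholders:

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;

// Prints the end offset of every partition of a topic. Partitions that Flink never writes to
// stay at their old end offset while the "hot" partitions keep growing.
public class PartitionOffsetCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.setProperty("group.id", "offset-check");
        props.setProperty("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.setProperty("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            List<TopicPartition> tps = new ArrayList<>();
            for (PartitionInfo info : consumer.partitionsFor("my-topic")) { // placeholder topic
                tps.add(new TopicPartition(info.topic(), info.partition()));
            }
            Map<TopicPartition, Long> endOffsets = consumer.endOffsets(tps);
            for (Map.Entry<TopicPartition, Long> e : endOffsets.entrySet()) {
                System.out.println("partition " + e.getKey().partition() + " -> end offset " + e.getValue());
            }
        }
    }
}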

2. Analyzing the FlinkKafkaProducer09 source

public class FlinkKafkaProducer09<IN> extends FlinkKafkaProducerBase<IN> {
    private static final long serialVersionUID = 1L;

    public FlinkKafkaProducer09(String brokerList, String topicId, SerializationSchema<IN> serializationSchema) {
        this(topicId, (KeyedSerializationSchema)(new KeyedSerializationSchemaWrapper(serializationSchema)), getPropertiesFromBrokerList(brokerList), (FlinkKafkaPartitioner)(new FlinkFixedPartitioner()));
    }

    public FlinkKafkaProducer09(String topicId, SerializationSchema<IN> serializationSchema, Properties producerConfig) {
        this(topicId, (KeyedSerializationSchema)(new KeyedSerializationSchemaWrapper(serializationSchema)), producerConfig, (FlinkKafkaPartitioner)(new FlinkFixedPartitioner()));
    }

    public FlinkKafkaProducer09(String topicId, SerializationSchema<IN> serializationSchema, Properties producerConfig, @Nullable FlinkKafkaPartitioner<IN> customPartitioner) {
        this(topicId, (KeyedSerializationSchema)(new KeyedSerializationSchemaWrapper(serializationSchema)), producerConfig, (FlinkKafkaPartitioner)customPartitioner);
    }

    public FlinkKafkaProducer09(String brokerList, String topicId, KeyedSerializationSchema<IN> serializationSchema) {
        this(topicId, (KeyedSerializationSchema)serializationSchema, getPropertiesFromBrokerList(brokerList), (FlinkKafkaPartitioner)(new FlinkFixedPartitioner()));
    }

    public FlinkKafkaProducer09(String topicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig) {
        this(topicId, (KeyedSerializationSchema)serializationSchema, producerConfig, (FlinkKafkaPartitioner)(new FlinkFixedPartitioner()));
    }

    public FlinkKafkaProducer09(String topicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, @Nullable FlinkKafkaPartitioner<IN> customPartitioner) {
        super(topicId, serializationSchema, producerConfig, customPartitioner);
    }

    /** @deprecated */
    @Deprecated
    public FlinkKafkaProducer09(String topicId, SerializationSchema<IN> serializationSchema, Properties producerConfig, KafkaPartitioner<IN> customPartitioner) {
        this(topicId, (KeyedSerializationSchema)(new KeyedSerializationSchemaWrapper(serializationSchema)), producerConfig, (KafkaPartitioner)customPartitioner);
    }

    /** @deprecated */
    @Deprecated
    public FlinkKafkaProducer09(String topicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, KafkaPartitioner<IN> customPartitioner) {
        super(topicId, serializationSchema, producerConfig, new FlinkKafkaDelegatePartitioner(customPartitioner));
    }

    protected void flush() {
        if (this.producer != null) {
            this.producer.flush();
        }
    }
}

We only need to focus on the following two constructors:

public FlinkKafkaProducer09(String brokerList, String topicId, SerializationSchema<IN> serializationSchema) {
    this(topicId, (KeyedSerializationSchema)(new KeyedSerializationSchemaWrapper(serializationSchema)), getPropertiesFromBrokerList(brokerList), (FlinkKafkaPartitioner)(new FlinkFixedPartitioner()));
}

public FlinkKafkaProducer09(String topicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, @Nullable FlinkKafkaPartitioner<IN> customPartitioner) {
    super(topicId, serializationSchema, producerConfig, customPartitioner);
}

The first constructor is the one that matters here: when no partitioner is supplied, it plugs in new FlinkFixedPartitioner(), so that class decides which partition each record goes to. Let's look at it next.

3. Analyzing the FlinkFixedPartitioner source

public class FlinkFixedPartitioner<T> extends FlinkKafkaPartitioner<T> {
    private static final long serialVersionUID = -3785320239953858777L;
    private int parallelInstanceId;

    public FlinkFixedPartitioner() {
    }

    public void open(int parallelInstanceId, int parallelInstances) {
        Preconditions.checkArgument(parallelInstanceId >= 0, "Id of this subtask cannot be negative.");
        Preconditions.checkArgument(parallelInstances > 0, "Number of subtasks must be larger than 0.");
        this.parallelInstanceId = parallelInstanceId;
    }

    public int partition(T record, byte[] key, byte[] value, String targetTopic, int[] partitions) {
        Preconditions.checkArgument(partitions != null && partitions.length > 0, "Partitions of the target topic is empty.");
        return partitions[this.parallelInstanceId % partitions.length];
    }

    public boolean equals(Object o) {
        return this == o || o instanceof FlinkFixedPartitioner;
    }

    public int hashCode() {
        return FlinkFixedPartitioner.class.hashCode();
    }
}

Looking at return partitions[this.parallelInstanceId % partitions.length], each sink subtask always writes to one fixed partition, selected by its subtask index modulo the number of partitions. So if the sink parallelism is smaller than the partition count, only that many partitions ever receive data and the remaining partitions are never written. Let's now write our own variant of FlinkKafkaProducer09, called MyFlinkKafkaProducer09.
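
A minimal standalone sketch (not Flink code; the parallelism of 3 and the 10-partition topic are made-up numbers for illustration) reproduces the mapping and shows how few partitions actually get data:

// Reproduces the FlinkFixedPartitioner mapping: subtask i always writes to partitions[i % partitions.length].
// With 3 sink subtasks and a 10-partition topic, only partitions 0, 1 and 2 ever receive data.
public class FixedPartitionerMappingDemo {
    public static void main(String[] args) {
        int parallelism = 3;            // number of sink subtasks (assumed for illustration)
        int[] partitions = new int[10]; // a 10-partition topic
        for (int p = 0; p < partitions.length; p++) {
            partitions[p] = p;
        }
        for (int subtask = 0; subtask < parallelism; subtask++) {
            int target = partitions[subtask % partitions.length];
            System.out.println("subtask " + subtask + " -> partition " + target);
        }
        // Output: subtask 0 -> partition 0, subtask 1 -> partition 1, subtask 2 -> partition 2;
        // partitions 3..9 are never written.
    }
}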

4. Analyzing the logic behind the built-in FlinkKafkaProducer09

1>. Our producer class must extend FlinkKafkaProducerBase (decompiled source shown below for reference).

//
// Source code recreated from a .class file by IntelliJ IDEA
// (powered by Fernflower decompiler)
//

package org.apache.flink.streaming.connectors.kafka;

import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Properties;
import java.util.Map.Entry;
import org.apache.flink.annotation.Internal;
import org.apache.flink.annotation.VisibleForTesting;
import org.apache.flink.api.common.functions.RuntimeContext;
import org.apache.flink.api.java.ClosureCleaner;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.MetricGroup;
import org.apache.flink.runtime.state.FunctionInitializationContext;
import org.apache.flink.runtime.state.FunctionSnapshotContext;
import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
import org.apache.flink.streaming.api.functions.sink.SinkFunction.Context;
import org.apache.flink.streaming.api.operators.StreamingRuntimeContext;
import org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaMetricWrapper;
import org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaDelegatePartitioner;
import org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner;
import org.apache.flink.streaming.util.serialization.KeyedSerializationSchema;
import org.apache.flink.util.NetUtils;
import org.apache.flink.util.SerializableObject;
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

@Internal
public abstract class FlinkKafkaProducerBase<IN> extends RichSinkFunction<IN> implements CheckpointedFunction {
private static final Logger LOG = LoggerFactory.getLogger(FlinkKafkaProducerBase.class);
private static final long serialVersionUID = 1L;
public static final String KEY_DISABLE_METRICS = "flink.disable-metrics";
protected final Properties producerConfig;
protected final String defaultTopicId;
protected final KeyedSerializationSchema<IN> schema;
protected final FlinkKafkaPartitioner<IN> flinkKafkaPartitioner;
protected final Map<String, int[]> topicPartitionsMap;
protected boolean logFailuresOnly;
protected boolean flushOnCheckpoint = true;
protected transient KafkaProducer<byte[], byte[]> producer;
protected transient Callback callback;
protected transient volatile Exception asyncException;
protected final SerializableObject pendingRecordsLock = new SerializableObject();
protected long pendingRecords; public FlinkKafkaProducerBase(String defaultTopicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, FlinkKafkaPartitioner<IN> customPartitioner) {
Objects.requireNonNull(defaultTopicId, "TopicID not set");
Objects.requireNonNull(serializationSchema, "serializationSchema not set");
Objects.requireNonNull(producerConfig, "producerConfig not set");
ClosureCleaner.clean(customPartitioner, true);
ClosureCleaner.ensureSerializable(serializationSchema);
this.defaultTopicId = defaultTopicId;
this.schema = serializationSchema;
this.producerConfig = producerConfig;
this.flinkKafkaPartitioner = customPartitioner;
if (!producerConfig.containsKey("key.serializer")) {
this.producerConfig.put("key.serializer", ByteArraySerializer.class.getName());
} else {
LOG.warn("Overwriting the '{}' is not recommended", "key.serializer");
} if (!producerConfig.containsKey("value.serializer")) {
this.producerConfig.put("value.serializer", ByteArraySerializer.class.getName());
} else {
LOG.warn("Overwriting the '{}' is not recommended", "value.serializer");
} if (!this.producerConfig.containsKey("bootstrap.servers")) {
throw new IllegalArgumentException("bootstrap.servers must be supplied in the producer config properties.");
} else {
this.topicPartitionsMap = new HashMap();
}
} public void setLogFailuresOnly(boolean logFailuresOnly) {
this.logFailuresOnly = logFailuresOnly;
} public void setFlushOnCheckpoint(boolean flush) {
this.flushOnCheckpoint = flush;
} @VisibleForTesting
protected <K, V> KafkaProducer<K, V> getKafkaProducer(Properties props) {
return new KafkaProducer(props);
} public void open(Configuration configuration) {
this.producer = this.getKafkaProducer(this.producerConfig);
RuntimeContext ctx = this.getRuntimeContext();
if (null != this.flinkKafkaPartitioner) {
if (this.flinkKafkaPartitioner instanceof FlinkKafkaDelegatePartitioner) {
((FlinkKafkaDelegatePartitioner)this.flinkKafkaPartitioner).setPartitions(getPartitionsByTopic(this.defaultTopicId, this.producer));
} this.flinkKafkaPartitioner.open(ctx.getIndexOfThisSubtask(), ctx.getNumberOfParallelSubtasks());
} LOG.info("Starting FlinkKafkaProducer ({}/{}) to produce into default topic {}", new Object[]{ctx.getIndexOfThisSubtask() + 1, ctx.getNumberOfParallelSubtasks(), this.defaultTopicId});
if (!Boolean.parseBoolean(this.producerConfig.getProperty("flink.disable-metrics", "false"))) {
Map<MetricName, ? extends Metric> metrics = this.producer.metrics();
if (metrics == null) {
LOG.info("Producer implementation does not support metrics");
} else {
MetricGroup kafkaMetricGroup = this.getRuntimeContext().getMetricGroup().addGroup("KafkaProducer");
Iterator var5 = metrics.entrySet().iterator(); while(var5.hasNext()) {
Entry<MetricName, ? extends Metric> metric = (Entry)var5.next();
kafkaMetricGroup.gauge(((MetricName)metric.getKey()).name(), new KafkaMetricWrapper((Metric)metric.getValue()));
}
}
} if (this.flushOnCheckpoint && !((StreamingRuntimeContext)this.getRuntimeContext()).isCheckpointingEnabled()) {
LOG.warn("Flushing on checkpoint is enabled, but checkpointing is not enabled. Disabling flushing.");
this.flushOnCheckpoint = false;
} if (this.logFailuresOnly) {
this.callback = new Callback() {
public void onCompletion(RecordMetadata metadata, Exception e) {
if (e != null) {
FlinkKafkaProducerBase.LOG.error("Error while sending record to Kafka: " + e.getMessage(), e);
} FlinkKafkaProducerBase.this.acknowledgeMessage();
}
};
} else {
this.callback = new Callback() {
public void onCompletion(RecordMetadata metadata, Exception exception) {
if (exception != null && FlinkKafkaProducerBase.this.asyncException == null) {
FlinkKafkaProducerBase.this.asyncException = exception;
} FlinkKafkaProducerBase.this.acknowledgeMessage();
}
};
} } public void invoke(IN next, Context context) throws Exception {
this.checkErroneous();
byte[] serializedKey = this.schema.serializeKey(next);
byte[] serializedValue = this.schema.serializeValue(next);
String targetTopic = this.schema.getTargetTopic(next);
if (targetTopic == null) {
targetTopic = this.defaultTopicId;
} int[] partitions = (int[])this.topicPartitionsMap.get(targetTopic);
if (null == partitions) {
partitions = getPartitionsByTopic(targetTopic, this.producer);
this.topicPartitionsMap.put(targetTopic, partitions);
} ProducerRecord record;
if (this.flinkKafkaPartitioner == null) {
record = new ProducerRecord(targetTopic, serializedKey, serializedValue);
} else {
record = new ProducerRecord(targetTopic, this.flinkKafkaPartitioner.partition(next, serializedKey, serializedValue, targetTopic, partitions), serializedKey, serializedValue);
} if (this.flushOnCheckpoint) {
synchronized(this.pendingRecordsLock) {
++this.pendingRecords;
}
} this.producer.send(record, this.callback);
} public void close() throws Exception {
if (this.producer != null) {
this.producer.close();
} this.checkErroneous();
} private void acknowledgeMessage() {
if (this.flushOnCheckpoint) {
synchronized(this.pendingRecordsLock) {
--this.pendingRecords;
if (this.pendingRecords == 0L) {
this.pendingRecordsLock.notifyAll();
}
}
} } protected abstract void flush(); public void initializeState(FunctionInitializationContext context) throws Exception {
} public void snapshotState(FunctionSnapshotContext ctx) throws Exception {
this.checkErroneous();
if (this.flushOnCheckpoint) {
this.flush();
synchronized(this.pendingRecordsLock) {
if (this.pendingRecords != 0L) {
throw new IllegalStateException("Pending record count must be zero at this point: " + this.pendingRecords);
} this.checkErroneous();
}
} } protected void checkErroneous() throws Exception {
Exception e = this.asyncException;
if (e != null) {
this.asyncException = null;
throw new Exception("Failed to send data to Kafka: " + e.getMessage(), e);
}
} public static Properties getPropertiesFromBrokerList(String brokerList) {
String[] elements = brokerList.split(",");
String[] var2 = elements;
int var3 = elements.length; for(int var4 = 0; var4 < var3; ++var4) {
String broker = var2[var4];
NetUtils.getCorrectHostnamePort(broker);
} Properties props = new Properties();
props.setProperty("bootstrap.servers", brokerList);
return props;
} protected static int[] getPartitionsByTopic(String topic, KafkaProducer<byte[], byte[]> producer) {
List<PartitionInfo> partitionsList = new ArrayList(producer.partitionsFor(topic));
Collections.sort(partitionsList, new Comparator<PartitionInfo>() {
public int compare(PartitionInfo o1, PartitionInfo o2) {
return Integer.compare(o1.partition(), o2.partition());
}
});
int[] partitions = new int[partitionsList.size()]; for(int i = 0; i < partitions.length; ++i) {
partitions[i] = ((PartitionInfo)partitionsList.get(i)).partition();
} return partitions;
} @VisibleForTesting
protected long numPendingRecords() {
synchronized(this.pendingRecordsLock) {
return this.pendingRecords;
}
}
}

2>. Focus on the constructor of FlinkKafkaProducerBase. Note that it only applies requireNonNull to the topic, the serialization schema and the producer config; the customPartitioner argument is allowed to be null:

public FlinkKafkaProducerBase(String defaultTopicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, FlinkKafkaPartitioner<IN> customPartitioner) {
    Objects.requireNonNull(defaultTopicId, "TopicID not set");
    Objects.requireNonNull(serializationSchema, "serializationSchema not set");
    Objects.requireNonNull(producerConfig, "producerConfig not set");
    ClosureCleaner.clean(customPartitioner, true);
    ClosureCleaner.ensureSerializable(serializationSchema);
    this.defaultTopicId = defaultTopicId;
    this.schema = serializationSchema;
    this.producerConfig = producerConfig;
    this.flinkKafkaPartitioner = customPartitioner;
    if (!producerConfig.containsKey("key.serializer")) {
        this.producerConfig.put("key.serializer", ByteArraySerializer.class.getName());
    } else {
        LOG.warn("Overwriting the '{}' is not recommended", "key.serializer");
    }
    if (!producerConfig.containsKey("value.serializer")) {
        this.producerConfig.put("value.serializer", ByteArraySerializer.class.getName());
    } else {
        LOG.warn("Overwriting the '{}' is not recommended", "value.serializer");
    }
    if (!this.producerConfig.containsKey("bootstrap.servers")) {
        throw new IllegalArgumentException("bootstrap.servers must be supplied in the producer config properties.");
    } else {
        this.topicPartitionsMap = new HashMap();
    }
}

3>. Also pay attention to the following code in the invoke() method of the same class: when flinkKafkaPartitioner is null, the ProducerRecord is created without an explicit partition:

if (this.flinkKafkaPartitioner == null) {
    record = new ProducerRecord(targetTopic, serializedKey, serializedValue);
}

4>. Next, let's look at the ProducerRecord class

//
// Source code recreated from a .class file by IntelliJ IDEA
// (powered by Fernflower decompiler)
//

package org.apache.kafka.clients.producer;

public final class ProducerRecord<K, V> {
private final String topic;
private final Integer partition;
private final K key;
private final V value; public ProducerRecord(String topic, Integer partition, K key, V value) {
if (topic == null) {
throw new IllegalArgumentException("Topic cannot be null");
} else {
this.topic = topic;
this.partition = partition;
this.key = key;
this.value = value;
}
} public ProducerRecord(String topic, K key, V value) {
this(topic, (Integer)null, key, value);
} public ProducerRecord(String topic, V value) {
this(topic, (Object)null, value);
} public String topic() {
return this.topic;
} public K key() {
return this.key;
} public V value() {
return this.value;
} public Integer partition() {
return this.partition;
} public String toString() {
String key = this.key == null ? "null" : this.key.toString();
String value = this.value == null ? "null" : this.value.toString();
return "ProducerRecord(topic=" + this.topic + ", partition=" + this.partition + ", key=" + key + ", value=" + value;
} public boolean equals(Object o) {
if (this == o) {
return true;
} else if (!(o instanceof ProducerRecord)) {
return false;
} else {
ProducerRecord that;
label56: {
that = (ProducerRecord)o;
if (this.key != null) {
if (this.key.equals(that.key)) {
break label56;
}
} else if (that.key == null) {
break label56;
} return false;
} label49: {
if (this.partition != null) {
if (this.partition.equals(that.partition)) {
break label49;
}
} else if (that.partition == null) {
break label49;
} return false;
} if (this.topic != null) {
if (!this.topic.equals(that.topic)) {
return false;
}
} else if (that.topic != null) {
return false;
} if (this.value != null) {
if (!this.value.equals(that.value)) {
return false;
}
} else if (that.value != null) {
return false;
} return true;
}
} public int hashCode() {
int result = this.topic != null ? this.topic.hashCode() : 0;
result = 31 * result + (this.partition != null ? this.partition.hashCode() : 0);
result = 31 * result + (this.key != null ? this.key.hashCode() : 0);
result = 31 * result + (this.value != null ? this.value.hashCode() : 0);
return result;
}
}

5>. Pay attention to the constructor of this class:

public ProducerRecord(String topic, Integer partition, K key, V value) {
    if (topic == null) {
        throw new IllegalArgumentException("Topic cannot be null");
    } else {
        this.topic = topic;
        this.partition = partition;
        this.key = key;
        this.value = value;
    }
}

6>. As the constructor shows, the partition field may be null. In that case the Kafka producer itself decides the target partition with its default partitioner, which considers all partitions of the topic (keyed records are hashed, key-less records are distributed round-robin), as sketched below. That means our own class can be very simple.
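
For reference, here is a rough illustration of what the Kafka client's default partitioner does when the ProducerRecord carries no partition. This is not the actual Kafka source (the real implementation uses murmur2 hashing and also skips unavailable partitions); it only shows that every partition of the topic is a possible target:

import java.util.Arrays;
import java.util.concurrent.atomic.AtomicInteger;

// Simplified sketch of DefaultPartitioner behaviour for records without an explicit partition.
public class DefaultPartitionerSketch {
    private final AtomicInteger counter = new AtomicInteger(0);

    public int partition(byte[] keyBytes, int numPartitions) {
        if (keyBytes == null) {
            // records without a key are spread round-robin over all partitions
            return (counter.getAndIncrement() & 0x7fffffff) % numPartitions;
        }
        // records with a key are hashed, so the same key always lands on the same partition
        return (Arrays.hashCode(keyBytes) & 0x7fffffff) % numPartitions;
    }
}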

5. Writing our own Flink-to-Kafka sink class MyFlinkKafkaProducer09

package com.run;

import org.apache.flink.api.common.serialization.SerializationSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase;
import org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner;
import org.apache.flink.streaming.util.serialization.KeyedSerializationSchema;
import org.apache.flink.streaming.util.serialization.KeyedSerializationSchemaWrapper;
import org.codehaus.commons.nullanalysis.Nullable;

import java.util.Properties;

public class MyFlinkKafkaProducer09<IN> extends FlinkKafkaProducerBase<IN> {

    public MyFlinkKafkaProducer09(String brokerList, String topicId, SerializationSchema<IN> serializationSchema) {
        // Pass null instead of new FlinkFixedPartitioner(): with no Flink-side partitioner,
        // FlinkKafkaProducerBase builds ProducerRecords without a partition and Kafka's own
        // partitioner spreads the records over all partitions of the topic.
        this(topicId, (KeyedSerializationSchema)(new KeyedSerializationSchemaWrapper(serializationSchema)), getPropertiesFromBrokerList(brokerList), null);
    }

    public MyFlinkKafkaProducer09(String topicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, @Nullable FlinkKafkaPartitioner<IN> customPartitioner) {
        super(topicId, serializationSchema, producerConfig, customPartitioner);
    }

    protected void flush() {
        if (this.producer != null) {
            this.producer.flush();
        }
    }
}

The key difference: instead of passing a FlinkKafkaPartitioner (new FlinkFixedPartitioner()), we pass null. FlinkKafkaProducerBase then builds each ProducerRecord without a partition, and the records are distributed over all partitions of the topic.

Using it in your own streaming job:

distributeDataStream.addSink(new MyFlinkKafkaProducer09<String>("localhost:9092", "my-topic", new SimpleStringSchema()));
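
As a side note, looking back at the constructors of FlinkKafkaProducer09 quoted earlier, the same effect can also be had without a custom class: the four-argument constructor takes a @Nullable FlinkKafkaPartitioner, so passing an explicitly typed null there disables the Flink-side partitioner as well (the cast is needed to pick the non-deprecated overload over the one taking KafkaPartitioner). Broker address, topic name and schema below are the same placeholders as above:

Properties props = new Properties();
props.setProperty("bootstrap.servers", "localhost:9092");

// A (typed) null partitioner: FlinkKafkaProducerBase will create ProducerRecords without a
// partition and let Kafka's own partitioner spread the data over all partitions.
distributeDataStream.addSink(new FlinkKafkaProducer09<String>(
        "my-topic",
        new SimpleStringSchema(),
        props,
        (FlinkKafkaPartitioner<String>) null));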
