1. Background

When doing real-time computation with Flink, a job typically reads from Kafka and writes its results back to Kafka. Once a topic has many partitions (for example, when a cluster of a dozen or more brokers handles PB-scale data, a single topic can easily have dozens of partitions or more), you may notice that Flink does not write to all of the partitions but only to a subset of them. The problem is easy to miss, yet it reduces write throughput and piles data onto a few partitions; in the worst case the disk hosting one of those overloaded partitions fills up. To understand why this happens, let's walk through the source code of Flink's Kafka sink, mainly the FlinkKafkaProducer09 class.

2. Analyzing the FlinkKafkaProducer09 source code

public class FlinkKafkaProducer09<IN> extends FlinkKafkaProducerBase<IN> {
    private static final long serialVersionUID = 1L;

    public FlinkKafkaProducer09(String brokerList, String topicId, SerializationSchema<IN> serializationSchema) {
        this(topicId, (KeyedSerializationSchema)(new KeyedSerializationSchemaWrapper(serializationSchema)), getPropertiesFromBrokerList(brokerList), (FlinkKafkaPartitioner)(new FlinkFixedPartitioner()));
    }

    public FlinkKafkaProducer09(String topicId, SerializationSchema<IN> serializationSchema, Properties producerConfig) {
        this(topicId, (KeyedSerializationSchema)(new KeyedSerializationSchemaWrapper(serializationSchema)), producerConfig, (FlinkKafkaPartitioner)(new FlinkFixedPartitioner()));
    }

    public FlinkKafkaProducer09(String topicId, SerializationSchema<IN> serializationSchema, Properties producerConfig, @Nullable FlinkKafkaPartitioner<IN> customPartitioner) {
        this(topicId, (KeyedSerializationSchema)(new KeyedSerializationSchemaWrapper(serializationSchema)), producerConfig, (FlinkKafkaPartitioner)customPartitioner);
    }

    public FlinkKafkaProducer09(String brokerList, String topicId, KeyedSerializationSchema<IN> serializationSchema) {
        this(topicId, (KeyedSerializationSchema)serializationSchema, getPropertiesFromBrokerList(brokerList), (FlinkKafkaPartitioner)(new FlinkFixedPartitioner()));
    }

    public FlinkKafkaProducer09(String topicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig) {
        this(topicId, (KeyedSerializationSchema)serializationSchema, producerConfig, (FlinkKafkaPartitioner)(new FlinkFixedPartitioner()));
    }

    public FlinkKafkaProducer09(String topicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, @Nullable FlinkKafkaPartitioner<IN> customPartitioner) {
        super(topicId, serializationSchema, producerConfig, customPartitioner);
    }

    /** @deprecated */
    @Deprecated
    public FlinkKafkaProducer09(String topicId, SerializationSchema<IN> serializationSchema, Properties producerConfig, KafkaPartitioner<IN> customPartitioner) {
        this(topicId, (KeyedSerializationSchema)(new KeyedSerializationSchemaWrapper(serializationSchema)), producerConfig, (KafkaPartitioner)customPartitioner);
    }

    /** @deprecated */
    @Deprecated
    public FlinkKafkaProducer09(String topicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, KafkaPartitioner<IN> customPartitioner) {
        super(topicId, serializationSchema, producerConfig, new FlinkKafkaDelegatePartitioner(customPartitioner));
    }

    protected void flush() {
        if (this.producer != null) {
            this.producer.flush();
        }
    }
}

We only need to look at the following two constructors:

public FlinkKafkaProducer09(String brokerList, String topicId, SerializationSchema<IN> serializationSchema) {
    this(topicId, (KeyedSerializationSchema)(new KeyedSerializationSchemaWrapper(serializationSchema)), getPropertiesFromBrokerList(brokerList), (FlinkKafkaPartitioner)(new FlinkFixedPartitioner()));
}

public FlinkKafkaProducer09(String topicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, @Nullable FlinkKafkaPartitioner<IN> customPartitioner) {
    super(topicId, serializationSchema, producerConfig, customPartitioner);
}

Looking at the first constructor, we can see that when no custom partitioner is supplied, partition selection is handed to a new FlinkFixedPartitioner(). Let's look at that class next.

3. Analyzing the FlinkFixedPartitioner source code

public class FlinkFixedPartitioner<T> extends FlinkKafkaPartitioner<T> {
    private static final long serialVersionUID = -3785320239953858777L;
    private int parallelInstanceId;

    public FlinkFixedPartitioner() {
    }

    public void open(int parallelInstanceId, int parallelInstances) {
        Preconditions.checkArgument(parallelInstanceId >= 0, "Id of this subtask cannot be negative.");
        Preconditions.checkArgument(parallelInstances > 0, "Number of subtasks must be larger than 0.");
        this.parallelInstanceId = parallelInstanceId;
    }

    public int partition(T record, byte[] key, byte[] value, String targetTopic, int[] partitions) {
        Preconditions.checkArgument(partitions != null && partitions.length > 0, "Partitions of the target topic is empty.");
        return partitions[this.parallelInstanceId % partitions.length];
    }

    public boolean equals(Object o) {
        return this == o || o instanceof FlinkFixedPartitioner;
    }

    public int hashCode() {
        return FlinkFixedPartitioner.class.hashCode();
    }
}

The line return partitions[this.parallelInstanceId % partitions.length] explains the behavior: each sink subtask always writes to the single partition selected by its subtask index modulo the partition count. When the sink parallelism is smaller than the number of partitions, only the first few partitions ever receive data and the remaining ones are never written to. Let's now write our own version of FlinkKafkaProducer09, called MyFlinkKafkaProducer09.
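
To make the numbers concrete, here is a small standalone sketch (not Flink code; the sink parallelism of 3 and the 12-partition topic are made-up values) that replays the FlinkFixedPartitioner formula for each subtask index:

public class FixedPartitionerDemo {
    public static void main(String[] args) {
        int parallelism = 3;        // hypothetical sink parallelism
        int numPartitions = 12;     // hypothetical number of partitions in the topic
        int[] partitions = new int[numPartitions];
        for (int i = 0; i < numPartitions; i++) {
            partitions[i] = i;
        }
        // Replay the FlinkFixedPartitioner formula for every subtask index.
        for (int subtask = 0; subtask < parallelism; subtask++) {
            int target = partitions[subtask % partitions.length];
            System.out.println("subtask " + subtask + " always writes to partition " + target);
        }
        // Subtasks 0, 1 and 2 map to partitions 0, 1 and 2; partitions 3 to 11 never receive any data.
    }
}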

4. Analyzing the logic of the built-in FlinkKafkaProducer09

1>. Our own class needs to extend FlinkKafkaProducerBase; here is its decompiled source:

//
// Source code recreated from a .class file by IntelliJ IDEA
// (powered by Fernflower decompiler)
//

package org.apache.flink.streaming.connectors.kafka;

import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Properties;
import java.util.Map.Entry;
import org.apache.flink.annotation.Internal;
import org.apache.flink.annotation.VisibleForTesting;
import org.apache.flink.api.common.functions.RuntimeContext;
import org.apache.flink.api.java.ClosureCleaner;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.MetricGroup;
import org.apache.flink.runtime.state.FunctionInitializationContext;
import org.apache.flink.runtime.state.FunctionSnapshotContext;
import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
import org.apache.flink.streaming.api.functions.sink.SinkFunction.Context;
import org.apache.flink.streaming.api.operators.StreamingRuntimeContext;
import org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaMetricWrapper;
import org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaDelegatePartitioner;
import org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner;
import org.apache.flink.streaming.util.serialization.KeyedSerializationSchema;
import org.apache.flink.util.NetUtils;
import org.apache.flink.util.SerializableObject;
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

@Internal
public abstract class FlinkKafkaProducerBase<IN> extends RichSinkFunction<IN> implements CheckpointedFunction {
    private static final Logger LOG = LoggerFactory.getLogger(FlinkKafkaProducerBase.class);
    private static final long serialVersionUID = 1L;
    public static final String KEY_DISABLE_METRICS = "flink.disable-metrics";
    protected final Properties producerConfig;
    protected final String defaultTopicId;
    protected final KeyedSerializationSchema<IN> schema;
    protected final FlinkKafkaPartitioner<IN> flinkKafkaPartitioner;
    protected final Map<String, int[]> topicPartitionsMap;
    protected boolean logFailuresOnly;
    protected boolean flushOnCheckpoint = true;
    protected transient KafkaProducer<byte[], byte[]> producer;
    protected transient Callback callback;
    protected transient volatile Exception asyncException;
    protected final SerializableObject pendingRecordsLock = new SerializableObject();
    protected long pendingRecords;

    public FlinkKafkaProducerBase(String defaultTopicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, FlinkKafkaPartitioner<IN> customPartitioner) {
        Objects.requireNonNull(defaultTopicId, "TopicID not set");
        Objects.requireNonNull(serializationSchema, "serializationSchema not set");
        Objects.requireNonNull(producerConfig, "producerConfig not set");
        ClosureCleaner.clean(customPartitioner, true);
        ClosureCleaner.ensureSerializable(serializationSchema);
        this.defaultTopicId = defaultTopicId;
        this.schema = serializationSchema;
        this.producerConfig = producerConfig;
        this.flinkKafkaPartitioner = customPartitioner;
        if (!producerConfig.containsKey("key.serializer")) {
            this.producerConfig.put("key.serializer", ByteArraySerializer.class.getName());
        } else {
            LOG.warn("Overwriting the '{}' is not recommended", "key.serializer");
        }
        if (!producerConfig.containsKey("value.serializer")) {
            this.producerConfig.put("value.serializer", ByteArraySerializer.class.getName());
        } else {
            LOG.warn("Overwriting the '{}' is not recommended", "value.serializer");
        }
        if (!this.producerConfig.containsKey("bootstrap.servers")) {
            throw new IllegalArgumentException("bootstrap.servers must be supplied in the producer config properties.");
        } else {
            this.topicPartitionsMap = new HashMap();
        }
    }

    public void setLogFailuresOnly(boolean logFailuresOnly) {
        this.logFailuresOnly = logFailuresOnly;
    }

    public void setFlushOnCheckpoint(boolean flush) {
        this.flushOnCheckpoint = flush;
    }

    @VisibleForTesting
    protected <K, V> KafkaProducer<K, V> getKafkaProducer(Properties props) {
        return new KafkaProducer(props);
    }

    public void open(Configuration configuration) {
        this.producer = this.getKafkaProducer(this.producerConfig);
        RuntimeContext ctx = this.getRuntimeContext();
        if (null != this.flinkKafkaPartitioner) {
            if (this.flinkKafkaPartitioner instanceof FlinkKafkaDelegatePartitioner) {
                ((FlinkKafkaDelegatePartitioner)this.flinkKafkaPartitioner).setPartitions(getPartitionsByTopic(this.defaultTopicId, this.producer));
            }
            this.flinkKafkaPartitioner.open(ctx.getIndexOfThisSubtask(), ctx.getNumberOfParallelSubtasks());
        }
        LOG.info("Starting FlinkKafkaProducer ({}/{}) to produce into default topic {}", new Object[]{ctx.getIndexOfThisSubtask() + 1, ctx.getNumberOfParallelSubtasks(), this.defaultTopicId});
        if (!Boolean.parseBoolean(this.producerConfig.getProperty("flink.disable-metrics", "false"))) {
            Map<MetricName, ? extends Metric> metrics = this.producer.metrics();
            if (metrics == null) {
                LOG.info("Producer implementation does not support metrics");
            } else {
                MetricGroup kafkaMetricGroup = this.getRuntimeContext().getMetricGroup().addGroup("KafkaProducer");
                Iterator var5 = metrics.entrySet().iterator();
                while(var5.hasNext()) {
                    Entry<MetricName, ? extends Metric> metric = (Entry)var5.next();
                    kafkaMetricGroup.gauge(((MetricName)metric.getKey()).name(), new KafkaMetricWrapper((Metric)metric.getValue()));
                }
            }
        }
        if (this.flushOnCheckpoint && !((StreamingRuntimeContext)this.getRuntimeContext()).isCheckpointingEnabled()) {
            LOG.warn("Flushing on checkpoint is enabled, but checkpointing is not enabled. Disabling flushing.");
            this.flushOnCheckpoint = false;
        }
        if (this.logFailuresOnly) {
            this.callback = new Callback() {
                public void onCompletion(RecordMetadata metadata, Exception e) {
                    if (e != null) {
                        FlinkKafkaProducerBase.LOG.error("Error while sending record to Kafka: " + e.getMessage(), e);
                    }
                    FlinkKafkaProducerBase.this.acknowledgeMessage();
                }
            };
        } else {
            this.callback = new Callback() {
                public void onCompletion(RecordMetadata metadata, Exception exception) {
                    if (exception != null && FlinkKafkaProducerBase.this.asyncException == null) {
                        FlinkKafkaProducerBase.this.asyncException = exception;
                    }
                    FlinkKafkaProducerBase.this.acknowledgeMessage();
                }
            };
        }
    }

    public void invoke(IN next, Context context) throws Exception {
        this.checkErroneous();
        byte[] serializedKey = this.schema.serializeKey(next);
        byte[] serializedValue = this.schema.serializeValue(next);
        String targetTopic = this.schema.getTargetTopic(next);
        if (targetTopic == null) {
            targetTopic = this.defaultTopicId;
        }
        int[] partitions = (int[])this.topicPartitionsMap.get(targetTopic);
        if (null == partitions) {
            partitions = getPartitionsByTopic(targetTopic, this.producer);
            this.topicPartitionsMap.put(targetTopic, partitions);
        }
        ProducerRecord record;
        if (this.flinkKafkaPartitioner == null) {
            record = new ProducerRecord(targetTopic, serializedKey, serializedValue);
        } else {
            record = new ProducerRecord(targetTopic, this.flinkKafkaPartitioner.partition(next, serializedKey, serializedValue, targetTopic, partitions), serializedKey, serializedValue);
        }
        if (this.flushOnCheckpoint) {
            synchronized(this.pendingRecordsLock) {
                ++this.pendingRecords;
            }
        }
        this.producer.send(record, this.callback);
    }

    public void close() throws Exception {
        if (this.producer != null) {
            this.producer.close();
        }
        this.checkErroneous();
    }

    private void acknowledgeMessage() {
        if (this.flushOnCheckpoint) {
            synchronized(this.pendingRecordsLock) {
                --this.pendingRecords;
                if (this.pendingRecords == 0L) {
                    this.pendingRecordsLock.notifyAll();
                }
            }
        }
    }

    protected abstract void flush();

    public void initializeState(FunctionInitializationContext context) throws Exception {
    }

    public void snapshotState(FunctionSnapshotContext ctx) throws Exception {
        this.checkErroneous();
        if (this.flushOnCheckpoint) {
            this.flush();
            synchronized(this.pendingRecordsLock) {
                if (this.pendingRecords != 0L) {
                    throw new IllegalStateException("Pending record count must be zero at this point: " + this.pendingRecords);
                }
                this.checkErroneous();
            }
        }
    }

    protected void checkErroneous() throws Exception {
        Exception e = this.asyncException;
        if (e != null) {
            this.asyncException = null;
            throw new Exception("Failed to send data to Kafka: " + e.getMessage(), e);
        }
    }

    public static Properties getPropertiesFromBrokerList(String brokerList) {
        String[] elements = brokerList.split(",");
        String[] var2 = elements;
        int var3 = elements.length;
        for(int var4 = 0; var4 < var3; ++var4) {
            String broker = var2[var4];
            NetUtils.getCorrectHostnamePort(broker);
        }
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", brokerList);
        return props;
    }

    protected static int[] getPartitionsByTopic(String topic, KafkaProducer<byte[], byte[]> producer) {
        List<PartitionInfo> partitionsList = new ArrayList(producer.partitionsFor(topic));
        Collections.sort(partitionsList, new Comparator<PartitionInfo>() {
            public int compare(PartitionInfo o1, PartitionInfo o2) {
                return Integer.compare(o1.partition(), o2.partition());
            }
        });
        int[] partitions = new int[partitionsList.size()];
        for(int i = 0; i < partitions.length; ++i) {
            partitions[i] = ((PartitionInfo)partitionsList.get(i)).partition();
        }
        return partitions;
    }

    @VisibleForTesting
    protected long numPendingRecords() {
        synchronized(this.pendingRecordsLock) {
            return this.pendingRecords;
        }
    }
}

2>. Pay attention to the constructor of FlinkKafkaProducerBase (a minimal configuration sketch follows it):

public FlinkKafkaProducerBase(String defaultTopicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, FlinkKafkaPartitioner<IN> customPartitioner) {
    Objects.requireNonNull(defaultTopicId, "TopicID not set");
    Objects.requireNonNull(serializationSchema, "serializationSchema not set");
    Objects.requireNonNull(producerConfig, "producerConfig not set");
    ClosureCleaner.clean(customPartitioner, true);
    ClosureCleaner.ensureSerializable(serializationSchema);
    this.defaultTopicId = defaultTopicId;
    this.schema = serializationSchema;
    this.producerConfig = producerConfig;
    this.flinkKafkaPartitioner = customPartitioner;
    if (!producerConfig.containsKey("key.serializer")) {
        this.producerConfig.put("key.serializer", ByteArraySerializer.class.getName());
    } else {
        LOG.warn("Overwriting the '{}' is not recommended", "key.serializer");
    }
    if (!producerConfig.containsKey("value.serializer")) {
        this.producerConfig.put("value.serializer", ByteArraySerializer.class.getName());
    } else {
        LOG.warn("Overwriting the '{}' is not recommended", "value.serializer");
    }
    if (!this.producerConfig.containsKey("bootstrap.servers")) {
        throw new IllegalArgumentException("bootstrap.servers must be supplied in the producer config properties.");
    } else {
        this.topicPartitionsMap = new HashMap();
    }
}
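
Note that the constructor only insists on bootstrap.servers and fills in ByteArraySerializer for the key and value serializers when they are not set. A minimal Properties object for the four-argument constructor could therefore look like the sketch below (the broker address and the optional flink.disable-metrics entry are illustrative values, not something the constructor requires):

import java.util.Properties;

public class MinimalProducerConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Mandatory: the constructor throws IllegalArgumentException if it is missing.
        props.setProperty("bootstrap.servers", "localhost:9092");
        // Optional: this is the KEY_DISABLE_METRICS flag checked in open() to skip the Kafka metrics bridge.
        props.setProperty("flink.disable-metrics", "true");
        // key.serializer and value.serializer are left unset on purpose:
        // the constructor fills both in with ByteArraySerializer.
        System.out.println(props);
    }
}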

3>. Also note the following code in the invoke() method of the same class (its else branch is quoted right after):

if (this.flinkKafkaPartitioner == null) {
    record = new ProducerRecord(targetTopic, serializedKey, serializedValue);
}
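
For contrast, the else branch of the same statement (taken when the default FlinkFixedPartitioner is in place) pins every record to the single partition returned by the partitioner:

record = new ProducerRecord(targetTopic, this.flinkKafkaPartitioner.partition(next, serializedKey, serializedValue, targetTopic, partitions), serializedKey, serializedValue);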

4>. Next, let's look at the ProducerRecord class.

//
// Source code recreated from a .class file by IntelliJ IDEA
// (powered by Fernflower decompiler)
//

package org.apache.kafka.clients.producer;

public final class ProducerRecord<K, V> {
    private final String topic;
    private final Integer partition;
    private final K key;
    private final V value;

    public ProducerRecord(String topic, Integer partition, K key, V value) {
        if (topic == null) {
            throw new IllegalArgumentException("Topic cannot be null");
        } else {
            this.topic = topic;
            this.partition = partition;
            this.key = key;
            this.value = value;
        }
    }

    public ProducerRecord(String topic, K key, V value) {
        this(topic, (Integer)null, key, value);
    }

    public ProducerRecord(String topic, V value) {
        this(topic, (Object)null, value);
    }

    public String topic() {
        return this.topic;
    }

    public K key() {
        return this.key;
    }

    public V value() {
        return this.value;
    }

    public Integer partition() {
        return this.partition;
    }

    public String toString() {
        String key = this.key == null ? "null" : this.key.toString();
        String value = this.value == null ? "null" : this.value.toString();
        return "ProducerRecord(topic=" + this.topic + ", partition=" + this.partition + ", key=" + key + ", value=" + value;
    }

    public boolean equals(Object o) {
        if (this == o) {
            return true;
        } else if (!(o instanceof ProducerRecord)) {
            return false;
        } else {
            ProducerRecord that;
            label56: {
                that = (ProducerRecord)o;
                if (this.key != null) {
                    if (this.key.equals(that.key)) {
                        break label56;
                    }
                } else if (that.key == null) {
                    break label56;
                }
                return false;
            }
            label49: {
                if (this.partition != null) {
                    if (this.partition.equals(that.partition)) {
                        break label49;
                    }
                } else if (that.partition == null) {
                    break label49;
                }
                return false;
            }
            if (this.topic != null) {
                if (!this.topic.equals(that.topic)) {
                    return false;
                }
            } else if (that.topic != null) {
                return false;
            }
            if (this.value != null) {
                if (!this.value.equals(that.value)) {
                    return false;
                }
            } else if (that.value != null) {
                return false;
            }
            return true;
        }
    }

    public int hashCode() {
        int result = this.topic != null ? this.topic.hashCode() : 0;
        result = 31 * result + (this.partition != null ? this.partition.hashCode() : 0);
        result = 31 * result + (this.key != null ? this.key.hashCode() : 0);
        result = 31 * result + (this.value != null ? this.value.hashCode() : 0);
        return result;
    }
}

5>. Pay attention to this constructor of the class:

public ProducerRecord(String topic, Integer partition, K key, V value) {
    if (topic == null) {
        throw new IllegalArgumentException("Topic cannot be null");
    } else {
        this.topic = topic;
        this.partition = partition;
        this.key = key;
        this.value = value;
    }
}

6>. From the code we can see that when no partition is passed, the partition field stays null and the record is not pinned to any particular partition; the KafkaProducer then chooses the partition itself at send time and can spread records over all partitions of the topic. That makes our custom class very simple.
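
As a quick check (a standalone sketch that only uses the ProducerRecord constructors quoted above; the topic name and payload are made up), a record created without an explicit partition carries a null partition, which is what allows the producer to fall back to its own partitioner:

import org.apache.kafka.clients.producer.ProducerRecord;

public class RecordPartitionCheck {
    public static void main(String[] args) {
        byte[] key = "k".getBytes();
        byte[] value = "v".getBytes();

        // Partition pinned explicitly, which is effectively what FlinkFixedPartitioner causes.
        ProducerRecord<byte[], byte[]> pinned = new ProducerRecord<>("my-topic", 0, key, value);
        // No partition given: the producer's partitioner will choose one at send time.
        ProducerRecord<byte[], byte[]> free = new ProducerRecord<>("my-topic", key, value);

        System.out.println("pinned partition: " + pinned.partition()); // prints 0
        System.out.println("free partition: " + free.partition());     // prints null
    }
}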

5. Writing our own Kafka sink class MyFlinkKafkaProducer09

package com.run;

import org.apache.flink.api.common.serialization.SerializationSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase;
import org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner;
import org.apache.flink.streaming.util.serialization.KeyedSerializationSchema;
import org.apache.flink.streaming.util.serialization.KeyedSerializationSchemaWrapper;
import org.codehaus.commons.nullanalysis.Nullable;

import java.util.Properties;

public class MyFlinkKafkaProducer09<IN> extends FlinkKafkaProducerBase<IN> {

    public MyFlinkKafkaProducer09(String brokerList, String topicId, SerializationSchema<IN> serializationSchema) {
        this(topicId, (KeyedSerializationSchema)(new KeyedSerializationSchemaWrapper(serializationSchema)), getPropertiesFromBrokerList(brokerList), null);
    }

    public MyFlinkKafkaProducer09(String topicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, @Nullable FlinkKafkaPartitioner<IN> customPartitioner) {
        super(topicId, serializationSchema, producerConfig, customPartitioner);
    }

    protected void flush() {
        if (this.producer != null) {
            this.producer.flush();
        }
    }
}

Here we do not pass new FlinkFixedPartitioner() as the FlinkKafkaPartitioner; we pass null instead, so FlinkKafkaProducerBase builds every ProducerRecord without an explicit partition and the data gets written across all partitions.

Use it in your own real-time job (a fuller end-to-end sketch follows):

distributeDataStream.addSink(new MyFlinkKafkaProducer09<String>("localhost:9092", "my-topic", new SimpleStringSchema()));
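
For context, a minimal job could be wired up roughly as follows. Only the addSink call comes from the article; the Kafka consumer source, the group id, the topic names and the broker address are illustrative, and the import locations may differ slightly between Flink versions:

package com.run;

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09;

public class MyFlinkKafkaProducer09Job {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties consumerProps = new Properties();
        consumerProps.setProperty("bootstrap.servers", "localhost:9092");
        consumerProps.setProperty("group.id", "my-group");

        // Read from a source topic (illustrative).
        DataStream<String> distributeDataStream = env.addSource(
                new FlinkKafkaConsumer09<>("source-topic", new SimpleStringSchema(), consumerProps));

        // Sink with the custom producer: no partitioner is configured,
        // so records are sent without an explicit partition.
        distributeDataStream.addSink(
                new MyFlinkKafkaProducer09<String>("localhost:9092", "my-topic", new SimpleStringSchema()));

        env.execute("write-to-all-kafka-partitions");
    }
}

Because the sink hands a null partitioner to FlinkKafkaProducerBase, every record is sent without an explicit partition and Kafka's own partitioner spreads the load across all partitions of my-topic.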
