Connecting Kafka and Elasticsearch with Apache Beam, running on Flink
This example implements a Beam pipeline in Java that listens on the Kafka topic testmsg, counts the words it receives in 5-second windows, writes the results to the Kafka topic outputmessage, and simultaneously syncs them to Elasticsearch.
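In short, the data flows: testmsg (Kafka) → Beam pipeline (5-second fixed windows + Count.perElement) → outputmessage (Kafka) and the myindex index in Elasticsearch.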
Kafka must be running.
Start it:
cd /root/kafuka/kafka_2.12-0.11.0.0
nohup bin/zookeeper-server-start.sh config/zookeeper.properties &
nohup bin/kafka-server-start.sh config/server.properties &
Create the topic:
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic testmsg
bin/kafka-topics.sh --list --zookeeper localhost:2181
Producer:
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic testmsg
Consumer:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic testmsg --from-beginning
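With the pipeline running, type a few words into the console producer on testmsg; once each 5-second window closes, the per-word counts appear on outputmessage. Illustrative input and output:
hello
hello
world
produces {"hello":"2"} and {"world":"1"} on outputmessage.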
Elasticsearch
Create the index: PUT http://192.168.11.100:9200/myindex?pretty
List all indices: GET http://192.168.11.100:9200/_cat/indices?v
Fetch documents: GET http://192.168.11.100:9200/myindex/_search?q=*&pretty
GET http://192.168.11.100:9200/myindex/_search?q=*&sort=_id:desc&pretty
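The same calls with curl (assuming the node at 192.168.11.100:9200 requires no authentication):
curl -X PUT "http://192.168.11.100:9200/myindex?pretty"
curl "http://192.168.11.100:9200/_cat/indices?v"
curl "http://192.168.11.100:9200/myindex/_search?q=*&pretty"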
Generate the project skeleton with mvn.
On Windows, run in PowerShell:
mvn archetype:generate `
-D archetypeGroupId=org.apache.beam `
-D archetypeArtifactId=beam-sdks-java-maven-archetypes-examples `
-D archetypeVersion=2.8.0 `
-D groupId=org.example `
-D artifactId=word-count-beam `
-D version="0.1" `
-D package=org.apache.beam.examples `
-D interactiveMode=false
For anything else, see the Beam quickstart: <https://beam.apache.org/get-started/quickstart-java/>
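On Linux/macOS the same command uses backslash continuations:
mvn archetype:generate \
  -DarchetypeGroupId=org.apache.beam \
  -DarchetypeArtifactId=beam-sdks-java-maven-archetypes-examples \
  -DarchetypeVersion=2.8.0 \
  -DgroupId=org.example \
  -DartifactId=word-count-beam \
  -Dversion="0.1" \
  -Dpackage=org.apache.beam.examples \
  -DinteractiveMode=false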
Replace pom.xml with the following:
<?xml version="1.0" encoding="UTF-8"?>
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>org.example</groupId>
<artifactId>word-count-beam</artifactId>
<version>0.1</version>
<packaging>jar</packaging>
<properties>
<beam.version>2.8.0</beam.version>
<bigquery.version>v2-rev402-1.24.1</bigquery.version>
<google-clients.version>1.24.1</google-clients.version>
<guava.version>20.0</guava.version>
<hamcrest.version>1.3</hamcrest.version>
<jackson.version>2.9.5</jackson.version>
<joda.version>2.4</joda.version>
<junit.version>4.12</junit.version>
<maven-compiler-plugin.version>3.7.0</maven-compiler-plugin.version>
<maven-exec-plugin.version>1.6.0</maven-exec-plugin.version>
<maven-jar-plugin.version>3.0.2</maven-jar-plugin.version>
<maven-shade-plugin.version>3.1.0</maven-shade-plugin.version>
<mockito.version>1.10.19</mockito.version>
<pubsub.version>v1-rev399-1.24.1</pubsub.version>
<slf4j.version>1.7.25</slf4j.version>
<spark.version>2.3.2</spark.version>
<hadoop.version>2.7.3</hadoop.version>
<maven-surefire-plugin.version>2.21.0</maven-surefire-plugin.version>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>
<repositories>
<repository>
<id>apache.snapshots</id>
<name>Apache Development Snapshot Repository</name>
<url>https://repository.apache.org/content/repositories/snapshots/</url>
<releases>
<enabled>false</enabled>
</releases>
<snapshots>
<enabled>true</enabled>
</snapshots>
</repository>
</repositories>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>${maven-compiler-plugin.version}</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>${maven-surefire-plugin.version}</version>
<configuration>
<parallel>all</parallel>
<threadCount>4</threadCount>
<redirectTestOutputToFile>true</redirectTestOutputToFile>
</configuration>
<dependencies>
<dependency>
<groupId>org.apache.maven.surefire</groupId>
<artifactId>surefire-junit47</artifactId>
<version>${maven-surefire-plugin.version}</version>
</dependency>
</dependencies>
</plugin>
<!-- Ensure that the Maven jar plugin runs before the Maven
shade plugin by listing the plugin higher within the file. -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-jar-plugin</artifactId>
<version>${maven-jar-plugin.version}</version>
</plugin>
<!--
Configures `mvn package` to produce a bundled jar ("fat jar") for runners
that require this for job submission to a cluster.
-->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>${maven-shade-plugin.version}</version>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<finalName>${project.artifactId}-bundled-${project.version}</finalName>
<filters>
<filter>
<artifact>*:*</artifact>
<excludes>
<exclude>META-INF/LICENSE</exclude>
<exclude>META-INF/*.SF</exclude>
<exclude>META-INF/*.DSA</exclude>
<exclude>META-INF/*.RSA</exclude>
</excludes>
</filter>
</filters>
<transformers>
<transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
</transformers>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
<pluginManagement>
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<version>${maven-exec-plugin.version}</version>
<configuration>
<cleanupDaemonThreads>false</cleanupDaemonThreads>
</configuration>
</plugin>
</plugins>
</pluginManagement>
</build>
<profiles>
<profile>
<id>direct-runner</id>
<activation>
<activeByDefault>true</activeByDefault>
</activation>
<!-- Makes the DirectRunner available when running a pipeline. -->
<dependencies>
<dependency>
<groupId>org.apache.beam</groupId>
<artifactId>beam-runners-direct-java</artifactId>
<version>${beam.version}</version>
<scope>runtime</scope>
</dependency>
</dependencies>
</profile>
<profile>
<id>apex-runner</id>
<!-- Makes the ApexRunner available when running a pipeline. -->
<dependencies>
<dependency>
<groupId>org.apache.beam</groupId>
<artifactId>beam-runners-apex</artifactId>
<version>${beam.version}</version>
<scope>runtime</scope>
</dependency>
<!--
Apex depends on httpclient version 4.3.6, project has a transitive dependency to httpclient 4.0.1 from
google-http-client. Apex dependency version being specified explicitly so that it gets picked up. This
can be removed when the project no longer has a dependency on a different httpclient version.
-->
<dependency>
<groupId>org.apache.httpcomponents</groupId>
<artifactId>httpclient</artifactId>
<version>4.3.6</version>
<scope>runtime</scope>
<exclusions>
<exclusion>
<groupId>commons-codec</groupId>
<artifactId>commons-codec</artifactId>
</exclusion>
</exclusions>
</dependency>
<!--
Apex 3.6 is built against YARN 2.6. Version in the fat jar has to match
what's on the cluster, hence we need to repeat the Apex Hadoop dependencies here.
-->
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-yarn-client</artifactId>
<version>${hadoop.version}</version>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-common</artifactId>
<version>${hadoop.version}</version>
<scope>runtime</scope>
</dependency>
</dependencies>
</profile>
<profile>
<id>dataflow-runner</id>
<!-- Makes the DataflowRunner available when running a pipeline. -->
<dependencies>
<dependency>
<groupId>org.apache.beam</groupId>
<artifactId>beam-runners-google-cloud-dataflow-java</artifactId>
<version>${beam.version}</version>
<scope>runtime</scope>
</dependency>
</dependencies>
</profile>
<profile>
<id>flink-runner</id>
<!-- Makes the FlinkRunner available when running a pipeline. -->
<dependencies>
<dependency>
<groupId>org.apache.beam</groupId>
<artifactId>beam-runners-flink_2.11</artifactId>
<version>${beam.version}</version>
<scope>runtime</scope>
</dependency>
</dependencies>
</profile>
<profile>
<id>spark-runner</id>
<!-- Makes the SparkRunner available when running a pipeline. Additionally,
overrides some Spark dependencies to Beam-compatible versions. -->
<properties>
<netty.version>4.1.17.Final</netty.version>
</properties>
<dependencies>
<dependency>
<groupId>org.apache.beam</groupId>
<artifactId>beam-runners-spark</artifactId>
<version>${beam.version}</version>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>org.apache.beam</groupId>
<artifactId>beam-sdks-java-io-hadoop-file-system</artifactId>
<version>${beam.version}</version>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming_2.11</artifactId>
<version>${spark.version}</version>
<scope>runtime</scope>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>jul-to-slf4j</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.module</groupId>
<artifactId>jackson-module-scala_2.11</artifactId>
<version>${jackson.version}</version>
<scope>runtime</scope>
</dependency>
<!-- [BEAM-3519] GCP IO exposes netty on its API surface, causing conflicts with runners -->
<dependency>
<groupId>org.apache.beam</groupId>
<artifactId>beam-sdks-java-io-google-cloud-platform</artifactId>
<version>${beam.version}</version>
<exclusions>
<exclusion>
<groupId>io.grpc</groupId>
<artifactId>grpc-netty</artifactId>
</exclusion>
<exclusion>
<groupId>io.netty</groupId>
<artifactId>netty-handler</artifactId>
</exclusion>
</exclusions>
</dependency>
</dependencies>
</profile>
<profile>
<id>gearpump-runner</id>
<dependencies>
<dependency>
<groupId>org.apache.beam</groupId>
<artifactId>beam-runners-gearpump</artifactId>
<version>${beam.version}</version>
<scope>runtime</scope>
</dependency>
</dependencies>
</profile>
</profiles>
<dependencies>
<!-- Adds a dependency on the Beam SDK. -->
<dependency>
<groupId>org.apache.beam</groupId>
<artifactId>beam-sdks-java-core</artifactId>
<version>${beam.version}</version>
</dependency>
<!-- Adds a dependency on the Beam Google Cloud Platform IO module. -->
<dependency>
<groupId>org.apache.beam</groupId>
<artifactId>beam-sdks-java-io-google-cloud-platform</artifactId>
<version>${beam.version}</version>
</dependency>
<!-- Dependencies below this line are specific dependencies needed by the examples code. -->
<dependency>
<groupId>com.google.api-client</groupId>
<artifactId>google-api-client</artifactId>
<version>${google-clients.version}</version>
<exclusions>
<!-- Exclude an old version of guava that is being pulled
in by a transitive dependency of google-api-client -->
<exclusion>
<groupId>com.google.guava</groupId>
<artifactId>guava-jdk5</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>com.google.apis</groupId>
<artifactId>google-api-services-bigquery</artifactId>
<version>${bigquery.version}</version>
<exclusions>
<!-- Exclude an old version of guava that is being pulled
in by a transitive dependency of google-api-client -->
<exclusion>
<groupId>com.google.guava</groupId>
<artifactId>guava-jdk5</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>com.google.http-client</groupId>
<artifactId>google-http-client</artifactId>
<version>${google-clients.version}</version>
<exclusions>
<!-- Exclude an old version of guava that is being pulled
in by a transitive dependency of google-api-client -->
<exclusion>
<groupId>com.google.guava</groupId>
<artifactId>guava-jdk5</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>com.google.apis</groupId>
<artifactId>google-api-services-pubsub</artifactId>
<version>${pubsub.version}</version>
<exclusions>
<!-- Exclude an old version of guava that is being pulled
in by a transitive dependency of google-api-client -->
<exclusion>
<groupId>com.google.guava</groupId>
<artifactId>guava-jdk5</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>joda-time</groupId>
<artifactId>joda-time</artifactId>
<version>${joda.version}</version>
</dependency>
<dependency>
<groupId>com.google.guava</groupId>
<artifactId>guava</artifactId>
<version>${guava.version}</version>
</dependency>
<!-- Add slf4j API frontend binding with JUL backend -->
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>${slf4j.version}</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-jdk14</artifactId>
<version>${slf4j.version}</version>
<!-- When loaded at runtime this will wire up slf4j to the JUL backend -->
<scope>runtime</scope>
</dependency>
<!-- Hamcrest and JUnit are required dependencies of PAssert,
which is used in the main code of DebuggingWordCount example. -->
<dependency>
<groupId>org.hamcrest</groupId>
<artifactId>hamcrest-core</artifactId>
<version>${hamcrest.version}</version>
</dependency>
<dependency>
<groupId>org.hamcrest</groupId>
<artifactId>hamcrest-library</artifactId>
<version>${hamcrest.version}</version>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>${junit.version}</version>
</dependency>
<!-- The DirectRunner is needed for unit tests. -->
<dependency>
<groupId>org.apache.beam</groupId>
<artifactId>beam-runners-direct-java</artifactId>
<version>${beam.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.mockito</groupId>
<artifactId>mockito-core</artifactId>
<version>${mockito.version}</version>
<scope>test</scope>
</dependency>
<!-- kafka -->
<dependency>
<groupId>org.apache.beam</groupId>
<artifactId>beam-sdks-java-io-kafka</artifactId>
<version>${beam.version}</version>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>0.11.0.0</version>
</dependency>
<!-- elasticsearch -->
<dependency>
<groupId>org.apache.beam</groupId>
<artifactId>beam-sdks-java-io-elasticsearch</artifactId>
<version>${beam.version}</version>
</dependency>
</dependencies>
</project>
Add the following code to the project under src/main/java/org.apache.beam.examples:
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.beam.examples;

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.elasticsearch.ElasticsearchIO;
import org.apache.beam.sdk.io.kafka.KafkaIO;
import org.apache.beam.sdk.io.kafka.KafkaRecord;
import org.apache.beam.sdk.options.Default;
import org.apache.beam.sdk.options.Description;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.options.Validation.Required;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.transforms.SimpleFunction;
import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.joda.time.Duration;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.google.common.collect.ImmutableMap;

//import org.apache.beam.runners.flink.FlinkRunner;

public class KafkaSample {

    private static final Logger logger = LoggerFactory.getLogger(KafkaSample.class);

    public static void main(String[] args) {
        String hosts = "211.100.75.227:9092"; // e.g. 192.168.1.110:11092,192.168.1.119:11092,192.168.1.120:11092
        String sourceTopic = "testmsg";

        // Create the pipeline from the command-line options.
        WordCountOptions options = PipelineOptionsFactory.fromArgs(args).withValidation().as(WordCountOptions.class);
        Pipeline pipeline = Pipeline.create(options);

        // Read from Kafka; <String, String> means both the key and the value are Strings.
        PCollection<KafkaRecord<String, String>> lines = pipeline.apply(KafkaIO.<String, String>read()
                .withBootstrapServers(hosts)                     // required: Kafka broker address(es) and port(s)
                .withTopic(sourceTopic)                          // required: the topic to read from
                .withKeyDeserializer(StringDeserializer.class)   // required: key deserializer
                .withValueDeserializer(StringDeserializer.class) // required: value deserializer
                // "auto.offset.reset" decides where a new consumer group starts reading;
                // "earliest" is the other common choice.
                .updateConsumerProperties(ImmutableMap.<String, Object>of("auto.offset.reset", "latest")));

        // Strip the Kafka metadata and keep only the message value.
        PCollection<String> kafkadata = lines.apply("Remove Kafka Metadata",
                ParDo.of(new DoFn<KafkaRecord<String, String>, String>() {
                    private static final long serialVersionUID = 1L;

                    @ProcessElement
                    public void processElement(ProcessContext ctx) {
                        System.out.println("get from topic:" + ctx.element().getKV());
                        ctx.output(ctx.element().getKV().getValue());
                    }
                }));

        // Slice the unbounded stream into fixed 5-second windows.
        PCollection<String> windowedEvents = kafkadata
                .apply(Window.<String>into(FixedWindows.of(Duration.standardSeconds(5))));

        // Count how many times each message occurs within its window.
        PCollection<KV<String, Long>> wordcount = windowedEvents.apply(Count.<String>perElement());

        // Format each (word, count) pair as a small JSON string: key = word, value = count.
        PCollection<String> wordtj = wordcount.apply("ConcatResultKVs",
                MapElements.via(new SimpleFunction<KV<String, Long>, String>() {
                    private static final long serialVersionUID = 1L;

                    @Override
                    public String apply(KV<String, Long> input) {
                        System.out.println("key=" + input.getKey());
                        System.out.println("value=" + input.getValue());
                        String ret = " {\"" + input.getKey() + "\":\"" + input.getValue() + "\"}";
                        System.out.println(ret);
                        return ret;
                    }
                }));

        /* Sink to Kafka. */
        wordtj.apply(KafkaIO.<Void, String>write()
                .withBootstrapServers(hosts)  // the cluster to write results back to
                .withTopic("outputmessage")   // the output topic
                // .withKeySerializer(StringSerializer.class) // not needed: the key type is Void
                .withValueSerializer(StringSerializer.class)
                // Exactly-once (withEOS) works with the Dataflow and Spark runners; on Flink it
                // requires Kafka 0.11+, and my cluster speaks 0.10, so it stays disabled here.
                // .withEOS(20, "eos-sink-group-id")
                .values()); // write only the values; the key defaults to null

        /* Sink to Elasticsearch. */
        String[] addresses = { "http://192.168.11.100:9200" };
        wordtj.apply(ElasticsearchIO.write().withConnectionConfiguration(
                ElasticsearchIO.ConnectionConfiguration.create(addresses, "myindex", "testdoc")));

        pipeline.run().waitUntilFinish();
    }

    public interface WordCountOptions extends PipelineOptions {
        /**
         * By default, this example reads from a public dataset containing the text of King
         * Lear. Set this option to choose a different input file or glob.
         */
        @Description("Path of the file to read from")
        @Default.String("gs://apache-beam-samples/shakespeare/kinglear.txt")
        String getInputFile();

        void setInputFile(String value);

        /** Set this required option to specify where to write the output. */
        @Description("Path of the file to write to")
        @Required
        String getOutput();

        void setOutput(String value);
    }

    /**
     * Options supported by {@link WordCount}.
     *
     * <p>Concept #4: Defining your own configuration options. Here, you can add your own
     * arguments to be processed by the command-line parser, and specify default values for
     * them. You can then access the options values in your pipeline code.
     *
     * <p>Inherits standard configuration options.
     */
    public interface KFOptions extends PipelineOptions {
        @Description("Kafka bootstrap servers")
        @Default.String("211.100.75.227:9092")
        String getBrokers();

        void setBrokers(String value);
    }
}
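When messages arrive, the job's console output should look roughly like this (illustrative, produced by the System.out.println calls above):
get from topic:KV{null, hello}
key=hello
value=2
 {"hello":"2"}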
Change the Kafka and Elasticsearch addresses in the code to match your environment, and it is ready to run.
Run it directly with the Direct (local) runner:
mvn compile exec:java -D exec.mainClass=org.apache.beam.examples.KafkaSample `
-D exec.args="--inputFile=pom.xml --output=counts" -P direct-runner
Run on an embedded local Flink instance (the runner starts Flink itself):
mvn compile exec:java -D exec.mainClass=org.apache.beam.examples.KafkaSample `
-D exec.args="--runner=FlinkRunner --inputFile=pom.xml --output=counts" -P flink-runner
Or package the job and submit it to an already-running local Flink cluster:
mvn package -Pflink-runner
After packaging, upload the jar to Flink and specify the entry class along with these program arguments:
--runner=FlinkRunner --inputFile=C:\path\to\quickstart\pom.xml --output=C:\tmp\counts --filesToStage=.\target\word-count-beam-bundled-0.1.jar org.apache.beam.examples.KafkaSample
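If you submit from the command line instead of the Flink web UI, a sketch along these lines should work (paths are illustrative):
flink run -c org.apache.beam.examples.KafkaSample target/word-count-beam-bundled-0.1.jar --runner=FlinkRunner --inputFile=pom.xml --output=/tmp/counts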


Problems encountered:
1. Plain JSON received from Kafka can be indexed into Elasticsearch without trouble, for example:
{
"field":"value",
"field1":"value"
}
But when a string value contains characters such as colons, indexing failed; it turned out that writing the whole document on one line works, for example:
{"mytime": "2018-12-13T06:44:41.460Z","carColor":"blue"}
This is probably related to the Elasticsearch _bulk bulk-insert API, which expects each document on a single line.
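A more robust fix is to emit well-formed, single-line JSON instead of hand-concatenated strings. A minimal sketch with Jackson, assuming jackson-databind is on the classpath (the pom already manages jackson.version); this hypothetical helper could replace the string concatenation inside the SimpleFunction above:

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;

// Hypothetical helper: serialize one (word, count) pair as valid single-line JSON.
static String toJson(String word, long count) {
    try {
        ObjectMapper mapper = new ObjectMapper();
        ObjectNode node = mapper.createObjectNode();
        node.put(word, count);
        // writeValueAsString escapes quotes, colons, etc., and emits no newlines
        return mapper.writeValueAsString(node);
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
}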