Recently I moved some SQL jobs from Hive to Spark and found they ran slower. The SQL consists mostly of insert overwrite statements, and the physical plan shows they go through InsertIntoHiveTable:

spark-sql> explain insert overwrite table test2 select * from test1;
== Physical Plan ==
InsertIntoHiveTable MetastoreRelation temp, test2, true, false
+- HiveTableScan [id#20], MetastoreRelation temp, test1
Time taken: 0.404 seconds, Fetched 1 row(s)

Following the code:
org.apache.spark.sql.hive.execution.InsertIntoHiveTable

  protected override def doExecute(): RDD[InternalRow] = {
    sqlContext.sparkContext.parallelize(sideEffectResult.asInstanceOf[Seq[InternalRow]], 1)
  }

  /**
   * Inserts all the rows in the table into Hive. Row objects are properly serialized with the
   * `org.apache.hadoop.hive.serde2.SerDe` and the
   * `org.apache.hadoop.mapred.OutputFormat` provided by the table definition.
   *
   * Note: this is run once and then kept to avoid double insertions.
   */
  protected[sql] lazy val sideEffectResult: Seq[InternalRow] = {
    // Have to pass the TableDesc object to RDD.mapPartitions and then instantiate new serializer
    // instances within the closure, since Serializer is not serializable while TableDesc is.
    val tableDesc = table.tableDesc
    val tableLocation = table.hiveQlTable.getDataLocation
    val tmpLocation = getExternalTmpPath(tableLocation)
    val fileSinkConf = new FileSinkDesc(tmpLocation.toString, tableDesc, false)
    val isCompressed = hadoopConf.get("hive.exec.compress.output", "false").toBoolean

    if (isCompressed) {
      // Please note that isCompressed, "mapred.output.compress", "mapred.output.compression.codec",
      // and "mapred.output.compression.type" have no impact on ORC because it uses table properties
      // to store compression information.
      hadoopConf.set("mapred.output.compress", "true")
      fileSinkConf.setCompressed(true)
      fileSinkConf.setCompressCodec(hadoopConf.get("mapred.output.compression.codec"))
      fileSinkConf.setCompressType(hadoopConf.get("mapred.output.compression.type"))
    }

    val numDynamicPartitions = partition.values.count(_.isEmpty)
    val numStaticPartitions = partition.values.count(_.nonEmpty)
    val partitionSpec = partition.map {
      case (key, Some(value)) => key -> value
      case (key, None) => key -> ""
    }

    // All partition column names in the format of "<column name 1>/<column name 2>/..."
    val partitionColumns = fileSinkConf.getTableInfo.getProperties.getProperty("partition_columns")
    val partitionColumnNames = Option(partitionColumns).map(_.split("/")).getOrElse(Array.empty)

    // By this time, the partition map must match the table's partition columns
    if (partitionColumnNames.toSet != partition.keySet) {
      throw new SparkException(
        s"""Requested partitioning does not match the ${table.tableName} table:
           |Requested partitions: ${partition.keys.mkString(",")}
           |Table partitions: ${table.partitionKeys.map(_.name).mkString(",")}""".stripMargin)
    }

    // Validate partition spec if there exist any dynamic partitions
    if (numDynamicPartitions > 0) {
      // Report error if dynamic partitioning is not enabled
      if (!hadoopConf.get("hive.exec.dynamic.partition", "true").toBoolean) {
        throw new SparkException(ErrorMsg.DYNAMIC_PARTITION_DISABLED.getMsg)
      }

      // Report error if dynamic partition strict mode is on but no static partition is found
      if (numStaticPartitions == 0 &&
        hadoopConf.get("hive.exec.dynamic.partition.mode", "strict").equalsIgnoreCase("strict")) {
        throw new SparkException(ErrorMsg.DYNAMIC_PARTITION_STRICT_MODE.getMsg)
      }

      // Report error if any static partition appears after a dynamic partition
      val isDynamic = partitionColumnNames.map(partitionSpec(_).isEmpty)
      if (isDynamic.init.zip(isDynamic.tail).contains((true, false))) {
        throw new AnalysisException(ErrorMsg.PARTITION_DYN_STA_ORDER.getMsg)
      }
    }

    val jobConf = new JobConf(hadoopConf)
    val jobConfSer = new SerializableJobConf(jobConf)

    // When speculation is on and output committer class name contains "Direct", we should warn
    // users that they may loss data if they are using a direct output committer.
    val speculationEnabled = sqlContext.sparkContext.conf.getBoolean("spark.speculation", false)
    val outputCommitterClass = jobConf.get("mapred.output.committer.class", "")
    if (speculationEnabled && outputCommitterClass.contains("Direct")) {
      val warningMessage =
        s"$outputCommitterClass may be an output committer that writes data directly to " +
          "the final location. Because speculation is enabled, this output committer may " +
          "cause data loss (see the case in SPARK-10063). If possible, please use an output " +
          "committer that does not have this behavior (e.g. FileOutputCommitter)."
      logWarning(warningMessage)
    }

    val writerContainer = if (numDynamicPartitions > 0) {
      val dynamicPartColNames = partitionColumnNames.takeRight(numDynamicPartitions)
      new SparkHiveDynamicPartitionWriterContainer(
        jobConf,
        fileSinkConf,
        dynamicPartColNames,
        child.output)
    } else {
      new SparkHiveWriterContainer(
        jobConf,
        fileSinkConf,
        child.output)
    }

    @transient val outputClass = writerContainer.newSerializer(table.tableDesc).getSerializedClass
    saveAsHiveFile(child.execute(), outputClass, fileSinkConf, jobConfSer, writerContainer)

    val outputPath = FileOutputFormat.getOutputPath(jobConf)
    // TODO: Correctly set holdDDLTime.
    // In most of the time, we should have holdDDLTime = false.
    // holdDDLTime will be true when TOK_HOLD_DDLTIME presents in the query as a hint.
    val holdDDLTime = false
    if (partition.nonEmpty) {
      if (numDynamicPartitions > 0) {
        externalCatalog.loadDynamicPartitions(
          db = table.catalogTable.database,
          table = table.catalogTable.identifier.table,
          outputPath.toString,
          partitionSpec,
          overwrite,
          numDynamicPartitions,
          holdDDLTime = holdDDLTime)
      } else {
        // scalastyle:off
        // ifNotExists is only valid with static partition, refer to
        // https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML#LanguageManualDML-InsertingdataintoHiveTablesfromqueries
        // scalastyle:on
        val oldPart =
          externalCatalog.getPartitionOption(
            table.catalogTable.database,
            table.catalogTable.identifier.table,
            partitionSpec)

        var doHiveOverwrite = overwrite

        if (oldPart.isEmpty || !ifNotExists) {
          // SPARK-18107: Insert overwrite runs much slower than hive-client.
          // Newer Hive largely improves insert overwrite performance. As Spark uses older Hive
          // version and we may not want to catch up new Hive version every time. We delete the
          // Hive partition first and then load data file into the Hive partition.
          if (oldPart.nonEmpty && overwrite) {
            oldPart.get.storage.locationUri.foreach { uri =>
              val partitionPath = new Path(uri)
              val fs = partitionPath.getFileSystem(hadoopConf)
              if (fs.exists(partitionPath)) {
                if (!fs.delete(partitionPath, true)) {
                  throw new RuntimeException(
                    "Cannot remove partition directory '" + partitionPath.toString)
                }
                // Don't let Hive do overwrite operation since it is slower.
                doHiveOverwrite = false
              }
            }
          }

          // inheritTableSpecs is set to true. It should be set to false for an IMPORT query
          // which is currently considered as a Hive native command.
          val inheritTableSpecs = true
          externalCatalog.loadPartition(
            table.catalogTable.database,
            table.catalogTable.identifier.table,
            outputPath.toString,
            partitionSpec,
            isOverwrite = doHiveOverwrite,
            holdDDLTime = holdDDLTime,
            inheritTableSpecs = inheritTableSpecs)
        }
      }
    } else {
      externalCatalog.loadTable(
        table.catalogTable.database,
        table.catalogTable.identifier.table,
        outputPath.toString, // TODO: URI
        overwrite,
        holdDDLTime)
    }

    // Attempt to delete the staging directory and the inclusive files. If failed, the files are
    // expected to be dropped at the normal termination of VM since deleteOnExit is used.
    try {
      createdTempDir.foreach { path => path.getFileSystem(hadoopConf).delete(path, true) }
    } catch {
      case NonFatal(e) =>
        logWarning(s"Unable to delete staging directory: $stagingDir.\n" + e)
    }

    // un-cache this table.
    sqlContext.sparkSession.catalog.uncacheTable(table.catalogTable.identifier.quotedString)
    sqlContext.sessionState.catalog.refreshTable(table.catalogTable.identifier)

    // It would be nice to just return the childRdd unchanged so insert operations could be chained,
    // however for now we return an empty list to simplify compatibility checks with hive, which
    // does not return anything for insert operations.
    // TODO: implement hive compatibility as rules.
    Seq.empty[InternalRow]
  }

An insert overwrite runs in three steps: select, write, and load. The first two steps are fine; the problem lies in the final load step. Taking loadPartition as an example, let's trace the execution:

org.apache.spark.sql.hive.HiveExternalCatalog

  override def loadPartition(
      db: String,
      table: String,
      loadPath: String,
      partition: TablePartitionSpec,
      isOverwrite: Boolean,
      holdDDLTime: Boolean,
      inheritTableSpecs: Boolean): Unit = withClient {
    requireTableExists(db, table)

    val orderedPartitionSpec = new util.LinkedHashMap[String, String]()
    getTable(db, table).partitionColumnNames.foreach { colName =>
      // Hive metastore is not case preserving and keeps partition columns with lower cased names,
      // and Hive will validate the column names in partition spec to make sure they are partition
      // columns. Here we Lowercase the column names before passing the partition spec to Hive
      // client, to satisfy Hive.
      orderedPartitionSpec.put(colName.toLowerCase, partition(colName))
    }

    client.loadPartition(
      loadPath,
      db,
      table,
      orderedPartitionSpec,
      isOverwrite,
      holdDDLTime,
      inheritTableSpecs)
  }

This calls HiveClientImpl.loadPartition:

org.apache.spark.sql.hive.client.HiveClientImpl

  def loadPartition(
      loadPath: String,
      dbName: String,
      tableName: String,
      partSpec: java.util.LinkedHashMap[String, String],
      replace: Boolean,
      holdDDLTime: Boolean,
      inheritTableSpecs: Boolean): Unit = withHiveState {
    val hiveTable = client.getTable(dbName, tableName, true /* throw exception */)
    shim.loadPartition(
      client,
      new Path(loadPath), // TODO: Use URI
      s"$dbName.$tableName",
      partSpec,
      replace,
      holdDDLTime,
      inheritTableSpecs,
      isSkewedStoreAsSubdir = hiveTable.isStoredAsSubDirectories)
  }

Which in turn calls Shim_v0_12.loadPartition:

org.apache.spark.sql.hive.client.Shim_v0_12

  override def loadPartition(
      hive: Hive,
      loadPath: Path,
      tableName: String,
      partSpec: JMap[String, String],
      replace: Boolean,
      holdDDLTime: Boolean,
      inheritTableSpecs: Boolean,
      isSkewedStoreAsSubdir: Boolean): Unit = {
    loadPartitionMethod.invoke(hive, loadPath, tableName, partSpec, replace: JBoolean,
      holdDDLTime: JBoolean, inheritTableSpecs: JBoolean, isSkewedStoreAsSubdir: JBoolean)
  }

  private lazy val loadPartitionMethod =
    findMethod(
      classOf[Hive],
      "loadPartition",
      classOf[Path],
      classOf[String],
      classOf[JMap[String, String]],
      JBoolean.TYPE,
      JBoolean.TYPE,
      JBoolean.TYPE,
      JBoolean.TYPE)

Which finally invokes Hive's Hive.loadPartition via reflection:

org.apache.hadoop.hive.ql.metadata.Hive (version 1.2)

  public void loadPartition(Path loadPath, String tableName, Map<String, String> partSpec,
      boolean replace, boolean holdDDLTime, boolean inheritTableSpecs,
      boolean isSkewedStoreAsSubdir, boolean isSrcLocal, boolean isAcid) throws HiveException {
    Table tbl = this.getTable(tableName);
    this.loadPartition(loadPath, tbl, partSpec, replace, holdDDLTime, inheritTableSpecs,
        isSkewedStoreAsSubdir, isSrcLocal, isAcid);
  }

  public Partition loadPartition(Path loadPath, Table tbl, Map<String, String> partSpec,
      boolean replace, boolean holdDDLTime, boolean inheritTableSpecs,
      boolean isSkewedStoreAsSubdir, boolean isSrcLocal, boolean isAcid) throws HiveException {
    Path tblDataLocationPath = tbl.getDataLocation();
    Partition newTPart = null;
    try {
      Partition oldPart = this.getPartition(tbl, partSpec, false);
      Path oldPartPath = null;
      if (oldPart != null) {
        oldPartPath = oldPart.getDataLocation();
      }

      Path newPartPath = null;
      FileSystem oldPartPathFS;
      if (inheritTableSpecs) {
        Path partPath = new Path(tbl.getDataLocation(), Warehouse.makePartPath(partSpec));
        newPartPath = new Path(tblDataLocationPath.toUri().getScheme(),
            tblDataLocationPath.toUri().getAuthority(), partPath.toUri().getPath());
        if (oldPart != null) {
          oldPartPathFS = oldPartPath.getFileSystem(this.getConf());
          FileSystem loadPathFS = loadPath.getFileSystem(this.getConf());
          if (FileUtils.equalsFileSystem(oldPartPathFS, loadPathFS)) {
            newPartPath = oldPartPath;
          }
        }
      } else {
        newPartPath = oldPartPath;
      }

      List<Path> newFiles = null;
      if (replace) {
        replaceFiles(tbl.getPath(), loadPath, newPartPath, oldPartPath, this.getConf(), isSrcLocal);
      } else {
        newFiles = new ArrayList();
        oldPartPathFS = tbl.getDataLocation().getFileSystem(this.conf);
        copyFiles(this.conf, loadPath, newPartPath, oldPartPathFS, isSrcLocal, isAcid, newFiles);
      }

      boolean forceCreate = !holdDDLTime;
      newTPart = this.getPartition(tbl, partSpec, forceCreate, newPartPath.toString(),
          inheritTableSpecs, newFiles);
      if (!holdDDLTime && isSkewedStoreAsSubdir) {
        org.apache.hadoop.hive.metastore.api.Partition newCreatedTpart = newTPart.getTPartition();
        SkewedInfo skewedInfo = newCreatedTpart.getSd().getSkewedInfo();
        Map<List<String>, String> skewedColValueLocationMaps =
            this.constructListBucketingLocationMap(newPartPath, skewedInfo);
        skewedInfo.setSkewedColValueLocationMaps(skewedColValueLocationMaps);
        newCreatedTpart.getSd().setSkewedInfo(skewedInfo);
        this.alterPartition(tbl.getDbName(), tbl.getTableName(), new Partition(tbl, newCreatedTpart));
        this.getPartition(tbl, partSpec, true, newPartPath.toString(), inheritTableSpecs, newFiles);
        return new Partition(tbl, newCreatedTpart);
      } else {
        return newTPart;
      }
    } catch (IOException var20) {
      LOG.error(StringUtils.stringifyException(var20));
      throw new HiveException(var20);
    } catch (MetaException var21) {
      LOG.error(StringUtils.stringifyException(var21));
      throw new HiveException(var21);
    } catch (InvalidOperationException var22) {
      LOG.error(StringUtils.stringifyException(var22));
      throw new HiveException(var22);
    }
  }

  protected static void replaceFiles(Path tablePath, Path srcf, Path destf, Path oldPath,
      HiveConf conf, boolean isSrcLocal) throws HiveException {
    try {
      FileSystem destFs = destf.getFileSystem(conf);
      boolean inheritPerms = HiveConf.getBoolVar(conf, ConfVars.HIVE_WAREHOUSE_SUBDIR_INHERIT_PERMS);

      FileSystem srcFs;
      FileStatus[] srcs;
      try {
        srcFs = srcf.getFileSystem(conf);
        srcs = srcFs.globStatus(srcf);
      } catch (IOException var20) {
        throw new HiveException("Getting globStatus " + srcf.toString(), var20);
      }

      if (srcs == null) {
        LOG.info("No sources specified to move: " + srcf);
      } else {
        List<List<Path[]>> result = checkPaths(conf, destFs, srcs, srcFs, destf, true);
        if (oldPath != null) {
          try {
            FileSystem fs2 = oldPath.getFileSystem(conf);
            if (fs2.exists(oldPath)) {
              if (FileUtils.isSubDir(oldPath, destf, fs2)) {
                FileUtils.trashFilesUnderDir(fs2, oldPath, conf);
              }
              if (inheritPerms) {
                inheritFromTable(tablePath, destf, conf, destFs);
              }
            }
          } catch (Exception var19) {
            LOG.warn("Directory " + oldPath.toString() + " cannot be removed: " + var19, var19);
          }
        }

        if (srcs.length == 1 && srcs[0].isDir()) {
          Path destfp = destf.getParent();
          if (!destFs.exists(destfp)) {
            boolean success = destFs.mkdirs(destfp);
            if (!success) {
              LOG.warn("Error creating directory " + destf.toString());
            }
            if (inheritPerms && success) {
              inheritFromTable(tablePath, destfp, conf, destFs);
            }
          }

          Iterator i$ = result.iterator();
          while (i$.hasNext()) {
            List<Path[]> sdpairs = (List) i$.next();
            Iterator i$2 = sdpairs.iterator();
            while (i$2.hasNext()) {
              Path[] sdpair = (Path[]) i$2.next();
              Path destParent = sdpair[1].getParent();
              FileSystem destParentFs = destParent.getFileSystem(conf);
              if (!destParentFs.isDirectory(destParent)) {
                boolean success = destFs.mkdirs(destParent);
                if (!success) {
                  LOG.warn("Error creating directory " + destParent);
                }
                if (inheritPerms && success) {
                  inheritFromTable(tablePath, destParent, conf, destFs);
                }
              }
              if (!moveFile(conf, sdpair[0], sdpair[1], destFs, true, isSrcLocal)) {
                throw new IOException("Unable to move file/directory from " + sdpair[0] + " to " + sdpair[1]);
              }
            }
          }
        } else {
          if (!destFs.exists(destf)) {
            boolean success = destFs.mkdirs(destf);
            if (!success) {
              LOG.warn("Error creating directory " + destf.toString());
            }
            if (inheritPerms && success) {
              inheritFromTable(tablePath, destf, conf, destFs);
            }
          }

          Iterator i$ = result.iterator();
          while (i$.hasNext()) {
            List<Path[]> sdpairs = (List) i$.next();
            Iterator i$2 = sdpairs.iterator();
            while (i$2.hasNext()) {
              Path[] sdpair = (Path[]) i$2.next();
              if (!moveFile(conf, sdpair[0], sdpair[1], destFs, true, isSrcLocal)) {
                throw new IOException("Error moving: " + sdpair[0] + " into: " + sdpair[1]);
              }
            }
          }
        }
      }
    } catch (IOException var21) {
      throw new HiveException(var21.getMessage(), var21);
    }
  }

  public static boolean trashFilesUnderDir(FileSystem fs, Path f, Configuration conf)
      throws FileNotFoundException, IOException {
    FileStatus[] statuses = fs.listStatus(f, HIDDEN_FILES_PATH_FILTER);
    boolean result = true;
    FileStatus[] arr$ = statuses;
    int len$ = statuses.length;
    for (int i$ = 0; i$ < len$; ++i$) {
      FileStatus status = arr$[i$];
      result &= moveToTrash(fs, status.getPath(), conf);
    }
    return result;
  }

When Hive executes loadPartition and the partition directory already exists, it calls replaceFiles, which calls trashFilesUnderDir, which moves the existing files to the trash one at a time.

Since Spark executes loadPartition by reflectively invoking exactly this Hive logic, why is it still so much slower than Hive?

At this point note that the Hive deployment we compare against is 2.1, while Spark 2.1.1 bundles Hive 1.2. Comparing the code between Hive 1.2 and Hive 2.1 shows a real difference. Here is the Hive 2.1 code:

org.apache.hadoop.hive.ql.metadata.Hive (version 2.1)

  /**
   * Trashes or deletes all files under a directory. Leaves the directory as is.
   * @param fs FileSystem to use
   * @param f path of directory
   * @param conf hive configuration
   * @param forceDelete whether to force delete files if trashing does not succeed
   * @return true if deletion successful
   * @throws IOException
   */
  private boolean trashFilesUnderDir(final FileSystem fs, Path f, final Configuration conf)
      throws IOException {
    FileStatus[] statuses = fs.listStatus(f, FileUtils.HIDDEN_FILES_PATH_FILTER);
    boolean result = true;
    final List<Future<Boolean>> futures = new LinkedList<>();
    final ExecutorService pool = conf.getInt(ConfVars.HIVE_MOVE_FILES_THREAD_COUNT.varname, 25) > 0 ?
        Executors.newFixedThreadPool(conf.getInt(ConfVars.HIVE_MOVE_FILES_THREAD_COUNT.varname, 25),
            new ThreadFactoryBuilder().setDaemon(true).setNameFormat("Delete-Thread-%d").build()) : null;
    final SessionState parentSession = SessionState.get();
    for (final FileStatus status : statuses) {
      if (null == pool) {
        result &= FileUtils.moveToTrash(fs, status.getPath(), conf);
      } else {
        futures.add(pool.submit(new Callable<Boolean>() {
          @Override
          public Boolean call() throws Exception {
            SessionState.setCurrentSessionState(parentSession);
            return FileUtils.moveToTrash(fs, status.getPath(), conf);
          }
        }));
      }
    }
    if (null != pool) {
      pool.shutdown();
      for (Future<Boolean> future : futures) {
        try {
          result &= future.get();
        } catch (InterruptedException | ExecutionException e) {
          LOG.error("Failed to delete: ", e);
          pool.shutdownNow();
          throw new IOException(e);
        }
      }
    }
    return result;
  }

As the code shows, Hive 2.1 deletes the files through a thread pool (sized by ConfVars.HIVE_MOVE_FILES_THREAD_COUNT, 25 by default in this code path), while Hive 1.2 deletes them serially inside a for loop. So when a partition contains many files, Hive 2.1 is far faster than Hive 1.2 (and therefore far faster than Spark 2.1.1, which bundles Hive 1.2).
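
To get a feel for the gap, here is a small, self-contained toy program (not Hive code) that pretends each moveToTrash call costs a fixed round-trip latency and compares issuing 10,000 of them serially versus through a 25-thread pool, mirroring the pool size used in the Hive 2.1 code above. The file count and the 2 ms per-call latency are made up purely for illustration; real numbers depend on the NameNode and the trash policy.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class TrashLatencyDemo {
  // Pretend each moveToTrash call is one NameNode round trip of ~2 ms (illustrative only).
  static void fakeMoveToTrash() throws InterruptedException {
    Thread.sleep(2);
  }

  public static void main(String[] args) throws Exception {
    final int files = 10_000;
    final int threads = 25; // mirrors the pool size used by Hive 2.1's trashFilesUnderDir

    // Hive 1.2 style: one file at a time.
    long t0 = System.currentTimeMillis();
    for (int i = 0; i < files; i++) {
      fakeMoveToTrash();
    }
    long serialMs = System.currentTimeMillis() - t0;

    // Hive 2.1 style: every file submitted to a fixed-size pool.
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    List<Future<?>> futures = new ArrayList<>();
    long t1 = System.currentTimeMillis();
    for (int i = 0; i < files; i++) {
      futures.add(pool.submit(() -> {
        fakeMoveToTrash();
        return null;
      }));
    }
    for (Future<?> f : futures) {
      f.get();
    }
    long pooledMs = System.currentTimeMillis() - t1;
    pool.shutdown();

    System.out.println("serial: " + serialMs + " ms, pooled: " + pooledMs + " ms");
  }
}

Under these assumed numbers the serial pass costs roughly 10,000 × 2 ms ≈ 20 s while the pooled pass finishes in around a second, which is the same shape of difference seen between Hive 1.2 and Hive 2.1 on partitions with many files.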

Spark depends on Hive through direct reflective calls, and many class and method signatures changed between Hive 1.2 and Hive 2.1, so upgrading the bundled Hive is difficult. The only practical way around this problem was to patch the Hive.trashFilesUnderDir code shipped with Spark so that it likewise deletes files with a thread pool, which solved the problem.
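
The patch itself is not shown in this post, so the following is only a minimal sketch of that kind of change, modeled on the Hive 2.1 implementation quoted above. It assumes the serial method lives in the bundled Hive 1.2 exactly as quoted (with moveToTrash and HIDDEN_FILES_PATH_FILTER available in the same class), hardcodes the pool size instead of reading it from configuration, and needs java.util.ArrayList, java.util.List and java.util.concurrent.* imports added to that class:

  public static boolean trashFilesUnderDir(final FileSystem fs, Path f, final Configuration conf)
      throws FileNotFoundException, IOException {
    FileStatus[] statuses = fs.listStatus(f, HIDDEN_FILES_PATH_FILTER);
    // Hive 2.1 sizes this pool via ConfVars.HIVE_MOVE_FILES_THREAD_COUNT; hardcoded here for the sketch.
    final int threads = 25;
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    List<Future<Boolean>> futures = new ArrayList<Future<Boolean>>();
    for (final FileStatus status : statuses) {
      // Same per-file moveToTrash call as the serial loop, just submitted to the pool.
      futures.add(pool.submit(new Callable<Boolean>() {
        @Override
        public Boolean call() throws Exception {
          return moveToTrash(fs, status.getPath(), conf);
        }
      }));
    }
    pool.shutdown();
    boolean result = true;
    try {
      for (Future<Boolean> future : futures) {
        result &= future.get();
      }
    } catch (InterruptedException e) {
      pool.shutdownNow();
      throw new IOException(e);
    } catch (ExecutionException e) {
      pool.shutdownNow();
      throw new IOException(e);
    }
    return result;
  }

The real Hive 2.1 version additionally propagates the caller's SessionState into the worker threads and falls back to serial deletion when the configured thread count is zero; a production patch should keep those safeguards.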
