HDFS small file merge
1. Hive
Settings
There are 3 settings that should be configured before archiving is used. (Example values are shown.)
hive> set hive.archive.enabled=true;
hive> set hive.archive.har.parentdir.settable=true;
hive> set har.partfile.size=1099511627776;
hive.archive.enabled controls whether archiving operations are enabled.
hive.archive.har.parentdir.settable informs Hive whether the parent directory can be set while creating the archive. In recent versions of Hadoop the -p option can specify the root directory of the archive. For example, if /dir1/dir2/file is archived with /dir1 as the parent directory, then the resulting archive file will contain the directory structure dir2/file. In older versions of Hadoop (prior to 2011), this option was not available and therefore Hive must be configured to accommodate this limitation.
har.partfile.size controls the size of the files that make up the archive. The archive will contain size_of_partition / har.partfile.size files, rounded up. Higher values mean fewer files, but will result in longer archiving times due to the reduced number of mappers.
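A quick way to estimate how many part files an archive will produce is to compare the partition size against har.partfile.size. This is only a sketch of the arithmetic; the path below is a hypothetical example.
# total partition size in bytes (first column of the output)
hdfs dfs -du -s /user/hive/warehouse/mydb.db/t/ds=2008-04-08
# expected part files = ceil(partition_size / har.partfile.size);
# with the 1 TB (1099511627776 byte) value above, a 2.5 TB partition yields 3 part files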
Usage
Archive
Once the configuration values are set, a partition can be archived with the command:
ALTER TABLE table_name ARCHIVE PARTITION (partition_col = partition_col_value, partition_col = partition_col_value, ...)
For example:
ALTER TABLE srcpart ARCHIVE PARTITION(ds='2008-04-08', hr='12')
Once the command is issued, a MapReduce job will perform the archiving. Unlike Hive queries, there is no output on the CLI to indicate progress.
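Since nothing is printed on completion either, a simple way to verify the result is to list the partition directory afterwards; once archived it contains a single data.har directory instead of the original files (the warehouse path below is a hypothetical example for the srcpart table).
hdfs dfs -ls /user/hive/warehouse/srcpart/ds=2008-04-08/hr=12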
Unarchive
The partition can be reverted back to its original files with the unarchive command:
ALTER TABLE srcpart UNARCHIVE PARTITION(ds='2008-04-08', hr='12')
2. HDFS (Apache Hadoop Archives – Hadoop Archives Guide)
Overview
Hadoop archives are special format archives. A Hadoop archive maps to a file system directory. A Hadoop archive always has a .har extension. A Hadoop archive directory contains metadata (in the form of _index and _masterindex) and data (part-*) files. The _index file contains the names of the files that are part of the archive and their locations within the part files.
How to Create an Archive
Usage: hadoop archive -archiveName name -p <parent> [-r <replication factor>] <src>* <dest>
-archiveName is the name of the archive you would like to create, for example foo.har. The name should have a *.har extension. The parent argument specifies the relative path against which the files should be archived. An example would be:
-p /foo/bar a/b/c e/f/g
Here /foo/bar is the parent path and a/b/c, e/f/g are relative paths to the parent. Note that a MapReduce job creates the archives, so you need a MapReduce cluster to run this. For a detailed example, see the later sections.
-r indicates the desired replication factor; if this optional argument is not specified, a replication factor of 3 will be used.
If you just want to archive a single directory /foo/bar then you can just use
hadoop archive -archiveName zoo.har -p /foo/bar -r 3 /outputdir
If you specify source files that are in an encryption zone, they will be decrypted and written into the archive. If the har file is not located in an encryption zone, then they will be stored in clear (decrypted) form. If the har file is located in an encryption zone, they will be stored in encrypted form.
How to Look Up Files in Archives
The archive exposes itself as a file system layer, so all the fs shell commands work on archives, but with a different URI. Also note that archives are immutable, so renames, deletes and creates return an error. The URI for Hadoop Archives is:
har://scheme-hostname:port/archivepath/fileinarchive
If no scheme is provided it assumes the underlying filesystem. In that case the URI would look like
har:///archivepath/fileinarchive
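For example, either URI form can be used with the normal fs shell commands to browse the foo.har archive from the examples below; the NameNode host and port here are placeholders.
# fully qualified form: scheme-hostname:port of the underlying filesystem
hdfs dfs -ls har://hdfs-namenode.example.com:8020/user/zoo/foo.har/dir1
# scheme-less form, resolved against the default filesystem
hdfs dfs -ls har:///user/zoo/foo.har/dir1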
How to Unarchive an Archive
Since all the fs shell commands in the archives work transparently, unarchiving is just a matter of copying.
To unarchive sequentially:
hdfs dfs -cp har:///user/zoo/foo.har/dir1 hdfs:/user/zoo/newdir
To unarchive in parallel, use DistCp:
hadoop distcp har:///user/zoo/foo.har/dir1 hdfs:/user/zoo/newdir
Archives Examples
Creating an Archive
hadoop archive -archiveName foo.har -p /user/hadoop -r 3 dir1 dir2 /user/zoo
The above example creates an archive using /user/hadoop as the relative archive directory. The directories /user/hadoop/dir1 and /user/hadoop/dir2 will be archived in the following file system directory – /user/zoo/foo.har. Archiving does not delete the input files. If you want to delete the input files after creating the archives (to reduce namespace usage), you will have to do it on your own, as shown below. In this example, because -r 3 is specified, a replication factor of 3 will be used.
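A cautious way to do that cleanup is to list the archive first and only remove the originals once everything is confirmed to be inside it; a minimal sketch using the paths from this example:
# confirm the archive holds everything before deleting anything
hdfs dfs -ls -R har:///user/zoo/foo.har/
hdfs dfs -rm -r /user/hadoop/dir1 /user/hadoop/dir2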
Looking Up Files
Looking up files in hadoop archives is as easy as doing an ls on the filesystem. After you have archived the directories /user/hadoop/dir1 and /user/hadoop/dir2 as in the example above, to see all the files in the archives you can just run:
hdfs dfs -ls -R har:///user/zoo/foo.har/
To understand the significance of the -p argument, let's go through the above example again. If you just do a plain ls (not a recursive listing) on the hadoop archive using
hdfs dfs -ls har:///user/zoo/foo.har
The output should be:
har:///user/zoo/foo.har/dir1
har:///user/zoo/foo.har/dir2
As you may recall, the archives were created with the following command:
hadoop archive -archiveName foo.har -p /user/hadoop dir1 dir2 /user/zoo
If we were to change the command to:
hadoop archive -archiveName foo.har -p /user/ hadoop/dir1 hadoop/dir2 /user/zoo
then a ls on the hadoop archive using
hdfs dfs -ls har:///user/zoo/foo.har
would give you
har:///user/zoo/foo.har/hadoop/dir1
har:///user/zoo/foo.har/hadoop/dir2
Notice that the archived files have been archived relative to /user/ rather than /user/hadoop.
3. Practice (demo with an internal table stored as PARQUET files)
create internal table
CREATE TABLE xx.a(
original_test_value DOUBLE,
flag STRING
)
PARTITIONED BY (
stat_date STRING,
parametric_hash STRING
)
WITH SERDEPROPERTIES ('serialization.format'='1')
STORED AS PARQUET
LOCATION 'hdfs://nameservice1/user/hive/warehouse/xx.db/a'
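Because ARCHIVE only works on managed tables (as the external-table demo below shows), it can be worth confirming the table type before archiving. A minimal check, assuming the Hive CLI is available on the node:
# Table Type should report MANAGED_TABLE for xx.a
hive -e "DESCRIBE FORMATTED xx.a;" | grep -i "Table Type"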
copy the parquet files to the new dir
sudo -u hdfs hdfs dfs -mkdir -p /user/hive/warehouse/xx.db/a/stat_date=20220125/parametric_hash=0
sudo -u hdfs hdfs dfs -cp /user/hive/warehouse/xx.db/a/stat_date=20220125/parametric_hash=0/* /user/hive/warehouse/xx.db/a/stat_date=20220125/parametric_hash=0
current hdfs file list:
| Permission | Owner | Group | Size | Last Modified | Replication | Block Size | Name |
|---|---|---|---|---|---|---|---|
| -rwxr-xr-x | airflow | hive | 40.08 KB | Feb 09 11:21 | 3 | 128 MB | part-00000-e5bd2f13-a955-4920-a8ac-bd19ec031843.c000 |
| -rwxr-xr-x | airflow | hive | 29.07 KB | Feb 09 11:21 | 3 | 128 MB | part-00002-e5bd2f13-a955-4920-a8ac-bd19ec031843.c000 |
| -rwxr-xr-x | airflow | hive | 27.97 KB | Feb 09 11:21 | 3 | 128 MB | part-00003-e5bd2f13-a955-4920-a8ac-bd19ec031843.c000 |
| -rwxr-xr-x | airflow | hive | 27.03 KB | Feb 09 11:21 | 3 | 128 MB | part-00004-e5bd2f13-a955-4920-a8ac-bd19ec031843.c000 |
| -rwxr-xr-x | airflow | hive | 39.71 KB | Feb 09 11:21 | 3 | 128 MB | part-00006-e5bd2f13-a955-4920-a8ac-bd19ec031843.c000 |
| -rwxr-xr-x | airflow | hive | 32.54 KB | Feb 09 11:21 | 3 | 128 MB | part-00007-e5bd2f13-a955-4920-a8ac-bd19ec031843.c000 |
| -rwxr-xr-x | airflow | hive | 23.99 KB | Feb 09 11:21 | 3 | 128 MB | part-00011-e5bd2f13-a955-4920-a8ac-bd19ec031843.c000 |
| -rwxr-xr-x | airflow | hive | 23.62 KB | Feb 09 11:21 | 3 | 128 MB | part-00012-e5bd2f13-a955-4920-a8ac-bd19ec031843.c000 |
| -rwxr-xr-x | airflow | hive | 30.02 KB | Feb 09 11:21 | 3 | 128 MB | part-00014-e5bd2f13-a955-4920-a8ac-bd19ec031843.c000 |
| -rwxr-xr-x | airflow | hive | 22.91 KB | Feb 09 11:21 | 3 | 128 MB | part-00015-e5bd2f13-a955-4920-a8ac-bd19ec031843.c000 |
| -rwxr-xr-x | airflow | hive | 28.79 KB | Feb 09 11:21 | 3 | 128 MB | part-00016-e5bd2f13-a955-4920-a8ac-bd19ec031843.c000 |
| -rwxr-xr-x | airflow | hive | 19.11 KB | Feb 09 11:21 | 3 | 128 MB | part-00018-e5bd2f13-a955-4920-a8ac-bd19ec031843.c000 |
add hive partition and archive
# add partition
ALTER TABLE xx.a add PARTITION(stat_date='20220125', parametric_hash='0') location '/user/hive/warehouse/xx.db/a/stat_date=20220125/parametric_hash=0/';
#archive
ALTER TABLE xx.a ARCHIVE PARTITION(stat_date='20220125',parametric_hash='0');
archived HDFS file list
Hive merged the 12 files into a single archive, data.har.
The data.har directory contains _SUCCESS, _index, _masterindex, and part-0. The actual file contents are stored in part-0; individual files are looked up via the _index and _masterindex files.
| Permission | Owner | Group | Size | Last Modified | Replication | Block Size | Name |
|---|---|---|---|---|---|---|---|
| drwxr-xr-x | hdfs | hive | 0 B | Mar 01 12:36 | 0 | 0 B | data.har |
| Permission | Owner | Group | Size | Last Modified | Replication | Block Size | Name |
|---|---|---|---|---|---|---|---|
| -rw-r--r-- | hdfs | hive | 0 B | Mar 01 12:36 | 3 | 128 MB | _SUCCESS |
| -rw-r--r-- | hdfs | hive | 3.38 KB | Mar 01 12:36 | 3 | 128 MB | _index |
| -rw-r--r-- | hdfs | hive | 24 B | Mar 01 12:36 | 3 | 128 MB | _masterindex |
| -rw-r--r-- | hdfs | hive | 561.31 KB | Mar 01 12:36 | 3 | 512 MB | part-0 |
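The archived files can still be browsed through the har:// scheme, which is a quick way to confirm that all 12 part files made it into the archive:
hdfs dfs -ls -R har:///user/hive/warehouse/xx.db/a/stat_date=20220125/parametric_hash=0/data.har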
effect
Positive
12 small files occupying 12 blocks are merged into 1 file with 1 block.
This removes 11 file/block metadata entries, reducing the load on the NameNode.
negative
##### select * from the unarchived and the archived table, same partition, via HiveServer2
# no archive
SELECT * from xx.b where parametric_hash =0;
20000 rows - 93ms (+5.754s)
# archive
SELECT * from xx.a where parametric_hash =0;
20000 rows - 102ms (+8.972s)
error: selecting the table via Impala
select * from xx.a;
# cannot be selected via Impala due to: Failed to connect to FS: har://hdfs-nameservice1/
SQL Error [500051] [HY000]: [Cloudera][ImpalaJDBCDriver](500051) ERROR processing query/statement. Error Code: 0, SQL state: Failed to connect to FS: har://hdfs-nameservice1/
Error(255): Unknown error 255
Root cause: IOException: Invalid path for the Har Filesystem. har://hdfs-nameservice1/
, Query: SELECT `a`.`original_test_value`, `a`.`flag`, `a`.`stat_date`,`a`.`parametric_hash` FROM `xx`.`a`.
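Impala cannot read the har:// filesystem, so if this partition still has to be queried from Impala, one workaround (not a fix) is to unarchive it again with the command from section 1 and then refresh Impala's metadata; impala-shell availability is assumed here.
# restore the original files, then let Impala pick up the change
hive -e "ALTER TABLE xx.a UNARCHIVE PARTITION(stat_date='20220125', parametric_hash='0');"
impala-shell -q "REFRESH xx.a;"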
4. Practice (demo with an external table stored as PARQUET files)
create external table
CREATE EXTERNAL TABLE xx.b(
original_test_value DOUBLE,
flag STRING
)
PARTITIONED BY (
stat_date STRING,
parametric_hash STRING
)
WITH SERDEPROPERTIES ('serialization.format'='1')
STORED AS PARQUET
LOCATION 'hdfs://nameservice1/user/hive/warehouse/xx.db/b'
copy the parquet files to the new dir
sudo -u hdfs hdfs dfs -mkdir -p /user/hive/warehouse/xx.db/b/stat_date=20220125/parametric_hash=0
sudo -u hdfs hdfs dfs -cp /user/hive/warehouse/xx.db/c/stat_date=20220125/parametric_hash=0/* /user/hive/warehouse/xx.db/b/stat_date=20220125/parametric_hash=0
current hdfs file list:
| Permission | Owner | Group | Size | Last Modified | Replication | Block Size | Name |
|---|---|---|---|---|---|---|---|
| -rwxr-xr-x | airflow | hive | 40.08 KB | Feb 09 11:21 | 3 | 128 MB | part-00000-e5bd2f13-a955-4920-a8ac-bd19ec031843.c000 |
| -rwxr-xr-x | airflow | hive | 29.07 KB | Feb 09 11:21 | 3 | 128 MB | part-00002-e5bd2f13-a955-4920-a8ac-bd19ec031843.c000 |
| -rwxr-xr-x | airflow | hive | 27.97 KB | Feb 09 11:21 | 3 | 128 MB | part-00003-e5bd2f13-a955-4920-a8ac-bd19ec031843.c000 |
| -rwxr-xr-x | airflow | hive | 27.03 KB | Feb 09 11:21 | 3 | 128 MB | part-00004-e5bd2f13-a955-4920-a8ac-bd19ec031843.c000 |
| -rwxr-xr-x | airflow | hive | 39.71 KB | Feb 09 11:21 | 3 | 128 MB | part-00006-e5bd2f13-a955-4920-a8ac-bd19ec031843.c000 |
| -rwxr-xr-x | airflow | hive | 32.54 KB | Feb 09 11:21 | 3 | 128 MB | part-00007-e5bd2f13-a955-4920-a8ac-bd19ec031843.c000 |
| -rwxr-xr-x | airflow | hive | 23.99 KB | Feb 09 11:21 | 3 | 128 MB | part-00011-e5bd2f13-a955-4920-a8ac-bd19ec031843.c000 |
| -rwxr-xr-x | airflow | hive | 23.62 KB | Feb 09 11:21 | 3 | 128 MB | part-00012-e5bd2f13-a955-4920-a8ac-bd19ec031843.c000 |
| -rwxr-xr-x | airflow | hive | 30.02 KB | Feb 09 11:21 | 3 | 128 MB | part-00014-e5bd2f13-a955-4920-a8ac-bd19ec031843.c000 |
| -rwxr-xr-x | airflow | hive | 22.91 KB | Feb 09 11:21 | 3 | 128 MB | part-00015-e5bd2f13-a955-4920-a8ac-bd19ec031843.c000 |
| -rwxr-xr-x | airflow | hive | 28.79 KB | Feb 09 11:21 | 3 | 128 MB | part-00016-e5bd2f13-a955-4920-a8ac-bd19ec031843.c000 |
| -rwxr-xr-x | airflow | hive | 19.11 KB | Feb 09 11:21 | 3 | 128 MB | part-00018-e5bd2f13-a955-4920-a8ac-bd19ec031843.c000 |
add hive partition and archive
# add partition
ALTER TABLE xx.b add PARTITION(stat_date='20220125',parametric_hash='0') location '/user/hive/warehouse/xx.db/b/stat_date=20220125/parametric_hash=0/';
# archive
ALTER TABLE xx.b ARCHIVE PARTITION(stat_date='20220125',parametric_hash='0');
error:
## external tables don't support Hive archive
ALTER TABLE xx.b ARCHIVE PARTITION(stat_date='20220125', parametric_hash='0');
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. ARCHIVE can only be performed on managed tables
## drop the partition
ALTER TABLE xx.b drop PARTITION(stat_date='20220125' ,parametric_hash='0') ;
merge the small files with the HDFS archive command
# build the har from all files in the source dir, writing it to the same dir
sudo -u hdfs hadoop archive -archiveName data.har -p /user/hive/warehouse/xx.db/b/stat_date=20220125/parametric_hash=0/ /user/hive/warehouse/xx.db/b/stat_date=20220125/parametric_hash=0/
# delete the original files now that the har has been built
hdfs dfs -rmr /user/hive/warehouse/xx.db/b/stat_date=20220125/parametric_hash=0/*.c000
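Note that -rmr is deprecated in newer Hadoop releases; the equivalent modern form of the same cleanup is:
hdfs dfs -rm -r /user/hive/warehouse/xx.db/b/stat_date=20220125/parametric_hash=0/*.c000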
archived HDFS file list
The hadoop archive command merged the 12 files into a single archive, data.har.
The data.har directory contains _SUCCESS, _index, _masterindex, and part-0. The actual file contents are stored in part-0; individual files are looked up via the _index and _masterindex files.
| Permission | Owner | Group | Size | Last Modified | Replication | Block Size | Name |
|---|---|---|---|---|---|---|---|
| drwxr-xr-x | hdfs | hive | 0 B | Mar 01 12:36 | 0 | 0 B | data.har |
| Permission | Owner | Group | Size | Last Modified | Replication | Block Size | Name |
|---|---|---|---|---|---|---|---|
| -rw-r--r-- | hdfs | hive | 0 B | Mar 01 12:36 | 3 | 128 MB | _SUCCESS |
| -rw-r--r-- | hdfs | hive | 3.38 KB | Mar 01 12:36 | 3 | 128 MB | _index |
| -rw-r--r-- | hdfs | hive | 24 B | Mar 01 12:36 | 3 | 128 MB | _masterindex |
| -rw-r--r-- | hdfs | hive | 561.31 KB | Mar 01 12:36 | 3 | 512 MB | part-0 |
rebuild the partition
# rebuild the partition, pointing its location at the har file path
ALTER TABLE xx.b add PARTITION(stat_date='20220125',parametric_hash='0') location 'har:///user/hive/warehouse/xx.db/b/stat_date=20220125/parametric_hash=0/data.har';
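To double-check that the partition now points at the archive, its metadata can be inspected; a sketch, assuming the Hive CLI:
hive -e "DESCRIBE FORMATTED xx.b PARTITION(stat_date='20220125', parametric_hash='0');" | grep -i Location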
effect
Positive
12 small files occupying 12 blocks are merged into 1 file with 1 block.
This removes 11 file/block metadata entries, reducing the load on the NameNode.
negative
##### select * from the unarchived and the archived table, same partition, via HiveServer2
# no archive
SELECT * from xx.b where parametric_hash =0;
20000 rows - 93ms (+5.754s)
# archive
SELECT * from xx.b where parametric_hash =0;
20000 rows - 102ms (+8.972s)
error: selecting the table via Impala
select * from xx.b;
# cannot be selected via Impala; the query fails with:
SQL Error [500312] [HY000]: [Cloudera][ImpalaJDBCDriver](500312) Error in fetching data rows: Disk I/O error on impala03-dev:22000: Failed to open HDFS file har:/user/hive/warehouse/xx.db/b/stat_date=20220125/parametric_hash=0/data.har/part-00006-e5bd2f13-a955-4920-a8ac-bd19ec031843.c000
Error(22): Invalid argument
Root cause: IllegalArgumentException: Wrong FS: har:/user/hive/warehouse/xx.db/b/stat_date=20220125/parametric_hash=0/data.har/part-00006-e5bd2f13-a955-4920-a8ac-bd19ec031843.c000, expected: hdfs://nameservice1
[Not achieved] How to make the archived table queryable from Impala
1. Specify the HDFS HA nameservice when setting the HAR file as the partition location
drop the partition whose location is the har:// path
ALTER TABLE xx.b drop PARTITION(stat_date='20220125',parametric_hash='0') ;
Dropped the partition stat_date=20220125/parametric_hash=0
add the new partition with an hdfs://nameservice/path location
ALTER TABLE xx.b add PARTITION(stat_date='20220125',parametric_hash='0') location 'hdfs://nameservice1/user/hive/warehouse/xx.db/b/stat_date=20220125/parametric_hash=0/data.har';
failed: select from the new partition
Failed with exception java.io.IOException:org.apache.parquet.io.ParquetDecodingException: Can not read value at 0 in block -1 in file hdfs://nameservice1/user/hive/warehouse/xx.db/b/stat_date=20220125/parametric_hash=0/data.har/part-0
2. Specify the HDFS namenode:port when setting the HAR file as the partition location
drop the partition whose location is the hdfs://nameservice/path
ALTER TABLE xx.b drop PARTITION(stat_date='20220125',parametric_hash='0') ;
Dropped the partition stat_date=20220125/parametric_hash=0
add the new partition with an hdfs://namenode:port/path location
ALTER TABLE xx.b add PARTITION(stat_date='20220125',parametric_hash='0') location 'hdfs://192.168.1.170:8020/user/hive/warehouse/xx.db/b/stat_date=20220125/parametric_hash=0/data.har';
OK
Time taken: 0.088 seconds
failed: select from the new partition
hive> select * from xx.b limit 10;
OK
Failed with exception java.io.IOException:org.apache.parquet.io.ParquetDecodingException: Can not read value at 0 in block -1 in file hdfs://192.168.1.170:8020/user/hive/warehouse/xx.db/b/stat_date=20220125/parametric_hash=0/data.har/part-0
3. Specify har://hdfs-nameservice/path when setting the HAR file as the partition location
drop the partition whose location is the hdfs:// path
ALTER TABLE xx.b drop PARTITION(stat_date='20220125',parametric_hash='0') ;
Dropped the partition stat_date=20220125/parametric_hash=0
add the new partition with a har://hdfs-nameservice/path location
ALTER TABLE xx.b add PARTITION(stat_date='20220125',parametric_hash='0') location 'har://hdfs-nameservice1/user/hive/warehouse/xx.db/b/stat_date=20220125/parametric_hash=0/data.har';
success (Hive): select from the new partition
select * from xx.b limit 10;
Time taken: 1.056 seconds, Fetched: 10 row(s)
failed (Impala): select from the new partition
SQL 错误 [500051] [HY000]: [Cloudera][ImpalaJDBCDriver](500051) ERROR processing query/statement. Error Code: 0, SQL state: Failed to connect to FS: har://hdfs-nameservice1/
Error(255): Unknown error 255
Root cause: IOException: Invalid path for the Har Filesystem. har://hdfs-nameservice1/
, Query: SELECT `b`.`original_test_value`, `b`.`flag`, `b`.`stat_date`, `b`.`parametric_hash` FROM `test_xac_dws`.`b` LIMIT 10.