Does Oracle Data Pump import create indexes in parallel?
1. The question: does Data Pump import create indexes in parallel?
A customer migrating with Data Pump asked whether the import time could be shortened.
So how do we speed up an import? Adding more parallel workers is the obvious move, but does that also raise the degree of parallelism used internally when the indexes are created?
With these questions in mind, let's test whether the Data Pump parallel parameter has any relationship to the parallel degree of index creation during import.
2. Experiment
2.1 Test data preparation
Oracle 11.2.0.4
--Create the partitioned tables
create user yz identified by yz;
grant dba to yz;
conn yz/yz
create table a1(id number,
deal_date date, area_code number, contents varchar2(4000))
partition by range(deal_date)
(
partition p1 values less than(to_date('2019-02-01','yyyy-mm-dd')),
partition p2 values less than(to_date('2019-03-01','yyyy-mm-dd')),
partition p3 values less than(to_date('2019-04-01','yyyy-mm-dd')),
partition p4 values less than(to_date('2019-05-01','yyyy-mm-dd')),
partition p5 values less than(to_date('2019-06-01','yyyy-mm-dd')),
partition p6 values less than(to_date('2019-07-01','yyyy-mm-dd')),
partition p7 values less than(to_date('2019-08-01','yyyy-mm-dd')),
partition p8 values less than(to_date('2019-09-01','yyyy-mm-dd')),
partition p9 values less than(to_date('2019-10-01','yyyy-mm-dd')),
partition p10 values less than(to_date('2019-11-01','yyyy-mm-dd')),
partition p11 values less than(to_date('2019-12-01','yyyy-mm-dd')),
partition p12 values less than(to_date('2020-01-01','yyyy-mm-dd')),
partition p13 values less than(to_date('2020-02-01','yyyy-mm-dd')),
partition p14 values less than(to_date('2020-03-01','yyyy-mm-dd')),
partition p15 values less than(to_date('2020-04-01','yyyy-mm-dd')),
partition p16 values less than(to_date('2020-05-01','yyyy-mm-dd')),
partition p17 values less than(to_date('2020-06-01','yyyy-mm-dd')),
partition p18 values less than(to_date('2020-07-01','yyyy-mm-dd')),
partition p19 values less than(to_date('2020-08-01','yyyy-mm-dd')),
partition p20 values less than(to_date('2020-09-01','yyyy-mm-dd')),
partition p31 values less than(to_date('2020-10-01','yyyy-mm-dd')),
partition p32 values less than(to_date('2020-11-01','yyyy-mm-dd')),
partition p33 values less than(to_date('2020-12-01','yyyy-mm-dd')),
partition p34 values less than(to_date('2021-01-01','yyyy-mm-dd')),
partition p35 values less than(to_date('2021-02-01','yyyy-mm-dd')),
partition p36 values less than(to_date('2021-03-01','yyyy-mm-dd')),
partition p37 values less than(to_date('2021-04-01','yyyy-mm-dd')),
partition p38 values less than(to_date('2021-05-01','yyyy-mm-dd')),
partition p39 values less than(to_date('2021-06-01','yyyy-mm-dd')),
partition p40 values less than(to_date('2021-07-01','yyyy-mm-dd'))
);
insert into a1 (id,deal_date,area_code,contents)
select rownum,
to_date(to_char(sysdate-900,'J')+ trunc(dbms_random.value(0,200)),'J'),
ceil(dbms_random.value(590,599)),
rpad('*',400,'*')
from dual
connect by rownum <= 100000;
commit;
create table a2(id number,
deal_date date, area_code number, contents varchar2(4000))
partition by range(deal_date)
(
partition p1 values less than(to_date('2019-02-01','yyyy-mm-dd')),
partition p2 values less than(to_date('2019-03-01','yyyy-mm-dd')),
partition p3 values less than(to_date('2019-04-01','yyyy-mm-dd')),
partition p4 values less than(to_date('2019-05-01','yyyy-mm-dd')),
partition p5 values less than(to_date('2019-06-01','yyyy-mm-dd')),
partition p6 values less than(to_date('2019-07-01','yyyy-mm-dd')),
partition p7 values less than(to_date('2019-08-01','yyyy-mm-dd')),
partition p8 values less than(to_date('2019-09-01','yyyy-mm-dd')),
partition p9 values less than(to_date('2019-10-01','yyyy-mm-dd')),
partition p10 values less than(to_date('2019-11-01','yyyy-mm-dd')),
partition p11 values less than(to_date('2019-12-01','yyyy-mm-dd')),
partition p12 values less than(to_date('2020-01-01','yyyy-mm-dd')),
partition p13 values less than(to_date('2020-02-01','yyyy-mm-dd')),
partition p14 values less than(to_date('2020-03-01','yyyy-mm-dd')),
partition p15 values less than(to_date('2020-04-01','yyyy-mm-dd')),
partition p16 values less than(to_date('2020-05-01','yyyy-mm-dd')),
partition p17 values less than(to_date('2020-06-01','yyyy-mm-dd')),
partition p18 values less than(to_date('2020-07-01','yyyy-mm-dd')),
partition p19 values less than(to_date('2020-08-01','yyyy-mm-dd')),
partition p20 values less than(to_date('2020-09-01','yyyy-mm-dd')),
partition p31 values less than(to_date('2020-10-01','yyyy-mm-dd')),
partition p32 values less than(to_date('2020-11-01','yyyy-mm-dd')),
partition p33 values less than(to_date('2020-12-01','yyyy-mm-dd')),
partition p34 values less than(to_date('2021-01-01','yyyy-mm-dd')),
partition p35 values less than(to_date('2021-02-01','yyyy-mm-dd')),
partition p36 values less than(to_date('2021-03-01','yyyy-mm-dd')),
partition p37 values less than(to_date('2021-04-01','yyyy-mm-dd')),
partition p38 values less than(to_date('2021-05-01','yyyy-mm-dd')),
partition p39 values less than(to_date('2021-06-01','yyyy-mm-dd')),
partition p40 values less than(to_date('2021-07-01','yyyy-mm-dd'))
);
insert into a2 (id,deal_date,area_code,contents)
select rownum,
to_date(to_char(sysdate-900,'J')+ trunc(dbms_random.value(0,200)),'J'),
ceil(dbms_random.value(590,599)),
rpad('*',400,'*')
from dual
connect by rownum <= 200000;
commit;
alter table a1 add constraint pk_id primary key (id);
alter table a2 add constraint pk_id_time primary key(id,deal_date);
SQL> create index cc_id on a1(id);
create index cc_id on a1(id)
*
ERROR at line 1:
ORA-01408: such column list already indexed
SQL> select index_name,status from user_indexes where table_name in('A1','A2');
INDEX_NAME                     STATUS
------------------------------ --------
PK_ID_TIME                     VALID
PK_ID                          VALID
Alter table a1 drop constraint pk_id;
Alter table a2 drop constraint pk_id_time;
create index cc_id on a1(id) LOCAL;
alter table a1 add constraint pk_id primary key (id) USING INDEX cc_id ;
ORA-14196: Specified index cannot be used to enforce the constraint.
--A LOCAL index can enforce a primary key only if its columns include the partitioning key (deal_date), so the LOCAL index on (id) alone is rejected.
DROP INDEX CC_ID;
create index cc_id on a1(id) ;
alter table a1 add constraint pk_id primary key (id) USING INDEX cc_id ;
create index cc_id_DATE on a2(id,DEAL_DATE) LOCAL;
alter table a2 add constraint pk_id_DATE primary key (id,DEAL_DATE) USING INDEX cc_id_DATE ;
Reference: https://www.cnblogs.com/lvcha001/p/10218318.html
Indexes here come in three kinds. Non-partitioned indexes. Global partitioned indexes (global range or global hash partitioned), which scatter entries by their own partitioning rule rather than following how the table's data is partitioned. And local indexes, which mirror the table's partitions exactly: one index partition per table partition. Rebuilding works the same way for each, so for the later rebuild tests we create one index of each partitioning type, using the primary key for the non-partitioned one.
create index ind_hash on a1(id,0) global partition by hash (id) partitions 8 online;
SQL> select index_name,status from user_indexes where table_name in('A1','A2');
INDEX_NAME                     STATUS
------------------------------ --------
CC_ID                          VALID
CC_ID_DATE                     N/A
IND_HASH                       N/A
--For partitioned indexes user_indexes shows N/A; the per-partition status lives in dba_ind_partitions:
select index_name,PARTITION_NAME,HIGH_VALUE,STATUS,TABLESPACE_NAME from dba_ind_partitions
where index_owner='YZ' and index_name IN('CC_ID_DATE','IND_HASH');
2.2 Generating the import SQL file to inspect
nohup time expdp \'/ as sysdba\' directory=dump dumpfile=yang%u.dmp logfile=yang.log tables=yz.a1,yz.a2 FLASHBACK_SCN=1017463 parallel=2 &
Case 1: export parallel=2, import parallel=2; inspect the generated SQL script
nohup time impdp \'/ as sysdba\' directory=dump dumpfile=yang%u.dmp logfile=yang.log tables=yz.a1,yz.a2 parallel=2 sqlfile=table01.sql &
Processing object type TABLE_EXPORT/TABLE/PROCACT_INSTANCE
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_INDEX/INDEX_STATISTICS
Job "SYS"."SYS_SQL_FILE_TABLE_01" successfully completed at Wed Aug 11 07:00:04 2021 elapsed 0 00:00:03
-- new object type path: TABLE_EXPORT/TABLE/TABLE
-- CONNECT SYS
CREATE TABLE "YZ"."A2"
( "ID" NUMBER,
······
-- new object type path: TABLE_EXPORT/TABLE/INDEX/INDEX
-- CONNECT YZ
CREATE INDEX "YZ"."CC_ID_DATE" ON "YZ"."A2" ("ID", "DEAL_DATE")
PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE(
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) LOCAL
(PARTITION "P1"
PCTFREE 10 INITRANS 2 MAXTRANS 255 LOGGING
STORAGE(
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS" ,
PARTITION "P2"
······
PARTITION "P40"
PCTFREE 10 INITRANS 2 MAXTRANS 255 LOGGING
STORAGE(
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS" ) PARALLEL 1 ;
ALTER INDEX "YZ"."CC_ID_DATE" NOPARALLEL;
CREATE INDEX "YZ"."CC_ID" ON "YZ"."A1" ("ID")
PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS" PARALLEL 1 ;
ALTER INDEX "YZ"."CC_ID" NOPARALLEL;
-- new object type path: TABLE_EXPORT/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX
CREATE INDEX "YZ"."IND_HASH" ON "YZ"."A1" ("ID", 0)
PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE(
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS" GLOBAL PARTITION BY HASH ("ID")
(PARTITION "SYS_P41"
TABLESPACE "USERS" ,
PARTITION "SYS_P42"
TABLESPACE "USERS" ,
PARTITION "SYS_P43"
TABLESPACE "USERS" ,
PARTITION "SYS_P44"
TABLESPACE "USERS" ,
PARTITION "SYS_P45"
TABLESPACE "USERS" ,
PARTITION "SYS_P46"
TABLESPACE "USERS" ,
PARTITION "SYS_P47"
TABLESPACE "USERS" ,
PARTITION "SYS_P48"
TABLESPACE "USERS" ) PARALLEL 1 ;
ALTER INDEX "YZ"."IND_HASH" NOPARALLEL;
-- new object type path: TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
-- CONNECT SYS
ALTER TABLE "YZ"."A1" ADD CONSTRAINT "PK_ID" PRIMARY KEY ("ID")
USING INDEX "YZ"."CC_ID" ENABLE;
ALTER TABLE "YZ"."A1" ADD SUPPLEMENTAL LOG GROUP "GGS_87350" ("ID") ALWAYS;
ALTER TABLE "YZ"."A1" ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER TABLE "YZ"."A1" ADD SUPPLEMENTAL LOG DATA (UNIQUE INDEX) COLUMNS;
ALTER TABLE "YZ"."A1" ADD SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;
ALTER TABLE "YZ"."A2" ADD CONSTRAINT "PK_ID_DATE" PRIMARY KEY ("ID", "DEAL_DATE")
USING INDEX "YZ"."CC_ID_DATE" ENABLE;
ALTER TABLE "YZ"."A2" ADD SUPPLEMENTAL LOG GROUP "GGS_87381" ("ID", "DEAL_DATE") ALWAYS;
ALTER TABLE "YZ"."A2" ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER TABLE "YZ"."A2" ADD SUPPLEMENTAL LOG DATA (UNIQUE INDEX) COLUMNS;
ALTER TABLE "YZ"."A2" ADD SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;
-- new object type path: TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
······
Parallel degree 1!
Case 2: export parallel=2, import parallel=4; inspect the SQL script
$ nohup time impdp \'/ as sysdba\' directory=dump dumpfile=yang%u.dmp logfile=yang.log tables=yz.a1,yz.a2 parallel=4 sqlfile=table02.sql &
$ cat dump/table02.sql |grep PARALLEL
TABLESPACE "USERS" ) PARALLEL 1 ;
ALTER INDEX "YZ"."CC_ID_DATE" NOPARALLEL;
TABLESPACE "USERS" PARALLEL 1 ;
ALTER INDEX "YZ"."CC_ID" NOPARALLEL;
TABLESPACE "USERS" ) PARALLEL 1 ;
ALTER INDEX "YZ"."IND_HASH" NOPARALLEL;
The tests show that Data Pump import creates every index with a parallel degree of 1! Unless the database is set to pick the degree automatically (parallel_degree_policy=AUTO), i.e. under the MANUAL policy, index creation during import cannot be accelerated with parallelism.
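A quick way to confirm what degree Data Pump assigned is to summarize the PARALLEL clauses in the generated sqlfile. A minimal sketch, assuming the table01.sql file produced above and a grep that supports -o (GNU/BSD):

```shell
#!/bin/sh
# Sketch: list the distinct PARALLEL degrees a Data Pump sqlfile
# assigns, with a count of how many statements use each.
sqlfile="${1:-table01.sql}"
grep -o 'PARALLEL [0-9][0-9]*' "$sqlfile" | sort | uniq -c
```

If every line of the output reads `... PARALLEL 1`, the import would build all indexes serially.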
2.3 How to speed up a Data Pump import by creating the indexes in parallel
Reference:
https://blog.51cto.com/wyzwl/2333565
Why does the script exclude constraints? Curious readers can test that themselves.
After creating the user and granting privileges on the target side, import the table data.
*****************************************************Data import**************************************
cat >imp_data.par <<EOF
userid='/ as sysdba'
directory=dump
dumpfile=yang%u.dmp
logfile=imp_data.log
cluster=no
parallel=2
exclude= index,constraint
EOF
--Exclude indexes and constraints, then run the import
nohup impdp parfile=imp_data.par > imp_data.out &
*****************************************************Index and constraint import**************************************
--Generate the index-creation DDL via the sqlfile parameter
cat >imp_ind_con.par <<EOF
userid='/ as sysdba'
directory=dump
dumpfile=yang%u.dmp
sqlfile=imp_ind_con.sql
logfile=imp_ind_con.log
cluster=no
parallel=2
tables=yz.a1,yz.a2
include=index,constraint
EOF
--Generate the index-creation DDL (nothing is actually imported)
nohup impdp parfile=imp_ind_con.par > imp_ind_con.out &
--Raise the parallel degree used to create the indexes; at most 1.5x the CPU core count is recommended
--On Linux:
sed -i 's/PARALLEL 1/PARALLEL 16/g' imp_ind_con.sql
--AIX sed has no -i option, so use either of the following instead:
perl -pi -e 's/PARALLEL 1/PARALLEL 16/g' imp_ind_con.sql
or (ex is the scriptable form of vi)
ex imp_ind_con.sql << EOF
:%s/PARALLEL 1/PARALLEL 16/g
:wq
EOF
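A third portable option, usable on any platform whose sed lacks -i: write the edited copy to a temporary file and move it over the original. A sketch, assuming the imp_ind_con.sql file generated above:

```shell
#!/bin/sh
# Portable substitute for `sed -i`: edit into a temp file in the same
# directory, then replace the original with a single mv.
f=imp_ind_con.sql
sed 's/PARALLEL 1/PARALLEL 16/g' "$f" > "$f.tmp" && mv "$f.tmp" "$f"
```

Note that the substitution deliberately leaves the trailing `ALTER INDEX ... NOPARALLEL;` statements alone: the indexes are built with degree 16, then reset to NOPARALLEL afterwards.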
*****************************************************Replacement result***************************************
[oracle@t2 dump]$ cat imp_ind_con.sql|grep PARALLEL
TABLESPACE "USERS" ) PARALLEL 1 ;
ALTER INDEX "YZ"."CC_ID_DATE" NOPARALLEL;
TABLESPACE "USERS" PARALLEL 1 ;
ALTER INDEX "YZ"."CC_ID" NOPARALLEL;
TABLESPACE "USERS" ) PARALLEL 1 ;
ALTER INDEX "YZ"."IND_HASH" NOPARALLEL;
[oracle@t2 dump]$ sed -i 's/PARALLEL 1/PARALLEL 16/g' imp_ind_con.sql
[oracle@t2 dump]$ cat imp_ind_con.sql|grep PARALLEL
TABLESPACE "USERS" ) PARALLEL 16 ;
ALTER INDEX "YZ"."CC_ID_DATE" NOPARALLEL;
TABLESPACE "USERS" PARALLEL 16 ;
ALTER INDEX "YZ"."CC_ID" NOPARALLEL;
TABLESPACE "USERS" ) PARALLEL 16 ;
ALTER INDEX "YZ"."IND_HASH" NOPARALLEL;
***************************************************************************************************
Inspect the generated SQL script:
-- new object type path: TABLE_EXPORT/TABLE/INDEX/INDEX
-- CONNECT YZ
CREATE INDEX "YZ"."CC_ID_DATE" ON "YZ"."A2" ("ID", "DEAL_DATE")
PCTFREE 10 INITRANS 2 MAXTRANS 255
······
-- new object type path: TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
-- CONNECT SYS
ALTER TABLE "YZ"."A1" ADD CONSTRAINT "PK_ID" PRIMARY KEY ("ID")
USING INDEX "YZ"."CC_ID" ENABLE;
Indexes are created first, then the constraints!
--After the data import completes, run the index-creation SQL:
$vi imp_ind_con.sh
sqlplus / as sysdba <<EOF
set timing on
set echo on
set verify on
spool imp_ind_con.log
@imp_ind_con.sql
spool off
exit
EOF
--Run the index-creation script
nohup sh imp_ind_con.sh > imp_ind_con.out &
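Once the script finishes, it is worth scanning the spool output for errors before declaring the rebuild done. A minimal sketch, assuming the imp_ind_con.log spool file named in the script above:

```shell
#!/bin/sh
# Sketch: scan the sqlplus spool log for ORA- errors after the index build.
log=imp_ind_con.log
if grep 'ORA-' "$log"; then
    echo "index creation reported errors, review $log"
else
    echo "no ORA- errors found in $log"
fi
```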
Question 1: does a table-mode dump contain the CREATE USER statement? How do we extract the user-creation DDL?
--Extracting CREATE USER from the table-only dump fails: the user object path is not found
$ nohup time impdp \'/ as sysdba\' directory=dump dumpfile=yang%u.dmp logfile=yang.log include=user parallel=1 sqlfile=user01.sql &
ORA-39168: Object path USER was not found.
--Retest with a dump of the entire schema!
$nohup time expdp \'/ as sysdba\' directory=dump dumpfile=yanga%u.dmp logfile=yang.log SCHEMAS=yz parallel=2
$ scp /home/oracle/script/dump/yanga*.dmp t2:/home/oracle/script/dump/.
$ nohup time impdp \'/ as sysdba\' directory=dump dumpfile=yanga%u.dmp logfile=yang.log include=user parallel=1 sqlfile=user02.sql &
$ cat user02.sql
-- CONNECT SYSTEM
CREATE USER "YZ" IDENTIFIED BY VALUES 'S:C9A5297B9802EBB85A3BE800929ECE1BFCCB00146E58E0FBB055A937869F;86EF13A1088170F5'
DEFAULT TABLESPACE "USERS"
TEMPORARY TABLESPACE "TEMP";
Question 2: after exporting a user's schema, if we import it into a database where that user does not yet exist, will Oracle create the user automatically?
SQL> select username,account_status from dba_users where username='YZ';
no rows selected
SQL> --after running the schema-mode import
SQL> r
1* select username,account_status from dba_users where username='YZ'
USERNAME                       ACCOUNT_STATUS
------------------------------ --------------------------------
YZ                             OPEN
Yes, the user is created automatically!
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/PROCACT_INSTANCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "YZ"."A2":"P3" 12.53 MB 30997 rows
. . imported "YZ"."A2":"P40" 0 KB 0 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_INDEX/INDEX_STATISTICS
Job "SYS"."SYS_IMPORT_FULL_01" successfully completed at Wed Aug 11 12:47:16 2021 elapsed 0 00:00:17