1. Question: does Oracle Data Pump use parallelism to create indexes during import?

A customer migrating with Data Pump asked whether the import could be made faster.

So how do we speed up the import? If we add more parallel workers, will the parallel degree used internally to create the indexes be raised as well?

With these questions in mind, let's test whether the Data Pump parallel parameter has any effect on the parallel degree of index creation during import.

2. Experiment

2.1 Preparing the test data

Oracle 11.2.0.4

-- Create the partitioned tables
create user yz identified by yz;
grant dba to yz;
conn yz/yz
create table a1(id number,
deal_date date, area_code number, contents varchar2(4000))
partition by range(deal_date)
(
partition p1 values less than(to_date('2019-02-01','yyyy-mm-dd')),
partition p2 values less than(to_date('2019-03-01','yyyy-mm-dd')),
partition p3 values less than(to_date('2019-04-01','yyyy-mm-dd')),
partition p4 values less than(to_date('2019-05-01','yyyy-mm-dd')),
partition p5 values less than(to_date('2019-06-01','yyyy-mm-dd')),
partition p6 values less than(to_date('2019-07-01','yyyy-mm-dd')),
partition p7 values less than(to_date('2019-08-01','yyyy-mm-dd')),
partition p8 values less than(to_date('2019-09-01','yyyy-mm-dd')),
partition p9 values less than(to_date('2019-10-01','yyyy-mm-dd')),
partition p10 values less than(to_date('2019-11-01','yyyy-mm-dd')),
partition p11 values less than(to_date('2019-12-01','yyyy-mm-dd')),
partition p12 values less than(to_date('2020-01-01','yyyy-mm-dd')),
partition p13 values less than(to_date('2020-02-01','yyyy-mm-dd')),
partition p14 values less than(to_date('2020-03-01','yyyy-mm-dd')),
partition p15 values less than(to_date('2020-04-01','yyyy-mm-dd')),
partition p16 values less than(to_date('2020-05-01','yyyy-mm-dd')),
partition p17 values less than(to_date('2020-06-01','yyyy-mm-dd')),
partition p18 values less than(to_date('2020-07-01','yyyy-mm-dd')),
partition p19 values less than(to_date('2020-08-01','yyyy-mm-dd')),
partition p20 values less than(to_date('2020-09-01','yyyy-mm-dd')),
partition p31 values less than(to_date('2020-10-01','yyyy-mm-dd')),
partition p32 values less than(to_date('2020-11-01','yyyy-mm-dd')),
partition p33 values less than(to_date('2020-12-01','yyyy-mm-dd')),
partition p34 values less than(to_date('2021-01-01','yyyy-mm-dd')),
partition p35 values less than(to_date('2021-02-01','yyyy-mm-dd')),
partition p36 values less than(to_date('2021-03-01','yyyy-mm-dd')),
partition p37 values less than(to_date('2021-04-01','yyyy-mm-dd')),
partition p38 values less than(to_date('2021-05-01','yyyy-mm-dd')),
partition p39 values less than(to_date('2021-06-01','yyyy-mm-dd')),
partition p40 values less than(to_date('2021-07-01','yyyy-mm-dd'))
);

insert into a1 (id,deal_date,area_code,contents)
select rownum,
to_date(to_char(sysdate-900,'J')+ trunc(dbms_random.value(0,200)),'J'),
ceil(dbms_random.value(590,599)),
rpad('*',400,'*')
from dual
connect by rownum <= 100000;
commit;

create table a2(id number,
deal_date date, area_code number, contents varchar2(4000))
partition by range(deal_date)
(
partition p1 values less than(to_date('2019-02-01','yyyy-mm-dd')),
partition p2 values less than(to_date('2019-03-01','yyyy-mm-dd')),
partition p3 values less than(to_date('2019-04-01','yyyy-mm-dd')),
partition p4 values less than(to_date('2019-05-01','yyyy-mm-dd')),
partition p5 values less than(to_date('2019-06-01','yyyy-mm-dd')),
partition p6 values less than(to_date('2019-07-01','yyyy-mm-dd')),
partition p7 values less than(to_date('2019-08-01','yyyy-mm-dd')),
partition p8 values less than(to_date('2019-09-01','yyyy-mm-dd')),
partition p9 values less than(to_date('2019-10-01','yyyy-mm-dd')),
partition p10 values less than(to_date('2019-11-01','yyyy-mm-dd')),
partition p11 values less than(to_date('2019-12-01','yyyy-mm-dd')),
partition p12 values less than(to_date('2020-01-01','yyyy-mm-dd')),
partition p13 values less than(to_date('2020-02-01','yyyy-mm-dd')),
partition p14 values less than(to_date('2020-03-01','yyyy-mm-dd')),
partition p15 values less than(to_date('2020-04-01','yyyy-mm-dd')),
partition p16 values less than(to_date('2020-05-01','yyyy-mm-dd')),
partition p17 values less than(to_date('2020-06-01','yyyy-mm-dd')),
partition p18 values less than(to_date('2020-07-01','yyyy-mm-dd')),
partition p19 values less than(to_date('2020-08-01','yyyy-mm-dd')),
partition p20 values less than(to_date('2020-09-01','yyyy-mm-dd')),
partition p31 values less than(to_date('2020-10-01','yyyy-mm-dd')),
partition p32 values less than(to_date('2020-11-01','yyyy-mm-dd')),
partition p33 values less than(to_date('2020-12-01','yyyy-mm-dd')),
partition p34 values less than(to_date('2021-01-01','yyyy-mm-dd')),
partition p35 values less than(to_date('2021-02-01','yyyy-mm-dd')),
partition p36 values less than(to_date('2021-03-01','yyyy-mm-dd')),
partition p37 values less than(to_date('2021-04-01','yyyy-mm-dd')),
partition p38 values less than(to_date('2021-05-01','yyyy-mm-dd')),
partition p39 values less than(to_date('2021-06-01','yyyy-mm-dd')),
partition p40 values less than(to_date('2021-07-01','yyyy-mm-dd'))
);

insert into a2 (id,deal_date,area_code,contents)
select rownum,
to_date(to_char(sysdate-900,'J')+ trunc(dbms_random.value(0,200)),'J'),
ceil(dbms_random.value(590,599)),
rpad('*',400,'*')
from dual
connect by rownum <= 200000;
commit;

alter table a1 add constraint pk_id primary key (id);
alter table a2 add constraint pk_id_time primary key(id,deal_date);

SQL> create index cc_id on a1(id);
create index cc_id on a1(id)
*
ERROR at line 1:
ORA-01408: such column list already indexed

SQL> select index_name,status from user_indexes where table_name in('A1','A2');
INDEX_NAME                     STATUS
------------------------------ --------
PK_ID_TIME                     VALID
PK_ID                          VALID

Alter table a1 drop constraint pk_id;
Alter table a2 drop constraint pk_id_time;
create index cc_id on a1(id) LOCAL;
alter table a1 add constraint pk_id primary key (id) USING INDEX cc_id ;
ORA-14196: Specified index cannot be used to enforce the constraint.
(A LOCAL index cannot enforce this primary key, because the partitioning key deal_date is not part of the unique key.)
DROP INDEX CC_ID;
create index cc_id on a1(id) ;
alter table a1 add constraint pk_id primary key (id) USING INDEX cc_id ;
create index cc_id_DATE on a2(id,DEAL_DATE) LOCAL;
alter table a2 add constraint pk_id_DATE primary key (id,DEAL_DATE) USING INDEX cc_id_DATE ;

Reference: https://www.cnblogs.com/lvcha001/p/10218318.html
Indexes here come in three kinds. Non-partitioned indexes. Global partitioned indexes, which can be global range- or global hash-partitioned: these distribute entries by the index's own partitioning rule, not by the table's partitioning. And local indexes, which follow the table's partitions exactly, one index partition per table partition. Rebuilding each kind is essentially the same operation; for the later rebuild tests we create the different partition types, and use the primary key for the non-partitioned index.

create index ind_hash on a1(id,0) global partition by hash (id) partitions 8 online;

SQL> select index_name,status from user_indexes where table_name in('A1','A2');
INDEX_NAME                     STATUS
------------------------------ --------
CC_ID                          VALID
CC_ID_DATE                     N/A
IND_HASH                       N/A

(For partitioned indexes the status is tracked per partition, hence N/A at the index level; the next query checks the partitions.)
select index_name,PARTITION_NAME,HIGH_VALUE,STATUS,TABLESPACE_NAME from dba_ind_partitions
where index_owner='YZ' and index_name IN('CC_ID_DATE','IND_HASH');

2.2 Generating and inspecting the import SQL file

nohup time expdp \'/ as sysdba\' directory=dump dumpfile=yang%u.dmp logfile=yang.log tables=yz.a1,yz.a2 FLASHBACK_SCN=1017463 parallel=2 &
Case 1: export with parallel=2, import with parallel=2, and inspect the generated SQL script:
nohup time impdp \'/ as sysdba\' directory=dump dumpfile=yang%u.dmp logfile=yang.log tables=yz.a1,yz.a2 parallel=2 sqlfile=table01.sql &
Processing object type TABLE_EXPORT/TABLE/PROCACT_INSTANCE
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_INDEX/INDEX_STATISTICS
Job "SYS"."SYS_SQL_FILE_TABLE_01" successfully completed at Wed Aug 11 07:00:04 2021 elapsed 0 00:00:03

-- new object type path: TABLE_EXPORT/TABLE/TABLE
-- CONNECT SYS
CREATE TABLE "YZ"."A2"
( "ID" NUMBER,
······
-- new object type path: TABLE_EXPORT/TABLE/INDEX/INDEX
-- CONNECT YZ
CREATE INDEX "YZ"."CC_ID_DATE" ON "YZ"."A2" ("ID", "DEAL_DATE")
PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE(
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) LOCAL
(PARTITION "P1"
PCTFREE 10 INITRANS 2 MAXTRANS 255 LOGGING
STORAGE(
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS" ,
PARTITION "P2"
······
PARTITION "P40"
PCTFREE 10 INITRANS 2 MAXTRANS 255 LOGGING
STORAGE(
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS" ) PARALLEL 1 ;
ALTER INDEX "YZ"."CC_ID_DATE" NOPARALLEL;
CREATE INDEX "YZ"."CC_ID" ON "YZ"."A1" ("ID")
PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS" PARALLEL 1 ;
ALTER INDEX "YZ"."CC_ID" NOPARALLEL;
-- new object type path: TABLE_EXPORT/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX
CREATE INDEX "YZ"."IND_HASH" ON "YZ"."A1" ("ID", 0)
PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE(
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS" GLOBAL PARTITION BY HASH ("ID")
(PARTITION "SYS_P41"
TABLESPACE "USERS" ,
PARTITION "SYS_P42"
TABLESPACE "USERS" ,
PARTITION "SYS_P43"
TABLESPACE "USERS" ,
PARTITION "SYS_P44"
TABLESPACE "USERS" ,
PARTITION "SYS_P45"
TABLESPACE "USERS" ,
PARTITION "SYS_P46"
TABLESPACE "USERS" ,
PARTITION "SYS_P47"
TABLESPACE "USERS" ,
PARTITION "SYS_P48"
TABLESPACE "USERS" ) PARALLEL 1 ;
ALTER INDEX "YZ"."IND_HASH" NOPARALLEL;
-- new object type path: TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
-- CONNECT SYS
ALTER TABLE "YZ"."A1" ADD CONSTRAINT "PK_ID" PRIMARY KEY ("ID")
USING INDEX "YZ"."CC_ID" ENABLE;
ALTER TABLE "YZ"."A1" ADD SUPPLEMENTAL LOG GROUP "GGS_87350" ("ID") ALWAYS;
ALTER TABLE "YZ"."A1" ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER TABLE "YZ"."A1" ADD SUPPLEMENTAL LOG DATA (UNIQUE INDEX) COLUMNS;
ALTER TABLE "YZ"."A1" ADD SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;
ALTER TABLE "YZ"."A2" ADD CONSTRAINT "PK_ID_DATE" PRIMARY KEY ("ID", "DEAL_DATE")
USING INDEX "YZ"."CC_ID_DATE" ENABLE;
ALTER TABLE "YZ"."A2" ADD SUPPLEMENTAL LOG GROUP "GGS_87381" ("ID", "DEAL_DATE") ALWAYS;
ALTER TABLE "YZ"."A2" ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER TABLE "YZ"."A2" ADD SUPPLEMENTAL LOG DATA (UNIQUE INDEX) COLUMNS;
ALTER TABLE "YZ"."A2" ADD SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;
-- new object type path: TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
······

Parallel degree 1!
Case 2: export with parallel=2, import with parallel=4, and inspect the generated SQL script:

$ nohup time impdp \'/ as sysdba\' directory=dump dumpfile=yang%u.dmp logfile=yang.log tables=yz.a1,yz.a2 parallel=4 sqlfile=table02.sql &

$ cat dump/table02.sql |grep PARALLEL

TABLESPACE "USERS" ) PARALLEL 1 ;

ALTER INDEX "YZ"."CC_ID_DATE" NOPARALLEL;

TABLESPACE "USERS" PARALLEL 1 ;

ALTER INDEX "YZ"."CC_ID" NOPARALLEL;

TABLESPACE "USERS" ) PARALLEL 1 ;

ALTER INDEX "YZ"."IND_HASH" NOPARALLEL;

The tests show that Data Pump import always creates indexes with a parallel degree of 1! Unless the database chooses degrees automatically (PARALLEL_DEGREE_POLICY=AUTO), i.e. under the default MANUAL setting, the index builds cannot be sped up by parallelism.
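A quick way to confirm the degree on any generated sqlfile is to count the clauses Data Pump wrote. A minimal sketch; the sample file below just reproduces a few lines of the grep output above, so the counts are illustrative:

```shell
# Count the degree clauses Data Pump writes into a generated sqlfile.
# Each index DDL ends with "PARALLEL 1 ;" and is followed by an
# "ALTER INDEX ... NOPARALLEL;" reset, so the two counts should match.
cat > /tmp/table01_sample.sql <<'EOF'
TABLESPACE "USERS" ) PARALLEL 1 ;
ALTER INDEX "YZ"."CC_ID_DATE" NOPARALLEL;
TABLESPACE "USERS" PARALLEL 1 ;
ALTER INDEX "YZ"."CC_ID" NOPARALLEL;
EOF
grep -c 'PARALLEL 1 ;' /tmp/table01_sample.sql   # prints 2
grep -c 'NOPARALLEL;'  /tmp/table01_sample.sql   # prints 2
```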

2.3 How to speed up a Data Pump import by creating the indexes in parallel

Reference:

https://blog.51cto.com/wyzwl/2333565

Why does the script exclude constraints as well as indexes? Interested readers can test it themselves. (Hint: the primary key and unique constraints would build their underlying indexes, again serially.)
After creating the user and granting privileges on the target side, import the table data.

***************************************************** Data import **************************************

cat >imp_data.par <<EOF
userid='/ as sysdba'
directory=dump
dumpfile=yang%u.dmp
logfile=imp_data.log
cluster=no
parallel=2
exclude= index,constraint
EOF
--Exclude indexes and constraints, then run the import
nohup impdp parfile=imp_data.par > imp_data.out &

***************************************************** Index and constraint import **************************************

--Use the sqlfile parameter to generate the index creation DDL
cat >imp_ind_con.par <<EOF
userid='/ as sysdba'
directory=dump
dumpfile=yang%u.dmp
sqlfile=imp_ind_con.sql
logfile=imp_ind_con.log
cluster=no
parallel=2
tables=yz.a1,yz.a2
include=index,constraint
EOF

--Generate the index DDL (this does not actually import anything)
nohup impdp parfile=imp_ind_con.par > imp_ind_con.out &
--Raise the parallel degree of the index DDL; keeping it within about 1.5x the CPU core count is recommended
--On Linux:
sed -i 's/PARALLEL 1/PARALLEL 16/g' imp_ind_con.sql
--On AIX, sed has no -i option; either of the following works instead:
perl -pi -e 's/PARALLEL 1/PARALLEL 16/g' imp_ind_con.sql
or
vi imp_ind_con.sql << EOF
:%s/PARALLEL 1/PARALLEL 16/g
:wq
EOF
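One caution about the substitutions above, whichever tool is used: the bare pattern PARALLEL 1 also matches inside PARALLEL 10 or PARALLEL 128, and running the script a second time would turn PARALLEL 16 into PARALLEL 166. A sketch of a safer pattern, anchored on the trailing " ;" that Data Pump emits (file names here are illustrative):

```shell
# Demonstrate the safer form: anchoring on the trailing " ;" leaves an
# already-raised degree such as "PARALLEL 16 ;" untouched, so the
# substitution can be re-run without corrupting the script.
cat > /tmp/imp_ind_con_demo.sql <<'EOF'
TABLESPACE "USERS" ) PARALLEL 1 ;
TABLESPACE "USERS" PARALLEL 16 ;
EOF
sed -i 's/PARALLEL 1 ;/PARALLEL 16 ;/g' /tmp/imp_ind_con_demo.sql
grep -c 'PARALLEL 16 ;' /tmp/imp_ind_con_demo.sql   # prints 2
```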

***************************************************** Replacement result ***************************************

[oracle@t2 dump]$ cat imp_ind_con.sql|grep PARALLEL
TABLESPACE "USERS" ) PARALLEL 1 ;
ALTER INDEX "YZ"."CC_ID_DATE" NOPARALLEL;
TABLESPACE "USERS" PARALLEL 1 ;
ALTER INDEX "YZ"."CC_ID" NOPARALLEL;
TABLESPACE "USERS" ) PARALLEL 1 ;
ALTER INDEX "YZ"."IND_HASH" NOPARALLEL;
[oracle@t2 dump]$ sed -i 's/PARALLEL 1/PARALLEL 16/g' imp_ind_con.sql
[oracle@t2 dump]$ cat imp_ind_con.sql|grep PARALLEL
TABLESPACE "USERS" ) PARALLEL 16 ;
ALTER INDEX "YZ"."CC_ID_DATE" NOPARALLEL;
TABLESPACE "USERS" PARALLEL 16 ;
ALTER INDEX "YZ"."CC_ID" NOPARALLEL;
TABLESPACE "USERS" ) PARALLEL 16 ;
ALTER INDEX "YZ"."IND_HASH" NOPARALLEL;

***************************************************************************************************

Inspect the generated SQL script:

$ more imp_ind_con.sql

-- new object type path: TABLE_EXPORT/TABLE/INDEX/INDEX
-- CONNECT YZ
CREATE INDEX "YZ"."CC_ID_DATE" ON "YZ"."A2" ("ID", "DEAL_DATE")
PCTFREE 10 INITRANS 2 MAXTRANS 255
······
-- new object type path: TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
-- CONNECT SYS
ALTER TABLE "YZ"."A1" ADD CONSTRAINT "PK_ID" PRIMARY KEY ("ID")
USING INDEX "YZ"."CC_ID" ENABLE;

The indexes are created first, and the constraints are added afterwards!

--After the data import finishes, run the index creation SQL:
$vi imp_ind_con.sh
sqlplus / as sysdba <<EOF
set timing on
set echo on
set verify on
spool imp_ind_con.log
@imp_ind_con.sql
spool off
exit
EOF
--Run the index creation script
nohup sh imp_ind_con.sh > imp_ind_con.out &
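The comment earlier suggests keeping the degree within roughly 1.5x the CPU core count. A minimal sketch for deriving it on the import host; nproc (GNU coreutils) is assumed to be available, and the final sed line is the same substitution shown above:

```shell
# Suggest a parallel degree of about 1.5x the core count
# (integer arithmetic, rounded down).
suggest_degree() {
    echo $(( $1 * 3 / 2 ))
}

DEGREE=$(suggest_degree "$(nproc)")
echo "suggested PARALLEL degree: ${DEGREE}"
# then, e.g.: sed -i "s/PARALLEL 1 ;/PARALLEL ${DEGREE} ;/g" imp_ind_con.sql
```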

Question 1: does a table-mode dump contain the CREATE USER statement? How can we extract the CREATE USER SQL?

--Importing with include=user from a table-only dump reports that the USER object path was not found
$ nohup time impdp \'/ as sysdba\' directory=dump dumpfile=yang%u.dmp logfile=yang.log include=user parallel=1 sqlfile=user01.sql &
ORA-39168: Object path USER was not found.
--Test again with a dump of the whole schema
$nohup time expdp \'/ as sysdba\' directory=dump dumpfile=yanga%u.dmp logfile=yang.log SCHEMAS=yz parallel=2
$ scp /home/oracle/script/dump/yanga*.dmp t2:/home/oracle/script/dump/.
$ nohup time impdp \'/ as sysdba\' directory=dump dumpfile=yanga%u.dmp logfile=yang.log include=user parallel=1 sqlfile=user02.sql &
$ cat user02.sql
-- CONNECT SYSTEM
CREATE USER "YZ" IDENTIFIED BY VALUES 'S:C9A5297B9802EBB85A3BE800929ECE1BFCCB00146E58E0FBB055A937869F;86EF13A1088170F5'
DEFAULT TABLESPACE "USERS"
TEMPORARY TABLESPACE "TEMP";

Question 2: after exporting a schema, if that user does not yet exist in the target database, will Oracle create it automatically on import?

 Starting "SYS"."SYS_EXPORT_SCHEMA_01":  "/******** AS SYSDBA" directory=dump dumpfile=yanga%u.dmp logfile=yang.log SCHEMAS=yz parallel=2 
nohup time impdp \'/ as sysdba\' directory=dump dumpfile=yanga%u.dmp logfile=yanga.log parallel=4 &

SQL> select username,account_status from dba_users where username='YZ';

no rows selected

SQL>
SQL> r
1* select username,account_status from dba_users where username='YZ'

USERNAME                       ACCOUNT_STATUS
------------------------------ --------------------------------
YZ                             OPEN

Yes, the user is created automatically!

Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/PROCACT_INSTANCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "YZ"."A2":"P3" 12.53 MB 30997 rows

. . imported "YZ"."A2":"P40" 0 KB 0 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_INDEX/INDEX_STATISTICS
Job "SYS"."SYS_IMPORT_FULL_01" successfully completed at Wed Aug 11 12:47:16 2021 elapsed 0 00:00:17
