Does Oracle Data Pump import create indexes in parallel?
1. The question: does Oracle Data Pump import create indexes in parallel?
A customer migrating with Data Pump asked whether the import time could be shortened.
So how do we speed up the import? If we add more parallelism, will the degree of parallelism used internally for index creation be raised as well?
With these questions in mind, let's test whether the Data Pump parallel parameter has any bearing on the parallel degree of the indexes created during import.
2. Testing
2.1 Test data preparation
Oracle 11.2.0.4
-- Create the test user and range-partitioned tables
create user yz identified by yz;
grant dba to yz;
conn yz/yz
create table a1(id number,
deal_date date, area_code number, contents varchar2(4000))
partition by range(deal_date)
(
partition p1 values less than(to_date('2019-02-01','yyyy-mm-dd')),
partition p2 values less than(to_date('2019-03-01','yyyy-mm-dd')),
partition p3 values less than(to_date('2019-04-01','yyyy-mm-dd')),
partition p4 values less than(to_date('2019-05-01','yyyy-mm-dd')),
partition p5 values less than(to_date('2019-06-01','yyyy-mm-dd')),
partition p6 values less than(to_date('2019-07-01','yyyy-mm-dd')),
partition p7 values less than(to_date('2019-08-01','yyyy-mm-dd')),
partition p8 values less than(to_date('2019-09-01','yyyy-mm-dd')),
partition p9 values less than(to_date('2019-10-01','yyyy-mm-dd')),
partition p10 values less than(to_date('2019-11-01','yyyy-mm-dd')),
partition p11 values less than(to_date('2019-12-01','yyyy-mm-dd')),
partition p12 values less than(to_date('2020-01-01','yyyy-mm-dd')),
partition p13 values less than(to_date('2020-02-01','yyyy-mm-dd')),
partition p14 values less than(to_date('2020-03-01','yyyy-mm-dd')),
partition p15 values less than(to_date('2020-04-01','yyyy-mm-dd')),
partition p16 values less than(to_date('2020-05-01','yyyy-mm-dd')),
partition p17 values less than(to_date('2020-06-01','yyyy-mm-dd')),
partition p18 values less than(to_date('2020-07-01','yyyy-mm-dd')),
partition p19 values less than(to_date('2020-08-01','yyyy-mm-dd')),
partition p20 values less than(to_date('2020-09-01','yyyy-mm-dd')),
partition p31 values less than(to_date('2020-10-01','yyyy-mm-dd')),
partition p32 values less than(to_date('2020-11-01','yyyy-mm-dd')),
partition p33 values less than(to_date('2020-12-01','yyyy-mm-dd')),
partition p34 values less than(to_date('2021-01-01','yyyy-mm-dd')),
partition p35 values less than(to_date('2021-02-01','yyyy-mm-dd')),
partition p36 values less than(to_date('2021-03-01','yyyy-mm-dd')),
partition p37 values less than(to_date('2021-04-01','yyyy-mm-dd')),
partition p38 values less than(to_date('2021-05-01','yyyy-mm-dd')),
partition p39 values less than(to_date('2021-06-01','yyyy-mm-dd')),
partition p40 values less than(to_date('2021-07-01','yyyy-mm-dd'))
);
insert into a1 (id,deal_date,area_code,contents)
select rownum,
to_date(to_char(sysdate-900,'J')+ trunc(dbms_random.value(0,200)),'J'),
ceil(dbms_random.value(590,599)),
rpad('*',400,'*')
from dual
connect by rownum <= 100000;
commit;
create table a2(id number,
deal_date date, area_code number, contents varchar2(4000))
partition by range(deal_date)
(
partition p1 values less than(to_date('2019-02-01','yyyy-mm-dd')),
partition p2 values less than(to_date('2019-03-01','yyyy-mm-dd')),
partition p3 values less than(to_date('2019-04-01','yyyy-mm-dd')),
partition p4 values less than(to_date('2019-05-01','yyyy-mm-dd')),
partition p5 values less than(to_date('2019-06-01','yyyy-mm-dd')),
partition p6 values less than(to_date('2019-07-01','yyyy-mm-dd')),
partition p7 values less than(to_date('2019-08-01','yyyy-mm-dd')),
partition p8 values less than(to_date('2019-09-01','yyyy-mm-dd')),
partition p9 values less than(to_date('2019-10-01','yyyy-mm-dd')),
partition p10 values less than(to_date('2019-11-01','yyyy-mm-dd')),
partition p11 values less than(to_date('2019-12-01','yyyy-mm-dd')),
partition p12 values less than(to_date('2020-01-01','yyyy-mm-dd')),
partition p13 values less than(to_date('2020-02-01','yyyy-mm-dd')),
partition p14 values less than(to_date('2020-03-01','yyyy-mm-dd')),
partition p15 values less than(to_date('2020-04-01','yyyy-mm-dd')),
partition p16 values less than(to_date('2020-05-01','yyyy-mm-dd')),
partition p17 values less than(to_date('2020-06-01','yyyy-mm-dd')),
partition p18 values less than(to_date('2020-07-01','yyyy-mm-dd')),
partition p19 values less than(to_date('2020-08-01','yyyy-mm-dd')),
partition p20 values less than(to_date('2020-09-01','yyyy-mm-dd')),
partition p31 values less than(to_date('2020-10-01','yyyy-mm-dd')),
partition p32 values less than(to_date('2020-11-01','yyyy-mm-dd')),
partition p33 values less than(to_date('2020-12-01','yyyy-mm-dd')),
partition p34 values less than(to_date('2021-01-01','yyyy-mm-dd')),
partition p35 values less than(to_date('2021-02-01','yyyy-mm-dd')),
partition p36 values less than(to_date('2021-03-01','yyyy-mm-dd')),
partition p37 values less than(to_date('2021-04-01','yyyy-mm-dd')),
partition p38 values less than(to_date('2021-05-01','yyyy-mm-dd')),
partition p39 values less than(to_date('2021-06-01','yyyy-mm-dd')),
partition p40 values less than(to_date('2021-07-01','yyyy-mm-dd'))
);
insert into a2 (id,deal_date,area_code,contents)
select rownum,
to_date(to_char(sysdate-900,'J')+ trunc(dbms_random.value(0,200)),'J'),
ceil(dbms_random.value(590,599)),
rpad('*',400,'*')
from dual
connect by rownum <= 200000;
commit;
alter table a1 add constraint pk_id primary key (id);
alter table a2 add constraint pk_id_time primary key(id,deal_date);
SQL> create index cc_id on a1(id);
create index cc_id on a1(id)
*
ERROR at line 1:
ORA-01408: such column list already indexed
SQL> select index_name,status from user_indexes where table_name in('A1','A2');
INDEX_NAME STATUS
------------------------------ --------
PK_ID_TIME VALID
PK_ID VALID
Alter table a1 drop constraint pk_id;
Alter table a2 drop constraint pk_id_time;
create index cc_id on a1(id) LOCAL;
alter table a1 add constraint pk_id primary key (id) USING INDEX cc_id ;
ORA-14196: Specified index cannot be used to enforce the constraint.
DROP INDEX CC_ID;
create index cc_id on a1(id) ;
alter table a1 add constraint pk_id primary key (id) USING INDEX cc_id ;
create index cc_id_DATE on a2(id,DEAL_DATE) LOCAL;
alter table a2 add constraint pk_id_DATE primary key (id,DEAL_DATE) USING INDEX cc_id_DATE ;
Reference: https://www.cnblogs.com/lvcha001/p/10218318.html
(The ORA-14196 above occurs because a LOCAL index used to enforce a unique or primary key constraint must include the partitioning key; a LOCAL index on (id) alone omits deal_date, so it cannot enforce the constraint.)
Indexes here can be grouped into three kinds. Non-partitioned indexes. Global partitioned indexes, which can be globally range- or hash-partitioned; these distribute entries by their own partitioning rule rather than by the table's partitions. Local indexes, which follow the table's partitions exactly: one index partition per table partition. Rebuilding works the same way for each kind, so for the later rebuild tests we create one index of each partitioning type and use the primary key for the non-partitioned case.
create index ind_hash on a1(id,0) global partition by hash (id) partitions 8 online;
SQL> select index_name,status from user_indexes where table_name in('A1','A2');
INDEX_NAME STATUS
------------------------------ --------
CC_ID VALID
CC_ID_DATE N/A
IND_HASH N/A
select index_name,PARTITION_NAME,HIGH_VALUE,STATUS,TABLESPACE_NAME from dba_ind_partitions
where index_owner='YZ' and index_name IN('CC_ID_DATE','IND_HASH');
2.2 Generating the SQL script with sqlfile
nohup time expdp \'/ as sysdba\' directory=dump dumpfile=yang%u.dmp logfile=yang.log tables=yz.a1,yz.a2 FLASHBACK_SCN=1017463 parallel=2 &
Case 1: export with parallel=2, import with parallel=2, then inspect the generated SQL script
nohup time impdp \'/ as sysdba\' directory=dump dumpfile=yang%u.dmp logfile=yang.log tables=yz.a1,yz.a2 parallel=2 sqlfile=table01.sql &
Processing object type TABLE_EXPORT/TABLE/PROCACT_INSTANCE
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_INDEX/INDEX_STATISTICS
Job "SYS"."SYS_SQL_FILE_TABLE_01" successfully completed at Wed Aug 11 07:00:04 2021 elapsed 0 00:00:03
-- new object type path: TABLE_EXPORT/TABLE/TABLE
-- CONNECT SYS
CREATE TABLE "YZ"."A2"
( "ID" NUMBER,
······
-- new object type path: TABLE_EXPORT/TABLE/INDEX/INDEX
-- CONNECT YZ
CREATE INDEX "YZ"."CC_ID_DATE" ON "YZ"."A2" ("ID", "DEAL_DATE")
PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE(
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) LOCAL
(PARTITION "P1"
PCTFREE 10 INITRANS 2 MAXTRANS 255 LOGGING
STORAGE(
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS" ,
PARTITION "P2"
······
PARTITION "P40"
PCTFREE 10 INITRANS 2 MAXTRANS 255 LOGGING
STORAGE(
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS" ) PARALLEL 1 ;
ALTER INDEX "YZ"."CC_ID_DATE" NOPARALLEL;
CREATE INDEX "YZ"."CC_ID" ON "YZ"."A1" ("ID")
PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS" PARALLEL 1 ;
ALTER INDEX "YZ"."CC_ID" NOPARALLEL;
-- new object type path: TABLE_EXPORT/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX
CREATE INDEX "YZ"."IND_HASH" ON "YZ"."A1" ("ID", 0)
PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE(
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS" GLOBAL PARTITION BY HASH ("ID")
(PARTITION "SYS_P41"
TABLESPACE "USERS" ,
PARTITION "SYS_P42"
TABLESPACE "USERS" ,
PARTITION "SYS_P43"
TABLESPACE "USERS" ,
PARTITION "SYS_P44"
TABLESPACE "USERS" ,
PARTITION "SYS_P45"
TABLESPACE "USERS" ,
PARTITION "SYS_P46"
TABLESPACE "USERS" ,
PARTITION "SYS_P47"
TABLESPACE "USERS" ,
PARTITION "SYS_P48"
TABLESPACE "USERS" ) PARALLEL 1 ;
ALTER INDEX "YZ"."IND_HASH" NOPARALLEL;
-- new object type path: TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
-- CONNECT SYS
ALTER TABLE "YZ"."A1" ADD CONSTRAINT "PK_ID" PRIMARY KEY ("ID")
USING INDEX "YZ"."CC_ID" ENABLE;
ALTER TABLE "YZ"."A1" ADD SUPPLEMENTAL LOG GROUP "GGS_87350" ("ID") ALWAYS;
ALTER TABLE "YZ"."A1" ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER TABLE "YZ"."A1" ADD SUPPLEMENTAL LOG DATA (UNIQUE INDEX) COLUMNS;
ALTER TABLE "YZ"."A1" ADD SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;
ALTER TABLE "YZ"."A2" ADD CONSTRAINT "PK_ID_DATE" PRIMARY KEY ("ID", "DEAL_DATE")
USING INDEX "YZ"."CC_ID_DATE" ENABLE;
ALTER TABLE "YZ"."A2" ADD SUPPLEMENTAL LOG GROUP "GGS_87381" ("ID", "DEAL_DATE") ALWAYS;
ALTER TABLE "YZ"."A2" ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER TABLE "YZ"."A2" ADD SUPPLEMENTAL LOG DATA (UNIQUE INDEX) COLUMNS;
ALTER TABLE "YZ"."A2" ADD SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;
-- new object type path: TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
······
The degree of parallelism is 1!
Case 2: export with parallel=2, import with parallel=4, then inspect the generated SQL script
$ nohup time impdp \'/ as sysdba\' directory=dump dumpfile=yang%u.dmp logfile=yang.log tables=yz.a1,yz.a2 parallel=4 sqlfile=table02.sql &
$ cat dump/table02.sql |grep PARALLEL
TABLESPACE "USERS" ) PARALLEL 1 ;
ALTER INDEX "YZ"."CC_ID_DATE" NOPARALLEL;
TABLESPACE "USERS" PARALLEL 1 ;
ALTER INDEX "YZ"."CC_ID" NOPARALLEL;
TABLESPACE "USERS" ) PARALLEL 1 ;
ALTER INDEX "YZ"."IND_HASH" NOPARALLEL;
The tests show that Data Pump import always generates CREATE INDEX statements with PARALLEL 1! Unless the database is configured to choose a degree automatically (parallel_degree_policy=AUTO), i.e. under the default MANUAL setting, index creation during import cannot be sped up with parallelism.
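This conclusion can be checked mechanically. A minimal sketch that counts how many generated statements specify PARALLEL 1 — it uses a stand-in fragment, since the real dump/table01.sql lives on the test server:

```shell
# Stand-in for the sqlfile generated by impdp above; on the test
# server this would be dump/table01.sql
cat > /tmp/table01.sql <<'EOF'
TABLESPACE "USERS" ) PARALLEL 1 ;
ALTER INDEX "YZ"."CC_ID_DATE" NOPARALLEL;
TABLESPACE "USERS" PARALLEL 1 ;
ALTER INDEX "YZ"."CC_ID" NOPARALLEL;
EOF
# Count the clauses generated with PARALLEL 1
grep -c 'PARALLEL 1 ' /tmp/table01.sql
```

Every index DDL in the sqlfile carries PARALLEL 1 followed by an ALTER INDEX ... NOPARALLEL, regardless of the impdp parallel setting.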
2.3 How to speed up a Data Pump import by creating the indexes in parallel
Reference:
https://blog.51cto.com/wyzwl/2333565
Why does the script exclude constraints as well as indexes? Interested readers can test this for themselves.
On the target side, create the user and grants first, then import the table data.
***************************************** Data import *****************************************
cat >imp_data.par <<EOF
userid='/ as sysdba'
directory=dump
dumpfile=yang%u.dmp
logfile=imp_data.log
cluster=no
parallel=2
exclude=index,constraint
EOF
--Exclude indexes and constraints, then run the import
nohup impdp parfile=imp_data.par > imp_data.out &
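Before moving on to the index step, it is worth confirming the data-only import finished cleanly. A minimal sketch — it writes a stand-in for imp_data.log (the logfile named in the parfile above), which in a real run is produced in the dump directory:

```shell
# Stand-in for the Data Pump log named in the parfile above
printf 'Job "SYS"."SYS_IMPORT_FULL_01" successfully completed\n' > /tmp/imp_data.log
# Stop the workflow early if the data-only import raised any ORA- errors
if grep -q 'ORA-' /tmp/imp_data.log; then
  echo "import had errors"
else
  echo "import clean"
fi
```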
***************************************** Index and constraint import *****************************************
--Use the sqlfile parameter to generate the index-creation statements
cat >imp_ind_con.par <<EOF
userid='/ as sysdba'
directory=dump
dumpfile=yang%u.dmp
sqlfile=imp_ind_con.sql
logfile=imp_ind_con.log
cluster=no
parallel=2
tables=yz.a1,yz.a2
include=index,constraint
EOF
--Generate the index-creation statements (nothing is actually imported)
nohup impdp parfile=imp_ind_con.par > imp_ind_con.out &
--Raise the parallel degree of the CREATE INDEX statements; keeping it at or below 1.5x the CPU core count is recommended
--On Linux:
sed -i 's/PARALLEL 1/PARALLEL 16/g' imp_ind_con.sql
--AIX's sed has no -i option; use one of the following two methods instead:
perl -pi -e 's/PARALLEL 1/PARALLEL 16/g' imp_ind_con.sql
or
vi imp_ind_con.sql << EOF
:%s/ PARALLEL 1/PARALLEL 16/g
:wq
EOF
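Instead of hard-coding 16, the degree can be derived from the machine. A minimal sketch following the 1.5x-of-cores recommendation above — the core count is hard-coded here so the commands are self-contained; a real run would use something like $(getconf _NPROCESSORS_ONLN), and would edit imp_ind_con.sql rather than a demo file:

```shell
cores=8
degree=$(( cores * 3 / 2 ))   # 1.5x the core count -> 12
# Demo input standing in for imp_ind_con.sql
printf 'TABLESPACE "USERS" PARALLEL 1 ;\n' > /tmp/demo_ind.sql
# Rewrite PARALLEL 1 to the derived degree
sed "s/PARALLEL 1/PARALLEL ${degree}/g" /tmp/demo_ind.sql > /tmp/demo_ind_par.sql
cat /tmp/demo_ind_par.sql
```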
***************************************** Replacement result *****************************************
[oracle@t2 dump]$ cat imp_ind_con.sql|grep PARALLEL
TABLESPACE "USERS" ) PARALLEL 1 ;
ALTER INDEX "YZ"."CC_ID_DATE" NOPARALLEL;
TABLESPACE "USERS" PARALLEL 1 ;
ALTER INDEX "YZ"."CC_ID" NOPARALLEL;
TABLESPACE "USERS" ) PARALLEL 1 ;
ALTER INDEX "YZ"."IND_HASH" NOPARALLEL;
[oracle@t2 dump]$ sed -i 's/PARALLEL 1/PARALLEL 16/g' imp_ind_con.sql
[oracle@t2 dump]$ cat imp_ind_con.sql|grep PARALLEL
TABLESPACE "USERS" ) PARALLEL 16 ;
ALTER INDEX "YZ"."CC_ID_DATE" NOPARALLEL;
TABLESPACE "USERS" PARALLEL 16 ;
ALTER INDEX "YZ"."CC_ID" NOPARALLEL;
TABLESPACE "USERS" ) PARALLEL 16 ;
ALTER INDEX "YZ"."IND_HASH" NOPARALLEL;
***************************************************************************************************
$ more imp_ind_con.sql    --inspect the generated SQL script
-- new object type path: TABLE_EXPORT/TABLE/INDEX/INDEX
-- CONNECT YZ
CREATE INDEX "YZ"."CC_ID_DATE" ON "YZ"."A2" ("ID", "DEAL_DATE")
PCTFREE 10 INITRANS 2 MAXTRANS 255
······
-- new object type path: TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
-- CONNECT SYS
ALTER TABLE "YZ"."A1" ADD CONSTRAINT "PK_ID" PRIMARY KEY ("ID")
USING INDEX "YZ"."CC_ID" ENABLE;
Indexes are created first, then the constraints are added!
--After the data import completes, run the index-creation SQL:
$vi imp_ind_con.sh
sqlplus / as sysdba <<EOF
set timing on
set echo on
set verify on
spool imp_ind_con.log
@imp_ind_con.sql
spool off
exit
EOF
--Run the index-creation SQL
nohup sh imp_ind_con.sh > imp_ind_con.out &
Question 1: Does a table-mode dump contain a CREATE USER statement? How can the CREATE USER DDL be extracted?
--Importing with include=user from the table-only dump reports that no USER objects were found
$ nohup time impdp \'/ as sysdba\' directory=dump dumpfile=yang%u.dmp logfile=yang.log include=user parallel=1 sqlfile=user01.sql &
ORA-39168: Object path USER was not found.
--Export the entire schema and test again
$nohup time expdp \'/ as sysdba\' directory=dump dumpfile=yanga%u.dmp logfile=yang.log SCHEMAS=yz parallel=2
$ scp /home/oracle/script/dump/yanga*.dmp t2:/home/oracle/script/dump/.
$ nohup time impdp \'/ as sysdba\' directory=dump dumpfile=yanga%u.dmp logfile=yang.log include=user parallel=1 sqlfile=user02.sql &
$ cat user02.sql
-- CONNECT SYSTEM
CREATE USER "YZ" IDENTIFIED BY VALUES 'S:C9A5297B9802EBB85A3BE800929ECE1BFCCB00146E58E0FBB055A937869F;86EF13A1088170F5'
DEFAULT TABLESPACE "USERS"
TEMPORARY TABLESPACE "TEMP";
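When only the user DDL is wanted from such a sqlfile, it can be pulled out with sed. A minimal sketch — it uses a stand-in for user02.sql with the password hash deliberately truncated:

```shell
# Stand-in for the sqlfile produced by the include=user run above
cat > /tmp/user02.sql <<'EOF'
-- CONNECT SYSTEM
CREATE USER "YZ" IDENTIFIED BY VALUES 'S:C9A5...'
DEFAULT TABLESPACE "USERS"
TEMPORARY TABLESPACE "TEMP";
EOF
# Print only the CREATE USER statement, from its first line
# through the line ending with the closing semicolon
sed -n '/^CREATE USER/,/;$/p' /tmp/user02.sql
```

Note that IDENTIFIED BY VALUES carries the original password hash, so the recreated user keeps the same password as on the source database.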
Question 2: After exporting a user with Data Pump, if that user does not already exist in the target database, will Oracle create it automatically on import?
SQL> select username,account_status from dba_users where username='YZ';
no rows selected
SQL>
SQL> r
1* select username,account_status from dba_users where username='YZ'
USERNAME ACCOUNT_STATUS
------------------------------ --------------------------------
YZ OPEN
Yes, the user is created automatically!
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/PROCACT_INSTANCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "YZ"."A2":"P3" 12.53 MB 30997 rows
. . imported "YZ"."A2":"P40" 0 KB 0 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_INDEX/INDEX_STATISTICS
Job "SYS"."SYS_IMPORT_FULL_01" successfully completed at Wed Aug 11 12:47:16 2021 elapsed 0 00:00:17