Oracle 11g RAC: Steps to Deinstall CRS
Starting with Oracle 11g, Oracle ships deinstall scripts for both the Grid Infrastructure and the database. They clean up quite thoroughly, so there is no longer any need to remove CRS by hand.
########## To deinstall RAC, first drop the database with DBCA, then perform the steps below ###############
1. As root, change into the Grid Infrastructure ORACLE_HOME (run this on every node except the last one; for a two-node RAC, running it on the first node only is enough).
Note (if you have not yet downloaded the deinstall tool from Oracle's site):
1. The deinstall tool is available from the Oracle website. 11gR2 comes as 7 download packages, and deinstall is in the 7th; for 11.2.0.2, for example, the package is p10098816_112020_AIX64-5L_7of7.zip.
2. Unzip it to a local directory and run deinstall, passing your $ORACLE_HOME:
bash-3.00$ ./deinstall -home /oracle/grid/product/11.2.0/grid
Location of logs /ora11g/server_install/deinstall/./logs/
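The required ordering — plain `-deconfig` on every node, `-lastnode` only on the final one — can be sketched as a dry run. The node names and Grid home below are assumptions taken from this example cluster, and the script only prints the commands for review rather than executing anything:

```shell
#!/bin/sh
# Print (do not run) the rootcrs.pl deconfig commands in the required order:
# every node gets -deconfig; only the last node also gets -lastnode.
GRID_HOME=/u01/app/11.2.0/grid
NODES="testdb11a testdb11b"     # the last name listed is treated as the last node

last=""
for n in $NODES; do last=$n; done

for n in $NODES; do
  flags="-verbose -deconfig -force"
  [ "$n" = "$last" ] && flags="$flags -lastnode"
  echo "ssh root@$n $GRID_HOME/crs/install/rootcrs.pl $flags"
done
```

Once the printed commands look right, run them as root node by node, finishing with the `-lastnode` invocation.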
Run as root on the first node:
[root@testdb11a ~]# cd /u01/app/11.2.0/grid/crs/install/ ---- the grid user's ORACLE_HOME directory + /crs/install
[root@testdb11a install]# ./rootcrs.pl -verbose -deconfig -force
Using configuration parameter file: ./crsconfig_params
Network exists: /192.168.1.0/255.255.255.0/eth0, type static
VIP exists: /testdb11a-vip/192.168.1.22/192.168.1.0/255.255.255.0/eth0, hosting node testdb11a
GSD exists
ONS exists: Local port , remote port , EM port
CRS-: Attempting to stop 'ora.registry.acfs' on 'testdb11a'
CRS-: Stop of 'ora.registry.acfs' on 'testdb11a' succeeded
CRS-: Starting shutdown of Oracle High Availability Services-managed resources on 'testdb11a'
CRS-: Attempting to stop 'ora.crsd' on 'testdb11a'
CRS-: Starting shutdown of Cluster Ready Services-managed resources on 'testdb11a'
CRS-: Attempting to stop 'ora.oc4j' on 'testdb11a'
CRS-: Attempting to stop 'ora.DATA.dg' on 'testdb11a'
CRS-: Stop of 'ora.oc4j' on 'testdb11a' succeeded
CRS-: Stop of 'ora.DATA.dg' on 'testdb11a' succeeded
CRS-: Attempting to stop 'ora.asm' on 'testdb11a'
CRS-: Stop of 'ora.asm' on 'testdb11a' succeeded
CRS-: Shutdown of Cluster Ready Services-managed resources on 'testdb11a' has completed
CRS-: Stop of 'ora.crsd' on 'testdb11a' succeeded
CRS-: Attempting to stop 'ora.mdnsd' on 'testdb11a'
CRS-: Attempting to stop 'ora.drivers.acfs' on 'testdb11a'
CRS-: Attempting to stop 'ora.crf' on 'testdb11a'
CRS-: Attempting to stop 'ora.ctssd' on 'testdb11a'
CRS-: Attempting to stop 'ora.evmd' on 'testdb11a'
CRS-: Attempting to stop 'ora.asm' on 'testdb11a'
CRS-: Stop of 'ora.crf' on 'testdb11a' succeeded
CRS-: Stop of 'ora.evmd' on 'testdb11a' succeeded
CRS-: Stop of 'ora.mdnsd' on 'testdb11a' succeeded
CRS-: Stop of 'ora.ctssd' on 'testdb11a' succeeded
CRS-: Stop of 'ora.drivers.acfs' on 'testdb11a' succeeded
CRS-: Stop of 'ora.asm' on 'testdb11a' succeeded
CRS-: Attempting to stop 'ora.cluster_interconnect.haip' on 'testdb11a'
CRS-: Stop of 'ora.cluster_interconnect.haip' on 'testdb11a' succeeded
CRS-: Attempting to stop 'ora.cssd' on 'testdb11a'
CRS-: Stop of 'ora.cssd' on 'testdb11a' succeeded
CRS-: Attempting to stop 'ora.gipcd' on 'testdb11a'
CRS-: Stop of 'ora.gipcd' on 'testdb11a' succeeded
CRS-: Attempting to stop 'ora.gpnpd' on 'testdb11a'
CRS-: Stop of 'ora.gpnpd' on 'testdb11a' succeeded
CRS-: Shutdown of Oracle High Availability Services-managed resources on 'testdb11a' has completed
CRS-: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node
Run on the last node (the command below wipes the OCR configuration and the voting disk):
[root@testdb11b install]# ./rootcrs.pl -verbose -deconfig -force -lastnode
Using configuration parameter file: ./crsconfig_params
Network exists: /192.168.1.0/255.255.255.0/eth0, type static
VIP exists: /testdb11b-vip/192.168.1.23/192.168.1.0/255.255.255.0/eth0, hosting node testdb11b
GSD exists
ONS exists: Local port , remote port , EM port
CRS-: Attempting to stop 'ora.registry.acfs' on 'testdb11b'
CRS-: Stop of 'ora.registry.acfs' on 'testdb11b' succeeded
CRS-: Attempting to stop 'ora.crsd' on 'testdb11b'
CRS-: Starting shutdown of Cluster Ready Services-managed resources on 'testdb11b'
CRS-: Attempting to stop 'ora.DATA.dg' on 'testdb11b'
CRS-: Attempting to stop 'ora.oc4j' on 'testdb11b'
CRS-: Stop of 'ora.oc4j' on 'testdb11b' succeeded
CRS-: Stop of 'ora.DATA.dg' on 'testdb11b' succeeded
CRS-: Attempting to stop 'ora.asm' on 'testdb11b'
CRS-: Stop of 'ora.asm' on 'testdb11b' succeeded
CRS-: Shutdown of Cluster Ready Services-managed resources on 'testdb11b' has completed
CRS-: Stop of 'ora.crsd' on 'testdb11b' succeeded
CRS-: Attempting to stop 'ora.ctssd' on 'testdb11b'
CRS-: Attempting to stop 'ora.evmd' on 'testdb11b'
CRS-: Attempting to stop 'ora.asm' on 'testdb11b'
CRS-: Stop of 'ora.evmd' on 'testdb11b' succeeded
CRS-: Stop of 'ora.ctssd' on 'testdb11b' succeeded
CRS-: Stop of 'ora.asm' on 'testdb11b' succeeded
CRS-: Attempting to stop 'ora.cluster_interconnect.haip' on 'testdb11b'
CRS-: Stop of 'ora.cluster_interconnect.haip' on 'testdb11b' succeeded
CRS-: Attempting to stop 'ora.cssd' on 'testdb11b'
CRS-: Stop of 'ora.cssd' on 'testdb11b' succeeded
CRS-: Attempting to start 'ora.cssdmonitor' on 'testdb11b'
CRS-: Start of 'ora.cssdmonitor' on 'testdb11b' succeeded
CRS-: Attempting to start 'ora.cssd' on 'testdb11b'
CRS-: Attempting to start 'ora.diskmon' on 'testdb11b'
CRS-: Start of 'ora.diskmon' on 'testdb11b' succeeded
CRS-: Start of 'ora.cssd' on 'testdb11b' succeeded
CRS-: Successful deletion of voting disk +DATA.
ASM de-configuration trace file location: /tmp/asmcadc_clean2014--17_10---AM.log
ASM Clean Configuration START
ASM Clean Configuration END
ASM with SID +ASM1 deleted successfully. Check /tmp/asmcadc_clean2014--17_10---AM.log for details.
CRS-: Starting shutdown of Oracle High Availability Services-managed resources on 'testdb11b'
CRS-: Attempting to stop 'ora.mdnsd' on 'testdb11b'
CRS-: Attempting to stop 'ora.ctssd' on 'testdb11b'
CRS-: Attempting to stop 'ora.cluster_interconnect.haip' on 'testdb11b'
CRS-: Stop of 'ora.mdnsd' on 'testdb11b' succeeded
CRS-: Stop of 'ora.cluster_interconnect.haip' on 'testdb11b' succeeded
CRS-: Stop of 'ora.ctssd' on 'testdb11b' succeeded
CRS-: Attempting to stop 'ora.cssd' on 'testdb11b'
CRS-: Stop of 'ora.cssd' on 'testdb11b' succeeded
CRS-: Attempting to stop 'ora.crf' on 'testdb11b'
CRS-: Stop of 'ora.crf' on 'testdb11b' succeeded
CRS-: Attempting to stop 'ora.gipcd' on 'testdb11b'
CRS-: Stop of 'ora.gipcd' on 'testdb11b' succeeded
CRS-: Attempting to stop 'ora.gpnpd' on 'testdb11b'
CRS-: Stop of 'ora.gpnpd' on 'testdb11b' succeeded
CRS-: Shutdown of Oracle High Availability Services-managed resources on 'testdb11b' has completed
CRS-: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node
Once both commands above have completed, the clusterware stack is deconfigured on every node (the installed software itself is still on disk — see step 2 to remove it), and root.sh can be run again for a fresh configuration.
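Before rerunning root.sh it is worth confirming nothing from the stack is still alive. This is a sketch; the process names checked are the standard 11gR2 clusterware daemons:

```shell
#!/bin/sh
# Report any clusterware daemons still running after rootcrs.pl -deconfig.
check_crs_clean() {
  # ps -e -o comm= prints bare process names, one per line
  left=$(ps -e -o comm= | grep -E '^(ohasd|crsd|ocssd|evmd|gipcd|gpnpd)' || true)
  if [ -n "$left" ]; then
    echo "still running: $left"
    return 1
  fi
  echo "clusterware stack is down"
}
check_crs_clean
```

Run it as any user on each node; a non-empty "still running" list means the deconfig did not finish cleanly.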
If you used ASM disks, clean them first: the previous installation attempt marks the ASM disks as used, so they no longer appear as candidate disks. To make them reusable, do the following:
a. Overwrite each partition's header with dd (the bs/count values here are typical choices — zeroing roughly the first 100 KB is enough to clear the ASM header; adjust as needed):
dd if=/dev/zero of=/dev/sdb1 bs=1024 count=100
dd if=/dev/zero of=/dev/sdc1 bs=1024 count=100
dd if=/dev/zero of=/dev/sdd1 bs=1024 count=100
dd if=/dev/zero of=/dev/sde1 bs=1024 count=100
b. Delete and recreate the ASM disks:
/etc/init.d/oracleasm deletedisk DISK1 /dev/sdb1
/etc/init.d/oracleasm createdisk DISK1 /dev/sdb1
/etc/init.d/oracleasm deletedisk DISK2 /dev/sdc1
/etc/init.d/oracleasm createdisk DISK2 /dev/sdc1
/etc/init.d/oracleasm deletedisk DISK3 /dev/sdd1
/etc/init.d/oracleasm createdisk DISK3 /dev/sdd1
/etc/init.d/oracleasm deletedisk DISK4 /dev/sde1
/etc/init.d/oracleasm createdisk DISK4 /dev/sde1
After running the commands above, the ASM disks are available as candidates again.
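The per-disk command pairs above can be generated in one loop. This sketch only prints the commands so they can be reviewed before being run as root; the dd sizes are typical values for clearing an ASM disk header, and the DISKn-to-device mapping is the one from this example:

```shell
#!/bin/sh
# Print the wipe + delete + recreate commands for each ASM disk.
# Review the output, then feed it to a root shell once verified.
i=1
for dev in /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1; do
  echo "dd if=/dev/zero of=$dev bs=1024 count=100"   # ~100 KB covers the header
  echo "/etc/init.d/oracleasm deletedisk DISK$i $dev"
  echo "/etc/init.d/oracleasm createdisk DISK$i $dev"
  i=$((i + 1))
done
```

Generating instead of executing keeps a destructive dd from hitting the wrong device unreviewed.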
To remove the Grid Infrastructure software completely, run the following:
2. As the grid user, run deinstall to remove the software (covers both nodes).
Output on the first node:
+ASM1@testdb11a /u01/app/11.2.0/grid/deinstall$ ./deinstall
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2014-09-16_01-45-52PM/logs/
############ ORACLE DEINSTALL & DECONFIG TOOL START ############
######################### CHECK OPERATION START #########################
## [START] Install check configuration ##
Checking for existence of the Oracle home location /u01/app/11.2.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home
The following nodes are part of this cluster: testdb11a,testdb11b
Checking for sufficient temp space availability on node(s) : 'testdb11a,testdb11b'
## [END] Install check configuration ##
Traces log file: /tmp/deinstall2014-09-16_01-45-52PM/logs//crsdc.log
Enter an address or the name of the virtual IP used on node "testdb11a"[testdb11a-vip]
>
The following information can be collected by running "/sbin/ifconfig -a" on node "testdb11a"
Enter the IP netmask of Virtual IP "192.168.1.22" on node "testdb11a"[255.255.255.0]
>
Enter the network interface name on which the virtual IP address "192.168.1.22" is active
>
Enter an address or the name of the virtual IP used on node "testdb11b"[testdb11b-vip]
>
The following information can be collected by running "/sbin/ifconfig -a" on node "testdb11b"
Enter the IP netmask of Virtual IP "192.168.1.23" on node "testdb11b"[255.255.255.0]
>
Enter the network interface name on which the virtual IP address "192.168.1.23" is active
>
Enter an address or the name of the virtual IP[]
>
Network Configuration check config START
Network de-configuration trace file location: /tmp/deinstall2014-09-16_01-45-52PM/logs/netdc_check2014--16_01---PM.log
Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER_SCAN1]:
Network Configuration check config END
Asm Check Configuration START
ASM de-configuration trace file location: /tmp/deinstall2014-09-16_01-45-52PM/logs/asmcadc_check2014--16_01---PM.log
ASM configuration was not detected in this Oracle home. Was ASM configured in this Oracle home (y|n) [n]: y
Is OCR/Voting Disk placed in ASM y|n [n]: y
Enter the OCR/Voting Disk diskgroup name []:
Specify the ASM Diagnostic Destination [ ]:
Specify the diskstring []:
Specify the diskgroups that are managed by this ASM instance []:
######################### CHECK OPERATION END #########################
####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is:
The cluster node(s) on which the Oracle home deinstallation will be performed are:testdb11a,testdb11b
Oracle Home selected for deinstall is: /u01/app/11.2.0/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following RAC listener(s) will be de-configured: LISTENER_SCAN1
ASM instance will be de-configured from this Oracle home
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2014-09-16_01-45-52PM/logs/deinstall_deconfig2014-09-16_01-46-31-PM.out'
Any error messages from this session will be written to: '/tmp/deinstall2014-09-16_01-45-52PM/logs/deinstall_deconfig2014-09-16_01-46-31-PM.err'
######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /tmp/deinstall2014-09-16_01-45-52PM/logs/asmcadc_clean2014--16_01---PM.log
ASM Clean Configuration START
ASM Clean Configuration END
Network Configuration clean config START
Network de-configuration trace file location: /tmp/deinstall2014-09-16_01-45-52PM/logs/netdc_clean2014--16_01---PM.log
De-configuring RAC listener(s): LISTENER_SCAN1
De-configuring listener: LISTENER_SCAN1
Stopping listener: LISTENER_SCAN1
Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.
De-configuring Naming Methods configuration file on all nodes...
Naming Methods configuration file de-configured successfully.
De-configuring Local Net Service Names configuration file on all nodes...
Local Net Service Names configuration file de-configured successfully.
De-configuring Directory Usage configuration file on all nodes...
Directory Usage configuration file de-configured successfully.
De-configuring backup files on all nodes...
Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END
---------------------------------------->
The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes.
Run the following command as the root user or the administrator on node "testdb11b".
/tmp/deinstall2014-09-16_01-45-52PM/perl/bin/perl -I/tmp/deinstall2014-09-16_01-45-52PM/perl/lib -I/tmp/deinstall2014-09-16_01-45-52PM/crs/install /tmp/deinstall2014-09-16_01-45-52PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2014-09-16_01-45-52PM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Run the following command as the root user or the administrator on node "testdb11a".
/tmp/deinstall2014-09-16_01-45-52PM/perl/bin/perl -I/tmp/deinstall2014-09-16_01-45-52PM/perl/lib -I/tmp/deinstall2014-09-16_01-45-52PM/crs/install /tmp/deinstall2014-09-16_01-45-52PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2014-09-16_01-45-52PM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode
Press Enter after you finish running the above commands
<----------------------------------------
Open another terminal window and run the two commands above as root on node 1 and node 2.
Node 1:
[root@testdb11a ~]# /tmp/deinstall2014-09-16_01-45-52PM/perl/bin/perl -I/tmp/deinstall2014-09-16_01-45-52PM/perl/lib -I/tmp/deinstall2014-09-16_01-45-52PM/crs/install /tmp/deinstall2014-09-16_01-45-52PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2014-09-16_01-45-52PM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode
Using configuration parameter file: /tmp/deinstall2014-09-16_01-45-52PM/response/deinstall_Ora11g_gridinfrahome1.rsp
Adding Clusterware entries to inittab
/crs/install/inittab does not exist.
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
CRS-: No Oracle Clusterware components configured.
CRS-: Command Stop failed, or completed with errors.
CRS-: No Oracle Clusterware components configured.
CRS-: Command Delete failed, or completed with errors.
CRS-: No Oracle Clusterware components configured.
CRS-: Command Stop failed, or completed with errors.
CRS-: No Oracle Clusterware components configured.
CRS-: Command Modify failed, or completed with errors.
Adding Clusterware entries to inittab
/crs/install/inittab does not exist.
CRS-: No Oracle Clusterware components configured.
CRS-: Command Delete failed, or completed with errors.
crsctl delete for vds in DATA ... failed
CRS-: No Oracle Clusterware components configured.
CRS-: Command Delete failed, or completed with errors.
CRS-: No Oracle Clusterware components configured.
CRS-: Command Stop failed, or completed with errors.
ACFS-: No ADVM/ACFS installation detected.
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Failure in execution (rc=-, , No such file or directory) for command /etc/init.d/ohasd deinstall
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node
Node 2:
[root@testdb11b ~]# /tmp/deinstall2014-09-16_01-45-52PM/perl/bin/perl -I/tmp/deinstall2014-09-16_01-45-52PM/perl/lib -I/tmp/deinstall2014-09-16_01-45-52PM/crs/install /tmp/deinstall2014-09-16_01-45-52PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2014-09-16_01-45-52PM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Using configuration parameter file: /tmp/deinstall2014-09-16_01-45-52PM/response/deinstall_Ora11g_gridinfrahome1.rsp
Network exists: /192.168.1.0/255.255.255.0/eth0, type static
VIP exists: /testdb11b-vip/192.168.1.23/192.168.1.0/255.255.255.0/eth0, hosting node testdb11b
GSD exists
ONS exists: Local port , remote port , EM port
CRS-: Starting shutdown of Oracle High Availability Services-managed resources on 'testdb11b'
CRS-: Attempting to stop 'ora.crsd' on 'testdb11b'
CRS-: Starting shutdown of Cluster Ready Services-managed resources on 'testdb11b'
CRS-: Attempting to stop 'ora.asm' on 'testdb11b'
CRS-: Stop of 'ora.asm' on 'testdb11b' succeeded
CRS-: Shutdown of Cluster Ready Services-managed resources on 'testdb11b' has completed
CRS-: Stop of 'ora.crsd' on 'testdb11b' succeeded
CRS-: Attempting to stop 'ora.mdnsd' on 'testdb11b'
CRS-: Attempting to stop 'ora.drivers.acfs' on 'testdb11b'
CRS-: Attempting to stop 'ora.ctssd' on 'testdb11b'
CRS-: Attempting to stop 'ora.evmd' on 'testdb11b'
CRS-: Attempting to stop 'ora.asm' on 'testdb11b'
CRS-: Stop of 'ora.evmd' on 'testdb11b' succeeded
CRS-: Stop of 'ora.mdnsd' on 'testdb11b' succeeded
CRS-: Stop of 'ora.ctssd' on 'testdb11b' succeeded
CRS-: Stop of 'ora.drivers.acfs' on 'testdb11b' succeeded
CRS-: Stop of 'ora.asm' on 'testdb11b' succeeded
CRS-: Attempting to stop 'ora.cluster_interconnect.haip' on 'testdb11b'
CRS-: Stop of 'ora.cluster_interconnect.haip' on 'testdb11b' succeeded
CRS-: Attempting to stop 'ora.cssd' on 'testdb11b'
CRS-: Stop of 'ora.cssd' on 'testdb11b' succeeded
CRS-: Attempting to stop 'ora.crf' on 'testdb11b'
CRS-: Stop of 'ora.crf' on 'testdb11b' succeeded
CRS-: Attempting to stop 'ora.gipcd' on 'testdb11b'
CRS-: Stop of 'ora.gipcd' on 'testdb11b' succeeded
CRS-: Attempting to stop 'ora.gpnpd' on 'testdb11b'
CRS-: Stop of 'ora.gpnpd' on 'testdb11b' succeeded
CRS-: Shutdown of Oracle High Availability Services-managed resources on 'testdb11b' has completed
CRS-: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node
After both commands complete, go back to the original window and press Enter to continue.
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START
Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node : Done
Delete directory '/u01/app/11.2.0/grid' on the local node : Done
Delete directory '/u01/app/oraInventory' on the local node : Done
Delete directory '/u01/app/grid' on the local node : Done
Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the remote nodes 'testdb11b' : Done
Delete directory '/u01/app/11.2.0/grid' on the remote nodes 'testdb11b' : Done
Delete directory '/u01/app/oraInventory' on the remote nodes 'testdb11b' : Done
Delete directory '/u01/app/grid' on the remote nodes 'testdb11b' : Done
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END
## [START] Oracle install clean ##
Clean install operation removing temporary directory '/tmp/deinstall2014-09-16_01-45-52PM' on node 'testdb11a'
Clean install operation removing temporary directory '/tmp/deinstall2014-09-16_01-45-52PM' on node 'testdb11b'
## [END] Oracle install clean ##
######################### CLEAN OPERATION END #########################
####################### CLEAN OPERATION SUMMARY #######################
ASM instance was de-configured successfully from the Oracle home
Following RAC listener(s) were de-configured successfully: LISTENER_SCAN1
Oracle Clusterware is stopped and successfully de-configured on node "testdb11b"
Oracle Clusterware is stopped and successfully de-configured on node "testdb11a"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node.
Successfully deleted directory '/u01/app/11.2.0/grid' on the local node.
Successfully deleted directory '/u01/app/oraInventory' on the local node.
Successfully deleted directory '/u01/app/grid' on the local node.
Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the remote nodes 'testdb11b'.
Successfully deleted directory '/u01/app/11.2.0/grid' on the remote nodes 'testdb11b'.
Successfully deleted directory '/u01/app/oraInventory' on the remote nodes 'testdb11b'.
Successfully deleted directory '/u01/app/grid' on the remote nodes 'testdb11b'.
Oracle Universal Installer cleanup was successful.
Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'testdb11a,testdb11b' at the end of the session.
Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'testdb11a,testdb11b' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL & DECONFIG TOOL END #############
Remove the inventory pointer file:
[root@testdb11a deinstall]# rm -rf /etc/oraInst.loc
The tool reports success: all CRS services are now stopped, and every related file has been removed.
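The deinstall summary asks for two leftover paths to be removed as root on every node; a dry-run sketch in the same style as before prints those commands for review:

```shell
#!/bin/sh
# Print the final cleanup commands the deinstall tool requests;
# run the printed lines as root on each node.
for f in /etc/oraInst.loc /opt/ORCLfmap; do
  echo "rm -rf $f"
done
```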