Uninstalling 11.2.0.3.0 Grid & Database on Oracle Linux 5.7
OS: Oracle Linux Server release 5.7
DB: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
1. Log in as the oracle user and launch DBCA:
[root@rac ~]# su - oracle
[oracle@rac ~]$ dbca
DBCA deletes the database from all nodes one by one.
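If you prefer a non-interactive run, DBCA can also drop a database in silent mode. A minimal sketch, assuming the database is named racdb (a hypothetical name; substitute your own) and the SYS password is at hand:

[oracle@rac ~]$ dbca -silent -deleteDatabase -sourceDB racdb -sysDBAUserName sys -sysDBAPassword <sys_password>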
2. As the oracle user, change to the $ORACLE_HOME/deinstall directory and run the deinstall script:
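Before the real run, deinstall also accepts -checkonly, which reports what would be removed without deconfiguring anything (the prompt below assumes you are already in the deinstall directory):

[oracle@rac deinstall]$ ./deinstall -checkonly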
[root@rac ~]# su - oracle
[oracle@rac ~]$ cd $ORACLE_HOME/deinstall
[oracle@rac deinstall]$ ./deinstall
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/
############ ORACLE DEINSTALL & DECONFIG TOOL START ############
######################### CHECK OPERATION START #########################
## [START] Install check configuration ##
Checking for existence of the Oracle home location /u01/app/oracle/11.2.0/db_1
Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database
Oracle Base selected for deinstall is: /u01/app/oracle
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/grid/11.2.0
The following nodes are part of this cluster: rac,rac1,rac2
Checking for sufficient temp space availability on node(s) : 'rac,rac1,rac2'
## [END] Install check configuration ##
Network Configuration check config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2013-08-14_02-22-03-AM.log
Network Configuration check config END
Database Check Configuration START
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check2013-08-14_02-22-09-AM.log
Use comma as separator when specifying list of values as input
Specify the list of database names that are configured in this Oracle home []:
Database Check Configuration END
Enterprise Manager Configuration Assistant START
EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_check2013-08-14_02-31-07-AM.log
Enterprise Manager Configuration Assistant END
Oracle Configuration Manager check START
OCM check log file location : /u01/app/oraInventory/logs//ocm_check1556.log
Oracle Configuration Manager check END
######################### CHECK OPERATION END #########################
####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/grid/11.2.0
The cluster node(s) on which the Oracle home deinstallation will be performed are:rac,rac1,rac2
Oracle Home selected for deinstall is: /u01/app/oracle/11.2.0/db_1
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
No Enterprise Manager configuration to be updated for any database(s)
No Enterprise Manager ASM targets to update
No Enterprise Manager listener targets to migrate
Checking the config status for CCR
rac : Oracle Home exists with CCR directory, but CCR is not configured
rac1 : Oracle Home exists with CCR directory, but CCR is not configured
rac2 : Oracle Home exists with CCR directory, but CCR is not configured
CCR check is finished
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2013-08-14_02-21-45-AM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2013-08-14_02-21-45-AM.err'
######################## CLEAN OPERATION START ########################
Enterprise Manager Configuration Assistant START
EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_clean2013-08-14_02-31-07-AM.log
Updating Enterprise Manager ASM targets (if any)
Updating Enterprise Manager listener targets (if any)
Enterprise Manager Configuration Assistant END
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2013-08-14_02-31-42-AM.log
Network Configuration clean config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2013-08-14_02-31-42-AM.log
De-configuring Listener configuration file on all nodes...
Listener configuration file de-configured successfully.
De-configuring Naming Methods configuration file on all nodes...
Naming Methods configuration file de-configured successfully.
De-configuring Local Net Service Names configuration file on all nodes...
Local Net Service Names configuration file de-configured successfully.
De-configuring Directory Usage configuration file on all nodes...
Directory Usage configuration file de-configured successfully.
De-configuring backup files on all nodes...
Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END
Oracle Configuration Manager clean START
OCM clean log file location : /u01/app/oraInventory/logs//ocm_clean1556.log
Oracle Configuration Manager clean END
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START
Detach Oracle home '/u01/app/oracle/11.2.0/db_1' from the central inventory on the local node : Done
Delete directory '/u01/app/oracle/11.2.0/db_1' on the local node : Done
The Oracle Base directory '/u01/app/oracle' will not be removed on local node. The directory is in use by Oracle Home '/u01/app/grid/11.2.0'.
Detach Oracle home '/u01/app/oracle/11.2.0/db_1' from the central inventory on the remote nodes 'rac2,rac1' : Done
Delete directory '/u01/app/oracle/11.2.0/db_1' on the remote nodes 'rac1,rac2' : Done
The Oracle Base directory '/u01/app/oracle' will not be removed on node 'rac2'. The directory is in use by Oracle Home '/u01/app/grid/11.2.0'.
The Oracle Base directory '/u01/app/oracle' will not be removed on node 'rac1'. The directory is in use by Oracle Home '/u01/app/grid/11.2.0'.
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END
## [START] Oracle install clean ##
Clean install operation removing temporary directory '/tmp/deinstall2013-08-14_02-15-33AM' on node 'rac'
Clean install operation removing temporary directory '/tmp/deinstall2013-08-14_02-15-33AM' on node 'rac1,rac2'
## [END] Oracle install clean ##
######################### CLEAN OPERATION END #########################
####################### CLEAN OPERATION SUMMARY #######################
Cleaning the config for CCR
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
Successfully detached Oracle home '/u01/app/oracle/11.2.0/db_1' from the central inventory on the local node.
Successfully deleted directory '/u01/app/oracle/11.2.0/db_1' on the local node.
Successfully detached Oracle home '/u01/app/oracle/11.2.0/db_1' from the central inventory on the remote nodes 'rac2,rac1'.
Successfully deleted directory '/u01/app/oracle/11.2.0/db_1' on the remote nodes 'rac1,rac2'.
Oracle Universal Installer cleanup was successful.
Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'rac2' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL & DECONFIG TOOL END #############
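Before moving on, you can confirm that the database home is no longer registered by inspecting the central inventory (assuming the default inventory location used in this session):

[oracle@rac ~]$ grep db_1 /u01/app/oraInventory/ContentsXML/inventory.xml

No output means the home has been detached.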
3. As root, log in to the nodes and run /u01/app/grid/11.2.0/crs/install/rootcrs.pl -verbose -deconfig.
Note: with three nodes, run it on the first two only; the last node is handled separately in step 4.
[root@rac ~]# cd /u01/app/grid/11.2.0/crs/install
[root@rac install]# ./rootcrs.pl -verbose -deconfig
Using configuration parameter file: ./crsconfig_params
Network exists: 1/192.168.12.0/255.255.255.0/eth0, type static
VIP exists: /rac-vip/192.168.12.4/192.168.12.0/255.255.255.0/eth0, hosting node rac
VIP exists: /rac1-vip/192.168.12.5/192.168.12.0/255.255.255.0/eth0, hosting node rac1
VIP exists: /rac2-vip/192.168.12.9/192.168.12.0/255.255.255.0/eth0, hosting node rac2
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
PRCR-1065 : Failed to stop resource ora.rac.vip
CRS-2529: Unable to act on 'ora.rac.vip' because that would require stopping or relocating 'ora.LISTENER.lsnr', but the force option was not specified
PRCR-1014 : Failed to stop resource ora.net1.network
PRCR-1065 : Failed to stop resource ora.net1.network
CRS-2529: Unable to act on 'ora.net1.network' because that would require stopping or relocating 'ora.scan1.vip', but the force option was not specified
PRKO-2380 : VIP rac is still running on node: rac
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac'
CRS-2673: Attempting to stop 'ora.CRM.dg' on 'rac'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac'
CRS-2673: Attempting to stop 'ora.FLUSH.dg' on 'rac'
CRS-2673: Attempting to stop 'ora.oc4j' on 'rac'
CRS-2673: Attempting to stop 'ora.cvu' on 'rac'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'rac'
CRS-2677: Stop of 'ora.cvu' on 'rac' succeeded
CRS-2672: Attempting to start 'ora.cvu' on 'rac1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac' succeeded
CRS-2673: Attempting to stop 'ora.rac.vip' on 'rac'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'rac' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'rac'
CRS-2676: Start of 'ora.cvu' on 'rac1' succeeded
CRS-2677: Stop of 'ora.rac.vip' on 'rac' succeeded
CRS-2672: Attempting to start 'ora.rac.vip' on 'rac1'
CRS-2677: Stop of 'ora.scan1.vip' on 'rac' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'rac2'
CRS-2676: Start of 'ora.rac.vip' on 'rac1' succeeded
CRS-2676: Start of 'ora.scan1.vip' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'rac2'
CRS-2677: Stop of 'ora.FLUSH.dg' on 'rac' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'rac' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'rac2' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'rac' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 'rac1'
CRS-2677: Stop of 'ora.CRM.dg' on 'rac' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac'
CRS-2677: Stop of 'ora.asm' on 'rac' succeeded
CRS-2676: Start of 'ora.oc4j' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rac'
CRS-2677: Stop of 'ora.net1.network' on 'rac' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rac'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac'
CRS-2673: Attempting to stop 'ora.asm' on 'rac'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac'
CRS-2677: Stop of 'ora.mdnsd' on 'rac' succeeded
CRS-2677: Stop of 'ora.crf' on 'rac' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac'
CRS-2677: Stop of 'ora.cssd' on 'rac' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac'
CRS-2677: Stop of 'ora.gipcd' on 'rac' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac'
CRS-2677: Stop of 'ora.gpnpd' on 'rac' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node
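Before proceeding to the last node, it is worth confirming that the stack is really down on each deconfigured node. A quick check, assuming the Grid home is still at /u01/app/grid/11.2.0 (CRS-4639 is the expected reply once the stack is stopped):

[root@rac ~]# /u01/app/grid/11.2.0/bin/crsctl check crs
CRS-4639: Could not contact Oracle High Availability Services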
4. On the last node, as root, run "/u01/app/grid/11.2.0/crs/install/rootcrs.pl -verbose -deconfig -force -lastnode". This command clears the OCR and the voting disks.
[root@rac2 install]# ./rootcrs.pl -verbose -deconfig -force -lastnode
Using configuration parameter file: ./crsconfig_params
CRS resources for listeners are still configured
Network exists: 1/192.168.12.0/255.255.255.0/eth0, type static
VIP exists: /rac-vip/192.168.12.4/192.168.12.0/255.255.255.0/eth0, hosting node rac
VIP exists: /rac1-vip/192.168.12.5/192.168.12.0/255.255.255.0/eth0, hosting node rac1
VIP exists: /rac2-vip/192.168.12.9/192.168.12.0/255.255.255.0/eth0, hosting node rac2
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2673: Attempting to stop 'ora.crsd' on 'rac2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.CRM.dg' on 'rac2'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac2'
CRS-2673: Attempting to stop 'ora.FLUSH.dg' on 'rac2'
CRS-2677: Stop of 'ora.DATA.dg' on 'rac2' succeeded
CRS-2677: Stop of 'ora.FLUSH.dg' on 'rac2' succeeded
CRS-2677: Stop of 'ora.CRM.dg' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac2' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac2'
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2677: Stop of 'ora.evmd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'
CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 2, and is terminating
Unable to communicate with the Cluster Synchronization Services daemon.
CRS-4000: Command Delete failed, or completed with errors.
crsctl delete for vds in CRM ... failed
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.crf' on 'rac2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'
CRS-2677: Stop of 'ora.crf' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'
CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node
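The CRS-4402/CRS-4000 messages in the middle of this run most likely mean the clusterware stack was still active on rac1 when -lastnode tried to start CSS in exclusive mode to delete the voting disks, so that deletion failed; the deinstall run in step 5 cleans it up. To avoid the error, confirm the stack is down on every other node first (again assuming the Grid home path):

[root@rac1 ~]# /u01/app/grid/11.2.0/bin/crsctl check crs

If the ASM disks will be reused for a fresh installation, it is also common to wipe the old ASM disk headers at this point. A sketch, assuming the disks are /dev/sdb1 and /dev/sdc1 (hypothetical device names; double-check yours, since dd is destructive):

[root@rac2 ~]# dd if=/dev/zero of=/dev/sdb1 bs=1M count=10
[root@rac2 ~]# dd if=/dev/zero of=/dev/sdc1 bs=1M count=10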
5. On any node, run the deinstall script as the Grid Infrastructure owner:
[root@rac ~]# su - grid
[grid@rac ~]$ cd /u01/app/grid/11.2.0/deinstall
[grid@rac deinstall]$ ./deinstall
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2013-08-14_03-42-20AM/logs/
############ ORACLE DEINSTALL & DECONFIG TOOL START ############
######################### CHECK OPERATION START #########################
## [START] Install check configuration ##
Checking for existence of the Oracle home location /u01/app/grid/11.2.0
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/app/oracle
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home
The following nodes are part of this cluster: rac,rac1,rac2
Checking for sufficient temp space availability on node(s) : 'rac,rac1,rac2'
## [END] Install check configuration ##
Traces log file: /tmp/deinstall2013-08-14_03-42-20AM/logs//crsdc.log
Enter an address or the name of the virtual IP used on node "rac"[rac-vip]
>
The following information can be collected by running "/sbin/ifconfig -a" on node "rac"
Enter the IP netmask of Virtual IP "192.168.12.4" on node "rac"[255.255.255.0]
>
Enter the network interface name on which the virtual IP address "192.168.12.4" is active
>
Enter an address or the name of the virtual IP used on node "rac1"[rac1-vip]
>
The following information can be collected by running "/sbin/ifconfig -a" on node "rac1"
Enter the IP netmask of Virtual IP "192.168.12.5" on node "rac1"[255.255.255.0]
>
Enter the network interface name on which the virtual IP address "192.168.12.5" is active[rac-vip]
>
Enter an address or the name of the virtual IP used on node "rac2"[rac2-vip]
>
The following information can be collected by running "/sbin/ifconfig -a" on node "rac2"
Enter the IP netmask of Virtual IP "192.168.12.9" on node "rac2"[255.255.255.0]
>
Enter the network interface name on which the virtual IP address "192.168.12.9" is active[rac-vip]
>
Enter an address or the name of the virtual IP[]
>
Network Configuration check config START
Network de-configuration trace file location: /tmp/deinstall2013-08-14_03-42-20AM/logs/netdc_check2013-08-14_04-31-25-AM.log
Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER,LISTENER_SCAN1]:
Network Configuration check config END
Asm Check Configuration START
ASM de-configuration trace file location: /tmp/deinstall2013-08-14_03-42-20AM/logs/asmcadc_check2013-08-14_04-31-31-AM.log
ASM configuration was not detected in this Oracle home. Was ASM configured in this Oracle home (y|n) [n]: y
Is OCR/Voting Disk placed in ASM y|n [n]: y
Enter the OCR/Voting Disk diskgroup name []:
Specify the ASM Diagnostic Destination [ ]:
Specify the diskstring []:
Specify the diskgroups that are managed by this ASM instance []:
######################### CHECK OPERATION END #########################
####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is:
The cluster node(s) on which the Oracle home deinstallation will be performed are:rac,rac1,rac2
Oracle Home selected for deinstall is: /u01/app/grid/11.2.0
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following RAC listener(s) will be de-configured: LISTENER,LISTENER_SCAN1
ASM instance will be de-configured from this Oracle home
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2013-08-14_03-42-20AM/logs/deinstall_deconfig2013-08-14_03-43-56-AM.out'
Any error messages from this session will be written to: '/tmp/deinstall2013-08-14_03-42-20AM/logs/deinstall_deconfig2013-08-14_03-43-56-AM.err'
######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /tmp/deinstall2013-08-14_03-42-20AM/logs/asmcadc_clean2013-08-14_04-32-20-AM.log
ASM Clean Configuration START
ASM Clean Configuration END
Network Configuration clean config START
Network de-configuration trace file location: /tmp/deinstall2013-08-14_03-42-20AM/logs/netdc_clean2013-08-14_04-32-26-AM.log
De-configuring RAC listener(s): LISTENER,LISTENER_SCAN1
De-configuring listener: LISTENER
Stopping listener: LISTENER
Listener stopped successfully.
Listener de-configured successfully.
De-configuring listener: LISTENER_SCAN1
Stopping listener: LISTENER_SCAN1
Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.
De-configuring Naming Methods configuration file on all nodes...
Naming Methods configuration file de-configured successfully.
De-configuring Local Net Service Names configuration file on all nodes...
Local Net Service Names configuration file de-configured successfully.
De-configuring Directory Usage configuration file on all nodes...
Directory Usage configuration file de-configured successfully.
De-configuring backup files on all nodes...
Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END
---------------------------------------->
The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes.
Run the following command as the root user or the administrator on node "rac1".
/tmp/deinstall2013-08-14_03-42-20AM/perl/bin/perl -I/tmp/deinstall2013-08-14_03-42-20AM/perl/lib -I/tmp/deinstall2013-08-14_03-42-20AM/crs/install /tmp/deinstall2013-08-14_03-42-20AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2013-08-14_03-42-20AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Run the following command as the root user or the administrator on node "rac2".
/tmp/deinstall2013-08-14_03-42-20AM/perl/bin/perl -I/tmp/deinstall2013-08-14_03-42-20AM/perl/lib -I/tmp/deinstall2013-08-14_03-42-20AM/crs/install /tmp/deinstall2013-08-14_03-42-20AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2013-08-14_03-42-20AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Run the following command as the root user or the administrator on node "rac".
/tmp/deinstall2013-08-14_03-42-20AM/perl/bin/perl -I/tmp/deinstall2013-08-14_03-42-20AM/perl/lib -I/tmp/deinstall2013-08-14_03-42-20AM/crs/install /tmp/deinstall2013-08-14_03-42-20AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2013-08-14_03-42-20AM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode
Press Enter after you finish running the above commands
<----------------------------------------
Run the commands shown above as root on the indicated nodes, then press Enter to continue.
## [START] Oracle install clean ##
Clean install operation removing temporary directory '/tmp/deinstall2013-08-14_03-42-20AM' on node 'rac'
Clean install operation removing temporary directory '/tmp/deinstall2013-08-14_03-42-20AM' on node 'rac1,rac2'
## [END] Oracle install clean ##
######################### CLEAN OPERATION END #########################
####################### CLEAN OPERATION SUMMARY #######################
ASM instance was de-configured successfully from the Oracle home
Following RAC listener(s) were de-configured successfully: LISTENER,LISTENER_SCAN1
Oracle Clusterware is stopped and successfully de-configured on node "rac1"
Oracle Clusterware is stopped and successfully de-configured on node "rac"
Oracle Clusterware is stopped and successfully de-configured on node "rac2"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/u01/app/grid/11.2.0' from the central inventory on the local node.
Successfully deleted directory '/u01/app/grid/11.2.0' on the local node.
Successfully deleted directory '/u01/app/oraInventory' on the local node.
Successfully detached Oracle home '/u01/app/grid/11.2.0' from the central inventory on the remote nodes 'rac2,rac1'.
Successfully deleted directory '/u01/app/grid/11.2.0' on the remote nodes 'rac1,rac2'.
Successfully deleted directory '/u01/app/oraInventory' on the remote nodes 'rac2'.
Successfully deleted directory '/u01/app/oraInventory' on the remote nodes 'rac1'.
Failed to delete directory '/u01/app/oracle' on the remote nodes 'rac1'.
Failed to delete directory '/u01/app/oracle' on the remote nodes 'rac2'.
Oracle Universal Installer cleanup completed with errors.
Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'rac,rac2,rac1' at the end of the session.
Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'rac,rac1,rac2' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL & DECONFIG TOOL END #############
After the run completes, deinstall reminds you to execute 'rm -rf /etc/oraInst.loc' and 'rm -rf /opt/ORCLfmap' as root on the listed nodes.
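If passwordless root SSH is set up between the nodes (an assumption; it often is not), the leftovers can be cleaned from one node in a single loop; adjust the node list to your environment:

[root@rac ~]# for node in rac rac1 rac2; do ssh $node 'rm -rf /etc/oraInst.loc /opt/ORCLfmap'; done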