Removing a Node from Oracle 11.2 RAC
Hardware and software environment: same as in the previous article.
Before any major operation on CRS-level data structures, always take a manual OCR backup first:
[root@vastdata4 ~]# ocrconfig -manualbackup
vastdata4 2019/02/25 00:04:20 /u01/app/11.2.0/grid/cdata/vastdat-cluster/backup_20190225_000420.ocr
vastdata4 2019/02/25 00:00:08 /u01/app/11.2.0/grid/cdata/vastdat-cluster/backup_20190225_000008.ocr
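Before proceeding, it is worth confirming that the manual backup actually landed on disk. A minimal guard sketch (the function name is made up; the path in the comment is this environment's backup location):

```shell
# Refuse to continue if an OCR backup file is missing or empty.
check_ocr_backup() {
  if [ -s "$1" ]; then
    echo "backup ok: $1"
  else
    echo "backup missing or empty: $1" >&2
    return 1
  fi
}

# e.g. (path taken from the ocrconfig output above):
# check_ocr_backup /u01/app/11.2.0/grid/cdata/vastdat-cluster/backup_20190225_000420.ocr
```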
Removing a RAC node breaks down into three simple steps:
1. Delete the instance;
2. Remove the DB software;
3. Remove the GI software.
1. Delete the instance
1.1 Shut down the target instance to be removed
[root@vastdata4 ~]# srvctl status database -d PROD -help
Displays the current state of the database.
Usage: srvctl status database -d <db_unique_name> [-f] [-v]
-d <db_unique_name> Unique name for the database
-f Include disabled applications
-v Verbose output
 -h Print usage
[root@vastdata4 ~]# srvctl status database -d PROD -f
Instance PROD1 is running on node vastdata3
Instance PROD2 is running on node vastdata4
[root@vastdata4 ~]# srvctl stop instance -d PROD -n vastdata3
[root@vastdata4 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.prod.db
1 OFFLINE OFFLINE Instance Shutdown
      2        ONLINE  ONLINE       vastdata4                Open
[root@vastdata4 ~]# srvctl status database -d PROD -f
Instance PROD1 is not running on node vastdata3
Instance PROD2 is running on node vastdata4
1.2 Delete the instance
[oracle@vastdata4 ~]$ dbca -silent -deleteInstance -nodeList vastdata3.us.oracle.com -gdbName PROD -instanceName PROD1 -sysDBAUserName sys -sysDBAPassword oracle
Deleting instance
1% complete
2% complete
6% complete
13% complete
20% complete
26% complete
33% complete
40% complete
46% complete
53% complete
60% complete
66% complete
Completing instance management.
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/PROD.log" for further details.
1.3 Check again
[root@vastdata4 ~]# srvctl status database -d PROD -f
Instance PROD2 is running on node vastdata4
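If this check is scripted, the 11.2 `srvctl status database -f` output shown above can be parsed for running instances. A small sketch (the helper name is made up; it assumes the exact message format printed above):

```shell
# Print the names of instances that srvctl reports as running.
running_instances() {
  awk '/is running on node/ {print $2}'
}

# e.g.: srvctl status database -d PROD -f | running_instances
printf 'Instance PROD1 is not running on node vastdata3\nInstance PROD2 is running on node vastdata4\n' | running_instances
# → PROD2
```

Note that the pattern `is running on node` does not match the "is not running" lines, so only live instances are listed.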
2. Remove the DB software
2.1 Update the inventory
[oracle@vastdata3 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=vastdata3" -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 6143 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
2.2 Deinstall the DB software
[oracle@vastdata3 ~]$ $ORACLE_HOME/deinstall/deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################### CHECK OPERATION START #########################

## [START] Install check configuration ##

Checking for existence of the Oracle home location /u01/app/oracle/product/11.2.0/db_1
Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database
Oracle Base selected for deinstall is: /u01/app/oracle
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/11.2.0/grid
The following nodes are part of this cluster: vastdata3
Checking for sufficient temp space availability on node(s) : 'vastdata3'
## [END] Install check configuration ##

Network Configuration check config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2019-02-25_12-27-57-AM.log
Network Configuration check config END

Database Check Configuration START
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check2019-02-25_12-27-59-AM.log
Database Check Configuration END

Enterprise Manager Configuration Assistant START
EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_check2019-02-25_12-28-02-AM.log
Enterprise Manager Configuration Assistant END
Oracle Configuration Manager check START
OCM check log file location : /u01/app/oraInventory/logs//ocm_check2332.log
Oracle Configuration Manager check END

######################### CHECK OPERATION END #########################
####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/11.2.0/grid
The cluster node(s) on which the Oracle home deinstallation will be performed are:vastdata3
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'vastdata3', and the global configuration will be removed.
Oracle Home selected for deinstall is: /u01/app/oracle/product/11.2.0/db_1
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
The option -local will not modify any database configuration for this Oracle home.
No Enterprise Manager configuration to be updated for any database(s)
No Enterprise Manager ASM targets to update
No Enterprise Manager listener targets to migrate
Checking the config status for CCR
Oracle Home exists with CCR directory, but CCR is not configured
CCR check is finished
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2019-02-25_12-27-50-AM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2019-02-25_12-27-50-AM.err'
######################## CLEAN OPERATION START ########################

Enterprise Manager Configuration Assistant START

EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_clean2019-02-25_12-28-02-AM.log
Updating Enterprise Manager ASM targets (if any)
Updating Enterprise Manager listener targets (if any)
Enterprise Manager Configuration Assistant END
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2019-02-25_12-29-25-AM.log

Network Configuration clean config START

Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2019-02-25_12-29-25-AM.log

De-configuring Local Net Service Names configuration file...
Local Net Service Names configuration file de-configured successfully.

De-configuring backup files...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END

Oracle Configuration Manager clean START
OCM clean log file location : /u01/app/oraInventory/logs//ocm_clean2332.log
Oracle Configuration Manager clean END
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/u01/app/oracle/product/11.2.0/db_1' from the central inventory on the local node : Done

Delete directory '/u01/app/oracle/product/11.2.0/db_1' on the local node : Done

Failed to delete the directory '/u01/app/oracle'. The directory is in use.
Delete directory '/u01/app/oracle' on the local node : Failed <<<<

Oracle Universal Installer cleanup completed with errors.

Oracle Universal Installer clean END

## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2019-02-25_00-27-29AM' on node 'vastdata3'

## [END] Oracle install clean ##
######################### CLEAN OPERATION END #########################
####################### CLEAN OPERATION SUMMARY #######################
Cleaning the config for CCR
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
Successfully detached Oracle home '/u01/app/oracle/product/11.2.0/db_1' from the central inventory on the local node.
Successfully deleted directory '/u01/app/oracle/product/11.2.0/db_1' on the local node.
Failed to delete directory '/u01/app/oracle' on the local node.
Oracle Universal Installer cleanup completed with errors.

Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL & DECONFIG TOOL END #############
2.3 Update the inventory
[oracle@vastdata4 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=vastdata4" -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 6143 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
2.4 If the deinstall did not clean up completely, remove the leftovers manually:
[oracle@vastdata3 ~]$ rm -rf $ORACLE_HOME/*
3. Remove the GI software
3.1 Check the status of the node to be removed
[grid@vastdata4 ~]$ olsnodes -s -n -t
vastdata3 1 Active Unpinned
vastdata4 2 Active Unpinned
3.2 If the node is pinned, unpin it (both nodes here are already Unpinned, so the command is a no-op)
[root@vastdata4 ~]# crsctl unpin css -n vastdata3
CRS-4667: Node vastdata3 successfully unpinned.
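To run the unpin only when actually needed, the flag column of `olsnodes -s -n -t` can be checked first. A sketch (the helper name is made up; it assumes the `Pinned`/`Unpinned` flag is the last column, as in the output above):

```shell
# Exit 0 if the named node is reported as Pinned by olsnodes -s -n -t.
node_is_pinned() {
  awk -v n="$1" '$1 == n && $NF == "Pinned" {found=1} END {exit !found}'
}

# e.g.: olsnodes -s -n -t | node_is_pinned vastdata3 && crsctl unpin css -n vastdata3
```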
3.3 Stop and deconfigure the clusterware (HAS) stack on the node being removed
[root@vastdata3 ~]# export ORACLE_HOME=/u01/app/11.2.0/grid
[root@vastdata3 ~]# cd $ORACLE_HOME/crs/install
[root@vastdata3 install]# perl rootcrs.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
Network exists: 1/192.168.0.0/255.255.255.0/eth0, type static
VIP exists: /vastdata3-vip/192.168.0.22/192.168.0.0/255.255.255.0/eth0, hosting node vastdata3
VIP exists: /vastdata4-vip/192.168.0.23/192.168.0.0/255.255.255.0/eth0, hosting node vastdata4
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'vastdata3'
CRS-2677: Stop of 'ora.registry.acfs' on 'vastdata3' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'vastdata3'
CRS-2673: Attempting to stop 'ora.crsd' on 'vastdata3'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'vastdata3'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'vastdata3'
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'vastdata3'
CRS-2677: Stop of 'ora.FRA.dg' on 'vastdata3' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'vastdata3' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'vastdata3'
CRS-2677: Stop of 'ora.asm' on 'vastdata3' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'vastdata3' has completed
CRS-2677: Stop of 'ora.crsd' on 'vastdata3' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'vastdata3'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'vastdata3'
CRS-2673: Attempting to stop 'ora.crf' on 'vastdata3'
CRS-2673: Attempting to stop 'ora.ctssd' on 'vastdata3'
CRS-2673: Attempting to stop 'ora.evmd' on 'vastdata3'
CRS-2673: Attempting to stop 'ora.asm' on 'vastdata3'
CRS-2677: Stop of 'ora.mdnsd' on 'vastdata3' succeeded
CRS-2677: Stop of 'ora.crf' on 'vastdata3' succeeded
CRS-2677: Stop of 'ora.evmd' on 'vastdata3' succeeded
CRS-2677: Stop of 'ora.asm' on 'vastdata3' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'vastdata3'
CRS-2677: Stop of 'ora.ctssd' on 'vastdata3' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'vastdata3' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'vastdata3' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'vastdata3'
CRS-2677: Stop of 'ora.cssd' on 'vastdata3' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'vastdata3'
CRS-2677: Stop of 'ora.gipcd' on 'vastdata3' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'vastdata3'
CRS-2677: Stop of 'ora.gpnpd' on 'vastdata3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'vastdata3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Removing Trace File Analyzer
Successfully deconfigured Oracle clusterware stack on this node
3.4 Check cluster resource status
[root@vastdata4 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
ONLINE ONLINE vastdata4
ora.FRA.dg
ONLINE ONLINE vastdata4
ora.LISTENER.lsnr
ONLINE ONLINE vastdata4
ora.asm
ONLINE ONLINE vastdata4 Started
ora.gsd
OFFLINE OFFLINE vastdata4
ora.net1.network
ONLINE ONLINE vastdata4
ora.ons
ONLINE ONLINE vastdata4
ora.registry.acfs
ONLINE ONLINE vastdata4
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE vastdata4
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE vastdata4
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE vastdata4
ora.cvu
1 ONLINE ONLINE vastdata4
ora.oc4j
1 ONLINE ONLINE vastdata4
ora.prod.db
2 ONLINE ONLINE vastdata4 Open
ora.scan1.vip
1 ONLINE ONLINE vastdata4
ora.scan2.vip
1 ONLINE ONLINE vastdata4
ora.scan3.vip
1 ONLINE ONLINE vastdata4
ora.vastdata4.vip
1 ONLINE ONLINE vastdata4
3.5 Check the status of all nodes in the cluster
[root@vastdata4 ~]# olsnodes -s -n -t
vastdata3 1 Inactive Unpinned
vastdata4 2 Active Unpinned
3.6 Update the inventory
[grid@vastdata3 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=vastdata3" CRS=TRUE -silent -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 6143 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
3.7 Deinstall the GI software
[grid@vastdata3 ~]$ $ORACLE_HOME/deinstall/deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2019-02-25_00-43-06AM/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################### CHECK OPERATION START #########################

## [START] Install check configuration ##

Checking for existence of the Oracle home location /u01/app/11.2.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home
The following nodes are part of this cluster: vastdata3
Checking for sufficient temp space availability on node(s) : 'vastdata3'

## [END] Install check configuration ##

Traces log file: /tmp/deinstall2019-02-25_00-43-06AM/logs//crsdc.log

Enter an address or the name of the virtual IP used on node "vastdata3"[vastdata3-vip]
 >
The following information can be collected by running "/sbin/ifconfig -a" on node "vastdata3"
Enter the IP netmask of Virtual IP "192.168.0.22" on node "vastdata3"[255.255.255.0]
 >
Enter the network interface name on which the virtual IP address "192.168.0.22" is active
 >
Enter an address or the name of the virtual IP[]
 >

Network Configuration check config START

Network de-configuration trace file location: /tmp/deinstall2019-02-25_00-43-06AM/logs/netdc_check2019-02-25_12-44-14-AM.log

Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1]:

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: /tmp/deinstall2019-02-25_00-43-06AM/logs/asmcadc_check2019-02-25_12-44-19-AM.log

######################### CHECK OPERATION END #########################

####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is:
The cluster node(s) on which the Oracle home deinstallation will be performed are:vastdata3
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'vastdata3', and the global configuration will be removed.
Oracle Home selected for deinstall is: /u01/app/11.2.0/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following RAC listener(s) will be de-configured: LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1
Option -local will not modify any ASM configuration.
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2019-02-25_00-43-06AM/logs/deinstall_deconfig2019-02-25_12-43-22-AM.out'
Any error messages from this session will be written to: '/tmp/deinstall2019-02-25_00-43-06AM/logs/deinstall_deconfig2019-02-25_12-43-22-AM.err'

######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /tmp/deinstall2019-02-25_00-43-06AM/logs/asmcadc_clean2019-02-25_12-44-27-AM.log
ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location: /tmp/deinstall2019-02-25_00-43-06AM/logs/netdc_clean2019-02-25_12-44-27-AM.log

De-configuring RAC listener(s): LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1

De-configuring listener: LISTENER
Stopping listener on node "vastdata3": LISTENER
Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.

De-configuring listener: LISTENER_SCAN3
Stopping listener on node "vastdata3": LISTENER_SCAN3
Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.

De-configuring listener: LISTENER_SCAN2
Stopping listener on node "vastdata3": LISTENER_SCAN2
Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.

De-configuring listener: LISTENER_SCAN1
Stopping listener on node "vastdata3": LISTENER_SCAN1
Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.

De-configuring Naming Methods configuration file...
Naming Methods configuration file de-configured successfully.

De-configuring backup files...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END
---------------------------------------->

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes.

Run the following command as the root user or the administrator on node "vastdata3".

/tmp/deinstall2019-02-25_00-43-06AM/perl/bin/perl -I/tmp/deinstall2019-02-25_00-43-06AM/perl/lib -I/tmp/deinstall2019-02-25_00-43-06AM/crs/install /tmp/deinstall2019-02-25_00-43-06AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2019-02-25_00-43-06AM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Press Enter after you finish running the above commands

<----------------------------------------

[root@vastdata3 ~]# /tmp/deinstall2019-02-25_00-43-06AM/perl/bin/perl -I/tmp/deinstall2019-02-25_00-43-06AM/perl/lib -I/tmp/deinstall2019-02-25_00-43-06AM/crs/install /tmp/deinstall2019-02-25_00-43-06AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2019-02-25_00-43-06AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Using configuration parameter file: /tmp/deinstall2019-02-25_00-43-06AM/response/deinstall_Ora11g_gridinfrahome1.rsp
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Modify failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Delete failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
################################################################
# You must kill processes or reboot the system to properly #
# cleanup the processes started by Oracle clusterware #
################################################################
ACFS-9313: No ADVM/ACFS installation detected.
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node
Remove the directory: /tmp/deinstall2019-02-25_00-43-06AM on node:
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node : Done

Delete directory '/u01/app/11.2.0/grid' on the local node : Done

Delete directory '/u01/app/oraInventory' on the local node : Done

Delete directory '/u01/app/grid' on the local node : Done

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END

## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2019-02-25_00-43-06AM' on node 'vastdata3'

## [END] Oracle install clean ##

######################### CLEAN OPERATION END #########################

####################### CLEAN OPERATION SUMMARY #######################
Following RAC listener(s) were de-configured successfully: LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1
Oracle Clusterware is stopped and successfully de-configured on node "vastdata3"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node.
Successfully deleted directory '/u01/app/11.2.0/grid' on the local node.
Successfully deleted directory '/u01/app/oraInventory' on the local node.
Successfully deleted directory '/u01/app/grid' on the local node.
Oracle Universal Installer cleanup was successful.

Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'vastdata3' at the end of the session.
Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'vastdata3' at the end of the session.
Run 'rm -rf /etc/oratab' as root on node(s) 'vastdata3' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.

#######################################################################

############# ORACLE DEINSTALL & DECONFIG TOOL END #############
3.8 Update the inventory on the remaining node
[grid@vastdata4 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=vastdata4" CRS=TRUE -silent
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 6143 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
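The `CLUSTER_NODES` value on the surviving side must list every remaining node. With two nodes that is trivial, but in a larger cluster it can be derived from `olsnodes` output. A sketch (the helper name and the extra node `vastdata5` are made up for illustration):

```shell
# Join all node names except the deleted one into a comma-separated list.
remaining_nodes() {
  awk -v del="$1" '$1 != del {s = s (s ? "," : "") $1} END {print s}'
}

# e.g.: CLUSTER_NODES=$(olsnodes | remaining_nodes vastdata3)
printf 'vastdata3\nvastdata4\nvastdata5\n' | remaining_nodes vastdata3
# → vastdata4,vastdata5
```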
3.9 Check the status of all nodes in the cluster
[grid@vastdata4 ~]$ olsnodes -s
vastdata3 Inactive
vastdata4 Active
3.10 If the deinstall did not clean up completely, remove the leftovers manually:
ps -ef | grep '[o]ra'     | awk '{print $2}' | xargs -r kill -9
ps -ef | grep '[g]rid'    | awk '{print $2}' | xargs -r kill -9
ps -ef | grep '[a]sm'     | awk '{print $2}' | xargs -r kill -9
ps -ef | grep '[s]torage' | awk '{print $2}' | xargs -r kill -9
ps -ef | grep '[o]hasd'   | awk '{print $2}' | xargs -r kill -9
ps -ef | grep '[g]rid'
ps -ef | grep '[o]ra'
ps -ef | grep '[a]sm'
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
cd $ORACLE_HOME
rm -rf *
cd $ORACLE_BASE
rm -rf *
rm -rf /etc/rc5.d/S96ohasd
rm -rf /etc/rc3.d/S96ohasd
rm -rf /etc/rc.d/init.d/ohasd
rm -rf /etc/oracle
rm -rf /etc/ora*
rm -rf /etc/oratab
rm -rf /etc/oraInst.loc
rm -rf /opt/ORCLfmap/
rm -rf /u01/app/oraInventory
rm -rf /usr/local/bin/dbhome
rm -rf /usr/local/bin/oraenv
rm -rf /usr/local/bin/coraenv
rm -rf /tmp/*
rm -rf /tmp/.oracle
rm -rf /var/tmp/.oracle
rm -rf /home/grid/*
rm -rf /home/oracle/*
rm -rf /etc/init/oracle*
rm -rf /etc/init.d/ora
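A pitfall when killing processes by pattern, as in the cleanup above: a plain `grep ora` also matches its own grep process (and anything else containing "ora"), so the pipeline may try to kill itself. Wrapping one character in brackets (`grep '[o]ra'`) avoids this, because the literal string `[o]ra` on the grep command line does not match the pattern `[o]ra`. A demonstration on canned ps-style input:

```shell
# What ps shows when you run "grep ora" vs "grep [o]ra":
plain='root  456  grep ora
oracle 123  ora_pmon_PROD1'
bracket='root  456  grep [o]ra
oracle 123  ora_pmon_PROD1'

printf '%s\n' "$plain"   | grep -c 'ora'     # → 2  (grep matches its own line)
printf '%s\n' "$bracket" | grep -c '[o]ra'   # → 1  (only the real Oracle process)
```

Adding `-r` to `xargs` (GNU) also prevents `kill -9` from running with no arguments when nothing matches.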
3.11 Delete the node from the cluster
[root@vastdata4 ~]# /u01/app/11.2.0/grid/bin/crsctl delete node -n vastdata3
CRS-4661: Node vastdata3 successfully deleted.
3.12 Check the status of all nodes in the cluster
[root@vastdata4 ~]# olsnodes -s
vastdata4 Active
3.13 Verify that the node was removed successfully
This step is very important: it determines whether a node can later be added back into the cluster smoothly.
[grid@vastdata4 ~]$ cluvfy stage -post nodedel -n vastdata3 -verbose

Performing post-checks for node removal

Checking CRS integrity...

Clusterware version consistency passed
The Oracle Clusterware is healthy on node "vastdata4"

CRS integrity check passed

Result:
Node removal check passed

Post-check for node removal was successful.
3.14 Back up the OCR
[root@vastdata4 ~]# ocrconfig -manualbackup
vastdata4 2019/02/25 00:53:53 /u01/app/11.2.0/grid/cdata/vastdat-cluster/backup_20190225_005353.ocr
vastdata4 2019/02/25 00:04:20 /u01/app/11.2.0/grid/cdata/vastdat-cluster/backup_20190225_000420.ocr
vastdata4 2019/02/25 00:00:08 /u01/app/11.2.0/grid/cdata/vastdat-cluster/backup_20190225_000008.ocr
At this point, removal of the node from the Oracle RAC cluster is complete.
Please credit the source when reposting.