A self-inflicted case:

Scenario: I wiped the OCR disk group with dd, and then wiped the DBFILE disk group as well, so the RAC stack would not start. The OCR had previously been moved from the DATA disk group to the ABC disk group. Fortunately, backups were available.

  

    Re-create the OCR disk group

[root@rhel1 ~]# /u01/app/11.2.0/grid/crs/install/roothas.pl -deconfig -force -verbose

defined(@array) is deprecated at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 1702.

(Maybe you should just omit the defined()?)

defined(@array) is deprecated at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 1702.

(Maybe you should just omit the defined()?)

defined(@array) is deprecated at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 1780.

(Maybe you should just omit the defined()?)

2019-05-22 10:31:57: Checking for super user privileges

2019-05-22 10:31:57: User has super user privileges

2019-05-22 10:31:57: Parsing the host name

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

CRS-4047: No Oracle Clusterware components configured.

CRS-4000: Command Stop failed, or completed with errors.

CRS-4047: No Oracle Clusterware components configured.

CRS-4000: Command Delete failed, or completed with errors.

CRS-4047: No Oracle Clusterware components configured.

CRS-4000: Command Stop failed, or completed with errors.

You must kill ohasd processes or reboot the system to properly

cleanup the processes started by Oracle clusterware

ADVM/ACFS is not supported on redhat-release-server-7.1-1.el7.x86_64

ACFS-9201: Not Supported

Either /etc/oracle/olr.loc does not exist or is not readable

Make sure the file exists and it has read and execute access

/bin/dd: failed to open ‘’: No such file or directory

Successfully deconfigured Oracle Restart stack

[root@rhel1 ~]# less /etc/oracle/olr.loc

/etc/oracle/olr.loc: No such file or directory

[root@rhel1 ~]# cd /u01/app/11.2.0/grid/

[root@rhel1 grid]# ./root.sh

Running Oracle 11g root.sh script...

The following environment variables are set as:

ORACLE_OWNER= grid

ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]:

The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]:

The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]:

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root.sh script.

Now product-specific root actions will be performed.

2019-05-22 10:33:37: Parsing the host name

2019-05-22 10:33:37: Checking for super user privileges

2019-05-22 10:33:37: User has super user privileges

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params   ---> this is the parameter file that root.sh reads

LOCAL ADD MODE

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

Adding daemon to inittab   ----> (on 11.2.0.1 you need to run: dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1)

CRS-4123: Oracle High Availability Services has been started.

ohasd is starting

ADVM/ACFS is not supported on redhat-release-server-7.1-1.el7.x86_64

CRS-2672: Attempting to start 'ora.gipcd' on 'rhel1'

CRS-2672: Attempting to start 'ora.mdnsd' on 'rhel1'

CRS-2676: Start of 'ora.gipcd' on 'rhel1' succeeded

CRS-2676: Start of 'ora.mdnsd' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 'rhel1'

CRS-2676: Start of 'ora.gpnpd' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rhel1'

CRS-2676: Start of 'ora.cssdmonitor' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'rhel1'

CRS-2672: Attempting to start 'ora.diskmon' on 'rhel1'

CRS-2676: Start of 'ora.diskmon' on 'rhel1' succeeded

CRS-2676: Start of 'ora.cssd' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.ctssd' on 'rhel1'

CRS-2676: Start of 'ora.ctssd' on 'rhel1' succeeded

Disk Group DATA already exists. Cannot be created again   ---> this disk group was not wiped when it was replaced earlier, so ASM configuration fails here.

Configuration of ASM failed, see logs for details

Did not succssfully configure and start ASM

CRS-2500: Cannot stop resource 'ora.crsd' as it is not running

CRS-4000: Command Stop failed, or completed with errors.

Command return code of 1 (256) from command: /u01/app/11.2.0/grid/bin/crsctl stop resource ora.crsd -init

Stop of resource "ora.crsd -init" failed

Failed to stop CRSD

CRS-2500: Cannot stop resource 'ora.asm' as it is not running

CRS-4000: Command Stop failed, or completed with errors.

Command return code of 1 (256) from command: /u01/app/11.2.0/grid/bin/crsctl stop resource ora.asm -init

Stop of resource "ora.asm -init" failed

Failed to stop ASM

CRS-2673: Attempting to stop 'ora.ctssd' on 'rhel1'

CRS-2677: Stop of 'ora.ctssd' on 'rhel1' succeeded

CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rhel1'

CRS-2677: Stop of 'ora.cssdmonitor' on 'rhel1' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 'rhel1'

CRS-2677: Stop of 'ora.cssd' on 'rhel1' succeeded

CRS-2673: Attempting to stop 'ora.gpnpd' on 'rhel1'

CRS-2677: Stop of 'ora.gpnpd' on 'rhel1' succeeded

CRS-2679: Attempting to clean 'ora.gpnpd' on 'rhel1'

CRS-2681: Clean of 'ora.gpnpd' on 'rhel1' succeeded

CRS-2673: Attempting to stop 'ora.gipcd' on 'rhel1'

CRS-2677: Stop of 'ora.gipcd' on 'rhel1' succeeded

CRS-2673: Attempting to stop 'ora.mdnsd' on 'rhel1'

CRS-2677: Stop of 'ora.mdnsd' on 'rhel1' succeeded

Initial cluster configuration failed.  See /u01/app/11.2.0/grid/cfgtoollogs/crsconfig/rootcrs_rhel1.log for details

[root@rhel1 grid]# /u01/app/11.2.0/grid/crs/install/roothas.pl -deconfig -force -verbose

defined(@array) is deprecated at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 1702.

(Maybe you should just omit the defined()?)

defined(@array) is deprecated at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 1702.

(Maybe you should just omit the defined()?)

defined(@array) is deprecated at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 1780.

(Maybe you should just omit the defined()?)

2019-05-22 10:38:55: Checking for super user privileges

2019-05-22 10:38:55: User has super user privileges

2019-05-22 10:38:55: Parsing the host name

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

CRS-4535: Cannot communicate with Cluster Ready Services

CRS-4000: Command Stop failed, or completed with errors.

CRS-4535: Cannot communicate with Cluster Ready Services

CRS-4000: Command Delete failed, or completed with errors.

CRS-4133: Oracle High Availability Services has been stopped.

ADVM/ACFS is not supported on redhat-release-server-7.1-1.el7.x86_64

ACFS-9201: Not Supported

Successfully deconfigured Oracle Restart stack

[root@rhel1 ~]# vi  /u01/app/11.2.0/grid/crs/install/crsconfig_params  -----> edit the parameter file used by root.sh; here I changed the ASM disk group settings to:

(

ASM_DISK_GROUP=ABC

ASM_DISCOVERY_STRING=

ASM_DISKS=ORCL:AAA,ORCL:BBB,ORCL:CCC

)
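
Before rerunning root.sh it may be worth confirming that ASMLib can actually see the candidate disks listed above. A quick check that is not in the original session (assuming the ASMLib tools are installed at their usual location):

[root@rhel1 ~]# /usr/sbin/oracleasm scandisks
[root@rhel1 ~]# /usr/sbin/oracleasm listdisks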

[root@rhel1 grid]# ./root.sh

Running Oracle 11g root.sh script...

The following environment variables are set as:

ORACLE_OWNER= grid

ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]:

The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]:

The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]:

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root.sh script.

Now product-specific root actions will be performed.

2019-05-22 10:40:41: Parsing the host name

2019-05-22 10:40:41: Checking for super user privileges

2019-05-22 10:40:41: User has super user privileges

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

LOCAL ADD MODE

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

Adding daemon to inittab

CRS-4123: Oracle High Availability Services has been started.

ohasd is starting

ADVM/ACFS is not supported on redhat-release-server-7.1-1.el7.x86_64

CRS-2672: Attempting to start 'ora.gipcd' on 'rhel1'

CRS-2672: Attempting to start 'ora.mdnsd' on 'rhel1'

CRS-2676: Start of 'ora.gipcd' on 'rhel1' succeeded

CRS-2676: Start of 'ora.mdnsd' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 'rhel1'

CRS-2676: Start of 'ora.gpnpd' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rhel1'

CRS-2676: Start of 'ora.cssdmonitor' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'rhel1'

CRS-2672: Attempting to start 'ora.diskmon' on 'rhel1'

CRS-2676: Start of 'ora.diskmon' on 'rhel1' succeeded

CRS-2676: Start of 'ora.cssd' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.ctssd' on 'rhel1'

CRS-2676: Start of 'ora.ctssd' on 'rhel1' succeeded

ASM created and started successfully.

DiskGroup ABC created successfully.

clscfg: -install mode specified

Successfully accumulated necessary OCR keys.

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

CRS-2672: Attempting to start 'ora.crsd' on 'rhel1'

CRS-2676: Start of 'ora.crsd' on 'rhel1' succeeded

Successful addition of voting disk 5967fa6e88834f56bf236d0128957259.

Successful addition of voting disk 49c2405c6c6e4f96bf7a417022905afb.

Successful addition of voting disk 9f7ea2d3d0fc4f94bf9e7fd81a2228d2.

Successfully replaced voting disk group with +ABC.

CRS-4266: Voting file(s) successfully replaced

##  STATE    File Universal Id                File Name Disk group

--  -----    -----------------                --------- ---------

1. ONLINE   5967fa6e88834f56bf236d0128957259 (ORCL:AAA) [ABC]

2. ONLINE   49c2405c6c6e4f96bf7a417022905afb (ORCL:BBB) [ABC]

3. ONLINE   9f7ea2d3d0fc4f94bf9e7fd81a2228d2 (ORCL:CCC) [ABC]

Located 3 voting disk(s).

CRS-2673: Attempting to stop 'ora.crsd' on 'rhel1'

CRS-2677: Stop of 'ora.crsd' on 'rhel1' succeeded

CRS-2679: Attempting to clean 'ora.crsd' on 'rhel1'

CRS-2681: Clean of 'ora.crsd' on 'rhel1' succeeded

CRS-2673: Attempting to stop 'ora.asm' on 'rhel1'

CRS-2677: Stop of 'ora.asm' on 'rhel1' succeeded

CRS-2673: Attempting to stop 'ora.ctssd' on 'rhel1'

CRS-2677: Stop of 'ora.ctssd' on 'rhel1' succeeded

CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rhel1'

CRS-2677: Stop of 'ora.cssdmonitor' on 'rhel1' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 'rhel1'

CRS-2677: Stop of 'ora.cssd' on 'rhel1' succeeded

CRS-2673: Attempting to stop 'ora.gpnpd' on 'rhel1'

CRS-2677: Stop of 'ora.gpnpd' on 'rhel1' succeeded

CRS-2679: Attempting to clean 'ora.gpnpd' on 'rhel1'

CRS-2681: Clean of 'ora.gpnpd' on 'rhel1' succeeded

CRS-2673: Attempting to stop 'ora.gipcd' on 'rhel1'

CRS-2677: Stop of 'ora.gipcd' on 'rhel1' succeeded

CRS-2673: Attempting to stop 'ora.mdnsd' on 'rhel1'

CRS-2677: Stop of 'ora.mdnsd' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.mdnsd' on 'rhel1'

CRS-2676: Start of 'ora.mdnsd' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.gipcd' on 'rhel1'

CRS-2676: Start of 'ora.gipcd' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 'rhel1'

CRS-2676: Start of 'ora.gpnpd' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rhel1'

CRS-2676: Start of 'ora.cssdmonitor' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'rhel1'

CRS-2672: Attempting to start 'ora.diskmon' on 'rhel1'

CRS-2676: Start of 'ora.diskmon' on 'rhel1' succeeded

CRS-2676: Start of 'ora.cssd' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.ctssd' on 'rhel1'

CRS-2676: Start of 'ora.ctssd' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.asm' on 'rhel1'

CRS-2676: Start of 'ora.asm' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.crsd' on 'rhel1'

CRS-2676: Start of 'ora.crsd' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.evmd' on 'rhel1'

CRS-2676: Start of 'ora.evmd' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.asm' on 'rhel1'

CRS-2676: Start of 'ora.asm' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.ABC.dg' on 'rhel1'

CRS-2676: Start of 'ora.ABC.dg' on 'rhel1' succeeded

rhel1     2019/05/22 10:46:57     /u01/app/11.2.0/grid/cdata/rhel1/backup_20190522_104657.olr

Preparing packages...

cvuqdisk-1.0.7-1.x86_64

Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Updating inventory properties for clusterware

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 3054 MB    Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /u01/app/oraInventory

'UpdateNodeList' was successful.

[root@rhel1 grid]#

Perform the same steps on node 2.

However, when checking the ASM disk groups afterwards, the previous DBFILE and FRA disk groups were no longer there.

[grid@rhel2 ~]$ asmcmd

ASMCMD> ls

ABC/

ASMCMD> exit
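
As an optional sanity check that is not in the original session, the state of the new disk group can be confirmed on both nodes before moving on, for example:

[grid@rhel2 ~]$ asmcmd lsdg
[grid@rhel1 ~]$ crsctl stat res ora.ABC.dg -t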

The preceding operations were carried out on both nodes.

    Restore the OCR

[root@rhel2 ~]#  /u01/app/11.2.0/grid/bin/crsctl stop crs

[root@rhel1 grid]# ll /u1/ocrdisk/   ----> list the automatic OCR backups; the backup location /u1/ocrdisk/ is one I had configured earlier

backup00.ocr                day.ocr

backup01.ocr                ocr

backup02.ocr                week_.ocr

backup_20190507_111618.ocr  week.ocr

backup_20190521_161536.ocr

[root@rhel1 grid]# ./bin/ocrcheck

Status of Oracle Cluster Registry is as follows :

Version                  :          3

Total space (kbytes)     :     262120

Used space (kbytes)      :       2032

Available space (kbytes) :     260088

ID                       :  849232275

Device/File Name         :       +ABC

Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

[root@rhel1 grid]# ./bin/ocrconfig -showbackup

PROT-24: Auto backups for the Oracle Cluster Registry are not available

PROT-25: Manual backups for the Oracle Cluster Registry are not available
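
PROT-24/25 refer to the freshly re-created OCR, which has no backups of its own yet. The copies under /u1/ocrdisk come from the original cluster, whose backup location had been moved away from the default, roughly like this (a sketch of how that location is set, not part of this recovery):

[root@rhel1 grid]# ./bin/ocrconfig -backuploc /u1/ocrdisk
[root@rhel1 grid]# ./bin/ocrconfig -manualbackup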

[root@rhel1 grid]# ./bin/crsctl start crs -excl

CRS-4123: Oracle High Availability Services has been started.

CRS-2672: Attempting to start 'ora.gipcd' on 'rhel1'

CRS-2672: Attempting to start 'ora.mdnsd' on 'rhel1'

CRS-2676: Start of 'ora.mdnsd' on 'rhel1' succeeded

CRS-2676: Start of 'ora.gipcd' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 'rhel1'

CRS-2676: Start of 'ora.gpnpd' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rhel1'

CRS-2676: Start of 'ora.cssdmonitor' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'rhel1'

CRS-2679: Attempting to clean 'ora.diskmon' on 'rhel1'

CRS-2681: Clean of 'ora.diskmon' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.diskmon' on 'rhel1'

CRS-2676: Start of 'ora.diskmon' on 'rhel1' succeeded

CRS-2676: Start of 'ora.cssd' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.ctssd' on 'rhel1'

CRS-2676: Start of 'ora.ctssd' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.asm' on 'rhel1'

CRS-2676: Start of 'ora.asm' on 'rhel1' succeeded

CRS-2672: Attempting to start 'ora.crsd' on 'rhel1'

CRS-2676: Start of 'ora.crsd' on 'rhel1' succeeded

[root@rhel1 grid]# ll /u1/ocrdisk/                  ----> these are the earlier OCR backups; restore from one of them

backup00.ocr                day.ocr

backup01.ocr                ocr

backup02.ocr                week_.ocr

backup_20190507_111618.ocr  week.ocr

backup_20190521_161536.ocr

[root@rhel1 grid]# ll /u1/ocrdisk/backup00.ocr

-rw------- 1 root root 7012352 May 20 16:37 /u1/ocrdisk/backup00.ocr

[root@rhel1 grid]# ./bin/ocrconfig -restore /u1/ocrdisk/backup00.ocr

PROT-19: Cannot proceed while the Cluster Ready Service is running

[root@rhel1 grid]# ./bin/crsctl stop resource ora.crsd -init

CRS-2673: Attempting to stop 'ora.crsd' on 'rhel1'

CRS-2677: Stop of 'ora.crsd' on 'rhel1' succeeded

CRS-2679: Attempting to clean 'ora.crsd' on 'rhel1'

CRS-2681: Clean of 'ora.crsd' on 'rhel1' succeeded

[root@rhel1 grid]# ./bin/ocrconfig -restore /u1/ocrdisk/backup00.ocr
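
The restore returns silently on success. Optionally (this was not done in the original session) the result can be checked before restarting the stack:

[root@rhel1 grid]# ./bin/ocrcheck
[root@rhel1 grid]# ./bin/crsctl query css votedisk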

[root@rhel1 grid]# ./bin/crsctl stop crs

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rhel1'

CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rhel1'

CRS-2673: Attempting to stop 'ora.ctssd' on 'rhel1'

CRS-2673: Attempting to stop 'ora.asm' on 'rhel1'

CRS-2673: Attempting to stop 'ora.mdnsd' on 'rhel1'

CRS-2677: Stop of 'ora.cssdmonitor' on 'rhel1' succeeded

CRS-2677: Stop of 'ora.mdnsd' on 'rhel1' succeeded

CRS-2677: Stop of 'ora.ctssd' on 'rhel1' succeeded

CRS-2677: Stop of 'ora.asm' on 'rhel1' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 'rhel1'

CRS-2677: Stop of 'ora.cssd' on 'rhel1' succeeded

CRS-2673: Attempting to stop 'ora.gpnpd' on 'rhel1'

CRS-2673: Attempting to stop 'ora.diskmon' on 'rhel1'

CRS-2677: Stop of 'ora.gpnpd' on 'rhel1' succeeded

CRS-2679: Attempting to clean 'ora.gpnpd' on 'rhel1'

CRS-2681: Clean of 'ora.gpnpd' on 'rhel1' succeeded

CRS-2673: Attempting to stop 'ora.gipcd' on 'rhel1'

CRS-2677: Stop of 'ora.gipcd' on 'rhel1' succeeded

CRS-2677: Stop of 'ora.diskmon' on 'rhel1' succeeded

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rhel1' has completed

CRS-4133: Oracle High Availability Services has been stopped.

[root@rhel1 grid]# ./bin/crsctl start crs

CRS-4123: Oracle High Availability Services has been started.

[root@rhel1 grid]# su - grid

Last login: Wed May 22 10:46:59 CST 2019 on pts/0

[grid@rhel1 ~]$ asmcmd    ---> the DATA and FRA disk groups are visible again, as shown below.

ASMCMD> ls

ABC/

DATA/

FRA/

ASMCMD>

    The OCR was restored successfully; the next step is to restore the database.

    Restore the database

[grid@rhel1 ~]$ srvctl start database -d ORCL -o nomount

PRCR-1079 : Failed to start resource ora.orcl.db

ORA-15032: not all alterations performed

ORA-15017: diskgroup "DBFILE" cannot be mounted

ORA-15063: ASM discovered an insufficient number of disks for diskgroup "DBFILE"

CRS-2674: Start of 'ora.DBFILE.dg' on 'rhel1' failed

ORA-15032: not all alterations performed

ORA-15017: diskgroup "DBFILE" cannot be mounted

ORA-15063: ASM discovered an insufficient number of disks for diskgroup "DBFILE"

CRS-2674: Start of 'ora.DBFILE.dg' on 'rhel2' failed

CRS-2632: There are no more servers to try to place resource 'ora.orcl.db' on that would satisfy its placement policy

[grid@rhel1 ~]$ sqlplus "/as sysasm"

SQL*Plus: Release 11.2.0.1.0 Production on Wed May 22 11:25:00 2019

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:

Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production

With the Real Application Clusters and Automatic Storage Management options

SQL> create diskgroup DBFILE external redundancy disk 'ORCL:DBFILE' name DBFILE;

Diskgroup created.
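
CREATE DISKGROUP mounts the new group only on the ASM instance where it is issued; on the second node it presumably still has to be mounted, for example with either of the following (a sketch, not part of the original session):

SQL> alter diskgroup DBFILE mount;                        -- run on the rhel2 ASM instance
[grid@rhel1 ~]$ srvctl start diskgroup -g DBFILE -n rhel2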

[oracle@rhel1 ~]$ cat /u01/app/oracle/product/11.2.0/db_1/dbs/initORCL1.ora  ---> check the pfile; it points to the spfile stored in the DBFILE disk group

SPFILE='+DBFILE/ORCL/spfileORCL.ora'

[oracle@rhel1 ~]$ cat /u01/app/oracle/admin/ORCL/pfile/init.ora.3182019191943 ---> this directory keeps the original init parameter file from database creation; use it to start the instance in nomount

##############################################################################

# Copyright (c) 1991, 2001, 2002 by Oracle Corporation

##############################################################################

###########################################

# Cache and I/O

###########################################

db_block_size=8192

###########################################

# Cluster Database

###########################################

remote_listener=rhel-cluster-scan.grid.example.com:1521

###########################################

# Cursors and Library Cache

###########################################

open_cursors=300

###########################################

# Database Identification

###########################################

db_domain=""

db_name=ORCL

###########################################

# File Configuration

###########################################

db_create_file_dest=+DBFILE

db_recovery_file_dest=+FRA

db_recovery_file_dest_size=5218762752

###########################################

# Miscellaneous

###########################################

compatible=11.2.0.0.0

diagnostic_dest=/u01/app/oracle

memory_target=764411904

###########################################

# Processes and Sessions

###########################################

processes=150

###########################################

# Security and Auditing

###########################################

audit_file_dest=/u01/app/oracle/admin/ORCL/adump

audit_trail=db

remote_login_passwordfile=exclusive

###########################################

# Shared Server

###########################################

dispatchers="(PROTOCOL=TCP) (SERVICE=ORCLXDB)"

control_files=("+DBFILE/orcl/controlfile/current.256.1005931173", "+FRA/orcl/controlfile/current.256.1005931177")

cluster_database=true

ORCL1.instance_number=1

ORCL2.instance_number=2

ORCL2.thread=2

ORCL1.undo_tablespace=UNDOTBS1

ORCL2.undo_tablespace=UNDOTBS2

ORCL1.thread=1

    

[oracle@rhel1 ~]$ sqlplus "/as sysdba"

SQL> startup nomount pfile='/u01/app/oracle/admin/ORCL/pfile/init.ora.3182019191943'; ---> later this pfile can be turned into an spfile and copied back into ASM.

ORACLE instance started.

Total System Global Area  764121088 bytes

Fixed Size                  2217264 bytes

Variable Size             511707856 bytes

Database Buffers          247463936 bytes

Redo Buffers                2732032 bytes
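
Once the database is recovered and open, the spfile that initORCL1.ora points to can be rebuilt from this pfile, roughly as follows (a sketch using the paths shown above):

SQL> create spfile='+DBFILE/ORCL/spfileORCL.ora' from pfile='/u01/app/oracle/admin/ORCL/pfile/init.ora.3182019191943';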

[oracle@rhel1 ~]$ rman target /

RMAN> restore controlfile from '+FRA/orcl/controlfile/current.256.1005931177'; ---> the FRA disk group was not wiped, so the control file and archived logs are still in it

Starting restore at 22-MAY-19

using target database control file instead of recovery catalog

allocated channel: ORA_DISK_1

channel ORA_DISK_1: SID=30 instance=ORCL1 device type=DISK

channel ORA_DISK_1: copied control file copy

output file name=+DBFILE/orcl/controlfile/current.256.1008936907

output file name=+FRA/orcl/controlfile/current.256.1005931177

Finished restore at 22-MAY-19

RMAN> alter database mount;

database mounted

released channel: ORA_DISK_1

RMAN> list backup;

List of Backup Sets

===================

BS Key  Type LV Size       Device Type Elapsed Time Completion Time

------- ---- -- ---------- ----------- ------------ ---------------

3       Full    947.01M    DISK        00:02:47     14-MAY-19

BP Key: 3   Status: AVAILABLE  Compressed: NO  Tag: TAG20190514T170446

Piece Name: /home/oracle/03u1hntf_1_1.bak

List of Datafiles in backup set 3

File LV Type Ckp SCN    Ckp Time  Name

---- -- ---- ---------- --------- ----

1       Full 1240032    14-MAY-19 +DBFILE/orcl/datafile/system.259.1005931193

2       Full 1240032    14-MAY-19 +DBFILE/orcl/datafile/sysaux.260.1005931273

3       Full 1240032    14-MAY-19 +DBFILE/orcl/datafile/undotbs1.261.1005931315

4       Full 1240032    14-MAY-19 +DBFILE/orcl/datafile/undotbs2.263.1005931371

5       Full 1240032    14-MAY-19 +DBFILE/orcl/datafile/users.264.1005931391

6       Full 1240032    14-MAY-19 +DBFILE/orcl/datafile/tbs1.269.1008256827

BS Key  Type LV Size       Device Type Elapsed Time Completion Time

------- ---- -- ---------- ----------- ------------ ---------------

4       Full    17.70M     DISK        00:00:15     14-MAY-19

BP Key: 4   Status: AVAILABLE  Compressed: NO  Tag: TAG20190514T170446

Piece Name: /home/oracle/04u1ho2o_1_1.bak   ---> fortunately the backup pieces are still present at this path

SPFILE Included: Modification time: 14-MAY-19

SPFILE db_unique_name: ORCL

Control File Included: Ckp SCN: 1240638      Ckp time: 14-MAY-19
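
Since backup piece 04u1ho2o_1_1.bak also contains the spfile, an alternative to rebuilding it later from the pfile would be to restore it straight from the backup, something like this (a sketch, not what was done here; the target path is arbitrary):

RMAN> restore spfile to '/tmp/spfileORCL.ora' from '/home/oracle/04u1ho2o_1_1.bak';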

RMAN> restore database;

Starting restore at 22-MAY-19

allocated channel: ORA_DISK_1

channel ORA_DISK_1: SID=30 instance=ORCL1 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore

channel ORA_DISK_1: specifying datafile(s) to restore from backup set

channel ORA_DISK_1: restoring datafile 00001 to +DBFILE/orcl/datafile/system.259.1005931193

channel ORA_DISK_1: restoring datafile 00002 to +DBFILE/orcl/datafile/sysaux.260.1005931273

channel ORA_DISK_1: restoring datafile 00003 to +DBFILE/orcl/datafile/undotbs1.261.1005931315

channel ORA_DISK_1: restoring datafile 00004 to +DBFILE/orcl/datafile/undotbs2.263.1005931371

channel ORA_DISK_1: restoring datafile 00005 to +DBFILE/orcl/datafile/users.264.1005931391

channel ORA_DISK_1: restoring datafile 00006 to +DBFILE/orcl/datafile/tbs1.269.1008256827

channel ORA_DISK_1: reading from backup piece /home/oracle/03u1hntf_1_1.bak

channel ORA_DISK_1: piece handle=/home/oracle/03u1hntf_1_1.bak tag=TAG20190514T170446

channel ORA_DISK_1: restored backup piece 1

channel ORA_DISK_1: restore complete, elapsed time: 00:02:30

Finished restore at 22-MAY-19

RMAN> recover database;

Starting recover at 22-MAY-19

using channel ORA_DISK_1

starting media recovery

archived log for thread 1 with sequence 66 is already on disk as file +FRA/orcl/archivelog/2019_05_14/thread_1_seq_66.263.1008264651

archived log for thread 1 with sequence 67 is already on disk as file +FRA/orcl/archivelog/2019_05_20/thread_1_seq_67.262.1008754305

archived log for thread 1 with sequence 68 is already on disk as file +FRA/orcl/archivelog/2019_05_20/thread_1_seq_68.269.1008756045

archived log for thread 1 with sequence 69 is already on disk as file +FRA/orcl/archivelog/2019_05_20/thread_1_seq_69.270.1008757911

archived log for thread 1 with sequence 70 is already on disk as file +FRA/orcl/archivelog/2019_05_20/thread_1_seq_70.272.1008757951

archived log for thread 1 with sequence 71 is already on disk as file +FRA/orcl/archivelog/2019_05_20/thread_1_seq_71.273.1008762899

archived log for thread 1 with sequence 72 is already on disk as file +FRA/orcl/archivelog/2019_05_20/thread_1_seq_72.276.1008764651

archived log for thread 1 with sequence 73 is already on disk as file +FRA/orcl/archivelog/2019_05_20/thread_1_seq_73.277.1008774719

archived log for thread 1 with sequence 74 is already on disk as file +FRA/orcl/archivelog/2019_05_21/thread_1_seq_74.278.1008858631

archived log for thread 1 with sequence 75 is already on disk as file +FRA/orcl/archivelog/2019_05_21/thread_1_seq_75.282.1008863235

archived log for thread 1 with sequence 76 is already on disk as file +FRA/orcl/archivelog/2019_05_21/thread_1_seq_76.283.1008864805

archived log for thread 2 with sequence 13 is already on disk as file +FRA/orcl/archivelog/2019_05_20/thread_2_seq_13.267.1008754309

archived log for thread 2 with sequence 14 is already on disk as file +FRA/orcl/archivelog/2019_05_20/thread_2_seq_14.268.1008754309

archived log for thread 2 with sequence 15 is already on disk as file +FRA/orcl/archivelog/2019_05_20/thread_2_seq_15.271.1008757919

archived log for thread 2 with sequence 16 is already on disk as file +FRA/orcl/archivelog/2019_05_20/thread_2_seq_16.274.1008762921

archived log for thread 2 with sequence 17 is already on disk as file +FRA/orcl/archivelog/2019_05_20/thread_2_seq_17.275.1008762923

archived log for thread 2 with sequence 18 is already on disk as file +FRA/orcl/archivelog/2019_05_21/thread_2_seq_18.279.1008863229

archived log for thread 2 with sequence 19 is already on disk as file +FRA/orcl/archivelog/2019_05_21/thread_2_seq_19.280.1008863233

archived log for thread 2 with sequence 20 is already on disk as file +FRA/orcl/archivelog/2019_05_21/thread_2_seq_20.281.1008863233

archived log file name=+FRA/orcl/archivelog/2019_05_14/thread_1_seq_66.263.1008264651 thread=1 sequence=66

archived log file name=+FRA/orcl/archivelog/2019_05_20/thread_2_seq_13.267.1008754309 thread=2 sequence=13

archived log file name=+FRA/orcl/archivelog/2019_05_20/thread_1_seq_67.262.1008754305 thread=1 sequence=67

archived log file name=+FRA/orcl/archivelog/2019_05_20/thread_2_seq_14.268.1008754309 thread=2 sequence=14

archived log file name=+FRA/orcl/archivelog/2019_05_20/thread_1_seq_68.269.1008756045 thread=1 sequence=68

archived log file name=+FRA/orcl/archivelog/2019_05_20/thread_2_seq_15.271.1008757919 thread=2 sequence=15

archived log file name=+FRA/orcl/archivelog/2019_05_20/thread_1_seq_69.270.1008757911 thread=1 sequence=69

archived log file name=+FRA/orcl/archivelog/2019_05_20/thread_1_seq_70.272.1008757951 thread=1 sequence=70

archived log file name=+FRA/orcl/archivelog/2019_05_20/thread_2_seq_16.274.1008762921 thread=2 sequence=16

archived log file name=+FRA/orcl/archivelog/2019_05_20/thread_1_seq_71.273.1008762899 thread=1 sequence=71

archived log file name=+FRA/orcl/archivelog/2019_05_20/thread_2_seq_17.275.1008762923 thread=2 sequence=17

archived log file name=+FRA/orcl/archivelog/2019_05_20/thread_1_seq_72.276.1008764651 thread=1 sequence=72

archived log file name=+FRA/orcl/archivelog/2019_05_20/thread_1_seq_73.277.1008774719 thread=1 sequence=73

archived log file name=+FRA/orcl/archivelog/2019_05_21/thread_2_seq_18.279.1008863229 thread=2 sequence=18

archived log file name=+FRA/orcl/archivelog/2019_05_21/thread_1_seq_74.278.1008858631 thread=1 sequence=74

archived log file name=+FRA/orcl/archivelog/2019_05_21/thread_2_seq_19.280.1008863233 thread=2 sequence=19

media recovery complete, elapsed time: 00:00:42

Finished recover at 22-MAY-19

RMAN> alter database open;

RMAN-00571: ===========================================================

RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============

RMAN-00571: ===========================================================

RMAN-03002: failure of alter db command at 05/22/2019 12:48:55

ORA-19751: could not create the change tracking file

ORA-19750: change tracking file: '+DBFILE/orcl/changetracking/ctf.268.1008159369'

ORA-17502: ksfdcre:4 Failed to create file +DBFILE/orcl/changetracking/ctf.268.1008159369

ORA-15046: ASM file name '+DBFILE/orcl/changetracking/ctf.268.1008159369' is not in single-file creation form

The block change tracking file is used to speed up incremental backups.

Connect to the database with SQL*Plus:

SQL> ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;  -----> disable block change tracking first

Database altered.

SQL> alter database open resetlogs;

Database altered.

SQL> archive log list;

Database log mode              Archive Mode

Automatic archival             Enabled

Archive destination            USE_DB_RECOVERY_FILE_DEST

Oldest online log sequence     2

Next log sequence to archive   3

Current log sequence           3

SQL>  ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;

Database altered.

SQL> SELECT STATUS, FILENAME FROM V$BLOCK_CHANGE_TRACKING;  ----> because the files are OMF-managed, there is no need to specify USING FILE when re-enabling

STATUS

--------------------

FILENAME

--------------------------------------------------------------------------------

ENABLED

+DBFILE/orcl/changetracking/ctf.277.1008949385
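
Because db_create_file_dest=+DBFILE is set, ENABLE BLOCK CHANGE TRACKING without a file name creates an OMF file automatically. The explicit, non-OMF form would look roughly like this (a sketch, not used here):

SQL> alter database enable block change tracking using file '+DBFILE';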

[grid@rhel1 ~]$ crsctl stat res -t

--------------------------------------------------------------------------------

NAME           TARGET  STATE        SERVER                   STATE_DETAILS

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.ABC.dg

ONLINE  ONLINE       rhel1

ONLINE  ONLINE       rhel2

ora.DATA.dg

ONLINE  ONLINE       rhel1

ONLINE  ONLINE       rhel2

ora.DBFILE.dg

ONLINE  ONLINE       rhel1

ONLINE  ONLINE       rhel2

ora.FRA.dg

ONLINE  ONLINE       rhel1

ONLINE  ONLINE       rhel2

ora.LISTENER.lsnr

ONLINE  ONLINE       rhel1

ONLINE  ONLINE       rhel2

ora.asm

ONLINE  ONLINE       rhel1                    Started

ONLINE  ONLINE       rhel2                    Started

ora.eons

ONLINE  ONLINE       rhel1

ONLINE  ONLINE       rhel2

ora.gsd

OFFLINE OFFLINE      rhel1

OFFLINE OFFLINE      rhel2

ora.net1.network

ONLINE  ONLINE       rhel1

ONLINE  ONLINE       rhel2

ora.ons

ONLINE  ONLINE       rhel1

ONLINE  ONLINE       rhel2

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.LISTENER_SCAN1.lsnr

1        ONLINE  ONLINE       rhel2

ora.LISTENER_SCAN2.lsnr

1        ONLINE  ONLINE       rhel1

ora.LISTENER_SCAN3.lsnr

1        ONLINE  ONLINE       rhel1

ora.oc4j

1        OFFLINE OFFLINE

ora.orcl.db

1        ONLINE  ONLINE       rhel1                    Open

2        ONLINE  ONLINE       rhel2                    Open

ora.rhel1.vip

1        ONLINE  ONLINE       rhel1

ora.rhel2.vip

1        ONLINE  ONLINE       rhel2

ora.scan1.vip

1        ONLINE  ONLINE       rhel2

ora.scan2.vip

1        ONLINE  ONLINE       rhel1

ora.scan3.vip

1        ONLINE  ONLINE       rhel1
