11gR2 Clusterware and Grid Home - What You Need to Know
11gR2 Clusterware Key Facts
- 11gR2 Clusterware must be installed and running before an 11gR2 Real Application Clusters database is installed.
- The GRID home consists of Oracle Clusterware and ASM; ASM should not be installed in a separate home.
- The 11gR2 Clusterware can also be installed in "Standalone" mode to provide ASM and/or "Oracle Restart" support on a single node. That stack is a subset of the full clusterware described in this document.
- The 11gR2 Clusterware can be run by itself or on top of vendor clusterware. See the certification matrix for certified combinations. Ref: Note: 184875.1 "How To Check The Certification Matrix for Real Application Clusters"
- The GRID Home and the RAC/DB Home must be installed in different locations.
- The 11gR2 Clusterware requires shared OCR and voting files. These can be stored in ASM or on a cluster filesystem.
- The OCR is backed up automatically every 4 hours to <GRID_HOME>/cdata/<clustername>/ and can be restored via ocrconfig.
- The voting file is backed up into the OCR at every configuration change and can be restored via crsctl.
- The 11gR2 Clusterware requires at least one private network for inter-node communication and at least one public network for external communication. Several virtual IPs need to be registered with DNS: the node VIPs (one per node) and the SCAN VIPs (three). These can be registered manually by your network administrator, or you can configure GNS (Grid Naming Service) in the Oracle clusterware to handle this for you (note that GNS requires its own VIP).
- A SCAN (Single Client Access Name) gives clients a single, stable name to connect to, regardless of which nodes are in the cluster. For more information on SCAN see Note 887522.1.
- The root.sh script at the end of the clusterware installation starts the clusterware stack. For information on troubleshooting root.sh issues see Note: 1053970.1
- Only one set of clusterware daemons can be running per node.
- On Unix, the clusterware stack is started via the init.ohasd script referenced in /etc/inittab with "respawn".
- A node can be evicted (rebooted) if it is deemed unhealthy. This is done so that the health of the entire cluster can be maintained. For more information see Note 1050693.1, "Troubleshooting 11.2 Clusterware Node Evictions (Reboots)".
- Either have vendor time synchronization software (like NTP) fully configured and running or have it not configured at all and let CTSS handle time synchronization. See Note: 1054006.1 for more information.
- If you install database homes of a lower version, you must pin the nodes in the clusterware first or you will see ORA-29702 errors. See Note 946332.1 and Note 948456.1 for more information.
- The clusterware stack starts automatically at machine boot, or can be started manually by running "crsctl start crs" (local node) or "crsctl start cluster" (all nodes). Note that crsctl is in the <GRID_HOME>/bin directory, and that "crsctl start cluster" will only work if ohasd is already running.
- The clusterware stack stops at machine shutdown, or can be stopped manually by running "crsctl stop crs" (local node) or "crsctl stop cluster" (all nodes).
- Killing clusterware daemons is not supported.
- Database instances are now part of the .db resources shown in "crsctl stat res -t" output; there is no separate .inst resource for an 11gR2 instance.
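Several of the facts above come down to a small set of command-line tools. As a quick reference, here is a hedged sketch of the typical commands (run as root from the Grid home unless noted; the cluster name, diskgroup, node name, backup file, and connect string are illustrative, not taken from the original note):

```shell
# Start/stop the full stack on the local node (root):
$GRID_HOME/bin/crsctl start crs
$GRID_HOME/bin/crsctl stop crs

# Start/stop the clusterware on all nodes (requires ohasd to be up):
$GRID_HOME/bin/crsctl start cluster -all
$GRID_HOME/bin/crsctl stop cluster -all

# Check overall stack health and resource status:
$GRID_HOME/bin/crsctl check crs
$GRID_HOME/bin/crsctl stat res -t

# List the automatic OCR backups and restore one (stack must be down to restore):
$GRID_HOME/bin/ocrconfig -showbackup
$GRID_HOME/bin/ocrconfig -restore $GRID_HOME/cdata/mycluster/backup00.ocr

# Restore/relocate voting files (11.2: recovered from data stored in the OCR):
$GRID_HOME/bin/crsctl replace votedisk +DATA

# Pin a node before installing a pre-11.2 database home, then verify:
$GRID_HOME/bin/crsctl pin css -n racnode1
$GRID_HOME/bin/crsctl status server racnode1 -f

# Clients connect through the SCAN (EZConnect sketch, as oracle user):
sqlplus scott/tiger@myscan.example.com:1521/orcl_service
```

These commands require a running Oracle Grid Infrastructure installation, so treat them as a cheat-sheet to adapt rather than a script to run verbatim.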
Clusterware Startup Sequence
The following is the Clusterware startup sequence (image from the "Oracle Clusterware Administration and Deployment Guide"):
Don't let this picture scare you too much. You aren't responsible for managing all of these processes, that is the Clusterware's job!
Short summary of the startup sequence: INIT spawns init.ohasd (with respawn), which in turn starts the OHASD process (Oracle High Availability Services Daemon). This daemon then spawns four agent processes.
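On Linux, the respawn entry that anchors this sequence looks roughly like the following (an illustrative /etc/inittab fragment; the entry name and runlevels can vary by platform and release):

```
h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null
```

Because of the "respawn" action, init restarts init.ohasd automatically if it dies, which is why killing clusterware daemons by hand is not a supported way to stop the stack.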
Level 1: OHASD Spawns:
- cssdagent - Agent responsible for spawning CSSD.
- orarootagent - Agent responsible for managing all root owned ohasd resources.
- oraagent - Agent responsible for managing all oracle owned ohasd resources.
- cssdmonitor - Monitors CSSD and node health (along with the cssdagent).
Level 2: OHASD rootagent spawns:
- CRSD - Primary daemon responsible for managing cluster resources.
- CTSSD - Cluster Time Synchronization Services Daemon
- Diskmon
- ACFS (ASM Cluster File System) Drivers
Level 2: OHASD oraagent spawns:
- MDNSD - Multicast DNS daemon, used for name lookup and service discovery within the cluster
- GIPCD - Used for inter-process and inter-node communication
- GPNPD - Grid Plug & Play Profile Daemon
- EVMD - Event Monitor Daemon
- ASM - Resource for monitoring ASM instances
Level 3: CRSD spawns:
- orarootagent - Agent responsible for managing all root owned crsd resources.
- oraagent - Agent responsible for managing all oracle owned crsd resources.
Level 4: CRSD rootagent spawns:
- Network resource - To monitor the public network
- SCAN VIP(s) - Single Client Access Name Virtual IPs
- Node VIPs - One per node
- ACFS Registry - For mounting ASM Cluster File System
- GNS VIP (optional) - VIP for GNS
Level 4: CRSD oraagent spawns:
- ASM Resource - ASM Instance(s) resource
- Diskgroup - Used for managing/monitoring ASM diskgroups.
- DB Resource - Used for monitoring and managing the DB and instances
- SCAN Listener - Listener for single client access name, listening on SCAN VIP
- Listener - Node listener listening on the Node VIP
- Services - Used for monitoring and managing services
- ONS - Oracle Notification Service
- eONS - Enhanced Oracle Notification Service
- GSD - For 9i backward compatibility
- GNS (optional) - Grid Naming Service - Performs name resolution
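One way to see these layers on a live system is to query the OHASD-managed (lower-stack) resources separately from the CRSD-managed (upper-stack) ones. The commands below are a sketch; the exact resource names and output layout vary by version and configuration:

```shell
# Lower stack: resources managed directly by OHASD (cssd, ctssd, crsd, asm, ...):
$GRID_HOME/bin/crsctl stat res -t -init

# Upper stack: resources managed by CRSD (VIPs, listeners, databases, services, ...):
$GRID_HOME/bin/crsctl stat res -t

# The daemons themselves are visible at the OS level:
ps -ef | egrep 'ohasd|cssd|crsd|ctssd|evmd|gpnpd|gipcd|mdnsd'
```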
(A diagram in the original note shows the various levels more clearly.)