Oracle® Clusterware Administration and Deployment Guide 11g Release 2 (11.2), Part Number E10717-03
This chapter includes the following topics:
Overview of Oracle Clusterware Platform-Specific Software Components
Overview of Cloning and Extending Oracle Clusterware in Grid Environments
Overview of the Oracle Clusterware High Availability Framework and APIs
Oracle Clusterware enables servers to communicate with each other, so that they appear to function as a collective unit. This combination of servers is commonly known as a cluster. Although the servers are standalone servers, each server has additional processes that communicate with other servers. In this way the separate servers appear as if they are one system to applications and end users.
Oracle Clusterware provides the infrastructure necessary to run Oracle Real Application Clusters (Oracle RAC). Oracle Clusterware also manages resources, such as virtual IP (VIP) addresses, databases, listeners, services, and so on. These resources are generally named ora.resource_name.host_name. Oracle does not support editing these resources except under the explicit direction of Oracle support. Additionally, Oracle Clusterware can help you manage your applications.
See Also:
Chapter 2, "Administering Oracle Clusterware" and Chapter 5, "Making Applications Highly Available Using Oracle Clusterware" for more informationFigure 1-1 shows a configuration that uses Oracle Clusterware to extend the basic single-instance Oracle Database architecture. In Figure 1-1, the cluster is running Oracle Database and is actively servicing applications and users. Using Oracle Clusterware, you can use the same high availability mechanisms to make your Oracle database and your custom applications highly available.
Figure 1-1 Oracle Clusterware Configuration
The benefits of using a cluster include:
Scalability of applications
Use of less expensive commodity hardware
Ability to fail over
Ability to increase capacity over time by adding servers
Ability to program the startup of applications in a planned order that ensures dependent processes are started
Ability to monitor processes and restart them if they stop
You can program Oracle Clusterware to manage the availability of user applications and Oracle databases. In an Oracle RAC environment, Oracle Clusterware manages all of the resources automatically. All of the applications and processes that Oracle Clusterware manages are either cluster resources or local resources.
Creating a cluster with Oracle Clusterware provides the ability to:
Eliminate unplanned downtime due to hardware or software malfunctions
Reduce or eliminate planned downtime for software maintenance
Increase throughput for cluster-aware applications by enabling the applications to run on all of the nodes in a cluster
Increase throughput on demand for cluster-aware applications, by adding servers to a cluster to increase cluster resources
Reduce the total cost of ownership for the infrastructure by providing a scalable system with low-cost commodity hardware
Oracle Clusterware is required for using Oracle RAC; it is the only clusterware that you need for platforms on which Oracle RAC operates. Although Oracle RAC continues to support many third-party clusterware products on specific platforms, you must also install and use Oracle Clusterware. Note that the servers on which you want to install and run Oracle Clusterware must use the same operating system.
Using Oracle Clusterware eliminates the need for proprietary vendor clusterware and provides the benefit of using only Oracle software. Oracle provides an entire software solution, including everything from disk management with Oracle Automatic Storage Management (Oracle ASM) to data management with Oracle Database and Oracle RAC. In addition, Oracle Database features, such as Oracle Services, provide advanced functionality when used with the underlying Oracle Clusterware high availability framework.
Oracle Clusterware has two stored components in addition to the binaries: the voting disk files, which record node membership information, and the Oracle Cluster Registry (OCR), which records cluster configuration information. The voting disks and the OCR must reside on shared storage that is accessible to all cluster member nodes.
To use Oracle Clusterware, you must understand the hardware and software concepts and requirements as described in the following sections:
Note:
Many hardware providers have validated cluster configurations that provide a single part number for a cluster. If you are new to clustering, then use the information in this section to simplify your hardware procurement efforts when you purchase hardware to create a cluster.
A cluster comprises one or more servers. The hardware in a cluster member (also known as a node) is similar to that of a standalone server; however, each server that is part of a cluster requires a second network. This second network is referred to as the interconnect. For this reason, cluster member nodes require at least two network interface cards: one for a public network and one for a private network. The interconnect is a private network, using a switch (or multiple switches), that only the nodes in the cluster can access.Foot 1
Note:
Oracle does not support using crossover cables as Oracle Clusterware interconnects.
Cluster size is determined by the requirements of the workload running on the cluster and the number of nodes that you have configured in the cluster. If you are implementing a cluster for high availability, then configure redundancy for all of the components of the infrastructure as follows:
At least two network interfaces for the public network, bonded to provide one address
At least two network interfaces for the private interconnect network, also bonded to provide one address
The cluster requires cluster-aware storageFoot 2 that is connected to each server in the cluster. This may also be referred to as a multihost device. Oracle Clusterware supports NFS, iSCSI, Direct Attached Storage (DAS), Storage Area Network (SAN) storage, and Network Attached Storage (NAS).
To provide redundancy for storage, generally provide at least two connections from each server to the cluster-aware storage. There may be more connections depending on your I/O requirements. It is important to consider the I/O requirements of the entire cluster when choosing your storage subsystem.
Most servers have at least one local disk that is internal to the server. Often, this disk is used for the operating system binaries; you can also use this disk for the Oracle software binaries. The benefit of each server having its own copy of the Oracle binaries is increased high availability: corruption of one binary does not affect all of the nodes in the cluster simultaneously. It also allows rolling upgrades, which reduce downtime.
Each server must have an operating system that is certified with the Oracle Clusterware version you are installing. Refer to the certification matrices available on My Oracle Support (formerly OracleMetaLink) for details, which are available from the following URL:
http://certify.oraclecorp.com/certifyv3/certify/cert_views.group_selection?p_html_source=0
When the operating system is installed and working, you can then install Oracle Clusterware to create the cluster. Oracle Clusterware is installed independently of Oracle Database. Once Oracle Clusterware is installed, you can then install Oracle Database or Oracle RAC on any of the nodes in the cluster.
See Also:
Your platform-specific Oracle database installation documentation
Oracle Clusterware uses voting disk files to provide fencing and cluster node membership determination. The OCR provides cluster configuration information. You can place the Oracle Clusterware files on either Oracle ASM or on shared common disk storage. If you configure Oracle Clusterware on storage that does not provide file redundancy, then Oracle recommends that you configure multiple locations for OCR and voting disks. The voting disks and OCR are described as follows:
Oracle Clusterware uses voting disk files to determine which nodes are members of a cluster. You can configure voting disks on Oracle ASM, or you can configure voting disks on shared storage.
If you configure voting disks on Oracle ASM, then you do not need to manually configure the voting disks. Oracle ASM creates an appropriate number of voting disks based on the redundancy of the disk group.
If you do not configure voting disks on Oracle ASM, then for high availability, Oracle recommends that you have a minimum of three voting disks on physically separate storage. This avoids having a single point of failure. If you configure a single voting disk, then you must use external mirroring to provide redundancy.
You should have at least three voting disks, unless you have a storage device, such as a disk array, that provides external redundancy. Oracle recommends that you do not use more than five voting disks. The maximum number of voting disks that is supported is 15.
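After installation, you can verify and adjust the voting disk configuration from any node by using the CRSCTL utility. The following is a minimal sketch; the +DATA disk group and the device path are hypothetical placeholders for your own storage locations.
# List the voting disks that are currently configured
crsctl query css votedisk
# Move the voting disks to a different Oracle ASM disk group (+DATA is a placeholder)
crsctl replace votedisk +DATA
# On shared storage that is not managed by Oracle ASM, add another voting disk location (path is a placeholder)
crsctl add css votedisk /dev/shared_disk/vdisk3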
Oracle Clusterware uses the Oracle Cluster Registry (OCR) to store and manage information about the components that Oracle Clusterware controls, such as Oracle RAC databases, listeners, virtual IP addresses (VIPs), services, and any applications. The OCR stores configuration information in a series of key-value pairs in a tree structure. To ensure cluster high availability, Oracle recommends that you define multiple OCR locations (multiplex). In addition:
You can have up to five OCR locations
Each OCR location must reside on shared storage that is accessible by all of the nodes in the cluster
You can replace a failed OCR location online if it is not the only OCR location
You must update the OCR through supported utilities such as Oracle Enterprise Manager, the Server Control Utility (SRVCTL), the OCR configuration utility (OCRCONFIG), or the Database Configuration Assistant (DBCA)
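The following is a brief sketch of how you might verify and protect the OCR from the command line; the +DATA disk group and the export file name are hypothetical placeholders.
# Check the integrity, size, and locations of the OCR (run as root for a full check)
ocrcheck
# List the automatic OCR backups that Oracle Clusterware maintains
ocrconfig -showbackup
# Add another OCR location on an Oracle ASM disk group (+DATA is a placeholder)
ocrconfig -add +DATA
# Export the OCR contents to a file as a logical backup (file name is a placeholder)
ocrconfig -export /backups/ocr_backup.dmp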
See Also:
Chapter 2, "Administering Oracle Clusterware" for more information about voting disks and the OCROracle Clusterware enables a dynamic grid infrastructure through the self-management of the network requirements for the cluster. Oracle Clusterware 11g release 2 (11.2) supports the use of DHCP for all private interconnect addresses, as well as for most of the VIP addresses. DHCP provides dynamic configuration of the host's IP address, but it does not provide an optimal method of producing names that are useful to external clients.
When you are using Oracle RAC, all of the clients must be able to reach the database. This means that the VIP addresses must be resolved by the clients. This problem is solved by the addition of the Grid Naming Service (GNS) to the cluster. GNS is linked to the corporate Domain Name Service (DNS) so that clients can easily connect to the cluster and the databases running there. Activating GNS in a cluster requires a DHCP service on the public network.
To implement GNS, you must collaborate with your network administrator to obtain an IP address on the public network for the GNS VIP. DNS uses the GNS VIP to forward requests to the cluster to GNS. The network administrator must delegate a subdomain in the network to the cluster. The subdomain forwards all requests for addresses in the subdomain to the GNS VIP.
GNS and the GNS VIP run on one node in the cluster. The GNS daemon listens, using default port 53, on this address for incoming requests. Oracle Clusterware manages the GNS and the GNS VIP to ensure that they are always available. If the server on which GNS is running fails, then Oracle Clusterware fails GNS over, along with the GNS VIP, to another node in the cluster.
With DHCP on the network, Oracle Clusterware obtains an IP address from the server along with other network information, such as what gateway to use, what DNS servers to use, what domain to use, and what NTP server to use. Oracle Clusterware initially obtains the necessary IP addresses during cluster configuration and it updates the Oracle Clusterware resources with the correct information obtained from the DHCP server.
Oracle RAC 11g release 2 (11.2) introduces the Single Client Access Name (SCAN). The SCAN is a single name that resolves to three IP addresses in the public network. When using GNS and DHCP, Oracle Clusterware configures the VIP addresses for the SCAN name that is provided during cluster configuration.
The node VIP and the three SCAN VIPs are obtained from the DHCP server when using GNS. If a new server joins the cluster, then Oracle Clusterware dynamically obtains the required VIP address from the DHCP server, updates the cluster resource, and makes the server accessible through GNS.
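Assuming GNS and DHCP are in use, you can confirm the resulting name and address configuration with SRVCTL after installation. This is a sketch only; the output depends on your cluster.
# Display the GNS configuration, including the GNS VIP and the delegated subdomain
srvctl config gns
# Display the SCAN name and SCAN VIP configuration
srvctl config scan
# Check whether the GNS daemon and its VIP are running, and on which node
srvctl status gns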
Example 1-1 shows the DNS entries that delegate a domain to the cluster.
Example 1-1 DNS Entries
# Delegate to gns on mycluster
mycluster.example.com NS myclustergns.example.com
#Let the world know to go to the GNS vip
myclustergns.example.com. 10.9.8.7
See Also:
Oracle Grid Infrastructure Installation Guide for details about establishing resolution through DNS
Alternatively, you can choose manual address configuration, in which you configure the following:
One public host name for each node.
One VIP address for each node.
You must assign a VIP address to each node in the cluster. Each VIP address must be on the same subnet as the public IP address for the node and should be an address that is assigned a name in the DNS. Each VIP address must also be unused and unpingable from within the network before you install Oracle Clusterware.
Up to three SCAN addresses for the entire cluster.
Note:
The SCAN must resolve to at least one address on the public network. For high availability and scalability, Oracle recommends that you configure the SCAN to resolve to three addresses.
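If you choose manual configuration, the public, VIP, and SCAN names are typically defined in DNS before installation. The following hypothetical entries (host names, domain, and addresses are placeholders) illustrate one possible layout for a two-node cluster:
# Public host names, one per node
node1.example.com      192.0.2.101
node2.example.com      192.0.2.102
# Node VIPs, on the same public subnet and unused before installation
node1-vip.example.com  192.0.2.111
node2-vip.example.com  192.0.2.112
# One SCAN name resolving to three addresses for the entire cluster
mycluster-scan.example.com  192.0.2.121
mycluster-scan.example.com  192.0.2.122
mycluster-scan.example.com  192.0.2.123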
See Also:
Your platform-specific Oracle Grid Infrastructure Installation Guide for information about requirements and configuring network addresses
Oracle supports in-place and out-of-place upgrades. Both strategies facilitate rolling upgrades. For Oracle Clusterware 11g release 2 (11.2), in-place upgrades are supported for patches only: patch bundles and one-off patches are supported for in-place upgrades, but patch sets and major point releases are supported for out-of-place upgrades only.
An in-place upgrade replaces the Oracle Clusterware software with the newer version in the same Grid home. In an out-of-place upgrade, both versions of the software are present on the nodes at the same time, in different Grid homes, but only one version is active.
Rolling upgrades avoid downtime and ensure continuous availability of Oracle Clusterware while the software is upgraded to the new version. When you upgrade to 11g release 2 (11.2), Oracle Clusterware and Oracle ASM binaries are installed as a single binary called the grid infrastructure. You can upgrade Oracle Clusterware in a rolling manner from Oracle Clusterware 10g and Oracle Clusterware 11g; however, you can upgrade Oracle ASM in a rolling manner only from Oracle Database 11g release 1 (11.1).
See Also:
Oracle Grid Infrastructure Installation Guide for more information about upgrading Oracle Clusterware
When Oracle Clusterware is operational, several platform-specific processes or services run on each node in the cluster. The UNIX, Linux, and Windows processes are described in the following sections:
Oracle Clusterware consists of two separate stacks: an upper stack anchored by the Cluster Ready Services (CRS) daemon (crsd) and a lower stack anchored by the Oracle High Availability Services daemon (ohasd). These two stacks have several processes that facilitate cluster operations. The following sections describe these stacks in more detail:
The list in this section describes the processes that comprise CRS. The list includes components that are processes on Linux and UNIX operating systems, or services on Windows.
Cluster Ready Services (CRS): The primary program for managing high availability operations in a cluster.
The CRS daemon (crsd) manages cluster resources based on the configuration information that is stored in OCR for each resource. This includes start, stop, monitor, and failover operations. The crsd process generates events when the status of a resource changes. When you have Oracle RAC installed, the crsd process monitors the Oracle database instance, listener, and so on, and automatically restarts these components when a failure occurs.
Cluster Synchronization Services (CSS): Manages the cluster configuration by controlling which nodes are members of the cluster and by notifying members when a node joins or leaves the cluster. If you are using certified third-party clusterware, then CSS processes interface with your clusterware to manage node membership information.
The cssdagent process monitors the cluster and provides I/O fencing. This service formerly was provided by Oracle Process Monitor Daemon (oprocd), also known as OraFenceService on Windows. A cssdagent failure results in Oracle Clusterware restarting the node.
Oracle ASM: Provides disk management for Oracle Clusterware.
Cluster Time Synchronization Service (CTSS): Provides time management in a cluster for Oracle Clusterware.
Event Management (EVM): A background process that publishes events that Oracle Clusterware creates.
Oracle Notification Service (ONS): A publish and subscribe service for communicating Fast Application Notification (FAN) events.
Oracle Agent (oraagent): Extends clusterware to support Oracle-specific requirements and complex resources. Runs server callout scripts when FAN events occur. This process was known as RACG in Oracle Clusterware 11g release 1 (11.1).
Oracle Root Agent (orarootagent): A specialized oraagent process that helps crsd manage resources owned by root, such as the network, and the Grid virtual IP address.
The Cluster Synchronization Service (CSS), Event Management (EVM), and Oracle Notification Services (ONS) components communicate with other cluster component layers in the other instances in the same cluster database environment. These components are also the main communication links between Oracle Database, applications, and the Oracle Clusterware high availability components. In addition, these background processes monitor and manage database operations.
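To see which resources the CRS stack is managing, you can query them from any node with CRSCTL. This is a sketch only; the resource name ora.orcl.db is a hypothetical example, and the resources listed depend on your configuration.
# Tabular summary of all cluster and local resources that crsd manages
crsctl status resource -t
# Full attribute listing for a single resource (ora.orcl.db is a placeholder)
crsctl status resource ora.orcl.db -f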
The list in this section describes the processes that comprise the Oracle High Availability Services stack. The list includes components that are processes on Linux and UNIX operating systems, or services on Windows.
Grid Plug and Play (gpnpd): GPNPD provides access to the Grid Plug and Play profile, and coordinates updates to the profile among the nodes of the cluster to ensure that all of the nodes have the most recent profile.
Grid Interprocess Communication (GIPC): A helper daemon for the communications infrastructure. Currently has no functionality; to be activated in a later release.
Multicast Domain Name Service (mDNS): Allows DNS requests. The mDNS process is a background process on Linux and UNIX, and a service on Windows.
Oracle Grid Naming Service (GNS): A gateway between the cluster mDNS and external DNS servers. The gnsd process performs name resolution within the cluster.
Table 1-1 lists the processes and services associated with Oracle Clusterware components. In Table 1-1, if a UNIX or a Linux system process has an (r) beside it, then the process runs as the root user.
Table 1-1 List of Processes and Services Associated with Oracle Clusterware Components
Oracle Clusterware Component | Linux/UNIX Process | Windows Services | Windows Processes
---|---|---|---
CRS | crsd (r) | |
CSS | ocssd, cssdagent (r), cssdmonitor (r) | |
CTSS | ctssd (r) | |
EVM | evmd, evmlogger | |
GIPC | gipcd | |
GNS | gnsd (r) | |
Grid Plug and Play | gpnpd | |
Master Diskmon | diskmon | |
mDNS | mdnsd | |
Oracle agent | oraagent | |
Oracle High Availability Services | ohasd (r) | |
ONS | ons | |
Oracle root agent | orarootagent (r) | |
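To confirm that both stacks are up on a node, or across the cluster, you can use CRSCTL. This is a brief sketch; the exact daemon list in the output varies by platform and configuration.
# Check the Oracle High Availability Services and Cluster Ready Services stacks on the local node
crsctl check crs
# Check the Oracle Clusterware stack on all nodes in the cluster
crsctl check cluster -all
# List the lower-stack (ohasd-managed) daemons as resources, with their current state
crsctl status resource -t -init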
See Also:
"Clusterware Log Files and the Unified Log Directory Structure" for information about the location of log files created for processesOracle Clusterware processes on Linux and UNIX systems include the following:
crsd: Performs high availability recovery and management operations such as maintaining the OCR and managing application resources. This grid infrastructure process runs as root and restarts automatically upon failure.
When you install Oracle Clusterware in a single-instance database environment for Oracle ASM and Oracle Restart, ohasd manages application resources and crsd is not used.
cssdagent: Starts, stops, and checks the status of the CSS daemon, ocssd. In addition, the cssdagent and cssdmonitor provide the following services to guarantee data integrity:
Monitors the CSS daemon; if the CSS daemon stops, then it shuts down the node
Monitors node scheduling to verify that the node is not hung, and shuts down the node if it recovers from a hang
oclskd (Oracle Clusterware Kill Daemon): CSS uses this daemon to stop processes associated with CSS group members for which stop requests have come in from other members on remote nodes.
ctssd (Cluster Time Synchronization Service daemon): Synchronizes the time on all of the nodes in a cluster to match the time setting on the master node, but not to an external clock.
When you install Oracle Clusterware, the Cluster Time Synchronization Service (CTSS) is installed as part of the software package. During installation, the Cluster Verification Utility (CVU) determines if the network time protocol (NTP) is in use on any nodes in the cluster. On Windows systems, CVU checks for NTP and Windows Time Service.
If Oracle Clusterware finds that NTP is running or that NTP has been configured, then NTP is not affected by the CTSS installation. Instead, CTSS starts in observer mode (this condition is logged in the alert log for Oracle Clusterware). CTSS then monitors the cluster time and logs alert messages, if necessary, but CTSS does not modify the system time.
If Oracle Clusterware detects that NTP is not running and is not configured, then CTSS designates one node as a clock reference, and synchronizes all of the other cluster member time and date settings to those of the clock reference.
Oracle Clusterware considers an NTP installation to be misconfigured if one of the following is true:
NTP is not installed on some nodes of the cluster; CVU detects an NTP installation by the presence of a configuration file, such as ntp.conf
The primary and alternate clock references are different for all of the nodes of the cluster
The NTP processes are not running on all of the nodes of the cluster
Only one type of time synchronization service can be active on the cluster.
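To see whether CTSS is running in active or observer mode on a node, you can query it with CRSCTL; this is a sketch, and the reported mode and time offset depend on your environment.
# Report whether CTSS is in active or observer mode on the local node, and the current time offset
crsctl check ctss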
See Also:
"Cluster Time Management" for more information about CTSSdiskmon
(Disk Monitor daemon): Monitors and performs I/O fencing for HP Oracle Exadata Storage Server storage. Because Exadata storage can be added to any Oracle RAC node at any time, the diskmon
daemon is always started when ocssd
starts.
evmd (Event manager daemon): Distributes and communicates some cluster events to all of the cluster members so that they are aware of changes in the cluster.
evmlogger (Event manager logger): Started by evmd at startup, it reads a configuration file to determine which events to subscribe to from evmd, and runs user-defined actions for those events. This facility is maintained for backward compatibility only.
gpnpd (Grid Plug and Play daemon): Manages distribution and maintenance of the Grid Plug and Play profile containing cluster definition data.
mdnsd (Multicast Domain Name Service daemon): Manages name resolution and service discovery within attached subnets.
ocssd (Cluster Synchronization Service daemon): Manages cluster node membership and runs as the oracle user; failure of this process results in a node restart.
ohasd (Oracle High Availability Services daemon): Starts Oracle Clusterware processes and also manages the OLR and acts as the OLR server, as shown in Figure 1-2.
In a cluster, ohasd runs as root. However, in an Oracle Restart environment, where ohasd manages application resources, it runs as the oracle user.
Note:
Oracle Clusterware on Linux platforms can have multiple threads that appear as separate processes with unique process identifiers.
The following section introduces the installation processes for Oracle Clusterware:
You can install different releases of Oracle Clusterware, Oracle ASM, and Oracle Database on your cluster. Follow these guidelines when installing different releases of software on your cluster:
You can only have one installation of Oracle Clusterware running in a cluster, and it must be installed into its own home (Grid_home). The release of Oracle Clusterware that you use must be equal to or higher than the Oracle ASM and Oracle RAC versions that are running in the cluster. You cannot install a version of Oracle RAC that was released after the version of Oracle Clusterware that you run on the cluster. In other words:
Oracle Clusterware 11g release 2 (11.2) supports Oracle ASM release 11.2 only, because Oracle ASM is in the grid infrastructure home, which also includes Oracle Clusterware.
Oracle Clusterware release 11.2 supports Oracle Database 11g release 2 (11.2), release 1 (11.1), Oracle Database 10g release 2 (10.2), and release 1 (10.1)
Oracle ASM release 11.2 requires Oracle Clusterware release 11.2 and supports Oracle Database 11g release 2 (11.2), release 1 (11.1), Oracle Database 10g release 2 (10.2), and release 1 (10.1)
Oracle Database 11g release 2 (11.2) requires Oracle Clusterware 11g release 2 (11.2)
For example:
If you have Oracle Clusterware 11g release 2 (11.2) installed as your clusterware, then you can have an Oracle Database 10g release 1 (10.1) single-instance database running on one node, and separate Oracle Real Application Clusters 10g release 1 (10.1), release 2 (10.2), and Oracle Real Application Clusters 11g release 1 (11.1) databases also running on the cluster. However, you cannot install Oracle Real Application Clusters 11g on a cluster that runs Oracle Clusterware 10g release 2 (10.2). You can install Oracle Database 11g single-instance on a node in an Oracle Clusterware 10g release 2 (10.2) cluster.
When using different Oracle ASM and Oracle Database releases, the functionality of each is dependent on the functionality of the earlier software release. Thus, if you install Oracle Clusterware 11g and you later configure Oracle ASM, and you use Oracle Clusterware to support an existing Oracle Database 10g release 10.2.0.3 installation, then the Oracle ASM functionality is equivalent only to that available in the 10.2 release version. Set the compatible attributes of a disk group to the appropriate release of software in use.
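When mixing releases, it can help to confirm which Oracle Clusterware version is installed on a node and which is active across the cluster. The following CRSCTL queries are a minimal sketch.
# Show the lowest Oracle Clusterware version currently active across the cluster
crsctl query crs activeversion
# Show the version of the Oracle Clusterware software installed on this node
crsctl query crs softwareversion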
See Also:
Oracle Database Storage Administrator's Guide for information about compatible attributes of disk groups
There can be multiple Oracle homes for the Oracle database (both single instance and Oracle RAC) in the cluster. Note that the Oracle RAC databases must be running Oracle9i Database, or higher.
You can use different users for the Oracle Clusterware and Oracle database homes if they belong to the same primary group.
As of 11g release 2 (11.2), there can only be one installation of Oracle ASM running in a cluster. Oracle ASM is always the same version as Oracle Clusterware, which must be the same (or higher) release than that of the Oracle database.
For Oracle RAC running Oracle9i, you must have an Oracle9i cluster. For UNIX systems, that is HACMP, Serviceguard, Sun Cluster, or Veritas SF. For Windows and Linux systems, that is the Oracle Cluster Manager. To install Oracle RAC 10g, you must also install Oracle Clusterware.
You cannot install Oracle9i RAC on an Oracle Database 10g cluster. If you have an Oracle9i RAC cluster, you can add Oracle RAC 10g to the cluster. However, when you install Oracle Clusterware 10g, you can no longer install any new Oracle9i RAC databases.
Oracle recommends that you do not run different cluster software on the same servers unless they are certified to work together. However, if you are adding Oracle RAC to servers that are part of a cluster, either migrate to Oracle Clusterware or ensure that:
The clusterware you run is supported to run with Oracle RAC 11g release 2 (11.2).
You have installed the correct options for Oracle Clusterware and the other vendor clusterware to work together.
See Also:
Oracle Grid Infrastructure Installation Guide for more version compatibility information
The following list describes the tools and utilities for managing your Oracle Clusterware environment:
Oracle Enterprise Manager: Oracle Enterprise Manager has both the Database Control and Grid Control GUI interfaces for managing both single instance and Oracle RAC database environments. It also has GUI interfaces to manage Oracle Clusterware and all components configured in the Oracle grid infrastructure installation. Oracle recommends that you use Oracle Enterprise Manager to perform administrative tasks, including applying patches.
See Also:
Oracle Database 2 Day + Real Application Clusters Guide, Oracle Real Application Clusters Administration and Deployment Guide, and Oracle Enterprise Manager online documentation for more information about administering Oracle Clusterware with Oracle Enterprise Manager
Cluster Verification Utility (CVU): CVU is a command-line utility that you use to verify a range of cluster and Oracle RAC-specific components. Use CVU to verify shared storage devices, networking configurations, system requirements, Oracle Clusterware, and operating system groups and users.
Install and use CVU for both preinstallation and postinstallation checks of your cluster environment. CVU is especially useful during preinstallation and during installation of Oracle Clusterware and Oracle RAC components to ensure that your configuration meets the minimum installation requirements. Also use CVU to verify your configuration after completing administrative tasks, such as node additions and node deletions.
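As an illustration, the following CVU invocations check a cluster before and after an Oracle Clusterware installation; the node names node1 and node2 are placeholders for your own hosts.
# Verify system readiness on the named nodes before installing Oracle Clusterware
cluvfy stage -pre crsinst -n node1,node2 -verbose
# Verify the Oracle Clusterware installation on the named nodes after installation
cluvfy stage -post crsinst -n node1,node2 -verbose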
See Also:
Your platform-specific Oracle Clusterware and Oracle RAC installation guide for information about how to manually install CVU, and Appendix A, "Cluster Verification Utility Reference" for more information about using CVU
Server Control (SRVCTL): SRVCTL is a command-line interface that you can use to manage Oracle resources, such as databases, services, or listeners in the cluster.
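For example, you might use SRVCTL as follows to inspect and control Oracle-managed resources; the database name orcl and the node name node1 are hypothetical.
# Show the status of all instances of an Oracle RAC database (orcl is a placeholder)
srvctl status database -d orcl
# Show the node VIP configuration for a node (node1 is a placeholder)
srvctl config vip -n node1
# Stop and start the listener on a specific node
srvctl stop listener -n node1
srvctl start listener -n node1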
Note:
You can only manage server pools that have names prefixed with ora.* by using SRVCTL.
See Also:
Server Control Utility reference appendix in the Oracle Real Application Clusters Administration and Deployment Guide
Oracle Clusterware Control (CRSCTL): CRSCTL is a command-line tool that you can use to manage Oracle Clusterware. Use CRSCTL for general clusterware management and for managing individual resources.
Oracle Clusterware 11g release 2 (11.2) introduces cluster-aware commands with which you can perform operations from any node in the cluster on another node in the cluster, or on all nodes in the cluster, depending on the operation.
You can use crsctl commands to monitor cluster resources (crsctl status resource) and to monitor and manage servers and server pools other than server pools that have names prefixed with ora.*, such as crsctl status server, crsctl status serverpool, crsctl modify serverpool, and crsctl relocate server. You can also manage the Oracle Clusterware stack on the entire cluster (crsctl start | stop cluster), using the optional node-specific arguments -n or -all, and you can use CRSCTL to manage Oracle High Availability Services and Oracle Clusterware on individual nodes (crsctl start | stop | enable | disable | config crs).
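The following sketch shows a few of these commands as fully formed invocations; the server pool name sp_batch and the node name node3 are hypothetical.
# List all server pools and their current membership
crsctl status serverpool
# Change the size limits of a user-defined server pool (sp_batch is a placeholder)
crsctl modify serverpool sp_batch -attr "MIN_SIZE=1,MAX_SIZE=3"
# Move a server into that pool (node3 is a placeholder)
crsctl relocate server node3 -c sp_batch
# Stop and restart the Oracle Clusterware stack on all nodes in the cluster
crsctl stop cluster -all
crsctl start cluster -all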
See Also:
Chapter 2, "Administering Oracle Clusterware" for more information about using crsctl
commands to manage Oracle Clusterware
Appendix E, "CRSCTL Utility Reference" for a complete list of CRSCTL commands
Oracle Interface Configuration Tool (OIFCFG): OIFCFG is a command-line tool for both single-instance Oracle databases and Oracle RAC environments. Use OIFCFG to allocate and deallocate network interfaces to components. You can also use OIFCFG to direct components to use specific network interfaces and to retrieve component configuration information.
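For instance, OIFCFG can show and set the cluster-wide role of each network interface; the interface name eth1 and the subnet below are placeholders.
# List the network interfaces and subnets that Oracle Clusterware knows about
oifcfg getif
# Register an interface globally as the private cluster interconnect (eth1 and subnet are placeholders)
oifcfg setif -global eth1/192.168.10.0:cluster_interconnect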
Oracle Cluster Registry Configuration Tool (OCRCONFIG): OCRCONFIG is a command-line tool for OCR administration. You can also use the OCRCHECK and OCRDUMP utilities to troubleshoot configuration problems that affect OCR.
See Also:
Chapter 2, "Administering Oracle Clusterware" for more information about managing OCRCloning nodes is the preferred method of creating new clusters. The cloning process copies Oracle Clusterware software images to other nodes that have similar hardware and software. Use cloning to quickly create several clusters of the same configuration. Before using cloning, you must install an Oracle Clusterware home successfully on at least one node using the instructions in your platform-specific Oracle Clusterware installation guide.
For new installations, or if you must install on only one cluster, Oracle recommends that you use the automated and interactive installation methods, such as Oracle Universal Installer or the Provisioning Pack feature of Oracle Enterprise Manager. These methods perform installation checks to ensure a successful installation. To add or delete Oracle Clusterware from nodes in the cluster, use the addNode.sh and rootcrs.pl scripts.
See Also:
Chapter 3, "Cloning Oracle Clusterware to Create a Cluster" for step-by-step cloning procedures
Oracle Enterprise Manager online Help system for more information about the Provisioning Pack
Oracle Clusterware provides many high availability application programming interfaces called CLSCRS APIs that you use to enable Oracle Clusterware to manage applications or processes that run in a cluster. The CLSCRS APIs enable you to provide high availability for all of your applications. Oracle Clusterware with Oracle ASM enables you to create a consolidated pool of storage to support both single-instance Oracle databases and Oracle RAC databases.
See Also:
Appendix F, "Oracle Clusterware C Application Program Interfaces" for more detailed information about the CLSCRS APIs
You can define a VIP address for an application to enable users to access the application independently of the node in the cluster on which the application is running. This is referred to as the application VIP. You can define multiple application VIPs, with generally one application VIP defined for each application running. The application VIP is related to the application by making it dependent on the application resource defined by Oracle Clusterware.
To maintain high availability, Oracle Clusterware components can respond to status changes to restart applications and processes according to defined high availability rules. You can use the Oracle Clusterware high availability framework by registering your applications with Oracle Clusterware and configuring the clusterware to start, stop, or relocate your application processes. That is, you can make custom applications highly available by using Oracle Clusterware to create profiles that monitor, relocate, and restart your applications.
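As an illustration of registering a custom application with Oracle Clusterware, the following sketch creates an application VIP and a resource controlled by an action script; the VIP address, network number, script path, and resource name are all hypothetical.
# Create an application VIP on network 1 (address and name are placeholders); run as root
appvipcfg create -network=1 -ip=192.0.2.150 -vipname=myapp-vip -user=root
# Register the application as a cluster resource that depends on that VIP (paths and names are placeholders)
crsctl add resource myapp -type cluster_resource -attr "ACTION_SCRIPT=/opt/myapp/myapp.scr,CHECK_INTERVAL=30,START_DEPENDENCIES=hard(myapp-vip),STOP_DEPENDENCIES=hard(myapp-vip)"
# Start the application and its VIP under Oracle Clusterware control
crsctl start resource myapp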
Footnote Legend
Footnote 1: Oracle Clusterware supports up to 100 nodes in a cluster on configurations running Oracle Database 10g release 2 (10.2) and later releases.