Oracle® Clusterware Administration and Deployment Guide 11g Release 2 (11.2) Part Number E10717-03
This chapter describes how to administer Oracle Clusterware and includes the following topics:
The Cluster Time Synchronization Service (CTSS) is installed as part of Oracle Clusterware and runs in observer mode if it detects a time synchronization service or a time synchronization service configuration, valid or broken, on the system. If CTSS detects that there is no time synchronization service or time synchronization service configuration on any node in the cluster, then CTSS goes into active mode and takes over time management for the cluster.
When nodes join the cluster, if CTSS is in active mode, then it compares the time on those nodes to a reference clock located on one node in the cluster. If there is a discrepancy between the two times and the discrepancy is within a certain stepping limit, then CTSS steps the time of the nodes joining the cluster to synchronize them with the reference.
When Oracle Clusterware starts, if CTSS is running in active mode and the time discrepancy is outside the stepping limit, then CTSS generates an alert in the alert log, exits, and Oracle Clusterware startup fails. You must manually adjust the time of the nodes joining the cluster to synchronize with the cluster, after which Oracle Clusterware can start and CTSS can manage the time for the nodes.
Clocks on the nodes in the cluster become desynchronized with the reference clock periodically for various reasons. When this happens, CTSS either speeds up or slows down the clocks on the nodes until they synchronize with the reference clock. CTSS never runs time backward to synchronize with the reference clock. CTSS periodically writes alerts to the alert log containing information about how often it adjusts time on nodes to keep them synchronized with the reference clock.
To activate CTSS in your cluster, stop and deconfigure the vendor time synchronization service on all nodes in the cluster. CTSS detects when this happens and assumes time management for the cluster.
Similarly, if you want to deactivate CTSS in your cluster, then configure and start the vendor time synchronization service on all nodes in the cluster. CTSS detects this change and reverts to observer mode.
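You can check the current CTSS mode on a node at any time with CRSCTL. The following is a minimal sketch; the exact message text and number vary depending on whether CTSS is in observer or active mode:

$ crsctl check ctss
CRS-4700: The Cluster Time Synchronization Service is in Observer mode.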
See Also:
Oracle Grid Infrastructure Installation Guide for your platform for information about configuring NTP for Oracle Clusterware, or disabling it to use CTSS

This section contains the following topics:
To support the member-kill escalation to node-termination, you must configure and use an external mechanism capable of restarting a problem node without cooperation either from Oracle Clusterware or from the operating system running on that node. To provide this capability, Oracle Clusterware 11g release 2 (11.2) supports the Intelligent Management Platform Interface specification (IPMI), an industry-standard management protocol.
Normally, you configure node termination using IPMI during installation, when you are provided with the option of configuring IPMI from the Failure Isolation Support screen. If you do not configure IPMI during installation, then you can configure it after installation using crsctl
as described in this section.
To use IPMI for node termination, each cluster member node must be equipped with a Baseboard Management Controller (BMC) running firmware compatible with IPMI version 1.5, which supports IPMI over a local area network (LAN). During database operation, member-kill escalation is accomplished by communication from the evicting CSSD daemon to the victim node's BMC over LAN. The IPMI over LAN protocol is carried over an authenticated session protected by a user name and password, which are obtained from the administrator during installation.
However, CSSD requires direct communication with the local BMC during CSS startup to obtain the IP address of the BMC. This is accomplished using a BMC probe command (OSD), which communicates with the BMC through an IPMI driver, which must be installed on each cluster system. The following sub-sections describe the required operations and interfaces.
Ensure that each node in the cluster has a BMC running firmware compatible with IPMI version 1.5, which supports IPMI over a LAN.
IPMI requires a management network. The management network should be a secure, dedicated network, but it may also be a shared, multi-purpose network. In either case, the Ethernet port used by the BMC on each node must be connected to the network used for server management.
To enable fencing with IPMI, Oracle Clusterware must have access to the management network, so that it can communicate with the BMCs on cluster member nodes. You can provide that access in one of the following ways:
Use the same network connection both for the private network for interconnects and the management network
Use forwarding of network traffic for the management network to the BMC addresses from the private network or some other network accessible to Oracle Clusterware.
For flexibility in network management, you can configure the BMC for dynamic IP address assignment using DHCP. The design and implementation of fencing using IPMI supports this and assumes this to be the normal case. To support dynamic addressing of BMCs, the network used for IPMI-based management must have a DHCP server to assign the BMC's IP address.
Note:
Some server platforms put their network interfaces into a power saving mode when they are powered off, with the result that they may operate only at a lower link speed (for example, 100 MB instead of 1 GB). For these platforms, the network switch port to which the BMC is connected must be able to auto-negotiate down to the lower speed, or IPMI cannot function properly.

The IPMI driver must be permanently installed on each node, so that it is available when the node is started and so that Oracle Clusterware can communicate with the BMCs.
Typically, the IPMI hardware on a server node obtains its IP address using DHCP when the node starts. While other fencing configuration information is stored in the Oracle registries, dynamic address information must be obtained from the BMC when Oracle Clusterware starts.
Follow the instructions for installing the driver for your platform:
On Linux platforms, use the OpenIPMI
driver. This driver is included with the following Linux distributions:
Red Hat Enterprise Linux 4 AS Update 2
SUSE SLES9 SP3
For testing or configuration purposes, you can install the driver dynamically by manually loading the required modules. To install the driver manually, log in as root
and execute the following commands:
# /sbin/modprobe ipmi_msghandler
# /sbin/modprobe ipmi_si
# /sbin/modprobe ipmi_devintf
To confirm the modules are loaded, enter /sbin/lsmod | grep ipmi
. For example:
# /sbin/lsmod | grep ipmi
ipmi_devintf           12617  0
ipmi_si                33377  0
ipmi_msghandler        33701  2 ipmi_devintf,ipmi_si
You can install these modules automatically by running the modprobe
commands from an initialization script (for example, rc.local
) when the nodes are started.
Access the OpenIPMI
driver using the device special file /dev/ipmi0
. You can display the file using the command ls -l /dev/ipmi0
. For example:
# ls -l /dev/ipmi0
crw-------  1 root root 253, 0 Sep 23 06:29 /dev/ipmi0
Linux systems are typically configured to recognize devices dynamically (hotplug, udev
). However, if you do not see the device file using the ls command, then udevd
is not set up to create device files automatically, and you must create the device special file manually. To create the device file manually, you must first determine the device major number for the IPMI device. For example:
# grep ipmi /proc/devices
253 ipmidev
In the preceding example, the displayed "253" is the major number. Use this number with the mknod
command to create the device file. For example:
# mknod /dev/ipmi0 c 253 0x0
Note that the permissions on /dev/ipmi0 in the example allow the device to be opened only by root; otherwise, the system would be vulnerable.
On Windows systems, use the Microsoft IPMI driver (ipmidrv.sys
), which is included with Windows Server 2003 R2 (released December 2005), Windows Vista, and Windows Server 2008. The driver is included as part of the Hardware Management feature, which includes the driver and Windows Management Instrumentation (WMI). Note that the Microsoft driver is incompatible with other (third-party) drivers, which must be removed if they are installed.
Hardware Management is not installed and enabled by default on Windows Server 2003 or Vista systems. Install Hardware Management from the Management and Monitoring Tools section of the Add/Remove Windows Components Wizard. The following steps are taken from the Microsoft TechNet website (http://technet.microsoft.com/en-us/library/cc781099.aspx
):
In Control Panel, select Add or Remove Programs.
Select Add/Remove Windows Components.
Select (but do not check) Management and Monitoring Tools and click Details to start the detailed components selection window.
Select and check the Hardware Management option. If a BMC is detected using the SMBIOS Table Type 38h, then a dialog box is displayed instructing you to remove any third-party drivers.
If no third-party IPMI drivers are installed or they have been removed from the system, then click OK to continue.
Click OK to select the Hardware Management Component, and then click Next. Hardware Management (including WinRM) will be installed.
When the driver and hardware management have been installed, the BMC should be visible in the Windows Device Manager under System devices with the label: Microsoft Generic IPMI Compliant Device. If the BMC is not automatically detected by the plug and play system, then the device must be created manually. Run the following command from a command prompt:
Rundll32 ipmisetp.dll,AddTheDevice
An alternate driver (imbdrv.sys
) is available from Intel as part of "Intel Server Control," but this driver has not been tested with the Oracle Fencing implementation.
For IPMI-based fencing to function, the BMC hardware must be properly configured for remote control using a LAN. The requirements are the same for all platforms, but the configuration process is platform-dependent. You can configure the BMC from the firmware interface (BIOS), with a platform-specific management utility, or with one of many publicly available utilities that you can download from the Internet. Two of the available utilities are ipmitool
(http://ipmitool.sourceforge.net/
), which is available for Linux, and ipmiutil
(http://ipmiutil.sourceforge.net/
), which is available for both Linux and Windows.
The BMC configuration must provide the following:
Enable IPMI over LAN, which permits the BMC to be controlled over the network.
Establish a static IP address, or enable dynamic IP-addressing using DHCP.
Establish an administrator account with user name and password.
Configure the BMC for VLAN tags, if the BMC is on a tagged VLAN (a sample command appears after this list).
The specific tool you use to configure the BMC does not matter, provided the preceding requirements are met.
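For example, if your BMC is on a tagged VLAN, a command of the following form (shown here with ipmitool) sets the VLAN tag; the channel number 1 and VLAN ID 42 are placeholders for values from your own network configuration:

# ipmitool lan set 1 vlan id 42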
The following is an example of BMC configuration on Linux using ipmitool (version 1.8.6). Because the protection settings on the BMC device (/dev/ipmi0
) restrict access to root
, the following commands must be run with root privilege.
When the IPMI driver is loaded and the device special file is created, verify that ipmitool can communicate with the BMC through the driver by using the command ipmitool bmc info
. For example:
# ipmitool bmc info
Device ID    : 32
. . .
Enable IPMI over LAN:
Determine the IPMI channel number for LAN (typically 1) by printing channel attributes, starting from channel "1" until the output shows typical LAN attributes, such as the IP address. For example:
# ipmitool lan print 1
. . .
IP Address Source  : 0x01
IP Address         : 192.0.2.244
. . .
Turn on LAN access for the channel found. For example, where the LAN channel is 1:
# ipmitool lan set 1 access on
Configure IP address settings for IPMI
Use dynamic IP addressing (DHCP). This is the default assumed by the Oracle Universal Installer. Use of DHCP requires a DHCP-server on the subnet, but has the advantage of making the other addressing settings for you automatically. For example:
# ipmitool lan set 1 ipsrc dhcp
Use static IP address assignment. If the BMC shares a network connection with the operating system, then the IP address must be on the same subnet. You must set not only the IP address, but also the proper values for the netmask and default gateway for your network configuration. For example:
# ipmitool lan set 1 ipaddr 192.168.0.55
# ipmitool lan set 1 netmask 255.255.255.0
# ipmitool lan set 1 defgw ipaddr 192.168.0.1
Note:
The address that you specify in this example, 192.168.0.55, is associated only with the BMC and does not respond to normal pings.

Establish an administration account with user name and password.
Set BMC to require password authentication for ADMIN access over LAN. For example:
# ipmitool lan set 1 auth ADMIN MD5,PASSWORD
List the account slots on the BMC and identify an unused slot (a User ID with an empty User Name field; User ID 4 in this example):
# ipmitool channel getaccess 1
. . .
User ID              : 4
User Name            :
Fixed Name           : No
Access Available     : call-in / callback
Link Authentication  : disabled
IPMI Messaging       : disabled
Privilege Level      : NO ACCESS
. . .
Assign the administrator user name and password and enable messaging for the identified slot. (Note that for IPMI v1.5 the user name and password can be at most 16 characters). Finally, set the privilege level for that slot when accessed over LAN (channel 1) to ADMIN (level 4). For example, using the administrator IPMIadm
and password myIPMIpwd
:
# ipmitool user set name 4 IPMIadm
# ipmitool user set password 4 myIPMIpwd
# ipmitool user enable 4
# ipmitool channel setaccess 1 4 privilege=4
Verify the setup after you complete configuration using the command ipmitool lan print
. For example (note the configuration settings that you specified in the output):
# ipmitool lan print 1
Set in Progress         : Set Complete
Auth Type Support       : NONE MD2 MD5 PASSWORD
Auth Type Enable        : Callback : MD2 MD5
                        : User     : MD2 MD5
                        : Operator : MD2 MD5
                        : Admin    : MD5 PASSWORD
                        : OEM      : MD2 MD5
IP Address Source       : Static Address (or DHCP Address)
IP Address              : 192.168.0.55
Subnet Mask             : 255.255.255.0
MAC Address             : 00:14:22:23:fa:f9
SNMP Community String   : public
IP Header               : TTL=0x40 Flags=0x40 Precedence=…
Default Gateway IP      : 192.168.0.1
Default Gateway MAC     : 00:00:00:00:00:00
. . .

# ipmitool channel getaccess 1 4
Maximum User IDs        : 10
Enabled User IDs        : 2

User ID                 : 4
User Name               : IPMIadm
Fixed Name              : No
Access Available        : call-in / callback
Link Authentication     : enabled
IPMI Messaging          : enabled
Privilege Level         : ADMINISTRATOR
Verify that the BMC is accessible and controllable from a remote node in your cluster. For example, on node1, with user IPMIadm
, and password myIPMIpwd
, enter the following command:
# ipmitool -H node1 -U IPMIadm -P myIPMIpwd bmc info
This command should return the BMC information from the node.
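If you want to check every BMC in the cluster in one pass, a simple shell loop is one option. In this sketch, node1, node2, and node3 and the credentials are placeholders for your own BMC host names (or addresses), administrator user, and password:

# for h in node1 node2 node3; do echo "== ${h} =="; ipmitool -H ${h} -U IPMIadm -P myIPMIpwd bmc info; done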
The following is an example of BMC configuration on Windows Server 2003 R2 using ipmiutil (version 2.2.3). These operations should be executed in a command window while logged in with Administrator privileges.
When the IPMI driver is loaded and the IPMI device has been created, verify that ipmiutil can communicate with the BMC through the driver by using the command ipmiutil lan
. For example:
C:\bin> ipmiutil lan
ipmiutil ver 2.23
PEFilter parameters displayed . . .
pefconfig, GetLanEntry for channel 1 . . .
Lan Param(0) Set in progress: 00
. . .
Caution:
ipmiutil
sets certain LAN parameters only in the context of enabling IPMI over LAN; for example, by supplying the -l
option. This can have the undesired effect of resetting some previously established parameters to their default values if they are not supplied on the command line. For that reason, it is important to complete the following steps in order.

Establish remote LAN access with ADMIN privilege (-v 4
) and the IPMI administrator user name and password (ipmiutil locates the LAN channel on its own). For example:
C:\bin> ipmiutil lan -l -v 4 -u root -p password
Configure dynamic or static IP address settings for the BMC:
Use dynamic IP-addressing (DHCP). This is the default assumed by the Oracle Universal Installer. Use of DHCP requires a DHCP-server on the subnet, but has the advantage of making the other addressing settings for you automatically. For example:
C:\bin> ipmiutil lan -l -D
Use static IP address assignment. If the BMC shares a network connection with the operating system, then the IP address must be on the same subnet. You must set not only the IP address, but also the proper values for the default gateway and netmask for your network configuration. For example:
C:\bin> ipmiutil lan -l -I 192.0.2.244   (IP address)
C:\bin> ipmiutil lan -l -G 192.0.2.1     (gateway IP)
C:\bin> ipmiutil lan -l -S 255.255.248.0 (netmask)
Note:
The address that you specify in this example, 192.0.2.244, is associated only with the BMC and does not respond to normal pings.

Also, enabling IPMI over LAN with the -l
option resets the subnet mask to a value obtained from the operating system. Thus, when setting parameters one at a time as in the above example, the -S
option should be specified last.
Verify the setup after you complete configuration using the command ipmiutil lan, which displays the configuration. For example (note the configuration settings that you specified in the output):
C:\bin> ipmiutil lan
ipmiutil ver 2.23
pefconfig ver 2.23
-- BMC version 1.40, IPMI version 1.5
pefconfig, GetPefEntry ...
PEFilter(01): 04 h ? event ...
(ignore PEF entries)
...
pefconfig, GetLanEntry for channel 1 ...
Lan Param(0) Set in progress: 00
Lan Param(1) Auth type support: 17 : None MD2 MD5 Pswd
Lan Param(2) Auth type enables: 16 16 16 16 00
Lan Param(3) IP address: 192.0.2.244
Lan Param(4) IP addr src: 01 : Static
Lan Param(5) MAC addr: 00 11 43 d7 4f bd
Lan Param(6) Subnet mask: 255 255 248 0
Lan Param(7) IPv4 header: 40 40 10
GetLanEntry: completion code=cc
GetLanEntry(10), ret = -1
GetLanEntry: completion code=cc
GetLanEntry(11), ret = -1
Lan Param(12) Def gateway IP: 192.0.2.1
Lan Param(13) Def gateway MAC: 00 00 0c 07 ac dc
...
Get User Access(1): 0a 01 01 0f : No access ()
Get User Access(2): 0a 01 01 14 : IPMI, Admin (root)
Get User Access(3): 0a 01 01 0f : No access ()
pefconfig, completed successfully
Use the ipmitool
command to verify that the BMC is accessible and controllable from a remote node in your cluster. For example, where the IPMI administrator is root, and the node you are checking is node1:
# ipmitool -H node1 -U root -P password lan print 1
This command should return the LAN configuration from the node.
This section contains the following topics:
When IPMI is installed during Oracle Clusterware installation, Failure Isolation is configured in two phases. Before you start installation, you install and enable the IPMI driver in the server operating system, and configure the IPMI hardware on each node (IP address mode, admin credentials, and so on). When you start Oracle Clusterware installation, the installer collects the IPMI administrator user ID and password, and stores them in an Oracle Wallet in node-local storage, in the Oracle Local Registry.
For postinstallation configuration, install and enable the IPMI driver, and configure the BMC as described in the preceding section, "Configuring Server Hardware for IPMI".
After you have completed the server configuration, complete the following procedure on each cluster node to register IPMI administrators and passwords on the nodes.
Note:
If the BMC is configured to obtain its IP address using DHCP, it may be necessary to reset the BMC or restart the node to cause it to obtain an address.

Start Oracle Clusterware, which allows it to obtain the current IP address from the BMC. This confirms the ability of the clusterware to communicate with the BMC, which is necessary at startup.
If the clusterware was running before the BMC was configured, you can shut it down and restart it. Alternatively, you can use the BMC management utility to obtain the BMC's IP address and then use the cluster control utility crsctl
to store the BMC's IP address in the Oracle Registry by issuing a command similar to the following:
crsctl set css ipmiaddr 192.168.10.45
Use the cluster control utility crsctl to store the previously established user ID and password for the resident BMC in the Oracle Registry by issuing the crsctl set css ipmiadmin
command, and supplying the administrator and password at the prompt. For example:
crsctl set css ipmiadmin youradmin
IPMI BMC password: yourpassword
This command validates the supplied credentials and fails if another cluster node cannot access the local BMC using them.
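After the credentials are stored, you can confirm that the address recorded for the local BMC is the one you expect by using the crsctl get css ipmiaddr command described later in this section; the address in the output below is only an example:

$ crsctl get css ipmiaddr
192.168.10.45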
After you complete hardware and operating system configuration, and register the IPMI administrator on Oracle Clusterware, IPMI-based Failure Isolation should be fully functional.
To modify an existing IPMI-based fencing configuration (for example to change BMC passwords, or to configure IPMI for fencing in an existing installation), use the cluster configuration utility crsctl
with the BMC configuration tool appropriate to your platform. For example, to change the administrator password for the BMC, you must first modify the BMC configuration as described in "Configuring Server Hardware for IPMI", and then use crsctl
to inform the clusterware that the password has changed.
The configuration data needed by Oracle Clusterware for IPMI is kept in an Oracle Wallet in the Oracle Registry. Because the configuration information is kept in a secure store, it must be written by the Oracle Clusterware installation owner account (the Grid user), so you must log in as that installation user.
Use the following procedure to modify an existing IPMI configuration:
Enter the command crsctl set css ipmiadmin
admin_name
. For example, with the user IPMIadm
:
$ crsctl set css ipmiadmin IPMIadm
Provide the administrator password. Oracle Clusterware stores the administrator name and password for the local BMC in the Oracle Local Registry.
After storing the new credentials, Oracle Clusterware can retrieve the new credentials and distribute them as required.
Enter the command crsctl set css ipmiaddr
bmc_ip_address
. For example:
$ crsctl set css ipmiaddr 192.0.2.244
This command stores the new IP address of the local BMC in the Oracle Registry. After storing the BMC IP address, Oracle Clusterware can retrieve the new configuration and distribute it as required.
Enter the command crsctl get css ipmiaddr
. For example:
$ crsctl get css ipmiaddr
This command retrieves the IP address for the local BMC from the Oracle Registry and displays it on the console.
This section contains the following topics:
With Oracle Clusterware 11g release 2 (11.2) and later, resources are contained in logical groups of servers called server pools. Resources are hosted on a shared infrastructure and are isolated with respect to their resource consumption by policies, behaving as if they were deployed in a single-system environment.
You can choose to manage resources dynamically using server pools to provide policy-based management of resources in the cluster, or you can choose to manage resources using the traditional method of physically assigning resources to run on particular nodes.
Policy-based management:
Enables dynamic capacity assignment when needed to provide server capacity in accordance with the priorities you set with policies
Enables allocation of resources by importance, so that applications obtain the required minimum resources, whenever possible, and so that lower priority applications do not take resources from more important applications
Ensures isolation where necessary, so that you can provide dedicated servers in a cluster for applications and databases
Applications and databases running in server pools do not share resources. Because of this, server pools isolate resources where necessary, but enable dynamic capacity assignments as required. Together with role-separated management, this capability addresses the needs of organizations that have standardized cluster environments, but allow multiple administrator groups to share the common cluster infrastructure.
See Also:
Appendix B, "Oracle Clusterware Resource Reference" for more information about resource attributes

Oracle Clusterware efficiently allocates different resources in the cluster. You need only provide the minimum and maximum number of nodes on which a resource can run, combined with a level of importance for each resource that is running on these nodes.
Oracle Clusterware assigns each server a set of attributes as soon as you add a server to a cluster. If you remove the server from the cluster, then Oracle Clusterware revokes those settings. Table 2-1 lists and describes server attributes.
Table 2-1 Server Attributes
Attribute | Description |
---|---|
NAME |
The node name of the server. A server name can contain any platform-supported characters except the exclamation point (!) and the tilde (~). A server name cannot begin with a period, or with ora. This attribute is required. |
ACTIVE_POOLS |
A space-delimited list of the names of the server pools to which a server belongs. Oracle Clusterware manages this list, automatically. |
STATE |
A server can be in one of the following states:

ONLINE: The server is a member of the cluster and is available for resource placement.

OFFLINE: The server is not currently a member of the cluster and, consequently, is not available for resource placement.

JOINING: When a server joins a cluster, Oracle Clusterware processes it to ensure that it is valid for resource placement. Oracle Clusterware also checks the state of resources configured to run on the server. Once the validity of the server and the state of the resources are determined, the server transitions out of this state.

LEAVING: When a planned shutdown for a server begins, the state of the server transitions to LEAVING.

VISIBLE: Servers that have Oracle Clusterware running, but not the Cluster Ready Services daemon (crsd), are put into the VISIBLE state.

RECONFIGURING: When servers move between server pools due to server pool reconfiguration, a server is placed into this state if resources that ran on it in the current server pool must be stopped and relocated. This happens because resources running on the server may not be configured to run in the server pool to which the server is moving. As soon as the resources are successfully relocated, the server is put back into the ONLINE state.

Use the crsctl status server command to obtain server information. |
STATE_DETAILS |
This is a read-only attribute that Oracle Clusterware manages. The attribute provides additional details about the state of a server. Possible additional details about a server state are: Server state:
Server state:
Server state:
|
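To display these attributes for a server, query them with CRSCTL. The following is a hedged example; the server name and the attribute values shown are illustrative only:

$ crsctl status server node1 -f
NAME=node1
STATE=ONLINE
ACTIVE_POOLS=Generic
STATE_DETAILS=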
This section contains the following topics:
Server pools divide the cluster into groups of servers hosting the same or similar resources. They distribute a uniform workload (a set of Oracle Clusterware resources) over several servers in the cluster. For example, you can restrict Oracle databases to run only in a particular server pool. When you enable role-separated management, you can explicitly grant permission to operating system users to change attributes of certain server pools.

Server pools:
Logically divide the cluster
Are always exclusive, meaning that one server can only reside in one particular server pool at a certain point in time
Each server pool has three attributes that are assigned when the server pool is created:
MIN_SIZE
: The minimum number of servers the server pool should contain. If the number of servers in a server pool is below the value of this attribute, then Oracle Clusterware automatically moves servers from elsewhere into the server pool until the number of servers reaches the attribute value.
MAX_SIZE
: The maximum number of servers the server pool should contain.
IMPORTANCE
: A number from 0 to 1000 (0 being least important) that ranks a server pool among all other server pools in a cluster.
When Oracle Clusterware is installed, two server pools are created automatically: Generic and Free. All servers in a new installation are assigned to the Free server pool, initially. Servers move from Free to newly defined server pools automatically. When you upgrade Oracle Clusterware, all nodes are assigned to the Generic server pool, to ensure compatibility with database releases before Oracle Database 11g release 2 (11.2).
The Free server pool contains servers that are not assigned to any other server pools. The attributes of the Free server pool are restricted, as follows:
SERVER_NAMES
, MIN_SIZE
, and MAX_SIZE
cannot be edited by the user
IMPORTANCE
and ACL
can be edited by the user
The Generic server pool stores pre-11g release 2 (11.2) databases and administrator-managed databases that have fixed configurations. Additionally, the Generic server pool contains servers that match either of the following:
Servers that you specified in the HOSTING_MEMBERS
attribute of all resources of the application
resource type
See Also:
"HOSTING_MEMBERS" for more information about this attributeServers with names you specified in the SERVER_NAMES
attribute of the server pools that list the Generic server pool as a parent server pool
The Generic server pool's attributes are restricted, as follows:
No one can modify configuration attributes of the Generic server pool (all attributes are read-only)
When you specify a server name in the HOSTING_MEMBERS
attribute, Oracle Clusterware only allows it if the server is:
Online and exists in the Generic server pool
Online and exists in the Free server pool, in which case Oracle Clusterware moves the server into the Generic server pool
Online and exists in any other server pool and the client is either a cluster administrator or is allowed to use the server pool's servers, in which case, the server is moved into the Generic server pool
Offline and the client is a cluster administrator
When you register a child server pool with the Generic server pool, Oracle Clusterware only allows it if the server names pass the same requirements as previously specified for the resources.
Servers are initially considered for assignment into the Generic server pool at cluster startup time or when a server is added to the cluster, and only after that to other server pools.
If the number of servers in a server pool falls below the value of the MIN_SIZE
attribute for the server pool (such as when a server fails), based on values you set for the MIN_SIZE
and IMPORTANCE
attributes for all server pools, Oracle Clusterware can move servers from other server pools into the server pool whose number of servers has fallen below the value for MIN_SIZE
. Oracle Clusterware selects servers to move into the deficient server pool from other server pools that meet the following criteria:
For server pools that have a lower IMPORTANCE
value than the deficient server pool, Oracle Clusterware can take servers from those server pools even if it means that the number of servers falls below the value for the MIN_SIZE
attribute.
For server pools with equal or greater IMPORTANCE
, Oracle Clusterware only takes servers from those server pools if the number of servers in a server pool is greater than the value of its MIN_SIZE
attribute.
Table 2-2 lists and describes all server pool attributes.
Table 2-2 Server Pool Attributes
Attribute | Values and Format | Description |
---|---|---|
ACL |
String in the following format: owner:user:rwx,pgrp:group:rwx,other::r-- |
Defines the owner of the server pool and which privileges are granted to various operating system users and groups. The server pool owner is an operating system user, and the ACL defines which privileges that user is granted. The value of this optional attribute is populated at the time a server pool is created based on the identity of the process creating the server pool, unless explicitly overridden. The value can subsequently be changed, if such a change is allowed based on the existing privileges of the server pool. In the string, owner is the operating system user of the server pool owner followed by that user's privileges, pgrp is the primary operating system group of the owner followed by the privileges of its members, and other is followed by the privileges granted to all other users. By default, the identity of the client that creates the server pool is the owner, with default privileges equivalent to user:username:rwx and group:group_name:rwx. |
ACTIVE_SERVERS |
A string of server names in the following format: server_name1 server_name2 ... |
Oracle Clusterware automatically manages this attribute, which contains the space-delimited list of servers that are currently assigned to a server pool. |
EXCLUSIVE_POOLS |
String |
This optional attribute indicates if servers assigned to this server pool are shared with other server pools. A server pool can explicitly state that it is exclusive of any other server pool that has the same value for this attribute. Two or more server pools are mutually exclusive when the sets of servers assigned to them do not have a single server in common. For example, server pools A and B must be exclusive if they both set the value of this attribute to the same string. Top-level server pools are mutually exclusive, by default. |
IMPORTANCE |
Any integer from 0 to 1000 |
Relative importance of the server pool, with 0 denoting the lowest importance and 1000 denoting the highest. The default value is 0. |
MAX_SIZE |
Any nonnegative integer or -1 (no limit) |
The maximum number of servers a server pool can contain. This attribute is optional and is set to -1 (no limit) by default. Note: A value of -1 for this attribute spans the entire cluster. |
MIN_SIZE |
Any nonnegative integer |
The minimum size of a server pool. If the number of servers contained in a server pool is below the number you specify in this attribute, then Oracle Clusterware automatically moves servers from other pools into this one until that number is met. Note: The value of this optional attribute does not set a hard limit. It governs the priority for server assignment whenever the cluster is reconfigured. The default value is 0. |
NAME |
String |
The name of the server pool, which you must specify when you create the server pool. Server pool names must be unique within the domain of names of user-created entities, such as resources, types, and servers. A server pool name can contain any platform-supported characters except the exclamation point (!) and the tilde (~). A server pool name cannot begin with a period or with ora. This attribute is required. |
PARENT_POOLS |
A string of space-delimited server pool names in the following format: sp1 sp2 ... |
Use of this attribute makes it possible to create nested server pools. Server pools listed in this attribute are referred to as parent server pools. A server pool included in a parent server pool is referred to as a child server pool. |
SERVER_NAMES |
A string of space-delimited server names in the following format: server1 server2 ... |
A list of candidate node names that may be associated with a server pool. If this optional attribute is empty, Oracle Clusterware assumes that any server may be assigned to any server pool, to the extent allowed by the values of other attributes. The server names identified as candidate node names are not validated to confirm that they are currently active cluster members. Cluster administrators can use this attribute to define servers as candidates that have not yet been added to the cluster. |
You manage server pools that are managing Oracle RAC databases with the Server Control (SRVCTL) utility. Use the Oracle Clusterware Control (CRSCTL) utility to manage all other server pools. Only cluster administrators have permission to create top-level server pools.
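For example, a cluster administrator can create a server pool that is not managing an Oracle RAC database with a CRSCTL command of the following form; the pool name testpool and the attribute values are examples only, and crsctl status serverpool displays the result:

$ crsctl add serverpool testpool -attr "MIN_SIZE=1,MAX_SIZE=4,IMPORTANCE=5"
$ crsctl status serverpool testpool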
Oracle Clusterware assigns new servers to server pools in the following order:
Generic server pool
User-created server pool
Free server pool
When a server joins a cluster, several things occur.
Consider the server pools configured in Table 2-3:
Table 2-3 Sample Server Pool Attributes Configuration
NAME | IMPORTANCE | MIN_SIZE | MAX_SIZE | PARENT_POOLS | EXCLUSIVE_POOLS |
---|---|---|---|---|---|
sp1 |
1 |
1 |
10 |
|
|
sp2 |
3 |
1 |
6 |
|
|
sp3 |
2 |
1 |
2 |
|
|
sp2_1 |
2 |
1 |
5 |
sp2 |
S123 |
sp2_2 |
1 |
1 |
5 |
sp2 |
S123 |
For example, assume that there are no servers in a cluster; all server pools are empty.
When a server, named server1
, joins the cluster:
Server-to-pool assignment commences.
Oracle Clusterware processes only the top-level server pools (those that have no parent server pools) first. In this example, the top-level server pools are sp1
, sp2
, and sp3
.
Oracle Clusterware lists the server pools in order of IMPORTANCE
, as follows: sp2
, sp3
, sp1
.
Oracle Clusterware assigns server1
to sp2
because sp2
has the highest IMPORTANCE
value and its MIN_SIZE
value has not yet been met.
Oracle Clusterware processes the remaining two server pools, sp2_1
and sp2_2
. The sizes of both server pools are below the value of the MIN_SIZE
attribute (both server pools are empty and have MIN_SIZE
values of 1
).
Oracle Clusterware lists the two remaining pools in order of IMPORTANCE
, as follows: sp2_1
, sp2_2
.
Oracle Clusterware assigns server1
to sp2_1
but cannot assign server1
to sp2_2
because sp2_1
is configured to be exclusive with sp2_2
.
After processing, the cluster configuration appears as follows: sp1 and sp3 remain empty, server1 is assigned to sp2 and to its child server pool sp2_1, and sp2_2 remains empty.

This section contains the following topics:
Role-separated management is a feature you can implement that enables multiple resources to share the same cluster and hardware resources by setting permissions on server pools or resources, and then using access control lists (ACLs) to provide access. By default, this feature is not implemented during installation.
Role-separated management can be implemented in two ways:
Vertical implementation: Access permissions to server pools or resources are granted by assigning ownership of them to different users for each layer in the stack, and using ACLs assigned to those users. Oracle ASM provides an even more granular approach using groups. Careful planning is required to enable overlapping tasks.
Horizontal implementation: Access permissions for resources are granted using ACLs assigned to server pools and policy-managed databases or applications.
Role-separated management in Oracle Clusterware depends on a cluster administrator. The set of users that are cluster administrators is managed within Oracle Clusterware, as opposed to being an operating system group. By default, after an Oracle grid infrastructure for a cluster installation or after an upgrade, all users are cluster administrators (denoted by the asterisk (*
) in the list of cluster administrators). However, the user that installed Oracle Clusterware in the Grid infrastructure home (Grid home) and root
are permanent cluster administrators, and only these two users can add or remove users from the group. Additionally, only a permanent cluster administrator can remove the asterisk (*
) value from the cluster administrator group, changing the group to enable role-separated management. When you enable this feature, you can limit the cluster administrator group to only those users added by the permanent cluster administrators.
If the cluster is shared by various users, then the cluster administrator can restrict access to certain server pools and, consequently, to certain hardware resources to specific users in the cluster. The permissions are stored for each server pool in the ACL
attribute, described in Table 2-2.
Use the following commands to manage cluster administrators in the cluster:
To query the list of users that are cluster administrators:
$ crsctl query crs administrator
To enable role-separated management, you must remove the *
value from the list of cluster administrators, as either the user who installed Oracle Clusterware or root
, as follows:
# crsctl delete crs administrator -u "*"
The asterisk (*
) must be enclosed in double quotation marks (""
).
To add specific users to the group of cluster administrators:
# crsctl add crs administrator -u user_name
To make all users cluster administrators, enter -u "*"
.
To remove specific users from the group of cluster administrators:
# crsctl delete crs administrator -u user_name
Use the crsctl
command crsctl setperm
to configure horizontal role separation using ACLs that are assigned to server pools, resources, or both. The crsctl
command is located in the path Grid_home
/bin
, where Grid_home
is the Oracle grid infrastructure for a cluster home.
The command uses the following syntax, where aclstring is the access control list (ACL) string:
crsctl setperm {resource|type|serverpool} name {-u aclstring|-x aclstring|-o user_name|-g group_name}
The flag options are:
-u
Update the entity ACL
-x
Delete the entity ACL
-o
Change the entity owner
-g
Change the entity primary group
The ACL strings are:
{ user:user_name[:readPermwritePermexecPerm] | group:group_name[:readPermwritePermexecPerm] | other[::readPermwritePermexecPerm] }
where:
user
designates the user ACL (access permissions granted to the designated user)
group
designates the group ACL (permissions granted to the designated group members)
other
designates the other ACL (access granted to users or groups not granted particular access permissions)
readperm
Location of the read permission (r grants and - forbids)
writeperm
Location of the write permission (w grants and - forbids)
execperm
Location of the execute permission (x grants and - forbids)
For example, to set permissions on a server pool called psft
for the group personnel, where the administrative user has read/write/execute privileges, the personnel group has read/write privileges, and users outside of the group are granted no access, enter the following command as the root
user:
# crsctl setperm serverpool psft -u "user:personadmin:rwx,group:personnel:rw-,other::---"
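You can then display the resulting owner, group, and ACL string for the server pool with the crsctl getperm command; the pool name is the same example name used above:

# crsctl getperm serverpool psft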
This section includes the following topics:
About Voting Disks, Oracle Cluster Registry, and Oracle Local Registry
Managing the Oracle Cluster Registry and Oracle Local Registries
Oracle Clusterware includes three components: voting disks, Oracle Cluster Registry (OCR), and Oracle Local Registry (OLR).
Voting disks manage information about node membership. Each voting disk must be accessible by all nodes in the cluster for nodes to be members of the cluster
OCR manages Oracle Clusterware and Oracle RAC database configuration information
OLR resides on every node in the cluster and manages Oracle Clusterware configuration information for each particular node
You can store voting disks and the OCR on Oracle Automatic Storage Management (Oracle ASM), or a certified cluster file system.
Oracle Universal Installer for Oracle Clusterware 11g release 2 (11.2) does not support the use of raw or block devices. However, if you upgrade from a previous Oracle Clusterware release, then you can continue to use raw or block devices. Oracle recommends that you use Oracle ASM to store voting disks and OCR.
Oracle recommends that you select the option to configure multiple voting disks during Oracle Clusterware installation to improve availability. If necessary, you can dynamically add or replace voting disks after you complete the Oracle Clusterware installation process without stopping the cluster.
Oracle recommends that you select the option to configure multiple OCRs during Oracle Clusterware installation to improve availability. If necessary, you can dynamically add or replace OCRs after you complete the Oracle Clusterware installation process without stopping the cluster.
This section includes the following topics for managing voting disks in your cluster:
Notes:
Voting disk backups are stored in OCR.
Voting disk management requires a valid and working OCR. Before you add, delete, replace, or restore voting disks, run the ocrcheck
command as root
. If OCR is not available or it is corrupt, then you must restore OCR as described in "Restoring Oracle Cluster Registry". A sample ocrcheck run appears after these notes.
If you upgrade from a previous version of Oracle Clusterware to 11g release 2 (11.2) and you want to store voting disks in an Oracle ASM disk group, then you must set the ASM compatibility attribute to 11.2.0.0
.
See Also:
Oracle Database Storage Administrator's Guide for information about setting Oracle ASM compatibility attributes
Oracle Database Administrator's Guide for information about creating server parameter files
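As the notes above direct, run ocrcheck as root before you modify voting disks. The following is a hedged illustration of what healthy output can look like; the sizes, ID, and disk group name are examples only:

# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2752
         Available space (kbytes) :     259368
         ID                       : 1918913332
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
         Cluster registry integrity check succeeded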
Oracle ASM manages voting disks differently from other files that it stores. When you place voting disks on disks in an Oracle ASM disk group, Oracle Clusterware records exactly where they are located. If Oracle ASM fails, then Cluster Synchronization Services (CSS) can still access the voting disks.
If you choose to store your voting disks in Oracle ASM, then Oracle ASM stores all the voting disks for the cluster.
The number of voting files you can store in a particular Oracle ASM disk group depends upon the redundancy of the disk group.
External redundancy: A disk group with external redundancy can store only one voting disk
Normal redundancy: A disk group with normal redundancy can store up to three voting disks
High redundancy: A disk group with high redundancy can store up to five voting disks
By default, Oracle ASM puts each voting disk in its own failure group within the disk group. A failure group is a subset of the disks in a disk group, which could fail at the same time because they share hardware. The failure of common hardware must be tolerated. For example, four drives that are in a single removable tray of a large JBOD (Just a Bunch of Disks) array are in the same failure group because the tray could be removed, making all four drives fail at the same time.
Conversely, drives in the same cabinet can be in multiple failure groups if the cabinet has redundant power and cooling so that it is not necessary to protect against failure of the entire cabinet. However, Oracle ASM mirroring is not intended to protect against a fire in the computer room that destroys the entire cabinet. If voting disks are stored on Oracle ASM with normal or high redundancy, and the storage hardware in one failure group suffers a failure, then if there is another disk available in a disk group in an unaffected failure group, Oracle ASM recovers the voting disk in the unaffected failure group.
A normal redundancy disk group must contain at least two failure groups but if you are storing your voting disks on Oracle ASM, then a normal redundancy disk group must contain at least three failure groups. A high redundancy disk group must contain at least three failure groups. However, Oracle recommends using several failure groups. A small number of failure groups, or failure groups of uneven capacity, can create allocation problems that prevent full use of all of the available storage.
You must specify enough failure groups in each disk group to support the redundancy type for that disk group.
See Also:
Oracle Database Storage Administrator's Guide for more information about disk group redundancy and failure groups
"Adding, Deleting, or Migrating Voting Disks" for information about migrating voting disks
In Oracle Clusterware 11g release 2 (11.2), you no longer have to back up the voting disk. The voting disk data is automatically backed up in OCR as part of any configuration change and is automatically restored to any voting disk added. If all voting disks are corrupted, however, you can restore them as described in "Restoring Voting Disks".
If you have multiple voting disks, then you can remove the voting disks and add them back into your environment using the following commands, where FUID is the File Universal Identifier of the voting disk and path_to_voting_disk
is the directory path to the voting disk:
$ crsctl delete css votedisk FUID
$ crsctl add css votedisk path_to_voting_disk
Run the crsctl query css votedisk
command to obtain the FUIDs of the voting disks in the cluster.
If all of the voting disks are corrupted, then you can restore them, as follows:
Restore OCR as described in "Restoring Oracle Cluster Registry", if necessary.
This step is necessary only if OCR is also corrupted or otherwise unavailable, such as if OCR is on Oracle ASM and the disk group is no longer available.
See Also:
Oracle Database Storage Administrator's Guide for more information about managing Oracle ASM disk groupsRun the following command as root
from only one node to start the Oracle Clusterware stack in exclusive mode, which does not require voting files to be present or usable:
# crsctl start crs -excl
Run the following command to retrieve the list of voting files currently defined:
$ crsctl query css votedisk
This list may be empty if all voting disks are corrupted, or may have entries that are marked as status 3
or OFF
.
Depending on where you store your voting files, do one of the following:
If the voting disks are stored in Oracle ASM, then run the following command to migrate the voting disks to the Oracle ASM disk group you specify:
crsctl replace votedisk +asm_disk_group
The Oracle ASM disk group to which you migrate the voting files must exist in Oracle ASM. You can use this command whether the voting disks were stored in Oracle ASM or some other storage device.
If voting disks are OFFLINE
and are not stored in Oracle ASM (as indicated by the absence of a name in the last column of the string of information returned by the crsctl query css votedisk
command), then run the following command using the File Universal Identifier (FUID) obtained in the previous step:
$ crsctl delete css votedisk FUID
Add a voting disk, as follows:
$ crsctl add css votedisk path_to_voting_disk
Stop the Oracle Clusterware stack as root
:
# crsctl stop crs
Restart the Oracle Clusterware stack in normal mode as root
:
# crsctl start crs
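After the stack restarts, it is reasonable to confirm that the restored voting disks are online by running crsctl query css votedisk again, which lists each voting disk with its state and FUID:

$ crsctl query css votedisk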
You can add, remove, and migrate voting disks after you install Oracle Clusterware. Note that the commands you use to do this are different, depending on whether your voting disks are located on Oracle ASM, or are located on another storage option.
Use the following commands to modify voting disks, depending on your storage option:
To display the voting disk FUID and file path of each current voting disk, run the following command:
$ crsctl query css votedisk
 1.  ONLINE  296641fd201f4f3fbf3452156d3b5881 (/ocfs2/staih09_vd3) []
This command returns a disk sequence number, the status of the disk, the FUID, and the path of the disk or the name of the Oracle ASM disk group on which the disk is stored.
To replace a voting disk with an Oracle ASM disk group, run the following command:
$ crsctl replace votedisk +asm_disk_group
To add one or more voting disks to a storage location other than Oracle ASM storage, run the following command, replacing the path_to_voting_disk
variable with one or more space-delimited, complete paths to the voting disks you want to add:
$ crsctl add css votedisk path_to_voting_disk [...]
To replace voting disk A with voting disk B, you must add voting disk B, and then delete voting disk A. To add a new disk and remove the existing disk, run the following command, replacing the path_to_voting_diskB
variable with the fully qualified path name of voting disk B:
$ crsctl add css votedisk path_to_voting_diskB -purge
The -purge
option deletes existing voting disks.
Use the crsctl replace votedisk
command to replace a voting disk with an Oracle ASM disk group.
To remove a voting disk, run the following command, specifying one or more space-delimited, voting disk FUIDs or comma-delimited directory paths to the voting disks you want to remove:
$ crsctl delete css votedisk {FUID | path_to_voting_disk[...]}
To migrate voting disks from a storage device other than Oracle ASM to Oracle ASM, or to migrate from Oracle ASM to an alternative storage device, specify the Oracle ASM disk group name or path to the non-Oracle ASM storage device in the following command:
$ crsctl replace votedisk {+asm_disk_group | path_to_voting_disk}
You can run this command on any node in the cluster.
Note:
If the cluster is down and cannot restart due to lost voting disks, then you must start CSS in exclusive mode to replace the voting disks by entering the following command:

$ crsctl start crs -excl
After modifying the voting disk, verify the voting disk location, as follows:
$ crsctl query css votedisk
See Also:
Appendix E, "CRSCTL Utility Reference" for more information about CRSCTL commandsThis section describes how to manage the OCR and the Oracle Local Registry (OLR) with the following utilities: OCRCONFIG, OCRDUMP, and OCRCHECK.
The OCR contains information about all Oracle resources in the cluster.
The OLR is a registry similar to the OCR located on each node in a cluster, but contains information specific to each node. It contains manageability information about Oracle Clusterware, including dependencies between various services. Oracle High Availability Services uses this information. The OLR is located on local storage on each node in a cluster. Its default location is in the path Grid_home
/cdata/
host_name
.olr
, where Grid_home
is the Oracle grid infrastructure home, and host_name
is the host name of the node.
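OCRCHECK accepts a -local option for checking the OLR on the local node. The following is a minimal sketch; the output is abbreviated and the file name reflects the default OLR location with a hypothetical Grid home and host name:

# ocrcheck -local
Status of Oracle Local Registry is as follows :
         Version                  :          3
         Device/File Name         : /u01/app/11.2.0/grid/cdata/node1.olr
                                    Device/File integrity check succeeded
         Local registry integrity check succeeded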
This section describes how to administer the OCR in the following topics:
Migrating Oracle Cluster Registry to Oracle Automatic Storage Management
Adding, Replacing, Repairing, and Removing Oracle Cluster Registry Locations
Administering Oracle Cluster Registry with Oracle Cluster Registry Export and Import Commands
Upgrading and Downgrading the Oracle Cluster Registry Configuration
See Also:
"About OCRCONFIG" for information about the OCRCONFIG utility, and "Oracle Cluster Registry Troubleshooting" for information about the OCRDUMP and OCRCHECK utilitiesTo improve Oracle Clusterware storage manageability, OCR is configured, by default, to use Oracle ASM in Oracle Database 11g release 2 (11.2). With the Oracle Clusterware storage residing in an Oracle ASM disk group, you can manage both database and clusterware storage using Oracle Enterprise Manager.
However, if you upgrade from a previous version of Oracle Clusterware, you can migrate OCR to reside on Oracle ASM, and take advantage of the improvements in managing Oracle Clusterware storage.
Note:
If you upgrade from a previous version of Oracle Clusterware to 11g release 2 (11.2) and you want to store OCR in an Oracle ASM disk group, then you must set the ASM compatibility attribute to 11.2.0.0.
See Also:
Oracle Database Storage Administrator's Guide for information about setting Oracle ASM compatibility attributes

To migrate OCR to Oracle ASM using OCRCONFIG:
Ensure that Oracle Clusterware upgrade to 11g release 2 (11.2) is complete. Run the following command to verify the current running version:
$ crsctl query crs activeversion
Use the Oracle ASM Configuration Assistant (ASMCA) to configure and start Oracle ASM on all nodes in the cluster.
See Also:
Oracle Database Storage Administrator's Guide for more information about using ASMCA

Use ASMCA to create an Oracle ASM disk group that is at least the same size as the existing OCR and has at least normal redundancy.
Notes:
You can store the OCR in an Oracle ASM disk group that has external redundancy.
If a disk fails in the disk group, or if you bring down Oracle ASM, then you can lose the OCR because it depends on Oracle ASM for I/O.
To avoid this issue, add another OCR to a different disk group. Alternatively, you can store OCR on a block device, or on a shared file system using OCRCONFIG to enable OCR redundancy.
Oracle does not support storing the OCR on different storage types simultaneously, such as storing OCR on both Oracle ASM and a shared file system, except during a migration.
If Oracle ASM fails, then OCR is not accessible on the node on which Oracle ASM failed, but the cluster remains operational. The entire cluster fails only if the Oracle ASM instance on the OCR master node fails, the majority of the OCR locations are in Oracle ASM, and an OCR read or write access occurs; in that case, the crsd process stops and the node becomes inoperative.
Ensure that Oracle ASM disk groups that you create are mounted on all of the nodes in the cluster.
See Also:
Oracle Grid Infrastructure Installation Guide for more detailed sizing information

To add OCR to an Oracle ASM disk group, ensure that the Oracle Clusterware stack is running and run the following command as root
:
# ocrconfig -add +new_disk_group
You can run this command more than once if you add multiple OCR locations. You can have up to five OCR locations. However, each successive run must point to a different disk group.
To remove storage configurations no longer in use, run the following command as root
:
# ocrconfig -delete old_storage_location
Run this command for every configured OCR.
The following example shows how to migrate two OCRs to Oracle ASM using OCRCONFIG.
# ocrconfig -add +new_disk_group
# ocrconfig -delete /dev/raw/raw2
# ocrconfig -delete /dev/raw/raw1
Note:
OCR inherits the redundancy of the disk group. If you want high redundancy for OCR, you must configure the disk group with high redundancy when you create it.To migrate OCR from Oracle ASM to another storage type:
Ensure that Oracle Clusterware upgrade to 11g release 2 (11.2) is complete. Run the following command to verify the current running version:
$ crsctl query crs activeversion
Create a file with the following permissions: root, oinstall, 640.
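For example, assuming a hypothetical cluster file system directory /cfs/ocr, you might create the file as root; the path is illustrative only:
# touch /cfs/ocr/new_ocr_file
# chown root:oinstall /cfs/ocr/new_ocr_file
# chmod 640 /cfs/ocr/new_ocr_file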
Note:
Create a mirror of the primary storage location to eliminate a single point of failure for OCR.
Ensure there is at least 280 MB of space on the mount partition.
Ensure that the file you created is visible from all nodes in the cluster.
To add the file as an OCR location, ensure that the Oracle Clusterware stack is running and run the following command as root
:
# ocrconfig -add new_file_location
You can run this command more than once if you add more than one OCR location. Each successive run of this command must point to a different file location.
To remove storage configurations no longer in use, run the following command as root
:
# ocrconfig -delete unused_storage_location
You can run this command more than once if there is more than one OCR location configured.
The following example shows how to migrate OCR from Oracle ASM to block devices using OCRCONFIG. For OCRs not stored on Oracle ASM, Oracle recommends that you mirror OCR on different devices.
# ocrconfig -add /dev/sdd1
# ocrconfig -add /dev/sde1
# ocrconfig -add /dev/sdf1
# ocrconfig -delete +unused_disk_group
The Oracle installation process for Oracle Clusterware gives you the option of automatically mirroring OCR. You can manually put the mirrored OCRs on a shared network file system (NFS), or on any cluster file system that is certified by Oracle. Alternatively, you can place the OCR on Oracle ASM and allow it to create mirrors automatically, depending on the redundancy option you select.
This section includes the following topics:
Repairing an Oracle Cluster Registry Configuration on a Local Node
Overriding the Oracle Cluster Registry Data Loss Protection Mechanism
You can manually mirror OCR, as described in the "Adding an Oracle Cluster Registry Location" section, if you:
Upgraded to 11g release 2 (11.2) but did not choose to mirror OCR during the upgrade
Created only one OCR location during the Oracle Clusterware installation
Notes:
Oracle recommends that you configure:
At least three OCR locations, if the OCR is configured on non-mirrored or non-redundant storage. Oracle strongly recommends that you mirror the OCR if the underlying storage is not RAID. Mirroring can help prevent the OCR from becoming a single point of failure.
At least two OCR locations if the OCR is configured on an Oracle ASM disk group. You should configure the OCR in two independent disk groups. Typically this is the work area and the recovery area.
At least two OCR locations if the OCR is configured on mirrored hardware or third-party mirrored volumes.
If the original OCR location does not exist, then you must create an empty (0 byte) OCR location before you run the ocrconfig -add
or ocrconfig -replace
commands.
Ensure that the OCR devices that you specify in the OCR configuration exist and that these OCR devices are valid.
Ensure that the Oracle ASM disk group that you specify exists and is mounted.
The new OCR file, device, or disk group must be accessible from all of the active nodes in the cluster.
See Also:
Oracle Grid Infrastructure Installation Guide for information about creating OCRs
Oracle Database Storage Administrator's Guide for more information about Oracle ASM disk group management
In addition to mirroring OCR locations, you can also:
Replace an OCR location if there is a misconfiguration or other type of OCR error, as described in the "Replacing an Oracle Cluster Registry Location" section.
Repair an OCR location if Oracle Database displays an OCR failure alert in Oracle Enterprise Manager or in the Oracle Clusterware alert log file, as described in the "Repairing an Oracle Cluster Registry Configuration on a Local Node" section.
Remove an OCR location if, for example, your system experiences a performance degradation due to OCR processing or if you transfer your OCR to RAID storage devices and choose to no longer use multiple OCR locations, as described in the "Removing an Oracle Cluster Registry Location" section.
Note:
The operations in this section affect OCR clusterwide: they change the OCR configuration information in the ocr.loc
file on Linux and UNIX systems and the Registry keys on Windows systems. However, the ocrconfig
command cannot modify OCR configuration information for nodes that are shut down or for nodes on which Oracle Clusterware is not running.
Use the procedure in this section to add an OCR location. Oracle Clusterware can manage up to five redundant OCR locations.
Note:
If OCR resides on a cluster file system file or a network file system, create an empty (0 byte) OCR location file before performing the procedures in this section.
As the root
user, run the following command to add an OCR location to either Oracle ASM or other storage device:
# ocrconfig -add +asm_disk_group | file_name
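For example, to add an OCR location in a hypothetical disk group named OCRDATA2, or as a hypothetical shared file system file, you might run one of the following; both names are illustrative only:
# ocrconfig -add +OCRDATA2
# ocrconfig -add /cfs/ocr/new_ocr_file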
Note:
On Linux and UNIX systems, you must be root
to run ocrconfig
commands. On Windows systems, the user must be a member of the Administrator's group.
If you must change an existing OCR location, or change a failed OCR location to a working location, then you can use the following procedure, if at least one OCR location remains online.
To change an Oracle Cluster Registry location:
Complete the following procedure:
Use the OCRCHECK utility to verify that a copy of OCR other than the one you are going to replace is online, using the following command:
$ ocrcheck
OCRCHECK displays all OCR locations that are registered and whether they are available (online). If an OCR location suddenly becomes unavailable, then it might take a short period for Oracle Clusterware to show the change in status.
Note:
The OCR location that you are replacing can be either online or offline.
Use the following command to verify that Oracle Clusterware is running on the node on which you are going to perform the replace operation:
$ crsctl check crs
Run the following command as root
to replace the current OCR location, using either destination_file or +ASM_disk_group to indicate the current and target OCR locations:
# ocrconfig -replace current_OCR_location -replacement new_OCR_location
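For example, assuming the current OCR location is a raw device and the target is a hypothetical disk group named OCRDATA, the command might look like the following; the values are illustrative only:
# ocrconfig -replace /dev/raw/raw1 -replacement +OCRDATA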
If you have only one OCR location, then use the following commands:
# ocrconfig -add +new_storage_disk_group
# ocrconfig -delete +current_disk_group
See Also:
Oracle Database Storage Administrator's Guide for more information about migrating storage
If any node that is part of your current Oracle RAC cluster is shut down, then use the following command syntax on the stopped node after it restarts, to let that node rejoin the cluster, where you use either a destination_file or +ASM_disk_group to indicate the current and target OCR locations:
ocrconfig -repair -replace current_OCR_location -replacement new_OCR_location
It may be necessary to repair the OCR if your cluster configuration changes while that node is stopped. Repairing an OCR involves either adding, deleting, or replacing an OCR location. For example, you may have to repair an OCR location on a node that was stopped while you were adding, replacing, or removing the OCR. To repair an OCR, run the following command as root
on the node on which you have stopped the Oracle Clusterware daemon:
# ocrconfig -repair -add file_name | -delete file_name | -replace current_file_name -replacement new_file_name
This operation only changes the OCR on the node on which you run this command. For example, if the OCR location is /dev/sde1
, then use the command syntax ocrconfig -repair -add /dev/sde1
on this node to repair the OCR on that node.
Notes:
You cannot repair the OCR on a node on which Oracle Clusterware is running.
When you repair the OCR on a stopped node using ocrconfig -repair
, you must provide the same OCR file name (which is case-sensitive) as the OCR file names on the other nodes.
If you run the ocrconfig -add | -repair | -replace
command, then the device, file, or Oracle ASM disk group that you are adding must be accessible. This means that a device must exist. You must create an empty (0 byte) OCR location, or the Oracle ASM disk group must exist and be mounted.
If you store OCR on Oracle ASM, then ensure that the Oracle ASM disk group exists and is mounted.
See Also:
Oracle Database Storage Administrator's Guide for more information about Oracle ASM disk group management
To remove an OCR location, at least one other OCR must be online. You can remove an OCR location to reduce OCR-related overhead or to stop mirroring your OCR because you moved OCR to redundant storage such as RAID.
Perform the following procedure as the root
user to remove an OCR location from your Oracle Clusterware environment:
Ensure that at least one OCR location other than the OCR location that you are removing is online.
Caution:
Do not perform this OCR removal procedure unless there is at least one other active OCR location online.
Run the following command on any node in the cluster to remove an OCR location from either Oracle ASM or another storage location:
# ocrconfig -delete +ASM_disk_group | file_name
The file_name
variable can be a device name or a file name. This command updates the OCR configuration on all of the nodes on which Oracle Clusterware is running.
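For example, to remove an OCR mirror stored in a hypothetical disk group named OCRMIRROR (the name is illustrative only):
# ocrconfig -delete +OCRMIRROR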
OCR has a mechanism that prevents data loss due to accidental overwrites. If you configure a mirrored OCR and if Oracle Clusterware cannot access the mirrored OCR locations and also cannot verify that the available OCR location contains the most recent configuration, then Oracle Clusterware prevents further modification to the available OCR location. In addition, the process prevents overwriting by prohibiting Oracle Clusterware from starting on the node on which only one OCR is available. In such cases, Oracle Database displays an alert message in either Oracle Enterprise Manager, the Oracle Clusterware alert log files, or both. If this problem is local to only one node, you can use other nodes to start your cluster database.
However, if you are unable to start any cluster node in your environment and if you can neither repair OCR nor restore access to all OCR locations, then you can override the protection mechanism. The procedure described in the following list enables you to start the cluster using the available OCR location. However, overriding the protection mechanism can result in the loss of data that was not available when the previous known good state was created.
Note:
Overriding OCR using the following procedure can result in the loss of OCR updates that were made between the time of the last known good OCR update made to the currently accessible OCR and the time at which you performed the overwrite. In other words, running the ocrconfig -overwrite
command can result in data loss if the OCR location that you are using to perform the overwrite does not contain the latest configuration updates for your cluster environment.
Perform the following procedure to overwrite OCR if a node cannot start and if the alert log contains CLSD-1009 and CLSD-1011 messages.
Attempt to resolve the cause of the CLSD-1009 and CLSD-1011 messages.
Compare the node's OCR configuration (ocr.loc
on Linux and UNIX systems and the Registry on Windows systems) with other nodes on which Oracle Clusterware is running.
If the configurations do not match, run ocrconfig -repair
.
If the configurations match, ensure that the node can access all of the configured OCRs by running an ls
command on Linux and UNIX systems. On Windows, use a dir
command if the OCR location is a file and run GuiOracleObjectManager.exe
to verify that the part of the cluster with the name exists.
Ensure that the most recent OCR contains the latest OCR updates.
Look at output from the ocrdump
command and determine whether it has your latest updates.
If you cannot resolve the problem that caused the CLSD message, then run the command ocrconfig -overwrite
to start the node.
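If you must proceed, the override itself is a single command that you run as root on the affected node:
# ocrconfig -overwrite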
This section describes how to back up OCR content and use it for recovery. The first method uses automatically generated OCR copies and the second method enables you to issue a backup command manually:
Automatic backups: Oracle Clusterware automatically creates OCR backups every four hours. At any one time, Oracle Database always retains the last three backup copies of OCR. The CRSD process that creates the backups also creates and retains an OCR backup for each full day and at the end of each week. You cannot customize the backup frequencies or the number of files that Oracle Database retains.
Manual backups: Use the ocrconfig -manualbackup
command to force Oracle Clusterware to perform a backup of OCR at any time, rather than wait for the automatic backup. The -manualbackup
option is especially useful when you want to obtain a binary backup on demand, such as before you make changes to the OCR. The OLR only supports manual backups.
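For example, to force an immediate manual backup, run the following command as root:
# ocrconfig -manualbackup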
When the clusterware stack is down on all nodes in the cluster, the backups that are listed by the command ocrconfig -showbackup
may differ from node to node. After you install or upgrade Oracle Clusterware on a node, or add a node to the cluster, when the root.sh
script finishes, it backs up the OLR.
Listing Backup Files
Run the following command to list the backup files:
# ocrconfig -showbackup
On Windows:
C:\>ocrconfig -showbackup
The ocrconfig -showbackup
command displays the backup location, timestamp, and the originating node name of the backup files that Oracle Clusterware creates. By default, the -showbackup
option displays information for both automatic and manual backups but you can include the auto
or manual
flag to display only the automatic backup information or only the manual backup information, respectively.
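For example, to list only the manual backups, or only the automatic backups:
# ocrconfig -showbackup manual
# ocrconfig -showbackup auto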
Run the following command to inspect the contents and verify the integrity of the backup file:
# ocrdump -backupfile backup_file_name
On Windows:
C:\>ocrdump -backupfile backup_file_name
You can use any backup software to copy the automatically generated backup files at least once daily to a different device from where the primary OCR resides.
The default location for generating backups on Linux or UNIX systems is Grid_home/cdata/cluster_name, where cluster_name is the name of your cluster. The Windows default location for generating backups uses the same path structure. Because the default backup is on a local file system, Oracle recommends that you include the backup file created with the OCRCONFIG utility as part of your operating system backup using standard operating system or third-party tools.
Tip:
You can use the ocrconfig -backuploc
option to change the location where OCR creates backups. Appendix G, "Managing the Oracle Cluster Registry" describes the OCRCONFIG utility options.
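For example, to direct OCR backups to a hypothetical shared directory (the path is illustrative only), run the following as root:
# ocrconfig -backuploc /u01/app/grid/ocrbackup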
Note:
On Linux and UNIX systems, you must be the root
user to run most, but not all, of the ocrconfig
command options. On Windows systems, the user must be a member of the Administrator's group.
See Also:
"Administering Oracle Cluster Registry with Oracle Cluster Registry Export and Import Commands" to use manually created OCR export files to copy OCR content and use it for recoveryIf a resource fails, then before attempting to restore OCR, restart the resource. As a definitive verification that OCR failed, run ocrcheck
and if the command returns a failure message, then both the primary OCR and the OCR mirror have failed. Attempt to correct the problem using the OCR restoration procedure for your platform.
Notes:
You cannot restore your configuration from an OCR backup file using the -import
option, which is explained in "Administering Oracle Cluster Registry with Oracle Cluster Registry Export and Import Commands". You must instead use the -restore
option, as described in the following sections.
If you store OCR on an Oracle ASM disk group and the disk group is not available, then you must recover and mount the Oracle ASM disk group.
See Also:
Oracle Database Storage Administrator's Guide for more information about managing Oracle ASM disk groups
Restoring the Oracle Cluster Registry on Linux or UNIX Systems
If you are storing the OCR on an Oracle ASM disk group, and that disk group is corrupt, then you must restore the Oracle ASM disk group using Oracle ASM utilities, and then mount the disk group again before recovering the OCR. Recover the OCR by running the command ocrconfig -restore
.
See Also:
Oracle Database Storage Administrator's Guide for information about how to restore Oracle ASM disk groups
Use the following procedure to restore OCR on Linux or UNIX systems:
List the nodes in your cluster by running the following command on one node:
$ olsnodes
Stop Oracle Clusterware by running the following command as root
on all of the nodes:
# crsctl stop crs
If the preceding command returns any error due to OCR corruption, stop Oracle Clusterware by running the following command as root
on all of the nodes:
# crsctl stop crs -f
If you are restoring OCR to a cluster file system or network file system, then run the following command as root
to restore OCR with an OCR backup that you can identify in "Listing Backup Files":
# ocrconfig -restore file_name
After you complete this step, skip to step 8.
Start the Oracle Clusterware stack on one node in exclusive mode by running the following command as root
:
# crsctl start crs -excl
Ignore any errors that display.
Check whether crsd
is running. If it is, stop it by running the following command as root
:
# crsctl stop resource ora.crsd -init
Caution:
Do not use the -init
flag with any other command.
Restore OCR with an OCR backup that you can identify in "Listing Backup Files" by running the following command as root
:
# ocrconfig -restore file_name
Notes:
If the original OCR location does not exist, then you must create an empty (0 byte) OCR location before you run the ocrconfig -restore
command.
Ensure that the OCR devices that you specify in the OCR configuration exist and that these OCR devices are valid.
If you configured OCR in an Oracle ASM disk group, then ensure that the Oracle ASM disk group exists and is mounted.
See Also:
Oracle Grid Infrastructure Installation Guide for information about creating OCRs
Oracle Database Storage Administrator's Guide for more information about Oracle ASM disk group management
Verify the integrity of OCR:
# ocrcheck
Stop Oracle Clusterware on the node where it is running in exclusive mode:
# crsctl stop crs -f
Begin to start Oracle Clusterware by running the following command as root
on all of the nodes:
# crsctl start crs
Verify the OCR integrity of all of the cluster nodes that are configured as part of your cluster by running the following CVU command:
$ cluvfy comp ocr -n all -verbose
See Also:
Appendix A, "Cluster Verification Utility Reference" for more information about enabling and using CVURestoring the Oracle Cluster Registry on Windows Systems
If you are storing the OCR on an Oracle ASM disk group, and that disk group is corrupt, then you must restore the Oracle ASM disk group using Oracle ASM utilities, and then mount the disk group again before recovering the OCR. Recover the OCR by running the command ocrconfig -restore
.
See Also:
Oracle Database Storage Administrator's Guide for information about how to restore Oracle ASM disk groups
Use the following procedure to restore OCR on Windows systems:
List the nodes in your cluster by running the following command on one node:
C:\>olsnodes
Stop Oracle Clusterware by running the following command as a Windows administrator on all of the nodes:
C:\>crsctl stop crs
If the preceding command returns any error due to OCR corruption, stop Oracle Clusterware by running the following command as a Windows administrator on all of the nodes:
C:\>crsctl stop crs -f
Start the Oracle Clusterware stack on one node in exclusive mode by running the following command as a Windows administrator:
C:\>crsctl start crs -excl
Ignore any errors that display.
Check whether crsd
is running. If it is, stop it by running the following command as a Windows administrator:
C:\>crsctl stop resource ora.crsd -init
Caution:
Do not use the -init
flag with any other command.
Restore OCR with the OCR backup file that you identified in "Listing Backup Files" by running the following command as a Windows administrator:
C:\>ocrconfig -restore file_name
Make sure that the OCR devices that you specify in the OCR configuration exist and that these OCR devices are valid.
Notes:
If the original OCR location does not exist, then you must create an empty (0 byte) OCR location before you run the ocrconfig -restore
command.
Ensure that the OCR devices that you specify in the OCR configuration exist and that these OCR devices are valid.
Ensure that the Oracle ASM disk group you specify exists and is mounted.
See Also:
Oracle Grid Infrastructure Installation Guide for information about creating OCRs
Oracle Database Storage Administrator's Guide for more information about Oracle ASM disk group management
Verify the integrity of OCR:
C:\>ocrcheck
Stop Oracle Clusterware on the node where it is running in exclusive mode:
C:\>crsctl stop crs -f
Begin to start Oracle Clusterware by running the following command as a Windows administrator on all of the nodes:
C:\>crsctl start crs
Run the following Cluster Verification Utility (CVU) command to verify the OCR integrity of all of the nodes in your cluster database:
C:\>cluvfy comp ocr -n all -verbose
See Also:
Appendix A, "Cluster Verification Utility Reference" for more information about enabling and using CVUYou can use the OCRDUMP and OCRCHECK utilities to diagnose OCR problems as described under the following topics:
Use the OCRDUMP utility to write the OCR contents to a file so that you can examine the OCR content.
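For example, to write the OCR contents to a file for inspection (the file name is illustrative only), run the following as root:
# ocrdump /tmp/ocr_contents.txt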
See Also:
"OCRDUMP Utility Syntax and Options" for more information about the OCRDUMP utilityUse the OCRCHECK utility to verify the OCR integrity.
See Also:
"Using the OCRCHECK Utility" for more information about the OCRCHECK utilityIn addition to using the automatically created OCR backup files, you should also export the OCR contents before and after making significant configuration changes, such as adding or deleting nodes from your environment, modifying Oracle Clusterware resources, and upgrading, downgrading or creating a database. Do this by using the ocrconfig -export
command, which exports the OCR content to a file format.
Caution:
Note the following restrictions for restoring the OCR:
The file format generated by ocrconfig -restore
is incompatible with the file format generated by ocrconfig -export
. The ocrconfig -export
and ocrconfig -import
commands are compatible. The ocrconfig -manualbackup
and ocrconfig -restore
commands are compatible. The two file formats are incompatible and must not be interchangeably used.
When exporting the OCR, Oracle recommends including "ocr
", the cluster name, and the timestamp in the name string. For example:
ocr_mycluster1_20090521_2130_export
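A complete export command using that naming convention might look like the following; the target directory is illustrative only:
# ocrconfig -export /backups/ocr_mycluster1_20090521_2130_export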
Using the ocrconfig -export
command also enables you to restore OCR using the -import
option if your configuration changes cause errors. For example, if you have unresolvable configuration problems, or if you are unable to restart Oracle Clusterware after such changes, then restore your configuration using the procedure for your platform.
Notes:
Oracle recommends that you use either automatic or manual backups and the ocrconfig -restore
command instead of the ocrconfig -export
and ocrconfig -import
commands to restore OCR for the following reasons:
A backup is a consistent snapshot of OCR, whereas an export is not.
Backups are created when the system is online. You must shut down Oracle Clusterware on all nodes in the cluster to get a consistent snapshot using the ocrconfig -export
command.
You can inspect a backup using the OCRDUMP utility. You cannot inspect the contents of an export.
You can list backups with the ocrconfig -showbackup
command, whereas you must keep track of all generated exports.
Note:
Most configuration changes that you make not only change the OCR contents but also cause file and database object creation. Some of these changes are often not restored when you restore OCR. Do not restore OCR as a correction to revert to previous configurations if some of these configuration changes fail, because this may result in an OCR location whose contents do not match the state of the rest of your system.
Note:
This procedure assumes a default installation of Oracle Clusterware on all nodes in the cluster, where Oracle Clusterware autostart is enabled.
Use the following procedure to import OCR on Linux or UNIX systems:
List the nodes in your cluster by running the following command on one node:
$ olsnodes
Stop Oracle Clusterware by running the following command as root
on all of the nodes:
# crsctl stop crs
If the preceding command returns any error due to OCR corruption, stop Oracle Clusterware by running the following command as root
on all of the nodes:
# crsctl stop crs -f
Start the Oracle Clusterware stack on one node in exclusive mode by running the following command as root
:
# crsctl start crs -excl
Ignore any errors that display.
Check whether crsd
is running. If it is, stop it by running the following command as root
:
# crsctl stop resource ora.crsd -init
Caution:
Do not use the -init
flag with any other command.
Import the OCR by running the following command as root
:
# ocrconfig -import file_name
Notes:
If the original OCR location does not exist, then you must create an empty (0 byte) OCR location before you run the ocrconfig -import
command.
Ensure that the OCR devices that you specify in the OCR configuration exist and that these OCR devices are valid.
If you configured OCR in an Oracle ASM disk group, then ensure that the Oracle ASM disk group exists and is mounted.
See Also:
Oracle Grid Infrastructure Installation Guide for information about creating OCRs
Oracle Database Storage Administrator's Guide for more information about Oracle ASM disk group management
Verify the integrity of OCR:
# ocrcheck
Stop Oracle Clusterware on the node where it is running in exclusive mode:
# crsctl stop crs -f
Begin to start Oracle Clusterware by running the following command as root
on all of the nodes:
# crsctl start crs
Verify the OCR integrity of all of the cluster nodes that are configured as part of your cluster by running the following CVU command:
$ cluvfy comp ocr -n all -verbose
Note:
You can only import an exported OCR. To restore OCR from a backup, you must instead use the -restore
option, as described in "Backing Up Oracle Cluster Registry".See Also:
Appendix A, "Cluster Verification Utility Reference" for more information about enabling and using CVUIn Oracle Clusterware 11g release 2 (11.2), each node in a cluster has a local registry for node-specific resources, called an Oracle Local Registry (OLR), that is installed and configured when Oracle Clusterware installs OCR. Multiple processes on each node have simultaneous read and write access to the OLR particular to the node on which they reside, regardless of whether Oracle Clusterware is running or fully functional.
By default, OLR is located at Grid_home
/cdata/
host_name
.olr
on each node.
Manage OLR using the OCRCHECK, OCRDUMP, and OCRCONFIG utilities as root
with the -local
option.
You can check the status of OLR on the local node using the OCRCHECK utility, as follows:
# ocrcheck -local
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262132
         Used space (kbytes)      :       9200
         Available space (kbytes) :     252932
         ID                       :  604793089
         Device/File Name         : /private2/crs/cdata/localhost/dglnx6.olr
                                    Device/File integrity check succeeded
         Local OCR integrity check succeeded
You can display the content of OLR on the local node to the text terminal that initiated the program using the OCRDUMP utility, as follows:
# ocrdump -local -stdout
You can perform administrative tasks on OLR on the local node using the OCRCONFIG utility.
To export OLR to a file:
# ocrconfig -local -export file_name
Notes:
Oracle recommends that you use the -manualbackup
and -restore
commands and not the -import
and -export
commands.
When exporting OLR, Oracle recommends including "olr
", the host name, and the timestamp in the name string. For example:
olr_myhost1_20090603_0130_export
To import a specified file to OLR:
# ocrconfig -local -import file_name
To manually back up OLR:
# ocrconfig -local -manualbackup
Note:
The OLR is backed up at the end of an installation or an upgrade. After that time, you can only manually back up the OLR. Automatic backups are not supported for the OLR. You should create a new backup when you migrate OCR from Oracle ASM to other storage, or when you migrate OCR from other storage to Oracle ASM.
The default backup location for the OLR is in the path Grid_home
/cdata/
host_name
.
To view the contents of the OLR backup file:
ocrdump -local -backupfile olr_backup_file_name
To change the OLR backup location:
ocrconfig -local -backuploc new_olr_backup_path
To restore OLR on the local node, stop Oracle Clusterware, restore from a backup file, verify the result, and restart Oracle Clusterware, as shown in the following sequence:
# crsctl stop crs
# ocrconfig -local -restore file_name
# ocrcheck -local
# crsctl start crs
$ cluvfy comp olr
When you install Oracle Clusterware, it automatically runs the ocrconfig -upgrade
command. To downgrade, follow the downgrade instructions for each component and also downgrade OCR using the ocrconfig -downgrade
command. If you are upgrading OCR, then you can use the OCRCHECK utility to verify the integrity of OCR.
This section contains the following topics:
An Oracle Clusterware configuration requires at least two interfaces:
A public network interface, on which users and application servers connect to access data on the database server
A private network interface for internode communication.
If you use Grid Naming Service and DHCP to manage your network connections, then you may not need to configure address information on the cluster. Using GNS allows public Virtual Internet Protocol (VIP) addresses to be dynamic, DHCP-provided addresses. Clients submit name resolution requests to your network's Domain Name Service (DNS), which forwards the requests to the grid naming service (GNS), managed within the cluster. GNS then resolves these requests to nodes in the cluster.
If you do not use GNS, and instead configure networks manually, then public VIP addresses must be statically configured in the DNS, VIPs must be statically configured in the DNS and hosts file, and private IP addresses require static configuration.
Public network addresses are used to provide services to clients. If your clients are connecting to the Single Client Access Name (SCAN) addresses, then you may need to change public and virtual IP addresses as you add or remove nodes from the cluster, but you do not need to update clients with new cluster addresses.
SCANs function like a cluster alias. However, SCANs are resolved on any node in the cluster, so unlike a VIP address for a node, clients connecting to the SCAN no longer require updated VIP addresses as nodes are added to or removed from the cluster. Because the SCAN addresses resolve to the cluster, rather than to a node address in the cluster, nodes can be added to or removed from the cluster without affecting the SCAN address configuration.
The SCAN is a fully qualified name (host name+domain) that is configured to resolve to all the addresses allocated for the SCAN. The addresses resolve using Round Robin DNS either on the DNS server, or within the cluster in a GNS configuration. SCAN listeners can run on any node in the cluster. SCANs provide location independence for the databases, so that client configuration does not have to depend on which nodes run a particular database.
Oracle Database 11g release 2 (11.2) and later instances only register with SCAN listeners as remote listeners. Upgraded databases register with SCAN listeners as remote listeners, and also continue to register with all node listeners.
Clients configured to use Public VIP addresses for Oracle Database releases before Oracle Database 11g release 2 (11.2) can continue to use their existing connection addresses. Oracle recommends that you configure clients to use SCANs, but it is not required that you use SCANs. When an earlier version of Oracle Database is upgraded, it is registered with the SCAN, and clients can start using the SCAN to connect to that database, or continue to use VIP addresses for connections.
If you continue to use VIP addresses for client connections, you can modify the VIP address while Oracle Database and Oracle ASM continue to run. However, you must stop services while you modify the address. When you restart the VIP address, services are also restarted on the node.
This procedure cannot be used to change a static public subnet to use DHCP. Only the command srvctl add nodeapps -S
creates a DHCP network.
Note:
The following instructions describe how to change only a VIP address, and assume that the host name associated with the VIP address does not change. Note that you do not need to update VIP addresses manually if you are using GNS, and VIPs are assigned using DHCP.
If you are changing only the VIP address, then update the DNS and the client hosts files. Also, update the server hosts files, if those are used for VIP addresses.
Perform the following steps to change a VIP address:
Stop all services running on the node whose VIP address you want to change using the following command syntax, where database_name
is the name of the database, service_name_list
is a list of the services you want to stop, and my_node
is the name of the node whose VIP address you want to change:
This example specifies the database name (grid
) using the -d
option and specifies the services (sales,oltp
) on the appropriate node (mynode
).
$ srvctl stop service -d grid -s sales,oltp -n mynode
Confirm the current IP address for the VIP address by running the srvctl config vip
command. This command displays the current VIP address bound to one of the network interfaces. The following example displays the configured VIP address:
$ srvctl config vip -n stbdp03
VIP exists.:stbdp03
VIP exists.: /stbdp03-vip/192.168.2.20/255.255.255.0/eth0
Stop the VIP address using the srvctl stop vip
command:
$ srvctl stop vip -n mynode
Verify that the VIP address is no longer running by running the ifconfig -a
command on Linux and UNIX systems (or issue the ipconfig /all
command on Windows systems), and confirm that the interface (in the example it was eth0:1
) is no longer listed in the output.
Make any changes necessary to the /etc/hosts
files on all nodes on Linux and UNIX systems, or the %windir%\system32\drivers\etc\hosts
file on Windows systems, and make any necessary DNS changes to associate the new IP address with the old host name.
Modify the node applications and provide the new VIP address using the following srvctl modify nodeapps
syntax:
$ srvctl modify nodeapps -n node_name -A new_vip_address
The command includes the following flags and values:
-n node_name is the node name
-A new_vip_address is the node-level VIP address (name|ip/netmask[/if1[|if2|...]])
For example, issue the following command as the root
user:
srvctl modify nodeapps -n mynode -A 192.168.2.125/255.255.255.0/eth0
Attempting to issue this command as the installation owner account may result in an error. For example, if the installation owner is oracle
, then you may see the error PRCN-2018: Current user oracle is not a privileged user
. To avoid the error, run the command as the root
or system administrator account.
Note:
To use a different subnet or NIC for the default network before any VIP resource is changed, you must use the command syntax srvctl modify nodeapps -S new_subnet/new_netmask/new_interface to change the network resource, where new_subnet is the new subnet address, new_netmask is the new netmask, and new_interface is the new interface. After you change the subnet, you must change each node's VIP to an IP address on the new subnet, using the command syntax srvctl modify nodeapps -A new_ip.
Start the node VIP by running the srvctl start vip
command:
$ srvctl start vip -n node_name
The following command example starts the VIP on the node named mynode
:
$ srvctl start vip -n mynode
Repeat the steps for each node in the cluster.
Because the SRVCTL utility is a clusterwide management tool, you can accomplish these tasks for any specific node from any node in the cluster, without logging in to each of the cluster nodes.
Run the following command to verify node connectivity between all of the nodes for which your cluster is configured. This command discovers all of the network interfaces available on the cluster nodes and verifies the connectivity between all of the nodes by way of the discovered interfaces. This command also lists all of the interfaces available on the nodes which are suitable for use as VIP addresses.
$ cluvfy comp nodecon -n all -verbose
This section contains the following topics:
About Private Networks, Network Interfaces, and Network Adapters
Changing or Deleting a Network Interface From a Cluster Configuration File
Oracle Clusterware requires that each node is connected through a private network (in addition to the public network). The private network connection is referred to as the cluster interconnect.
Oracle only supports clusters in which all of the nodes use the same network interface connected to the same subnet (defined as a global interface with the oifcfg
command). You cannot use different network interfaces for each node (node-specific interfaces). Refer to Appendix D, "Oracle Interface Configuration Tool (OIFCFG) Command Reference" for more information about global and node-specific interfaces. Table 2-5 describes how the network interface card (NIC), or network adapter, and the private IP address are stored.
Table 2-5 Storage for the Network Adapter, Private IP Address, and Private Host Name
Entity | Stored In... | Comments |
---|---|---|
Network adapter name | Operating system. For example: | Must be the same on all nodes. It can be changed globally. |
Private network interfaces | Oracle Clusterware, in the Grid Plug and Play (GPnP) Profile | Configure an interface for use as a private interface during installation by marking the interface as Private, or use the oifcfg setif command to designate an interface as a private interface. |
Note:
You cannot use the steps in this section to change the private node name after you have installed Oracle Clusterware.
The consequences of changing interface names depend on which name you are changing, and whether you are also changing the IP address. In cases where you are only changing the interface names, the consequences are minor. If you change the name for the Public Interface that is stored in the OCR, then you also must modify the node applications for each node. Therefore, you must stop the node applications for this change to take effect.
See Also:
My Oracle Support (formerly OracleMetaLink) note 276434.1 for more details about changing the nodeapps to use a new public interface name, available at the following URL:
https://metalink.oracle.com
Use the following procedure to change or delete a network interface.
Caution:
The interface used by the Oracle RAC (RDBMS) interconnect must be the same interface that Oracle Clusterware is using with the hostname. Do not configure the private interconnect for Oracle RAC on a separate interface that is not monitored by Oracle Clusterware.
Ensure that Oracle Clusterware is running on all of the cluster nodes by running the following command:
olsnodes -s
The output from this command should show that Oracle Clusterware is running on all of the nodes in the cluster. For example:
./olsnodes -s
myclustera Active
myclusterc Active
myclusterb Active
Ensure that the replacement interface is configured and operational in the operating system on all of the nodes. To do this, use the form of the ifconfig
command for your platform. For example, on Linux, use:
/sbin/ifconfig
Add the new interface to the cluster as follows, providing the name of the new interface and the subnet address, using the following commands:
$ oifcfg setif -global interface_name/subnet:cluster_interconnect
Note:
You can use wildcards in the preceding command example.Ensure that the previous step completes. Then you can remove the former subnet as follows by providing the name and subnet address of the former interface:
$ oifcfg delif -global interface_name/subnet
Verify the current configuration using the following command:
oifcfg getif
For example:
$ oifcfg getif
eth2 10.220.52.0 global cluster_interconnect
eth0 10.220.16.0 global public
On all of the nodes, stop Oracle Clusterware by running the following command as the root
user:
# crsctl stop crs
When the cluster stops, you can deconfigure the deleted network interface in the operating system using the following command:
$ ifconfig interface_name down
At this point, the IP address from network interfaces for the old subnet is deconfigured from Oracle Clusterware. This command does not affect the configuration of the IP address on the operating system.
Restart Oracle Clusterware by running the following command on each cluster member node as the root
user:
# crsctl start crs
The changes take effect when Oracle Clusterware restarts.
If you use the CLUSTER_INTERCONNECTS
initialization parameter, then you must update it to reflect the changes.
To change the network interface for the private interconnect (for example, eth1
), you must perform the change on all nodes (globally). This is because Oracle currently does not support the use of different network interface cards in the same subnet for the cluster interconnect.
To change the network interface, perform the following steps:
Make sure that the Oracle Clusterware stack is up and running on all cluster nodes.
Use operating system commands (ifconfig
, or the command for your system) to ensure that the new or replacement interface is configured and up on all cluster member nodes.
On a single node in the cluster add the new global interface specification:
$ oifcfg setif -global interface_name/subnet:cluster_interconnect
Note:
You can use wild cards with the interface name. For example, oifcfg setif -global "eth*/192.168.0.0:cluster_interconnect"
is valid syntax. However, be careful to avoid ambiguity with other addresses or masks used with other cluster interfaces. If you use wild cards, then you see a warning similar to the following:
eth* 192.168.0.0 global cluster_interconnect
PRIF-29: Warning: wildcard in network parameters can cause mismatch among GPnP profile, OCR, and system
Legacy network configuration does not support wildcards; thus wildcards are resolved using current node configuration at the time of the update.
On a node in the cluster, use the ifconfig
command to ensure that the new IP address exists.
Add the new subnet, with the following command, providing the name of the interface and the subnet address. The changes take effect when Oracle Clusterware restarts:
$ oifcfg setif -global interface_name/subnet:cluster_interconnect
See Also:
Appendix D, "Oracle Interface Configuration Tool (OIFCFG) Command Reference" for more information about using the OIFCFG commandVerify the configuration with the oifcfg getif
command.
Stop Oracle Clusterware on all nodes by running the following command as root
on each node:
# crsctl stop crs
Note:
With cluster network configuration changes, the cluster must be fully stopped; do not use rolling stops and restarts.
Assign the current network address to the new network adapter using the ifconfig
command.
As root
user, issue the ifconfig
operating system command to assign the currently used private network address to the network adapter intended to be used for the interconnect. This usually requires some downtime for the current interface and the new interface. See your platform-specific operating system documentation for more information about issuing the ifconfig
command.
Update the network configuration settings on the operating system.
You must update the operating system configuration changes, because changes made using ifconfig
are not persistent.
See Also:
Your operating system documentation for more information about how to make ifconfig
commands persistent
Remove the former subnet, as follows, providing the name and subnet address of the former interface:
oifcfg delif -global interface_name/subnet
For example:
$ oifcfg delif -global eth1/10.10.0.0
Caution:
This step should be performed only after a replacement interface is committed into the Grid Plug and Play configuration. Simple deletion of cluster interfaces without providing a valid replacement can result in an invalid cluster configuration.
Restart Oracle Clusterware by issuing the following command as root
user on all nodes:
# crsctl start crs
You must restart Oracle Clusterware after running the oifcfg delif
command, because Oracle Clusterware, Oracle ASM, and Oracle RAC continue to use the former subnet until they are restarted.