Oracle® Database 2 Day + Real Application Clusters Guide 11g Release 2 (11.2), Part Number E10743-01
This chapter contains the information that your system administrator and network administrator need to help you, as the DBA, configure the two nodes in your cluster. This chapter assumes a basic understanding of the Linux operating system. In some cases, you may need to refer to details in Oracle Real Application Clusters Installation Guide for Linux and UNIX. In addition, you must have root or sudo privileges to perform certain tasks in this chapter.
This chapter includes the following sections:
Before you begin your installation, you should check to make sure that your system meets the requirements for Oracle Real Application Clusters (Oracle RAC). The requirements can be grouped into the following three categories:
Each node that you want to make part of your Oracle Clusterware, or Oracle Clusterware and Oracle RAC installation, must satisfy the minimum hardware requirements of the software. These hardware requirements can be categorized as follows:
Physical memory (at least 1.5 gigabytes (GB) of RAM)
An amount of swap space equal to the amount of RAM
Temporary space (at least 1 GB) available in /tmp
A processor type (CPU) that is certified with the version of the Oracle software being installed
A minimum display resolution of 1024 x 768, so that Oracle Universal Installer (OUI) displays correctly
All servers that will be used in the cluster have the same chip architecture, for example, all 32-bit processors or all 64-bit processors
Disk space for software installation locations
You will need at least 4.5 GB of available disk space for the Grid home directory, which includes both the binary files for Oracle Clusterware and Oracle Automatic Storage Management (Oracle ASM) and their associated log files, and at least 4 GB of available disk space for the Oracle Database home directory.
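To quickly check the physical memory, swap space, and temporary space on a Linux system, you can run commands similar to the following (a minimal sketch; compare the output against the requirements listed in this section and in the installation guides):
# grep MemTotal /proc/meminfo
# grep SwapTotal /proc/meminfo
# df -h /tmp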
Note:
Refer to the Oracle Grid Infrastructure Installation Guide and the Oracle Real Application Clusters Installation Guide for your operating system for the actual disk space requirements. The amount of disk space used by the Oracle software can vary, and might be higher than what is listed in this guide.
Shared disk space
An Oracle RAC database is a shared everything database. All data files, control files, redo log files, and the server parameter file (SPFILE) used by the Oracle RAC database must reside on shared storage that is accessible by all the Oracle RAC database instances. The Oracle RAC installation that is described in this guide uses Oracle ASM for the shared storage for Oracle Clusterware and Oracle Database files.
Oracle Clusterware achieves superior scalability and high availability by using the following components:
Voting disk–Manages cluster membership and arbitrates cluster ownership between the nodes in case of network failures. The voting disk is a file that resides on shared storage. For high availability, Oracle recommends that you have more than one voting disk, and that you have an odd number of voting disks. If you define a single voting disk, then use mirroring at the file system level for redundancy.
Oracle Cluster Registry (OCR)–Maintains cluster configuration information as well as configuration information about any cluster database within the cluster. The OCR contains information such as which database instances run on which nodes and which services run on which databases. The OCR also stores information about processes that Oracle Clusterware controls. The OCR resides on shared storage that is accessible by all the nodes in your cluster. Oracle Clusterware can multiplex, or maintain multiple copies of, the OCR and Oracle recommends that you use this feature to ensure high availability.
Note:
If you choose not to use Oracle ASM for storing your Oracle Clusterware files, then both the voting disks and the OCR must reside on a cluster file system that you configure before you install Oracle Clusterware in the Grid home.
These Oracle Clusterware components require the following disk space on a shared file system:
Three Oracle Cluster Registry (OCR) files, 280 MB each, or 840 MB total disk space
Three voting disk files, 280 MB each, or 840 MB total disk space
If you are not using Oracle ASM for storing Oracle Clusterware files, then for best performance and protection, you should use multiple disks, each on a different disk controller, for voting disk file placement. Ensure that each voting disk is configured so that it does not share any hardware device and does not have a single point of failure.
Note:
When you install Oracle software, Oracle Universal Installer (OUI) automatically performs hardware prerequisite checks and notifies you if they are not met.
See Also:
Your platform-specific Oracle Clusterware installation guide
An Oracle RAC cluster comprises two or more nodes that are linked by a private interconnect. The interconnect serves as the communication path between nodes in the cluster. Each cluster database instance uses the interconnect for messaging to synchronize the use of shared resources by each instance. Oracle RAC also uses the interconnect to transmit data blocks that are shared between the instances.
Oracle Clusterware requires that you connect the nodes in the cluster to a private network by way of a private interconnect. The private interconnect is a separate network that you configure between cluster nodes. The interconnect used by Oracle RAC is the same interconnect that Oracle Clusterware uses. This interconnect should be a private interconnect, meaning it is not accessible to nodes that are not members of the cluster.
When you configure the network for Oracle RAC and Oracle Clusterware, each node in the cluster must meet the following requirements:
Each node has at least two network interface cards (NIC), or network adapters. One adapter is for the public network interface and the other adapter is for the private network interface (the interconnect). Install additional network adapters on a node if that node meets either of the following conditions:
Does not have at least two network adapters
Has two network interface cards but is using network attached storage (NAS). You should have a separate network adapter for NAS.
Note:
For the most current information about supported network protocols and hardware for Oracle RAC installations, refer to the Certify pages on My Oracle Support (formerly OracleMetaLink), which is located at https://metalink.oracle.com
Public interface names must be the same for all nodes. If the public interface on one node uses the network adapter eth0, then you must configure eth0 as the public interface on all nodes.
You should configure the same private interface names for all nodes as well. If eth1 is the private interface name for the first node, then eth1 should be the private interface name for your second node.
The network adapter for the public interface must support TCP/IP.
The network adapter for the private interface must support the user datagram protocol (UDP) using high-speed network adapters and a network switch that supports TCP/IP (Gigabit Ethernet or better).
Note:
You must use a switch for the interconnect. Oracle recommends that you use a dedicated network switch. Token rings and crossover cables are not supported for the interconnect.
For the private network, the endpoints of all designated interconnect interfaces must be completely reachable on the network. Every node in the cluster should be able to connect to every private network interface in the cluster.
The host name of each node must conform to the RFC 952 standard, which permits alphanumeric characters. Host names using underscores ("_") are not allowed.
When performing an advanced installation of the Oracle grid infrastructure software, you can choose to use Grid Naming Service (GNS) and Dynamic Host Configuration Protocol (DHCP) for virtual IPs (VIPs). Grid Naming Service is a new feature in Oracle Database 11g release 2 that uses multicast Domain Name Server (mDNS) to enable the cluster to assign host names and IP addresses dynamically as nodes are added to and removed from the cluster, without requiring additional network address configuration in the domain name server (DNS). For more information about GNS, refer to Oracle Grid Infrastructure Installation Guide for Linux.
This guide documents how to perform a typical installation, which does not use GNS. You must configure the following addresses manually in your corporate DNS:
A public IP address for each node
A virtual IP address for each node
Three single client access name (SCAN) addresses for the cluster
Note:
Oracle Clusterware uses interfaces marked as private as the cluster interconnects.
During installation, a SCAN for the cluster is configured, which is a domain name that resolves to all the SCAN addresses allocated for the cluster. The IP addresses used for the SCAN addresses must be on the same subnet as the VIP addresses. The SCAN must be unique within your network. The SCAN addresses should not respond to ping commands before installation.
During installation of the Oracle grid infrastructure, a listener is created for each of the SCAN addresses. Clients that access the Oracle RAC database should use the SCAN or SCAN address, not the VIP name or address. If an application uses a SCAN to connect to the cluster database, the network configuration files on the client computer do not need to be modified when nodes are added to or removed from the cluster. The SCAN and its associated IP addresses provide a stable name for clients to use for connections, independent of the nodes that make up the cluster. Clients can connect to the cluster database using the easy connect naming method and the SCAN.
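For example, assuming the SCAN used later in this guide (docrac-scan.example.com), the default listener port of 1521, and a database service named orcl (the service name here is only an illustration), a client could connect as a database user such as system with the easy connect syntax:
sqlplus system@docrac-scan.example.com:1521/orcl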
See Also:
Oracle Database Net Services Administrator's Guide for information about the easy connect naming method
Refer to the Oracle Grid Infrastructure Installation Guide and the Oracle Real Application Clusters Installation Guide for your platform for information about exact requirements. These requirements can include any of the following:
The operating system version
The kernel version of the operating system
Modifying the values for kernel parameters
Installed packages, patches, or patch sets
Installed compilers and drivers
Web browser type and version
Additional application software requirements
If you are currently running an operating system version that is not supported by Oracle Database 11g release 2 (11.2), then you must first upgrade your operating system before installing Oracle Real Application Clusters 11g.
If you are using Oracle Enterprise Linux as your operating system, then you can use the Oracle Validated RPM system configuration script to configure your system.
To determine if the operating system requirements for Oracle Enterprise Linux have been met:
To determine which distribution and version of Linux is installed, run the following command at the operating system prompt as the root user:
# cat /proc/version
To determine which chip architecture each server is using and which version of the software you should install, run the following command at the operating system prompt as the root user:
# uname -m
This command displays the processor type. For a 64-bit architecture, the output would be "x86_64".
To determine if the required errata level is installed, run the following command as the root user:
# uname -r
2.6.9-55.0.0.0.2.ELsmp
Like most software, the Linux kernel is updated to fix bugs in the operating system. These kernel updates are referred to as erratum kernels or errata levels.
The output in the previous example shows that the kernel version is 2.6.9, and the errata level (EL) is 55.0.0.0.2.ELsmp. Review the required errata level for your distribution. If the errata level is below the required minimum errata level, then install the latest kernel update for your operating system. The kernel updates are available from your operating system vendor.
To ensure there are no operating system issues affecting installation, make sure you have installed all the operating system patch updates and packages that are listed in Oracle Clusterware and Oracle Real Application Clusters Installation Guide for your platform. If you are using Oracle Enterprise Linux, you can determine if the required packages, or programs that perform specific functions or calculations, are installed by using the following command as the root user:
# rpm -q package_name
The variable package_name is the name of the package you are verifying, such as setarch. If a package is not installed, then install it from your Linux distribution media or download the required package version from your Linux vendor's Web site.
You can also use either up2date or YUM (Yellow dog Updater, Modified) to install packages and their dependencies on some Linux systems. YUM uses repositories to automatically locate and obtain the correct RPM packages for your system.
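For example, on a system that is registered with a yum repository, you could check for and install the setarch package mentioned above with commands similar to the following (the package name is only an example):
# rpm -q setarch
# yum install setarch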
See Also:
"About Installing Oracle RAC on Different Operating Systems"
Oracle Clusterware and Oracle Real Application Clusters Installation and Configuration Guide for your platform
After you have verified that your system meets the basic requirements for installing Oracle RAC, the next step is to configure the server in preparation for installation.
In this section, you will perform the following tasks:
See Also:
Depending on whether or not this is the first time Oracle software is being installed on this server, you may need to create operating system groups.
To install the Oracle grid infrastructure software and Oracle RAC, you must create the following operating system groups and users:
The Oracle Inventory group (typically, oinstall) for all installations. The Oracle Inventory group must be the primary group for Oracle software installation owners. Members of the Oracle Inventory group have access to the Oracle Inventory directory. This directory is the central inventory record of all Oracle software installations on a server, as well as the installation logs and trace files from each installation.
An Oracle software owner. This is the user account you use when installing the software.
If you want to use a single software owner for all installations, then typically this user has a name like oracle. If you plan to install the Oracle grid infrastructure and the Oracle RAC software using separate software owners to separate grid infrastructure administrative privileges from Oracle Database administrative privileges, then typically you would use grid for the Oracle grid infrastructure software owner and oracle for the Oracle RAC software owner.
The OSDBA group (typically, dba) for Oracle Database authentication
Note:
If installing Oracle RAC on Microsoft Windows, OUI automatically creates the ORA_DBA group for authenticating SYSDBA access. Also, if you install the Oracle RAC software while logged in to an account with administrative privileges, you do not need to create a separate user for the installation.
If you want to create separate Oracle software owners so you can use separate users and operating system privilege groups for the different Oracle software installations, then note that each of these users must have the Oracle central inventory group as their primary group. Members of this group have write privileges to the Oracle Inventory directory. In Oracle documentation, this group is represented as oinstall in code examples. A user created to own only the Oracle grid infrastructure binaries is called the grid user. This user owns both the Oracle Clusterware and Automatic Storage Management binaries.
Note:
You can no longer have separate Oracle Clusterware and Oracle ASM installation owners.
If you use one installation owner for both Oracle grid infrastructure and Oracle RAC, then when you want to perform administration tasks, you must change the value of the ORACLE_HOME environment variable to match the instance you want to administer (Oracle ASM, in the Grid home, or a database instance in the Oracle home). To change the ORACLE_HOME environment variable, use a command syntax similar to the following example, where /u01/app/grid is the Oracle grid infrastructure home:
ORACLE_HOME=/u01/app/grid; export ORACLE_HOME
If you try to administer an instance using sqlplus, lsnrctl, or asmcmd commands while ORACLE_HOME is set to a different binary path, then you will encounter errors. The Oracle home path does not affect srvctl commands.
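The following sketch shows how you might switch between the two environments when using a single installation owner. The Grid home path matches the earlier example; the Oracle home path and the instance names (+ASM1, orcl1) are illustrative and depend on your configuration. To administer Oracle ASM from the Grid home:
ORACLE_HOME=/u01/app/grid; export ORACLE_HOME
ORACLE_SID=+ASM1; export ORACLE_SID
asmcmd lsdg
To administer a database instance from the Oracle home:
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1; export ORACLE_HOME
ORACLE_SID=orcl1; export ORACLE_SID
sqlplus / as sysdba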
You can create additional users and groups to divide administrative access privileges to the Oracle grid infrastructure installation from other administrative users and groups associated with other Oracle installations. Separating administrative access is implemented by specifying membership in different operating system groups, and separating installation privileges is implemented by using different installation owners for each Oracle installation.
The optional users and groups you can create are:
The OSASM group (for example, asm) for Oracle Automatic Storage Management (Oracle ASM) authentication. If this option is not chosen, then dba is the default OSASM group.
The OSDBA group for ASM (typically, asmdba). Members of the OSDBA group for ASM are granted read and write access to files managed by Oracle ASM. The Oracle Database software owner (typically oracle) must be a member of this group, and all users with OSDBA membership on databases that you want to have access to the files managed by Oracle ASM should be members of the OSDBA group for ASM.
The OSOPER group for Oracle Database (typically, oper). Create this group if you want certain operating system users to have a limited set of database administrative privileges (the SYSOPER privilege). Members of the OSDBA group automatically have all privileges granted by the SYSOPER privilege.
Note:
Each Oracle software owner must be a member of the same central inventory group. You cannot have more than one central inventory group on a server.
By using different operating system groups for authenticating administrative access to each Oracle Database installation, members of the different groups have SYSDBA privileges for only one database, rather than for all the databases on the system. Also, if you configure a separate operating system group for Oracle ASM authentication, then you can have users that have SYSASM access to the Oracle ASM instances but do not have SYSDBA access to the database instances.
In this guide, a single software owner named oracle is used for all installations. The oracle user will belong to the oinstall and dba operating system groups.
To create one software owner with all operating system-authenticated administration privileges:
Determine the groups that already exist on your server by listing the contents of the /etc/group file:
cat /etc/group
If this is the first time Oracle software has been installed on your server, and the Oracle Inventory group does not exist, then create the Oracle Inventory group (oinstall) with a group ID that is currently not in use on all the nodes in your cluster. Enter a command as the root user that is similar to the following:
# /usr/sbin/groupadd -g 1000 oinstall
Create an OSDBA (dba) group with a group ID that is currently not in use on all the nodes in your cluster by entering a command as the root user that is similar to the following:
# /usr/sbin/groupadd -g 1001 dba
If the user that owns the Oracle software (oracle) does not exist on your server, you must create the user. Select a user ID (UID) that is currently not in use on all the nodes in your cluster. To determine which users have already been created on your server, list the contents of the /etc/passwd file using the following command:
cat /etc/passwd
The following command shows how to create the oracle user and the user's home directory (/home/oracle) with the default group as oinstall and the secondary group as dba, using a UID of 1100:
# useradd -u 1100 -g oinstall -G dba -d /home/oracle -r oracle
Set the password for the oracle account using the following command. Replace password with your own password.
passwd oracle
Changing password for user oracle.
New UNIX password: password
Retype new UNIX password: password
passwd: all authentication tokens updated successfully.
Repeat Step 1 through Step 5 on each node in your cluster.
Verify that the attributes of the user oracle are identical on each node of your cluster:
id oracle
The command output should be similar to the following:
uid=1100(oracle) gid=1000(oinstall) groups=1000(oinstall),1001(dba)
To install Oracle software, Secure Shell (SSH) connectivity must be set up between all cluster member nodes. OUI uses the ssh and scp commands during installation to run remote commands on and copy files to the other cluster nodes. You must configure SSH so that these commands do not prompt for a password. SSH is also used by the configuration assistants, Enterprise Manager, and when adding nodes to the cluster.
You can configure SSH from the Oracle Universal Installer (OUI) interface during installation for the user account running the installation. The automatic configuration creates passwordless SSH connectivity between all cluster member nodes.
To enable the script to run, you must remove stty commands from the profiles of any Oracle software installation owners, and remove other security measures that are triggered during a login and that generate messages to the terminal. These messages, mail checks, and other displays prevent Oracle software installation owners from using the SSH configuration script that is built into Oracle Universal Installer. If they are not disabled, then SSH must be configured manually before an installation can be run.
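After SSH has been configured, either automatically by OUI or manually, you can confirm that passwordless connectivity works by running a remote command from each node as the software installation owner. Using the node names from the examples in this guide, each of the following commands should return the remote node's date without prompting for a password (run the first from racnode1 and the second from racnode2):
ssh racnode2 date
ssh racnode1 date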
On Linux systems, to enable Oracle Universal Installer to use the ssh and scp commands without being prompted for a pass phrase, you must have user equivalency in the cluster. User equivalency exists in a cluster when the following occurs on all nodes in the cluster:
A given user has the same user name, user ID (UID), and password.
A given user belongs to the same groups.
A given group has the same group ID (GID).
For information on configuring user equivalency, see "Configuring Operating System Users and Groups".
See Also:
Oracle Grid Infrastructure Installation Guide for Linux for more information about manually configuring SSH
On Oracle Enterprise Linux, you run Oracle Universal Installer (OUI) from the oracle account. Oracle Universal Installer obtains information from the environment variables configured for the oracle user. Before running OUI, you must make the following changes to the Oracle grid infrastructure software owner's shell startup file:
Set the default file mode creation mask (umask) of the installation user (oracle) to 022 in the shell startup file. Setting the mask to 022 ensures that the user performing the software installation creates files with 644 permissions.
Set ulimit settings for file descriptors (nofile) and processes (nproc) for the installation user (oracle).
Set the DISPLAY environment variable for the software owner in preparation for the Oracle grid infrastructure installation.
Remove any lines in the file that set values for the ORACLE_SID, ORACLE_HOME, or ORACLE_BASE environment variables.
After you have saved your changes, run the shell startup script to configure the environment.
Also, if the /tmp directory has less than 1 GB of available disk space, but you have identified a different, non-shared file system that has at least 1 GB of available space, you can set the TEMP and TMPDIR environment variables to specify the alternate temporary directory on this file system.
To review your current environment settings, use the env | more command as the oracle user.
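The following is a minimal sketch of the kind of entries the oracle user's shell startup file (for example, .bash_profile for the Bash shell) might contain after these changes. The ulimit values, the temporary directory, and the DISPLAY value are illustrative; use the values required for your platform and environment:
umask 022

# Illustrative shell limits for the installation owner
ulimit -u 16384
ulimit -n 65536

# Only needed if /tmp has less than 1 GB of free space (path is an example)
TEMP=/u01/tmp; export TEMP
TMPDIR=/u01/tmp; export TMPDIR

# Workstation running the X server (value is an example)
DISPLAY=workstation.example.com:0.0; export DISPLAY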
Refer to the Oracle Clusterware Installation Guide for your platform for more information on how to configure the Oracle software owner environment prior to installation.
Note:
Remove any stty commands from such files before you start the installation. On Linux systems, if there are hidden files (such as logon or profile scripts) that contain stty commands, when these files are loaded by the remote shell during installation, OUI indicates an error and stops the installation.
Oracle Database 11g release 2 (11.2) clients connect to the database using a single client access name (SCAN). The SCAN and its associated IP addresses provide a stable name for clients to use for connections, independent of the nodes that make up the cluster. A SCAN works by being able to resolve to multiple IP addresses referencing multiple listeners in the cluster that handle public client connections. These listeners, called SCAN listeners, are created during installation.
To configure the network in preparation for installing Oracle grid infrastructure:
Determine your cluster name. The cluster name should satisfy the following conditions:
The cluster name is globally unique throughout your host domain.
The cluster name is at least 1 character long and less than 15 characters long.
The cluster name must consist of the same character set used for host names: single-byte alphanumeric characters (a to z, A to Z, and 0 to 9) and hyphens (-).
If you use third-party vendor clusterware, then Oracle recommends that you use the vendor cluster name.
Determine the public host name and virtual host name for each node in the cluster.
For the public host name, use the primary host name of each node. In other words, use the name displayed by the hostname command. This host name can be either the permanent or the virtual host name, for example: racnode1.
Determine a virtual host name for each node. A virtual host name is a public node name that is used to reroute client requests sent to the node if the node is down. Oracle recommends that you provide a name in the format <public hostname>-vip, for example: racnode1-vip.
Identify the interface names and associated IP addresses for all network adapters by executing the following command on each node:
# /sbin/ifconfig
From the output, identify the interface name (such as eth0) and IP address for each network adapter that you want to specify as a public network interface.
If your operating system supports it, you can also use the following command to start a graphical interface that you can use to configure the network adapters and the /etc/hosts file:
/usr/bin/system-config-network &
Note:
When you install Oracle Clusterware and Oracle RAC, you will be asked to provide this network information.
On each node in the cluster, assign a public IP address with an associated network name to one network adapter. The public name for each node should be registered with your domain name system (DNS). IP addresses on the subnet you identify as private are assigned as private IP addresses for cluster member nodes. You do not need to configure these addresses manually in the /etc/hosts file.
You can test whether or not an interconnect interface is reachable using a ping command.
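For example, assuming the private addresses shown in Table 2-1, you could run the following command from the first node to confirm that the private interface of the second node is reachable:
# ping -c 3 192.168.0.2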
On each node in the cluster, configure a third IP address that will serve as a virtual IP address. Use an IP address that meets the following requirements:
The virtual IP address and the network name must not be currently in use.
The virtual IP address must be on the same subnet as your public IP address.
The virtual host name for each node should be registered with your DNS.
Define a SCAN that resolves to three IP addresses in your DNS.
When you complete the network configuration, the IP address and network interface configuration should be similar to what is shown in Table 2-1 (your node names and IP addresses might be different):
Table 2-1 Manual Network Configuration Example
Identity | Home Node | Host Node | Given Name | Type | Address | Address Assigned By | Resolved By |
---|---|---|---|---|---|---|---|
Node 1 Public | Node 1 | racnode1 | racnode1 (Footnote 1) | Public | 192.0.2.101 | Fixed | DNS |
Node 1 VIP | Node 1 | Selected by Oracle Clusterware | racnode1-vip | Virtual | 192.0.2.104 | Fixed | DNS, hosts file |
Node 1 Private | Node 1 | racnode1 | | Private | 192.168.0.1 | Fixed | DNS, hosts file, or none |
Node 2 Public | Node 2 | racnode2 | racnode2 (Footnote 1) | Public | 192.0.2.102 | Fixed | DNS |
Node 2 VIP | Node 2 | Selected by Oracle Clusterware | racnode2-vip | Virtual | 192.0.2.105 | Fixed | DNS, hosts file |
Node 2 Private | Node 2 | racnode2 | | Private | 192.168.0.2 | Fixed | DNS, hosts file, or none |
SCAN VIP 1 | none | Selected by Oracle Clusterware | docrac-scan | Virtual | 192.0.2.201 | Fixed | DNS |
SCAN VIP 2 | none | Selected by Oracle Clusterware | docrac-scan | Virtual | 192.0.2.202 | Fixed | DNS |
SCAN VIP 3 | none | Selected by Oracle Clusterware | docrac-scan | Virtual | 192.0.2.203 | Fixed | DNS |
Footnote 1: Node host names may resolve to multiple addresses.
Even if you are using a DNS, Oracle recommends that you add lines to the /etc/hosts file on each node, specifying the public IP addresses. Configure the /etc/hosts file so that it is similar to the following example:
#eth0 - PUBLIC
192.0.2.100   racnode1.example.com   racnode1
192.0.2.101   racnode2.example.com   racnode2
Note:
Make a note of the addresses you configured in the /etc/hosts file or registered with DNS. When you install Oracle grid infrastructure and Oracle RAC, you will be prompted for the public, virtual IP, and SCAN addresses.
Oracle strongly recommends that you do not configure SCAN VIP addresses in the hosts file. Use DNS resolution for SCAN VIPs.
The fully qualified SCAN for the cluster defaults to cluster_name-scan.GNS_subdomain_name, for example, docrac-scan.example.com. The short SCAN for the cluster is docrac-scan. You can use any name for the SCAN, as long as it is unique within your network and conforms to the RFC 952 standard.
If you configured the IP addresses in a DNS server, then, as the root user, change the hosts search order in /etc/nsswitch.conf on all nodes as shown here:
Old:
hosts: files nis dns
New:
hosts: dns files nis
After modifying the nsswitch.conf file, restart the nscd daemon on each node using the following command:
# /sbin/service nscd restart
After you have completed the installation process, configure clients to use the SCAN to access the cluster. Using the previous example, the clients would use docrac-scan to connect to the cluster.
See Also:
Your platform-specific Oracle Clusterware installation guide
After you have configured the network, perform verification tests to make sure it is configured properly. If there are problems with the network connection between nodes in the cluster, the Oracle Clusterware installation will fail.
To verify the network configuration on a two-node cluster that is running Oracle Enterprise Linux:
As the root user, verify the configuration of the public and private networks. Verify that the interfaces are configured on the same network (either private or public) on all nodes in your cluster.
In this example, eth0 is used for the public network and eth1 is used for the private network on each node.
# /sbin/ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0E:0C:08:67:A9
          inet addr: 192.0.2.100  Bcast:192.0.2.255  Mask:255.255.240.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:270332689 errors:0 dropped:0 overruns:0 frame:0
          TX packets:112346591 errors:2 dropped:0 overruns:0 carrier:2
          collisions:202 txqueuelen:1000
          RX bytes:622032739 (593.2 MB)  TX bytes:2846589958 (2714.7 MB)
          Base address:0x2840 Memory:fe7e0000-fe800000
eth1      Link encap:Ethernet  HWaddr 00:04:23:A6:CD:59
          inet addr: 10.10.10.11  Bcast: 10.10.10.255  Mask:255.255.240.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:21567028 errors:0 dropped:0 overruns:0 frame:0
          TX packets:15259945 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:4091201649 (3901.6 MB)  TX bytes:377502797 (360.0 MB)
          Base address:0x2800 Memory:fe880000-fe8a0000
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:52012956 errors:0 dropped:0 overruns:0 frame:0
          TX packets:52012956 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:905082901 (863.1 MB)  TX bytes:905082901 (863.1 MB)
As the root user, verify the network configuration by using the ping command to test the connection from each node in your cluster to all the other nodes. For example, as the root user, you might run the following commands on each node:
# ping -c 3 racnode1.example.com
# ping -c 3 racnode1
# ping -c 3 racnode2.example.com
# ping -c 3 racnode2
You should not get a response from the nodes using the ping command for the virtual IPs (racnode1-vip, racnode2-vip) or the SCAN IPs until after Oracle Clusterware is installed and running. If the ping commands for the public addresses fail, resolve the issue before you proceed.
Ensure that you can access the default gateway with a ping command. To identify the default gateway, use the route command, as described in the Oracle Enterprise Linux Help utility.
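For example, you could display the routing table and then test the gateway as follows; the gateway address shown is only an illustration, so substitute the address reported in the output of the route command:
# /sbin/route -n
# ping -c 3 192.0.2.1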
When you install the Oracle software on your server, Oracle Universal Installer expects the operating system to have specific packages and software applications installed.
This section covers the following topics:
You must also ensure that you have a certified combination of the operating system and the Oracle Database software by referring to My Oracle Support (formerly OracleMetaLink) certification, which is located at the following Web site
https://metalink.oracle.com
You can find this information by clicking Certify, and then selecting 1. View Certifications by Product.
Note:
Oracle Universal Installer verifies that your server and operating system meet the listed requirements. However, check the requirements before you start Oracle Universal Installer to ensure that your server and operating system will meet them.
Before starting the installation, ensure that the date and time settings on both nodes are set as closely as possible to the same date and time. A cluster time synchronization mechanism ensures that the internal clocks of all the cluster members are synchronized. For Oracle RAC on Linux, you can use either the Network Time Protocol (NTP) or the Oracle Cluster Time Synchronization Service.
NTP is a protocol designed to synchronize the clocks of servers connected by a network. When using NTP, each server on the network runs client software to periodically make timing requests to one or more servers, referred to as reference NTP servers. The information returned by the timing request is used to adjust the server's clock. All the nodes in your cluster should use the same reference NTP server.
If you do not configure NTP, Oracle will configure and use the Cluster Time Synchronization Service (CTSS). CTSS can also be used to synchronize the internal clocks of all the members in the cluster. CTSS keeps the member nodes of the cluster synchronized. CTSS designates the first node in the cluster as the master and then synchronizes all other nodes in the cluster to have the same time as the master node. CTSS does not use any external clock for synchronization.
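On Oracle Enterprise Linux, you can check whether NTP is configured and running with commands similar to the following. If NTP is not active when Oracle Clusterware is installed, CTSS is used instead:
# /sbin/service ntpd status
# cat /etc/ntp.conf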
Note:
Using NTP or CTSS does not protect your system against human error resulting from a change in the system time for a node.
See Also:
Your platform-specific Oracle Clusterware installation guide
OUI checks the current settings for various kernel parameters to ensure they meet the minimum requirements for deploying Oracle RAC. For production database systems, Oracle recommends that you tune the settings to optimize the performance of your particular system.
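For example, you can review the current values of kernel parameters with the sysctl command as the root user. The parameters shown here are examples of values commonly checked on Linux; see the installation guide for the complete list and the required minimums:
# /sbin/sysctl kernel.shmmax
# /sbin/sysctl kernel.sem
# /sbin/sysctl fs.file-max
# /sbin/sysctl net.ipv4.ip_local_port_range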
Note:
If you find parameter settings or shell limit values on your system that are greater than the values mentioned by OUI, then do not modify the parameter setting.
See Also:
Your platform-specific Oracle Clusterware installation guide
You may be required to perform special configuration steps that are specific to the operating system on which you are installing Oracle RAC, or for the components used with your cluster. The following list provides examples of operating system-specific installation tasks:
Configure the use of Huge Pages on SUSE Linux, Red Hat Enterprise Linux, or Oracle Enterprise Linux.
Configure the hangcheck-timer module on SUSE Linux, Red Hat Enterprise Linux, or Oracle Enterprise Linux.
Set shell limits for the oracle user on Red Hat Linux or Oracle Enterprise Linux systems to increase the number of files and processes available to Oracle Clusterware and Oracle RAC (see the example following this list).
Start the Telnet service on Microsoft Windows.
Create X library symbolic links on HP-UX.
Configure network tuning parameters on AIX Based Systems.
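For example, on Red Hat Enterprise Linux or Oracle Enterprise Linux, shell limits for the oracle user are typically raised by adding entries to the /etc/security/limits.conf file similar to the following. The values shown are illustrative; use the values required by your release, as documented in the installation guide:
oracle  soft  nproc   2047
oracle  hard  nproc   16384
oracle  soft  nofile  1024
oracle  hard  nofile  65536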
See Also:
"About Installing Oracle RAC on Different Operating Systems"
Your platform-specific Oracle Clusterware installation guide
Oracle RAC requires access to a shared file system for storing Oracle Clusterware files. You must also determine where the Oracle software and database files will be installed.
This section describes the storage configuration tasks that you must complete before you start Oracle Universal Installer. It includes information about the following topics:
See Also:
Your platform-specific Oracle Clusterware installation guide
The Oracle Inventory (oraInventory) directory is the central inventory record of all Oracle software installations on a server. The oraInventory directory contains the following:
A registry of the Oracle home directories (Oracle grid infrastructure and Oracle Database) on the system
Installation logs and trace files from installations of Oracle software. These files are also copied to the respective Oracle homes for future reference
The oraInventory directory is created by OUI during installation.
By default, the Oracle Inventory directory is not installed under the Oracle base directory. This is because all Oracle software installations share a common Oracle Inventory, so there is only one Oracle Inventory for all users, whereas each user that performs a software installation can use a separate Oracle base directory.
If you have an existing Oracle Inventory, then ensure that you use the same Oracle Inventory for all Oracle software installations, and ensure that all Oracle software users you intend to use for installation have permissions to write to this directory.
To determine if you have an Oracle central inventory directory (oraInventory) on your system:
Run the following command to check for an existing Oracle Inventory directory:
# more /etc/oraInst.loc
If the oraInst.loc file exists, then the output from this command is similar to the following:
inventory_loc=/u01/app/oraInventory inst_group=oinstall
In the previous output example:
The inventory_loc parameter shows the location of the Oracle Inventory.
The inst_group parameter shows the name of the Oracle Inventory group (in this example, oinstall).
If the Oracle Inventory directory does not exist, the command returns an error message indicating that the file or directory does not exist.
If the Oracle Inventory directory does not exist, you do not need to create one prior to installing the Oracle software.
During installation, you are prompted to provide a path to a home directory to store the Oracle grid infrastructure binaries. OUI installs Oracle Clusterware and Oracle ASM into a directory referred to as Grid_home. Ensure that the directory path you provide meets the following requirements:
It should be created in a path outside existing Oracle homes.
It should not be located in a user home directory.
It should be created either as a subdirectory in a path where all files can be owned by root, or in a unique path.
If you create the path before installation, then it should be owned by the installation owner of Oracle grid infrastructure (oracle or grid), and set to 775 permissions.
Before you start the installation, you must have sufficient disk space on a file system for the Oracle grid infrastructure directory. The file system that you use for the Grid home directory must have at least 4.5 GB of available disk space.
The path to the Grid home directory must be the same on all nodes. As the root user, you should create a path compliant with Oracle Optimal Flexible Architecture (OFA) guidelines, so that OUI can select that directory during installation.
To create the Grid home directory:
To create the Grid home directory as the root user, enter the following commands:
# mkdir -p /u01/grid
# chown -R oracle:oinstall /u01/grid
Note:
Ensure the Grid home directory is not a subdirectory of the Oracle base directory. Installing Oracle Clusterware in an Oracle base directory will cause installation errors.
Oracle Universal Installer (OUI) creates the Oracle base directory for you in the location you specify. This directory is owned by the user performing the installation. The Oracle base directory (ORACLE_BASE) helps to facilitate the organization of Oracle installations, and helps to ensure that installations of multiple databases maintain an Optimal Flexible Architecture (OFA) configuration. OFA guidelines recommend that you use a path similar to the following for the Oracle base directory:
/mount_point/app/user
In the preceding path example, the variable mount_point is the mount point directory for the file system where you intend to install the Oracle software and user is the Oracle software owner (typically oracle). For OUI to recognize the path as an Oracle software path, it must be in the form u0[1-9]/app, for example, /u01/app.
The path to the Oracle base directory must be the same on all nodes. The permissions on the Oracle base directory should be at least 750.
Prior to installing any Oracle software, you must configure an Oracle base directory, which is used to determine the location of the Oracle Inventory directory, as well as for installing Oracle RAC.
Assume you have determined that the file system mounted as /u01 has sufficient room for both the Oracle grid infrastructure software and the Oracle RAC software. You have decided to make the /u01/app/oracle/11gR2 directory the Oracle base directory. The oracle user will be used to install all the Oracle software.
To create the Oracle base directory:
Log in as the root user.
Use the mkdir command to create the path to the Oracle base directory.
# mkdir -p /u01/app/oracle/11gR2
Change the ownership of the Oracle base path to the Oracle software owner, oracle.
# chown -R oracle:oinstall /u01/app/oracle/11gR2
Change the permissions on the Oracle base directory to 775.
# chmod -R 775 /u01/app/oracle/11gR2
The Oracle home directory is the location in which the Oracle RAC software is installed. You can use an Oracle home directory created in the local file system, for example, /u01/app/oracle/product/11.2.0/dbhome_1. The same directory must exist on every node in the cluster. You do not need to create these directories prior to installation. By default, the installer will suggest a subdirectory of the Oracle base directory for the Oracle home.
You can also use a shared Oracle home. The location of the shared Oracle home can be on network storage, or a supported cluster file system such as Oracle Automatic Storage Management Cluster File System (Oracle ACFS). For more information about Oracle ACFS, see Oracle Database Storage Administrator's Guide.
If you use the local file system for the Oracle home directory, and you want to install different versions of Oracle RAC or Oracle Database on the same server, then you must use a separate Oracle home directory for each software installation. Multiple versions of the same product or different products can run from different Oracle homes concurrently. Products installed in one Oracle home do not conflict or interact with products installed in another Oracle home.
Using different Oracle homes for your installed software allows you to perform maintenance operations on the Oracle software in one home without affecting the software in another Oracle home. However, it also increases your software maintenance costs, because each Oracle home must be upgraded or patched separately.
Each node in a cluster requires external shared disks for storing the Oracle Clusterware (Oracle Cluster Registry and voting disk) files, and Oracle Database files. The supported types of shared storage depend upon the platform you are using, for example:
A supported cluster file system, such as OCFS2 for Linux, OCFS for Microsoft Windows, or General Parallel File System (GPFS) on IBM platforms
Network file system (NFS), which is not supported on AIX Based Systems, Linux on POWER, or on IBM zSeries Based Linux
(Upgrades only) Shared disk partitions consisting of block devices or raw devices. Block devices are disk partitions that are not mounted using the Linux file system. Oracle Clusterware and Oracle RAC write to these partitions directly.
Note:
You cannot use OUI to install Oracle Clusterware files on block or raw devices. You cannot put Oracle Clusterware binaries and files on Oracle Automatic Storage Management Cluster File System (Oracle ACFS).
For all installations, you must choose the storage option that you want to use for Oracle Clusterware files and Oracle Database files.
If you decide to use OCFS2 to store the Oracle Clusterware files, you must use the proper version of OCFS2 for your operating system version. OCFS2 works with Oracle Enterprise Linux and Red Hat Linux kernel version 2.6.
Note:
For the most up-to-date information about supported storage options for Oracle RAC installations, refer to the Certify pages on My Oracle Support (formerly OracleMetaLink) at https://metalink.oracle.com
The examples in this guide use Oracle ASM to store the Oracle Clusterware and Oracle Database files. The Oracle grid infrastructure and Oracle RAC software will be installed on disks local to each node, not on a shared file system.
See Also:
Your platform-specific Oracle Clusterware installation guide if you are using a cluster file system or NFS
To use an NFS file system, it must be on a certified NAS device. If you have a certified network attached storage (NAS) device, then you can create zero-padded files in an NFS mounted directory and use those files as disk devices in an Oracle ASM disk group.
To ensure high availability of Oracle Clusterware files on Oracle ASM, you need to have at least 2 GB of disk space for Oracle Clusterware files in three separate failure groups, with at least three physical disks. Each disk must have at least 1 GB of capacity to ensure that there is sufficient space to create Oracle Clusterware files.
Use the following guidelines when identifying appropriate disk devices:
All of the devices in an Automatic Storage Management disk group should be the same size and have the same performance characteristics.
A disk group should not contain more than one partition on a single physical disk device.
Using logical volumes as a device in an Automatic Storage Management disk group is not supported with Oracle RAC.
The user account with which you perform the installation (oracle) must have write permissions to create the files in the path that you specify.
To configure NAS device files for creating disk groups:
Add and configure access to the disks to the NAS device. Make sure each cluster node has been granted access to all the disks that will be used by Oracle grid infrastructure software and Oracle Database software.
Refer to your NAS device documentation for more information about completing this step.
On a cluster node, log in as the root user (or use sudo for the following steps).
Configure access to the disks on the NAS devices. The process for completing this step can vary depending on the type of disks and the type of NAS service.
One example of the configuration process is shown here. The first step is to create a mount point directory on the local system:
# mkdir -p /mnt/oracleasm
To ensure that the NFS file system is mounted when the system restarts, add an entry for the file system in the mount file /etc/fstab.
For more information about editing the mount file for the operating system, refer to the Linux man pages. For more information about recommended mount options, refer to Oracle Grid Infrastructure Installation Guide for Linux.
Enter a command similar to the following to mount the NFS file system on the local system, where host is the host name or IP address of the file server, and pathname is the location of the storage within NFS (for example, /public):
# mount <host>:<pathname> /mnt/oracleasm
Choose a name for the disk group that you want to create, for example, nfsdg.
Create a directory for the files on the NFS file system, using the disk group name as the directory name, for example:
# mkdir /mnt/oracleasm/nfsdg
Use commands similar to the following to create the required number of zero-padded files in this directory:
# dd if=/dev/zero of=/mnt/oracleasm/nfsdg/disk1 bs=1024k count=1000
This example creates a 1 GB file named disk1 on the NFS file system. You must create one, two, or three files respectively to create an external, normal, or high redundancy disk group.
Enter the following commands to change the owner, group, and permissions on the directory and files that you created:
# chown -R oracle:dba /mnt/oracleasm
# chmod -R 660 /mnt/oracleasm
When installing Oracle RAC, if you choose to create an Oracle ASM disk group, you will need to change the disk discovery path to specify a regular expression that matches the file names you created, for example, /mnt/oracleasm/nfsdg/*.
When you reboot the server, unless you have configured special files for device persistence, a disk that appeared as /dev/sdg before the system shutdown can appear as /dev/sdh after the system is restarted. If you use ASMLib to configure the disks, then when you reboot the node:
The disk device names do not change
The ownership and group membership for these disk devices remains the same
You can copy the disk configuration implemented by ASM to other nodes in the cluster by running a simple command
The following sections describe how to configure and use ASMLib for your shared disk devices:
The ASMLib software is available from the Oracle Technology Network. Select the link for your platform on the ASMLib download page at:
http://www.oracle.com/technology/tech/linux/asmlib/index.html
You will see 4 to 6 packages for your Linux platform. The oracleasmlib package provides the actual ASM library. The oracleasm-support package provides the utilities used to get the ASM driver up and running. Both of these packages need to be installed.
The remaining packages provide the kernel driver for the ASM library. Each package provides the driver for a different kernel. You must install the appropriate package for the kernel you are running. Use the uname -r command to determine the version of the kernel on your server. The oracleasm kernel driver package will have that version string in its name. For example, if you are running Red Hat Enterprise Linux 4 AS, and the kernel you are using is the 2.6.9-55.0.12.ELsmp kernel, you would choose the oracleasm-2.6.9-55.0.12.ELsmp-2.0.3-1.x86_64.rpm package.
To install the ASMLib software packages:
Download the ASMLib packages to each node in your cluster.
Change to the directory where the package files were downloaded.
As the root user, use the rpm command to install the packages. For example:
# rpm -Uvh oracleasm-support-2.1.3-1.el4.x86_64.rpm
# rpm -Uvh oracleasmlib-2.0.4-1.el4.x86_64.rpm
# rpm -Uvh oracleasm-2.6.9-55.0.12.ELsmp-2.0.3-1.x86_64.rpm
After you have completed these commands, ASMLib is installed on the system.
Repeat steps 2 and 3 on each node in your cluster.
Now that the ASMLib software is installed, a few steps have to be taken by the system administrator to make the ASM driver available. The ASM driver needs to be loaded, and the driver file system needs to be mounted. This is taken care of by the initialization script, /usr/sbin/oracleasm.
To configure the ASMLib software after installation:
As the root user, run the following command:
# /usr/sbin/oracleasm configure
The script will prompt you for the default user and group to own the ASM driver access point. Specify the Oracle Database software owner (oracle) and the OSDBA group (dba).
The script will also prompt you to specify whether or not you want to start the ASMLib driver when the node is started, and whether or not you want to scan for the presence of any Oracle Automatic Storage Management disks when the node is started. Answer yes for both of these questions.
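The prompts and responses look similar to the following; the exact text can vary between ASMLib versions:
# /usr/sbin/oracleasm configure
Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done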
Repeat step 1 on each node in your cluster.
Every disk that is used in an ASM disk group must be accessible on each node. After you make the physical disk available to each node, you can then mark the disk device as an ASM disk. The /usr/sbin/oracleasm script is again used for this task.
If the target disk device supports partitioning, for example, raw devices, then you must first create a single partition that encompasses the entire disk. If the target disk device does not support partitioning, then you do not need to create a partition on the disk.
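For example, to create a single partition that spans an entire disk, you could run fdisk as the root user. The device name /dev/sdb is only an illustration, and the prompts shown are abbreviated:
# fdisk /dev/sdb
Command (m for help): n
Command action: p
Partition number (1-4): 1
First cylinder: <accept the default>
Last cylinder: <accept the default to use the whole disk>
Command (m for help): w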
To create ASM disks using ASMLib:
As the root user, use oracleasm to create ASM disks using the following syntax:
# /usr/sbin/oracleasm createdisk disk_name device_partition_name
In this command, disk_name is the name you choose for the ASM disk. The name you choose must contain only ASCII capital letters, numbers, or underscores, and the disk name must start with a letter, for example, DISK1, VOL1, or RAC_FILE1. The name of the disk partition to mark as an ASM disk is the device_partition_name. For example:
# /usr/sbin/oracleasm createdisk DISK1 /dev/sdb1
If you need to unmark a disk that was used in a createdisk command, you can use the following syntax:
# /usr/sbin/oracleasm deletedisk disk_name
Repeat step 1 for each disk that will be used by Oracle ASM.
After you have created all the ASM disks for your cluster, use the listdisks command to verify their availability:
# /usr/sbin/oracleasm listdisks
DISK1
DISK2
DISK3
On all the other nodes in the cluster, use the scandisks command to view the newly created ASM disks. You do not need to create the ASM disks on each node, only on one node in the cluster.
# /usr/sbin/oracleasm scandisks
Scanning system for ASM disks [ OK ]
After scanning for ASM disks, display the available ASM disks on each node to verify their availability:
# /usr/sbin/oracleasm listdisks
DISK1
DISK2
DISK3
Note:
At this point, you should restart each node on which you will be installing the Oracle grid infrastructure software. After the node has restarted, view the configured shared storage on each node. This helps to ensure that the system configuration is complete and persists across node shutdowns.
By default, the Linux 2.6 kernel device file naming scheme udev dynamically creates device file names when the server is started, and assigns ownership of them to root. If udev applies default settings, then it changes device file names and owners for voting disks or Oracle Cluster Registry partitions, corrupting them when the server is restarted. For example, a voting disk on a device named /dev/sdd owned by the user grid may be on a device named /dev/sdf owned by root after restarting the server.
If you use ASMLib, then you do not need to ensure permissions and device path persistency in udev. If you do not use ASMLib, then you must create a custom rules file. When udev is started, it sequentially carries out rules (configuration directives) defined in rules files. These files are in the path /etc/udev/rules.d/. Rules files are read in lexical order. For example, rules in the file 10-wacom.rules are parsed and carried out before rules in the rules file 90-ib.rules.
Where rules files describe the same devices, on Asianux, Red Hat, and Oracle Enterprise Linux, the last file read is the one that is applied. On SUSE 2.6 kernels, the first file read is the one that is applied.
To configure a rules file for disk devices, refer to Chapter 3 in Oracle Grid Infrastructure Installation Guide for Linux.
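As an illustration only, a custom rules file that fixes the owner, group, and permissions of a single ASM disk partition might contain an entry like the following. The file name, device name, owner, and group are assumptions for this example; see the referenced chapter for the supported way to identify devices persistently (for example, by matching on a device identifier rather than the kernel name):
# /etc/udev/rules.d/99-oracle-asmdevices.rules (example file name)
KERNEL=="sdb1", OWNER="oracle", GROUP="dba", MODE="0660"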