Oracle® Real Application Clusters Installation Guide 11g Release 2 (11.2) for Linux and UNIX, Part Number E10813-02
Most installation errors with Oracle Real Application Clusters (Oracle RAC) are due to a failure to complete all steps required before starting Oracle Universal Installer.
This chapter is intended for database administrators to use in consultation with system administrators and storage administrators to coordinate installation plan tasks for Oracle Clusterware, in preparation for completing an installation of Oracle RAC.
This chapter contains the following topics:
Confirming Cluster Readiness for Oracle RAC Installation with CVU
Stopping Existing Oracle Processes for Upgrades or Coexisting Databases
This section provides a list of tasks Oracle recommends you complete before starting Oracle Clusterware and Oracle RAC installation. Whether your location is a Tier IV data center with a large project team of system administrators, storage administrators, network administrators, database administrators, and third-party hardware and software vendors, or you are a project team of one, planning is important to help ensure that your installation proceeds smoothly.
It is beyond the scope of this documentation set to advise how to determine hardware sizing or capacity planning for your installation. Note that with Oracle Clusterware and Oracle RAC, you can add additional nodes and instances as needed in response to testing, or in response to increased workloads.
Review and complete the following steps as part of your installation plan:
Before you decide whether you want to install Oracle Database 11g release 2 (11.2) on existing hardware, or decide what server and storage hardware to purchase for an installation, log on to My Oracle Support:
Click the Certify tab. Check the Certification Matrix for Oracle RAC for the operating system platform on which you intend to install, to ensure that your hardware configuration is supported for use with Oracle Clusterware and Oracle RAC. You can receive guidance about supported hardware options that can assist you with your purchasing decisions.
At the time of this release, you can check the following URL for direct access to the Certification Matrix:
http://www.oracle.com/technology/support/metalink/index.html
In addition to specific certified hardware configurations, the Certify page provides support and patch information, and general advice about how to proceed with an Oracle Clusterware or Oracle Clusterware with Oracle RAC 11g release 2 (11.2) installation, including important information about vendor clusterware and other configuration issues.
Contact your Oracle Sales Representative if you do not have a My Oracle Support account.
Also, you may want to refer to Oracle.com (http://www.oracle.com) for additional resources about planning for specific implementation scenarios, best practices, and other information that can help you with your installation plan. In particular, refer to the following Web site:
http://www.oracle.com/technology/products/database/clustering/index.html
The Oracle Technology Network (OTN) contains white papers about deployment options, capacity planning, best practices on various NFS platforms, and extended cluster deployments, which are not addressed in this guide. You can review available papers at the following Web site:
http://www.oracle.com/technology/products/database/clustering/index.html
Installation of Oracle RAC consists of the following steps in order:
Prepare servers (system, storage, and network administration):
Install the operating system, and install the operating system packages and patches required for this release.
Create required groups, users, and software homes.
Set up domain name forwarding for Grid Naming Service (GNS) if you plan to deploy it, and set up network addresses in the DNS and on the server as needed.
Set up required storage.
(optional) Stage all software on one node for installation (the "local node").
Install Oracle grid infrastructure for a cluster, which includes Oracle Clusterware and Oracle Automatic Storage Management (system and storage administration):
Install Oracle grid infrastructure for a cluster. During installation, Fixup scripts perform additional configuration of operating system parameters, secure shell (SSH) for installation, and user environment variables.
Patch Oracle Clusterware and Automatic Storage Management to the latest patchset.
Install Oracle RAC (database administration):
Install Oracle Real Application Clusters.
Patch Oracle RAC to the latest patchset.
Complete postinstallation configuration of the Oracle RAC database.
All users intending to install Oracle Clusterware or Oracle RAC should use Cluster Verification Utility to ensure that the cluster is prepared for a successful installation.
Cluster Verification Utility is incorporated into Oracle Universal Installer, so it will run when you start the Oracle RAC installation. However, you can use Cluster Verification Utility to ensure that any packages or configuration required for Oracle RAC are in place before you begin your Oracle RAC installation.
Oracle provides Cluster Verification Utility to perform system checks in preparation for installation, patch updates, or other system changes. In addition, it can generate "fixup scripts," which are scripts run by the root user that can change many kernel parameters to at least the minimum settings required for a successful installation.
Learning how to use Cluster Verification Utility can help system administrators, storage administrators, and database administrators to ensure that each has completed required system configuration and preinstallation steps, so that installations, updates, or patches complete successfully. You can obtain the latest version of Cluster Verification Utility at the following URL:
http://www.oracle.com/technology/products/database/clustering/cvu/cvu_download_homepage.html
If you have a vendor performing hardware or operating system configuration steps for you, then ask the vendor to complete the relevant Cluster Verification Utility check of the cluster after they complete their work to ensure that your system is configured correctly.
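For example, a minimal check that a vendor or administrator might run after hardware and operating system setup is the post-hardware and operating system stage check; the node names node1 and node2 below are placeholders for your cluster node names, and the command is run from the directory where CVU is staged:

$ ./cluvfy stage -post hwos -n node1,node2 -verbose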
Database administrators should refer to the section "Confirming Cluster Readiness for Oracle RAC Installation with CVU" to confirm that their system is prepared for installation before they start an Oracle RAC installation.
If you have an existing Oracle installation, then document version numbers, patches, and other configuration information, and review upgrade procedures for your existing installation. Review Oracle upgrade documentation before proceeding with installation, to decide how you want to proceed.
Be aware that to install Oracle RAC 11g release 2 (11.2), you must have Oracle Clusterware and Oracle ASM 11g release 2 (11.2) installed on your cluster. The Oracle Clusterware release version must be equal to or greater than the Oracle RAC version that you want to install.
For late-breaking updates and best practices about preupgrade, post-upgrade, compatibility, and interoperability, refer to "Oracle Upgrade Companion," which is available through Note 785351.1 on My Oracle Support.
For upgrades, note the following:
You can have only one version of Oracle Clusterware running on a cluster at a time, and it must be at least as recent as the release of any other Oracle software (Oracle Database, Oracle RAC, or Automatic Storage Management) running on the cluster.
You can have multiple Oracle homes of Oracle Databases on your cluster. However, the Oracle RAC database software in these homes must be from a version that is equal to or less than the version of Oracle Clusterware that is installed; you cannot have a version of Oracle Database on Oracle Clusterware that was released after the version of Oracle Clusterware that you are running.
For example:
If you have Oracle Clusterware 11g release 2 installed as your clusterware, then you can have an Oracle Database 10g release 1 single-instance database running on one node, and separate Oracle RAC 10g release 1, release 2, and Oracle RAC 11g release 1 or release 2 databases also running on the cluster.
However, you cannot have Oracle Clusterware 10g release 2 installed on your cluster, and install Oracle RAC 11g.
Starting with releases 10.1.0.6 and 10.2.0.3, you can use Database Upgrade Assistant (DBUA) for patch set upgrades with Oracle RAC. You can also use DBUA to upgrade between major releases of Oracle RAC (for example, from 10.1 to 10.2, or from 10.2 to 11g).
You cannot change the owner of the Oracle Database home during an upgrade. You must use the same Oracle software owner that owns the existing Oracle Database home.
As with any system change, back up your existing database before attempting to install new software.
See Also:
Oracle Database Upgrade Guide
Before you start an installation where you want to support languages other than English, review Oracle Database Globalization Support Guide.
Note the following:
Oracle recommends that you use Unicode AL32UTF8 as the database character set.
Unicode is the universal character set that supports most of the currently spoken languages of the world. It also supports many historical scripts (alphabets). Unicode is the native encoding of many technologies, including Java, XML, XHTML, ECMAScript, and LDAP. Unicode is ideally suited for databases supporting the Internet and the global economy.
The locale setting of your operating system session determines the language of the user interface, and the globalization behavior for components such as Oracle Universal Installer, Oracle Net Configuration Assistant, and Database Configuration Assistant. It also determines the globalization behavior of Oracle Database sessions created by a user application through Oracle JDBC driver, unless overridden by the application.
The NLS_LANG environment variable determines the language of the user interface and the globalization behavior for components such as SQL*Plus, exp, and imp. It sets the language and territory used by the client application and the database. It also declares the character set for entering and displaying data by the client application.
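For example, as an illustration only, to run client tools in American English with the AL32UTF8 character set, you might set NLS_LANG in a Bourne, Bash, or Korn shell as follows; choose the language, territory, and character set appropriate for your site:

$ NLS_LANG=AMERICAN_AMERICA.AL32UTF8
$ export NLS_LANG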
Note:
Oracle Database Installation Guide for your platform contains a fuller discussion of database character sets used with different languages, and provides further information about installing and configuring Oracle Database globalization support.
Users planning to install Oracle Clusterware should review Oracle Grid Infrastructure Installation Guide for your operating system platform, particularly the Preinstallation and Storage chapters, to complete all required steps needed for a successful installation.
Oracle Grid Infrastructure Installation Guide also contains most tasks requiring root privileges or storage administrator privileges that need to be completed before starting an Oracle RAC installation.
In addition, review the Release Notes and My Oracle Support (https://metalink.oracle.com) to ensure that you have the most current information about system requirements and other information that can affect your installation. The short time that this review takes can save you much more time spent tracking down the causes of installation errors later. Also check that you have the most current version of this document; Oracle documentation is updated after release.
Oracle recommends that you install a Web browser on your cluster nodes, both to enable Oracle Enterprise Manager and Oracle Application Express, if you install Oracle RAC, and to access online documentation as needed. Online documentation is available in PDF and HTML formats.
Note:
Refer to Oracle Database Concepts for an overview of Oracle Database, and Oracle Real Application Clusters Administration and Deployment Guide for additional information about Oracle Clusterware or Oracle RAC configuration and deployment. Oracle Grid Infrastructure Installation Guide for your platform contains server and storage configuration information for Oracle RAC.
Oracle Clusterware must be installed successfully before attempting to install Oracle RAC.
To complete installations successfully, ensure that required hardware, network, and operating system preinstallation steps for Oracle software are complete. Failure to complete required preinstallation steps is the most common reason for failed installations.
Before Oracle Clusterware is installed as part of a grid infrastructure for a cluster installation, you should already have completed installing and configuring CPUs, memory, shared storage, network cards, host bus adaptors, interconnects, and any other networking or server hardware; and you should have installed the operating system, and any vendor clusterware. Review your vendor documentation to complete these tasks, and if relevant, work with your vendor to complete such Oracle preinstallation steps that are listed here to confirm that the vendor hardware and software is correctly configured.
During grid infrastructure for a cluster installation, the person completing the installation identifies the planned use for each global interface, identifying it as a Public interface type (used with public IP addresses and virtual IP addresses), a Private interface type (used with interconnects between cluster member nodes), or a Do not use interface type, which Oracle Clusterware and Oracle RAC should ignore. For example, an interface used as a dedicated interface for a network file system should be marked as a Do not use interface type.
Additional network configuration is not required during Oracle RAC configuration.
Server and network preparation for installation include the following:
The following is a summary of server hardware and software configuration.
Oracle Requires
Each node in a cluster requires the following:
Supported server hardware, including processors and system configuration.
Review My Oracle Support before starting installation on existing hardware, and before purchasing new hardware, to ensure that the hardware is supported with Oracle Clusterware with Oracle RAC 11g release 2 (11.2).
Note:
You must use the same operating system on each node in the cluster. Oracle strongly recommends that you use the same software configurations on each node of the cluster.
Operating system packages listed in the system requirements.
Customized operating system package installations are not supported, because they may not include required packages. You can add additional packages as needed, but if you subtract packages from the default packages set, then you may encounter "failed dependencies" errors for required software packages.
A supported interconnect software protocol on each node, to support Oracle Clusterware voting disk polling, and to support Cache Fusion with Oracle RAC. Your interconnect must be certified by Oracle for your platform.
Oracle Recommends: System Administrator Tasks
Oracle recommends the following to simplify server installation and maintenance, and to prevent service issues:
Utilizing a time protocol that ensures all nodes in the cluster use the same reference time (a command sketch for checking this follows this list). With Oracle Clusterware 11g release 2, if the Network Time Protocol (NTP) is not enabled at installation, then the Oracle Clusterware installation enables the Cluster Time Synchronization Service.
Configuring redundant switches, for all cluster sizes.
Using identical server hardware on each node, to simplify server maintenance.
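For example, on Linux systems that use the ntpd service, you can check whether the network time protocol daemon is running and enabled at boot with commands similar to the following; command names vary by distribution, so treat this as a sketch only:

# /sbin/service ntpd status
# /sbin/chkconfig --list ntpd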
Additional Options: System Administrators and Vendors
Though you do not need to use vendor clusterware with Oracle Clusterware, Oracle supports the use of many vendor clusterware options. However, you must install Oracle Clusterware to use Oracle RAC. Be aware that when you use vendor clusterware, Oracle Clusterware defers to the vendor clusterware for some tasks, such as node membership decisions.
You may require third-party vendor clusterware if you use a non-Ethernet interconnect.
After you have set up server hardware, review "Checking Hardware Requirements" in the preinstallation chapter to ensure that your system has enough RAM, that the TMP and TEMP_DIR locations have enough available space for the installation, and that your system meets other hardware requirements.
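For example, on Linux you can check physical memory, swap space, and available temporary space with commands similar to the following, and compare the output against the values listed in the preinstallation chapter:

$ grep MemTotal /proc/meminfo
$ grep SwapTotal /proc/meminfo
$ df -h /tmp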
Configure the Oracle software groups and users, and configure user environments, as described in the preinstallation chapters of Oracle Grid Infrastructure Installation Guide for your platform. These include the following (a command sketch follows this list):
Creating and configuring the Oracle RAC installation owner (typically oracle); this is the user account that you should use for installation.
Creating an OSDBA operating system group whose members are granted the SYSDBA privilege through operating system authentication (typically dba).
Creating binary and storage installation paths.
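The following is a minimal sketch of creating an Oracle Inventory group, an OSDBA group, and an installation owner on Linux. The group names oinstall and dba and the user name oracle are typical examples only; your site may use different names, specific numeric IDs, or an existing user, as described in the preinstallation chapter:

# /usr/sbin/groupadd oinstall
# /usr/sbin/groupadd dba
# /usr/sbin/useradd -g oinstall -G dba oracle
# passwd oracle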
See Also:
Oracle Grid Infrastructure Installation Guide for your platform
The Oracle base location is the location where Oracle Database binaries are stored. During installation, you are prompted for the Oracle base path. Typically, an Oracle base path for the database is created during Oracle Grid Infrastructure installation.
In preparation for installation, Oracle recommends that you set only the ORACLE_BASE environment variable to define paths for Oracle binaries and configuration files. Oracle Universal Installer (OUI) creates other paths and environment variables as necessary, in accordance with the Optimal Flexible Architecture (OFA) rules for well-structured Oracle software environments.
For example, with Oracle Database 11g, Oracle recommends that you do not set an Oracle home environment variable, and instead allow OUI to create it. If the Oracle base path is /u01/app/oracle, then by default, OUI creates the following Oracle home path:
/u01/app/oracle/product/11.2.0/dbhome_1
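For example, before starting the installation as the installation owner, you might set only the Oracle base variable in a Bourne, Bash, or Korn shell, using the example path shown above:

$ ORACLE_BASE=/u01/app/oracle
$ export ORACLE_BASE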
Ensure that the paths you select for Oracle software, such as Oracle home paths and the Oracle base path, use only ASCII characters. Because installation owner names are used by default for some paths, this ASCII character restriction applies to user names, file names, and directory names.
Note:
If you are upgrading from an existing Oracle RAC installation, then you must use the same type of Oracle home that you have in your existing installation. For example, if you have a shared Oracle home in your existing installation, then you must upgrade to a shared Oracle home with Oracle RAC 11g release 2 (11.2). Similarly, if you have local Oracle homes on cluster nodes, then you must upgrade to local Oracle homes on cluster nodes.
The following is an overview of network configuration requirements for Grid Naming Service (GNS) in a Grid Plug and Play configuration, and for manual network configuration. Network administrators and system administrators can refer to the Preinstallation chapter in Oracle Grid Infrastructure Installation Guide for your platform for detailed configuration information.
The network configuration for Oracle Clusterware and Oracle RAC requires several addresses. The following is a list of those addresses:
GNS virtual IP address (GNS installations only): A static IP address configured in the DNS. The GNS virtual IP listener forwards queries to nodes in the subdomain on the cluster managed by GNS.
Within the subdomain, the GNS uses multicast Domain Name Service (mDNS) to enable the cluster to map hostnames and IP addresses dynamically as nodes are added and removed from the cluster, without requiring additional host configuration in the DNS.
To enable GNS, you must have your network administrator provide a set of IP addresses for a subdomain assigned to the cluster (for example, grid.example.com), and forward DNS requests for that subdomain to the GNS virtual IP address for the cluster, which GNS will serve.
Single Client Access Name (SCAN): A domain name that resolves to all the addresses allocated for the SCAN. Oracle recommends that you allocate three addresses to the SCAN. During Oracle Grid Infrastructure installation, listeners are created for each of the SCAN addresses, and Oracle Clusterware controls which server responds to a SCAN address request.
For high availability, you should provide at least three IP addresses in the DNS to use for SCAN mapping. The SCAN domain name must be unique within your corporate network.
Virtual IP address: A public internet protocol (IP) address for each node, to be used as the Virtual IP address (VIP) for client connections. If a node fails, then Oracle Clusterware fails over the VIP address to an available node.
During installation, if you do not use Grid Naming Service (which provides the VIP automatically), you provide VIP addresses. The VIP for each node is associated with the same interface name on every node that is part of your cluster. If you have a domain name server (DNS), then your network administrator should register the host names for the VIP with the DNS, so that it is resolvable from any client, as well as the cluster nodes. The VIP should not be in use at the time of the installation, because this is an IP address that Oracle Clusterware manages.
Public IP address: A public host name address for each node, assigned by GNS, or assigned by the system administrator during initial system configuration for manual configurations. The public IP address name must be resolvable to the host name. Register both the public IP address and the VIP address with the DNS. If you do not have a DNS, then you must make sure that both addresses are in the /etc/hosts file on all cluster nodes.
Private IP address: A private IP address for each node to serve as the private interconnect address, dedicated exclusively to internode cluster communication. GNS configures the address automatically. If you select manual configuration, then you must ensure that the following is true for each private IP address:
It must be separate from the public network
It must be accessible on the same network interface on each node
It must be connected to a network switch between the nodes for the private network; crossover cable interconnects are not supported
The private interconnect is used for internode communication by both Oracle Clusterware and Oracle RAC. If you use manual configuration, then the private IP address must be available in each node's /etc/hosts file. Oracle recommends that it is configured on a dedicated switch (or switches) that are not connected to anything other than the nodes in the same cluster.
Note:
All host names must conform to the RFC 952 standard, which permits alphanumeric characters. Host names using underscores ("_") are not allowed.
Table 1-1 and the content following provide an overview of IP address requirements for manual network connections.
If you choose not to use GNS, then before installation your network administrator must configure SCAN, public, virtual, and private IP addresses, and your network administrator must reconfigure the network when nodes are added or removed from the cluster.
For example, with a two node cluster where each node has one public and one private interface, and you have defined a SCAN domain address to resolve on your DNS to one of three IP addresses, you might have the configuration shown in the following table for your network interfaces:
Table 1-1 Manual Network Configuration Example
| Identity | Home Node | Host Node | Given Name | Type | Address | Address Assigned By | Resolved By |
|---|---|---|---|---|---|---|---|
| Node 1 Public | Node 1 | | | Public | 192.0.2.101 | Fixed | DNS |
| Node 1 VIP | Node 1 | Selected by Oracle Clusterware | | Virtual | 192.0.2.104 | Fixed | DNS and hosts file |
| Node 1 Private | Node 1 | | | Private | 192.168.0.1 | Fixed | DNS and hosts file, or none |
| Node 2 Public | Node 2 | | | Public | 192.0.2.102 | Fixed | DNS |
| Node 2 VIP | Node 2 | Selected by Oracle Clusterware | | Virtual | 192.0.2.105 | Fixed | DNS and hosts file |
| Node 2 Private | Node 2 | | | Private | 192.168.0.2 | Fixed | DNS and hosts file, or none |
| SCAN VIP 1 | none | Selected by Oracle Clusterware | mycluster-scan | Virtual | 192.0.2.201 | Fixed | DNS |
| SCAN VIP 2 | none | Selected by Oracle Clusterware | mycluster-scan | Virtual | 192.0.2.202 | Fixed | DNS |
| SCAN VIP 3 | none | Selected by Oracle Clusterware | mycluster-scan | Virtual | 192.0.2.203 | Fixed | DNS |
Footnote 1 Node hostnames may resolve to multiple addresses.
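As an illustration only, if you configure addresses manually, the public, virtual, and private addresses from Table 1-1 might appear in each node's /etc/hosts file as shown below. The host names (node1, node1-vip, node1-priv, and so on) and the example.com domain are hypothetical; the SCAN name should be resolved by the DNS rather than by a hosts file, and you can confirm that it resolves to its three addresses with a command such as nslookup mycluster-scan.

192.0.2.101    node1.example.com        node1
192.0.2.104    node1-vip.example.com    node1-vip
192.168.0.1    node1-priv.example.com   node1-priv
192.0.2.102    node2.example.com        node2
192.0.2.105    node2-vip.example.com    node2-vip
192.168.0.2    node2-priv.example.com   node2-priv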
Secure Shell (SSH) configuration is required for Oracle Clusterware and Oracle RAC.
System administrators can follow the procedure described in "Configuring SSH on All Cluster Nodes" in the Preinstallation chapter of Oracle Grid Infrastructure Installation Guide for your operating system, to configure SSH access for each node on the cluster to all other nodes of the cluster.
Oracle recommends that you ask your system administrator to permit SSH connections on all cluster member nodes for the user account that will own the Oracle RAC installation. When this is enabled, you can use a script within the Oracle Universal Installer to set up SSH configuration for installation. If this script is not able to run because of environment restrictions for the user account you use for installation, then you or your system administrator will need to set up SSH manually for installation.
In addition, note that any messages displayed to the terminal, such as a "message of the day," can disable SSH configuration. Ask your system administrator to review the SSH section in Oracle Grid Infrastructure Installation Guide for your platform.
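For example, after SSH user equivalency is configured for the installation owner, a command run from each node to every other node should complete without prompting for a password and without printing additional text; node2 below is a placeholder for a remote cluster node name:

$ ssh node2 date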
You need a Web browser to access documentation, and to use Oracle Enterprise Manager and Oracle Application Express. Web browsers must support JavaScript and the HTML 4.0 and CSS 1.0 standards. The following browsers meet these requirements:
Netscape Navigator 7.2
Netscape Navigator 8.1
Mozilla version 1.7
Microsoft Internet Explorer 6.0 SP2
Microsoft Internet Explorer 7.0 or later
Firefox 1.0.4
Firefox 1.5
Firefox 2.0
Oracle Clusterware and Oracle RAC are tested with specific operating system kernels, and specific operating system components. Oracle requires that you use the operating system kernel and components that are certified for this release.
Oracle recommends that you or your system administrator review the system requirements carefully in Oracle Grid Infrastructure Installation Guide before beginning installation, to ensure that your system meets these requirements. If your system does not meet minimum operating system kernel and component requirements, then your installation may fail to complete, or other errors may develop during Oracle Clusterware or Oracle Database runtime.
In addition to the standard system requirements configuration, deployment on specific server hardware can include additional operating system configuration steps. Review the Preinstallation chapter, and check the My Oracle Support Certify page to ensure that you are aware of any additional requirements or recommendations for your specific hardware platform configuration.
To install Oracle RAC, you need to have configured shared storage for the database files.
Refer to the "Configuring Storage for Grid Infrastructure for a Cluster and Oracle Real Application Clusters (Oracle RAC)" chapter in Oracle Grid Infrastructure Installation Guide for your operating system to review storage options for installation planning. Storage and system administrators can refer to this chapter to configure storage for Oracle Database files on Oracle RAC. Note that when Database Configuration Assistant (DBCA) configures automatic disk backup, it uses a database recovery area that must be shared.
With this release, Automatic Storage Management (Oracle ASM) has been extended to include a general purpose file system, the Oracle Automatic Storage Management Cluster File System (Oracle ACFS), which can be used for files that you cannot place directly on Oracle ASM, such as database homes or application files.
See Also:
The Certify page on My Oracle Support for the most current information about storage options: https://metalink.oracle.com
Oracle Database Storage Administrator's Guide for an overview of storage configuration administration
There are two ways of storing Oracle Database and recovery files:
Note:
Installing on raw or block devices is not supported. Install on a shared file system or on Oracle ASM. Upgrading existing raw or block devices is supported.
Oracle Automatic Storage Management: Oracle Automatic Storage Management (Oracle ASM) is an integrated, high-performance database file system and disk manager for Oracle Database files. It performs striping and mirroring of database files automatically.
A supported shared file system: Supported file systems include the following:
A supported cluster file system: Note that if you intend to use a cluster file system for your data files, then you should create partitions large enough for the database files when you create partitions for Oracle Clusterware.
See Also:
The Certify page on My Oracle Support for supported cluster file systems
NAS Network File System (NFS) listed on Oracle Certify: Note that if you intend to use NFS for your data files, then you should create partitions large enough for the database files when you create partitions for Oracle Clusterware.
See Also:
The Certify page on My Oracle Support for supported Network Attached Storage (NAS) devices, and supported cluster file systems
Note:
Oracle RAC software can be installed on OCFS2. However, it cannot be installed on Oracle Cluster File System (OCFS). Oracle RAC and grid infrastructure software can be installed on network-attached storage (NAS). For OCFS2 certification status, refer to the Certify page on My Oracle Support.
For all installations, you must choose the storage option that you want to use for Oracle Database files with Oracle Clusterware and Oracle RAC. If you want to enable automated backups during the installation, then you must also choose the storage option that you want to use for recovery files (the Fast Recovery Area). You do not have to use the same storage option for each file type.
The following table shows the storage options supported for storing Oracle Database files and Oracle Database recovery files. Oracle Database files include data files, control files, redo log files, the server parameter file, and the password file.
Note:
For the most up-to-date information about supported storage options for Oracle RAC installations, refer to the Certify pages on the My Oracle Support Web site: https://metalink.oracle.com
Table 1-2 Supported Storage Options for Oracle Database and Recovery Files
| Storage Option | Database Files | Recovery Files |
|---|---|---|
| Automatic Storage Management | Yes | Yes |
| OCFS2 | Yes | Yes |
| Red Hat Global File System (GFS); for Red Hat Enterprise Linux and Oracle Enterprise Linux | Yes | Yes |
| Local storage | No | No |
| NFS file system (Note: requires a certified NAS device) | Yes | Yes |
| Shared raw devices being used with an existing installation | Yes, for upgrades only, or as a manual configuration | No |
Use the following guidelines when choosing the storage options that you want to use for each file type:
You can choose any combination of the supported storage options for each file type provided that you satisfy all requirements listed for the chosen storage options.
Oracle recommends that you choose Automatic Storage Management (Oracle ASM) as the storage option for database and recovery files.
For Standard Edition Oracle RAC installations, Oracle ASM is the only supported storage option for database or recovery files.
If you intend to upgrade an existing Oracle RAC database, or an Oracle RAC database with Oracle ASM instances, then you must ensure that your system meets the following conditions:
Oracle Universal Installer (OUI) and Database Configuration Assistant (DBCA) are run on the node where the Oracle RAC database or Oracle RAC database with Oracle ASM instance is located.
The Oracle RAC database or Oracle RAC database with an Oracle ASM instance is running on the same nodes that you intend to make members of the new cluster installation. For example, if you have an existing Oracle RAC database running on a three-node cluster, then you must install the upgrade on all three nodes. You cannot upgrade only two nodes of the cluster and remove the third instance during the upgrade.
See Also:
Oracle Database Upgrade Guide for information about how to prepare for upgrading an existing database
If you do not have a storage option that provides external file redundancy, then you must configure at least three voting disk areas to provide voting disk redundancy.
This section contains additional information about Oracle Clusterware, Oracle Automatic Storage Management (Oracle ASM), and Oracle RAC, that may be helpful for your installation plan team to read to decide how you want to configure your installation. It contains the following sections:
In past releases, Oracle ASM was installed as part of the Oracle Database installation. With Oracle Database 11g release 2 (11.2), Oracle ASM is part of an Oracle grid infrastructure installation. If you want to upgrade an existing Oracle ASM installation, then you must upgrade Oracle ASM by running an Oracle grid infrastructure upgrade.
Oracle Clusterware provides clustering services. You do not require vendor clusterware when you use Oracle Clusterware. If you intend to install Oracle RAC, then you must install Oracle Clusterware.
For Oracle RAC, you and your system administrator should note that all instances in Oracle RAC environments share the control file, server parameter file, redo log files, and all data files. These files must be placed on shared storage, and all the cluster database instances must have access to them. Each instance also has its own set of redo log files. During failures, shared access to redo log files enables surviving instances to perform recovery.
As part of installation of Oracle Database 11g release 2 (11.2), time zone version files from 1 to 11 are installed in the path $ORACLE_HOME/oracore/zoneinfo/. You can continue to use the current time zone version or upgrade to the latest version. Oracle recommends that you upgrade the server to the latest time zone version. Upgrading to a new time zone version may cause existing TIMESTAMP WITH TIME ZONE (TSTZ) data to become stale. Using the newly provided DBMS_DST PL/SQL packages, you can update the TSTZ data transparently, with minimal manual procedures and system downtime.
Time zone version files are also installed on the clients. Starting with Oracle Database 11g release 2, you no longer need to upgrade client time zone files immediately. Upgrades can be done at a time that is most convenient to the system administrator. However, there could be a small performance penalty when the client and server use different time zone versions.
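For example, as a simple sketch, you can list the time zone files installed with the server by examining the path mentioned above, assuming ORACLE_HOME is set to the new Oracle home:

$ ls $ORACLE_HOME/oracore/zoneinfo/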
See Also:
Oracle Database Upgrade Guide for information about preparing to upgrade Timestamp with Time Zone data, Oracle Database Globalization Support Guide for information about how to upgrade the Time Zone file and Timestamp with Time Zone data, and Oracle Call Interface Programmer's Guide for information about performance effects of clients and servers operating with different versions of Time Zone files
You can install and operate different releases of Oracle Database software on the same computer:
When you have Oracle Clusterware installed with different release versions of Oracle software, the Oracle Clusterware version must be greater than or equal to the Oracle Database software version. Oracle Clusterware and Oracle Automatic Storage Management are both upgraded with an Oracle grid infrastructure 11g release 2 (11.2) installation.
If you have an existing Oracle home, then you can create a new Oracle home and install Oracle Database 11g release 2 (11.2) into the new Oracle home. You should ensure that Oracle Clusterware is installed in a separate grid infrastructure home. Oracle grid infrastructure for a cluster installations cannot be installed in the Oracle base directory for Oracle Database.
If you are running the Oracle9i release of Oracle RAC, and you want to continue to use that release, then you must run cluster software that is compatible with that release, such as Oracle Cluster Manager or third-party cluster software. Oracle Clusterware release 11g can be installed on the same system as Oracle9i database software, but Oracle9i software cannot be supported by Oracle Clusterware 11g.
Note:
If you want to remove third-party cluster software after upgrading your database to Oracle Database 10g or Oracle Database 11g, then you must first remove the third-party cluster software, and then re-install Oracle Clusterware.
If OUI detects a previous database release, then OUI asks you about your upgrade preferences. You have the option to upgrade one of the previous release databases with DBUA or to create a new database using DBCA. The information collected during this dialog is passed to DBUA or DBCA after the software is installed.
If OUI detects a previous Oracle Clusterware release, then you are asked to upgrade the existing Oracle Clusterware installation. Only one Oracle Clusterware version can exist on a server, and servers must be members of one cluster only.
You cannot install Oracle grid infrastructure for a standalone server and then install Oracle grid infrastructure for a cluster. If you have Oracle grid infrastructure for a standalone server installed, then you must remove that installation before you can install Oracle grid infrastructure for a cluster.
Note:
Do not move Oracle binaries from the Oracle home to another location. Doing so can cause dynamic link failures.
Before you start your installation, use Cluster Verification Utility (CVU) to ensure that your system is prepared for Oracle RAC installation. If any checks fail, then fix the errors reported, either manually or by using a generated fixup script, or contact your system or storage administrator to have the cause of the errors addressed.
CVU is available in the Grid home, in the bin directory. For example, if the Oracle grid infrastructure for a cluster home is /u01/crs, then the path is /u01/crs/bin. To start CVU, navigate to the Grid home bin directory, and use a command similar to the following:
cluvfy stage -pre dbinst -fixup -n nodelist -r release -osdba OSDBA -verbose
In the preceding command, -fixup and -osdba are optional flags.
For example, for a two-node cluster with nodeA and nodeB, where you are testing the cluster to prepare to install Oracle Database 11g with Oracle RAC, and your OSDBA group is dba, the following command checks for system readiness:
$ ./cluvfy stage -pre dbinst -fixup -n nodea,nodeb -osdba dba -verbose
For more information about CVU commands, enter ./cluvfy -help.
See Also:
Oracle Clusterware Administration and Deployment Guide for detailed information about CVU
If you are planning an installation on a system where you have an existing Oracle RAC or Oracle Database installation, then you must perform additional tasks to prepare your system for installation.
Table 1-3 provides an overview of what you need to do if you have an existing Oracle Database installation. Review the table, and perform tasks as required.
See Also:
Oracle Database Upgrade Guide for additional information about preparing for and performing upgrades
Table 1-3 Overview of System Preparation for Upgrades or Co-existing Databases
| Installation Scenario | What you need to do |
|---|---|
| Upgrading from Oracle Database 10g release 1 (10.1) to 11g release 2 (11.2) | No additional tasks. Refer to "Installing Oracle Database 11g on a System with Oracle Database 10g". |
| Installing Oracle Database 11g release 2 (11.2) on a system to coexist with Oracle Database 10g release 1 (10.1) | No additional tasks. Refer to "Installing Oracle Database 11g on a System with Oracle Database 10g". |
| Upgrading from Oracle9i release 9.2 to Oracle Database 11g release 2 (11.2) | Shut down the Global Services Daemon, and shut down a default listener on port 1521, if present. Refer to "Installing Oracle 11g Database on a System with Oracle9i Database Release 2". |
| Installing Oracle Database 11g release 2 (11.2) on a system to coexist with Oracle9i release 9.2 | Shut down a default listener on port 1521, if present, and shut down the Global Services Daemon. Refer to "Installing Oracle 11g Database on a System with Oracle9i Database Release 2". |
Installing Oracle Database 11g on a System with Oracle Database 10g
If your system has an Oracle Database 10g installation, and you install Oracle Database 11g release 2 (11.2) either to coexist with or to upgrade the Oracle Database 10g installation, then most installation types configure and start a default Oracle Net listener using TCP/IP port 1521 and the IPC key value EXTPROC. One of the following occurs:
During a co-existing installation, Database Configuration Assistant (DBCA) automatically migrates the listener and related files from the Oracle Database 10g Oracle home to the Oracle Database 11g Oracle home.
During an upgrade, Oracle Database Upgrade Assistant (DBUA) automatically locates the Oracle Database 10g listener, and migrates it to Oracle Database 11g.
Installing Oracle 11g Database on a System with Oracle9i Database Release 2
Explanation of Tasks
If you are installing Oracle Database 11g release 2 (11.2) on a system with an existing Oracle9i Database release 2 (9.2), and the Oracle Net listener process is using the same port or key value as the default used with the Oracle Database 11g release 2 (11.2) installation, port 1521, then OUI can only configure the new listener; it cannot start it. To ensure that the new listener process starts during the installation, you must shut down any existing listeners before starting OUI. To do this, refer to "Shutting Down the Listener".
You must shut down the Global Services Daemon (GSD), because otherwise, during Oracle Database 11g installation, the Oracle9i release 2 (9.2) SRVM shared data is upgraded into an Oracle Cluster Registry that the 9.2 GSD will not be able to use. The Oracle grid infrastructure installation starts an 11g release 2 (11.2) GSD to serve the Oracle9i 9.2 clients. To do this, refer to "Shutting Down the Global Services Daemon".
Shutting Down the Listener
To determine if an existing Oracle Database 9i listener process is running and to shut it down if necessary, follow these steps:
Switch user to the software owner user. For example:
# su - oracle
Enter the following command to determine if an Oracle Database 9i listener process is running and to identify its name and the Oracle home directory in which it is installed:
$ ps -ef | grep tnslsnr
This command displays information about the Oracle Net listeners running on the system:
... oracle_home1/bin/tnslsnr LISTENER -inherit
In this example, oracle_home1 is the Oracle home directory where the listener is installed, and LISTENER is the listener name.
Set the ORACLE_HOME environment variable to specify the appropriate Oracle home directory for the listener:
Bourne, Bash, or Korn shell:
$ ORACLE_HOME=oracle_home1
$ export ORACLE_HOME
C or tcsh shell:
% setenv ORACLE_HOME oracle_home1
Enter the following command to identify the TCP/IP port number and IPC key value that the listener is using:
$ $ORACLE_HOME/bin/lsnrctl status listenername
Note:
If the listener uses the default name LISTENER, then you do not have to specify the listener name in this command.
Enter a command similar to the following to stop the listener process:
$ $ORACLE_HOME/bin/lsnrctl stop listenername
Repeat this procedure to stop all listeners running on this system and on all other nodes in the cluster.
Shutting Down the Global Services Daemon
As the database installation software owner (for example, oracle), on each node of the cluster, use the following syntax to shut down the GSD:
$ cd 92_Oracle_home
$ bin/gsdctl stop
In the preceding syntax example, the variable 92_Oracle_home is the Oracle Database 9i release 2 (9.2) home.