Oracle® Database Administrator's Reference
11g Release 2 (11.2) for Linux and UNIX-Based Operating Systems

Part Number E10839-02

E Very Large Memory on Linux x86

Oracle Database for 32-bit Linux supports Very Large Memory (VLM) configurations, which allow Oracle Database to access more than the 4 GB of RAM traditionally available to Linux applications. Very Large Memory mode is used to create a large database buffer cache. Running the computer in Very Large Memory mode requires setup changes to Linux and imposes some limitations on the available features and init.ora parameters.

This appendix includes the following topics:

  • Shared Global Area Tuning

  • Overview of HugePages

  • Methods To Increase SGA Limits

  • Configuring Very Large Memory for Oracle Database

  • Restrictions Involved in Implementing Very Large Memory

E.1 Shared Global Area Tuning

The basic memory structures associated with Oracle Database include the System Global Area (SGA) and the Program Global Area (PGA). The SGA is a group of shared memory structures, known as SGA components, that contain data and control information for one Oracle Database instance. The SGA is shared by all server and background processes. The PGA is a memory region that contains data and control information for a server process.

The following topics are covered in this section:

  • Manual SGA Tuning

  • Automatic SGA Tuning

E.1.1 Manual SGA Tuning

To increase the System Global Area (SGA) limit on a 32-bit computer, you must manually set the initialization parameters DB_BLOCK_BUFFERS and SHARED_POOL_SIZE to the values you have chosen for Oracle Database. The DB_BLOCK_SIZE initialization parameter sets the block size; together with DB_BLOCK_BUFFERS, it determines the size of the buffer cache for an instance.
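
For example, a manually tuned configuration might resemble the following initialization parameter fragment. This is only a sketch; the values are illustrative assumptions and must be chosen to fit your own memory budget:

DB_BLOCK_SIZE=8192          # 8 KB block size
DB_BLOCK_BUFFERS=209715     # buffer cache of about 209715 * 8 KB = 1.6 GB
SHARED_POOL_SIZE=256M       # shared pool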

A typical 32-bit Linux kernel splits the 4 GB address space into 3 GB of user space and 1 GB of kernel space. For example, consider a configuration with 3 GB of user space and 1 GB of kernel space. On a standard kernel, the following is the default memory layout for every process:

1 GB text code

1.7 GB available process memory for address space

0.3 GB stack

1.0 GB kernel

From the preceding example, it is clear that 1.7 GB is left for the SGA. This memory can be used effectively by the shared pool.

Within the 1 GB of kernel-addressable space, the following memory zones are available:

E.1.1.1 ZONE_DMA

This zone is mapped to the first 16 MB of the physical memory. Some devices use this zone for data transfer.

E.1.1.2 ZONE_NORMAL

This zone is mapped from 16 MB to 896 MB of physical memory and is also known as the low memory zone. It is permanently mapped into the kernel address space. This is a performance-critical zone, because most kernel resources reside in this space and must operate from here. Consequently, running kernel resource-intensive applications can put pressure on low memory, and low memory starvation can result in performance degradation or even a system hang.

E.1.1.3 ZONE_HIGHMEM

Linux cannot access memory that is not mapped directly into its address space, so the kernel must map physical pages into its virtual address space before it can use memory greater than 1 GB. To access pages in the ZONE_HIGHMEM zone, the kernel temporarily maps them into the ZONE_NORMAL zone.

E.1.2 Automatic SGA Tuning

Starting with Oracle Database 10g, the SGA_TARGET initialization parameter was introduced to facilitate automatic tuning of SGA components such as the shared pool, Java pool, large pool, and database buffer cache. You must set SGA_TARGET to a nonzero value for the auto-tuned memory components to grow and shrink automatically.

With Oracle Database 11g, this feature was extended and a new parameter, MEMORY_TARGET, was introduced to facilitate automatic tuning of both the SGA and PGA components of memory.

Dynamic SGA and multiple block sizes are not supported with Very Large Memory. To enable Very Large Memory on the system, you must ensure that the value of SGA_TARGET or MEMORY_TARGET is set to zero. When Very Large Memory is enabled on the system, the database buffer cache can use the extra memory.

Note:

Oracle recommends that you unset the SGA_TARGET and MEMORY_TARGET parameters before implementing Very Large Memory on the system.

E.2 Overview of HugePages

HugePages is a feature integrated into the Linux kernel starting with release 2.6. It provides a method of using larger memory pages, which is useful when working with very large amounts of memory, and it can be used in both 32-bit and 64-bit configurations. HugePage sizes vary from 2 MB to 256 MB, depending on the kernel version and the hardware architecture. For Oracle Database, using HugePages reduces the operating system maintenance of page states and increases the Translation Lookaside Buffer (TLB) hit ratio.

Without HugePages, the operating system keeps each 4 KB of memory as a page, and when memory is allocated to the SGA, the lifecycle of each of those pages (dirty, free, mapped to a process, and so on) must be tracked by the operating system kernel.

With HugePages, the operating system page table (the virtual-to-physical memory mapping) is smaller, because each page table entry points to a page of between 2 MB and 256 MB. In addition, the kernel has fewer pages whose lifecycle must be monitored.

For example, if you use HugePages with 64-bit hardware and want to map 256 MB of memory, you may need only one page table entry (PTE). If you do not use HugePages, then mapping 256 MB of memory requires (256 * 1024 KB) / 4 KB = 65,536 PTEs.
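
The same arithmetic can be reproduced from the shell. The following is only an illustrative sketch, assuming a 256 MB HugePage size for the comparison:

$ echo "(256 * 1024) / 4" | bc     # 256 MB mapped with 4 KB pages
65536
$ echo "256 / 256" | bc            # 256 MB mapped with a single 256 MB HugePage
1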

E.3 Methods To Increase SGA Limits

The following are the methods to increase SGA limits on a 32-bit computer:

E.3.1 Hugemem Kernel

To use a large memory system, you can employ the Linux hugemem kernel, which enables the creation of an SGA of up to 3.6 GB. The hugemem kernel allows a 4 GB/4 GB split of the kernel and user address spaces, whereas a standard 32-bit kernel splits the available 4 GB address space into 3 GB of virtual memory for user processes and 1 GB for the kernel. For a larger SGA, therefore, you must use the hugemem kernel.

The hugemem kernel provides support for accessing physical memory beyond the 4 GB virtual addressing limit of the 32-bit kernel, on systems with more than 16 GB and up to 64 GB of physical memory. On large computers, the hugemem kernel ensures better stability, at the cost of the performance overhead of address space switching. This kernel supports a 4 GB per-process user space (as opposed to 3 GB for the other kernels) and a 4 GB direct kernel space. Use the hugemem kernel when you install the software on a multiprocessor computer with more than 4 GB of main memory; it can also be used for configurations with less memory.

Run the following command to determine if you are using the hugemem kernel:

$ uname -r
2.6.9-5.0.3.ELhugemem

For example, consider a configuration with 4 GB of user space and 4 GB of kernel space. If you reduce the SGA attach address, the result is 3.42 GB of available process memory, as follows:

3.42 GB available process memory for address space

0.25 GB kernel "trampoline"

0.33 GB SGA base

4.0 GB kernel

E.3.2 Hugemem Kernel with Very Large Memory

On a 32-bit Physical Address Extension (PAE) system running the hugemem kernel, Oracle Database can use a shared memory file system through a feature called Very Large Memory (VLM). Very Large Memory can increase the SGA from 1.7 GB to 62 GB. However, the user space address is still limited to 4 GB.

You must enable the Very Large Memory feature for Oracle Database to use a shared memory file system. This feature moves the buffer cache portion of the SGA from System V shared memory to the shared memory file system. On Red Hat Enterprise Linux 4.0 or Oracle Enterprise Linux 4.0, you have two options for using Very Large Memory to create a very large buffer cache:

  • Use shmfs: mount a shmfs of a suitable size on /dev/shm and set the correct permissions (see the example mount commands after this list). In Red Hat Enterprise Linux 4.0 / Oracle Enterprise Linux 4.0, shmfs-allocated memory is pageable.

  • Use ramfs: ramfs is similar to shmfs, except that its pages are not pageable or swappable. This approach provides the commonly desired effect. Create the ramfs with mount -t ramfs ramfs /dev/shm (unmount /dev/shm first). The only difference is that ramfs pages are not backed by big pages (HugePages).
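
    For example, a shmfs mount for the first option might look like the following sketch. On 2.6 kernels, shmfs is provided by tmpfs; the 8g size and the oracle:oinstall ownership shown here are illustrative assumptions, and you should size the file system to hold your buffer cache:

    # umount /dev/shm
    # mount -t tmpfs shmfs -o size=8g /dev/shm
    # chown oracle:oinstall /dev/shm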

Note:

The USE_INDIRECT_DATA_BUFFERS=TRUE parameter must be present in the initialization parameter file for any database instance that uses Very Large Memory support. If this parameter is not set, then Oracle Database 11g Release 2 (11.2) or later behaves in exactly the same way as previous releases.

You can increase the SGA to about 62 GB on a 32-bit computer with 64 GB of RAM. Very Large Memory configurations improve database performance by caching more database buffers in memory, which reduces disk I/O compared to configurations without Very Large Memory. The Physical Address Extension (PAE) feature of the processor enables physical addressing of up to 64 GB of RAM. However, PAE does not enable a program to directly address more than 4 GB, so a program cannot attach directly to a shared memory segment that is 4 GB or larger. To resolve this issue, you can create a shared memory file system, which can be as large as the virtual memory supported by the kernel, and have the program dynamically attach to regions of this file system. A shared memory file system enables large applications, such as Oracle Database, to access a large amount of shared memory on a 32-bit computer.

Very Large Memory uses 512 MB of the non-buffer cache SGA to manage itself. This memory area is used to map the indirect data buffers (the shared memory file system buffers) into the process address space.

For example, if you have 2.5 GB of non-buffer cache SGA, then only 2 GB is available for the shared pool, large pool, and redo log buffer; the remaining 512 MB of the non-buffer cache SGA is used to manage Very Large Memory.

E.4 Configuring Very Large Memory for Oracle Database

Complete the following procedure to configure Very Large Memory on the computer:

  1. Log in as the root user:

    sudo -s
    Password:
    
  2. Edit the /etc/rc.local file and add the following entries to it to configure the computer to mount ramfs over the /dev/shm directory, whenever you start the computer:

    umount /dev/shm
    mount -t ramfs ramfs /dev/shm
    chown oracle:oinstall /dev/shm
    

    In the preceding commands, oracle is the owner of the Oracle software files and oinstall is the group for the Oracle owner account.

  3. Restart the server.

  4. Log in as the root user.

  5. Run the following command to check if the /dev/shm directory is mounted with the ramfs type:

    # mount | grep shm
    ramfs on /dev/shm type ramfs (rw)
    
  6. Run the following command to check the permissions on the /dev/shm directory:

    # ls -ld /dev/shm
    drwxr-xr-x  3 oracle oinstall 0 Jan 13 12:12 /dev/shm
    
  7. Edit the /etc/security/limits.conf file and add the following entries to it to increase the maximum locked memory limit for the oracle user (the value is expressed in KB; 3145728 KB is 3 GB):

    oracle    soft    memlock        3145728
    oracle    hard    memlock        3145728
    
  8. Switch to the oracle user:

    # su - oracle
    Password:
    
  9. Run the following command to check the max locked memory limit:

    $ ulimit -l
    3145728
    
  10. Complete the following procedure to configure instance parameters for Very Large Memory:

    1. Replace the DB_CACHE_SIZE and DB_xK_CACHE_SIZE parameters with the DB_BLOCK_BUFFERS parameter.

    2. Add the USE_INDIRECT_DATA_BUFFERS=TRUE parameter.

    3. Configure SGA size according to the SGA requirements.

    4. Remove SGA_TARGET, if set.
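
    After these changes, the relevant portion of the initialization parameter file might look like the following sketch. The values shown are illustrative assumptions only; size DB_BLOCK_BUFFERS and SHARED_POOL_SIZE to match your system:

      USE_INDIRECT_DATA_BUFFERS=TRUE
      DB_BLOCK_SIZE=8192             # example block size
      DB_BLOCK_BUFFERS=4000000       # example: about 30 GB of buffer cache in /dev/shm
      SHARED_POOL_SIZE=1G            # example shared pool
      # SGA_TARGET and MEMORY_TARGET are not set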

  11. Start the database instance.

  12. Run the following commands to check the memory allocation:

    $ ls -l /dev/shm
    $ ipcs -m
    
  13. Run the following command to display the value of Hugepagesize variable:

    $ grep Hugepagesize /proc/meminfo
    
  14. Complete the following procedure to create a script that computes recommended values for hugepages configuration for the current shared memory segments:

    1. Create a text file named hugepages_settings.sh.

    2. Add the following content in the file:

      #!/bin/bash
      #
      # hugepages_settings.sh
      #
      # Linux bash script to compute values for the
      # recommended HugePages/HugeTLB configuration
      #
      # Note: This script does calculations for all shared memory
      # segments available when the script is run, whether or not
      # they are Oracle RDBMS shared memory segments.
      # Check for the kernel version
      KERN=`uname -r | awk -F. '{ printf("%d.%d\n",$1,$2); }'`
      # Find out the HugePage size
      HPG_SZ=`grep Hugepagesize /proc/meminfo | awk {'print $2'}`
      # Start from 1 page to be on the safe side, and guarantee 1 free HugePage
      NUM_PG=1
      # Cumulative number of pages required to handle the running shared memory segments
      for SEG_BYTES in `ipcs -m | awk {'print $5'} | grep "[0-9][0-9]*"`
      do
         MIN_PG=`echo "$SEG_BYTES/($HPG_SZ*1024)" | bc -q`
         if [ $MIN_PG -gt 0 ]; then
            NUM_PG=`echo "$NUM_PG+$MIN_PG+1" | bc -q`
         fi
      done
      # Finish with results
      case $KERN in
         '2.4') HUGETLB_POOL=`echo "$NUM_PG*$HPG_SZ/1024" | bc -q`;
                echo "Recommended setting: vm.hugetlb_pool = $HUGETLB_POOL" ;;
         '2.6') echo "Recommended setting: vm.nr_hugepages = $NUM_PG" ;;
          *) echo "Unrecognized kernel version $KERN. Exiting." ;;
      esac
      # End
      
    3. Run the following command to change the permission of the file:

      $ chmod +x hugepages_settings.sh
      
  15. Run the hugepages_settings.sh script to compute the values for hugepages configuration:

    $ ./hugepages_settings.sh
    

    Note:

    Before running this script, ensure that all applications that will use HugePages are running.
  16. Set the following kernel parameter:

    # sysctl -w vm.nr_hugepages=value_displayed_in_step_15
    
  17. To make the value of this parameter persist across restarts of the computer, edit the /etc/sysctl.conf file and add the following entry:

    vm.nr_hugepages=value_displayed_in_step_15
    
  18. Run the following command to check the available hugepages:

    $ grep Huge /proc/meminfo
    
  19. Restart the instance.

  20. Run the following command to check the available hugepages (one or two pages should remain free):

    $ grep Huge /proc/meminfo
    

    Note:

    If the setting of the nr_hugepages parameter is not effective, you might need to restart the server.

E.5 Restrictions Involved in Implementing Very Large Memory

The following are the limitations of running a computer in Very Large Memory mode: