RAC Assessment from Oracle

In our first large-scale Oracle 11g RAC with ASM deployment, we began by engaging Oracle directly. The primary goal of the engagement was to gather as much technical input as possible, especially in the area of best practices. Our Oracle account manager advised us to use a free RAC assessment service provided by a special unit called the Oracle RAC Assurance Team. That was the first time I had heard of this kind of service, even though we had worked with Oracle since our first Oracle 9i RAC deployment many years back. Please note that this assessment service is not available to all customers as a general offering – business justification is required.

The focus of the review is to validate the system components pertinent to the designed functionality and implementation, in order to assess their capability to sustain and support an Oracle RAC environment. The goal is to determine whether the configuration has the potential to cause availability or integrity problems.

The assessment does not imply certification by Oracle of any hardware or software related to the infrastructure.

We’ve been asked to provide the following:

  • Architecture diagrams depicting RAC and application components
  • RDA from each node (Metalink Note 314422.1)
  • Oracle Clusterware information
       $ env
       $ id
       $ cluvfy stage -post crsinst -n all -verbose
       $ crs_stat -t
       $ crsctl query css votedisk
       $ ocrcheck
  • Cluster diagnostic information (using diagcollection.pl from Metalink Note 330358.1)
  • System logs
  • Public and private interconnect definitions
       $ oifcfg getif
       $ oifcfg iflist
  • Interconnect traffic
       select * from gv$configured_interconnects;
  • Service definition
       set pages 60 space 2 num 8 lines 132 verify off feedback off
       column name format a15
       column network_name format a12
       column failover_method format a10
       column failover_type format a10

       select service_id, name, network_name, failover_method,
        failover_type, goal, dtp, enabled, AQ_HA_NOTIFICATIONS, CLB_GOAL
       from dba_services;
  • Opatch information
       opatch lsinventory -oh $RDBMS_HOME
       opatch lsinventory -oh $ASM_HOME
       opatch lsinventory -oh $CRS_HOME
  • System definitions (/etc/system)
  • Layer 3/Network Configuration
       # dladm show-link
  • Adaptor summary statistics for public, VIP and interconnect
       kstat -n <NIC name>
       netstat -in
       ifconfig -a
  • UDP parameter settings
       # ndd /dev/udp udp_xmit_hiwat
       # ndd /dev/udp udp_recv_hiwat
  • ASM instance information
       set linesize 1500
       set pagesize 1000
       column name format a20
       column path format a40
       column failgroup format a20

       select GROUP_NUMBER, NAME, TOTAL_MB, FREE_MB
       from v$asm_diskgroup
       order by GROUP_NUMBER;

       select GROUP_NUMBER, DISK_NUMBER, NAME, PATH, FAILGROUP,
         TOTAL_MB, FREE_MB
       from v$asm_disk
       order by GROUP_NUMBER, DISK_NUMBER;
  • AWR/Statspack report for the peak load period
  • OSWatcher or OS statistics (top, mpstat, iostat, vmstat, SAR for same corresponding period as AWR/Statspack data) – See Metalink note 301137.1
  • Alert and others logs

After a few weeks, Oracle came back with recommendations that had been evaluated against best practices compiled from its experience with its global RAC customer base. I found these recommendations very helpful; in fact, in some cases I would not have known about them without this analysis. I’d like to share some of the recommendations given for our specific environment (Oracle with ASM on Solaris 10). Please keep in mind that they may not apply to your environment, so it is advisable to consult Oracle Support and/or the referenced Metalink Notes if you have any questions.

These are some best practices and recommendations:

  • Whenever possible use a non-shared Oracle Home for ASM and the RDBMS. In most cases the non-shared Oracle Home is the preferred solution due to the ability to patch in a rolling upgrade fashion with zero downtime and the elimination of the single point of failure and binary dependency issues that the shared Oracle Home approach introduces.
  • Do not install the CRS Oracle Home using a shared file system location.
  • Ensure that run-time files in the directories /var/tmp/.oracle or /tmp/.oracle are not removed while Oracle background or user processes are active. These directories contain a number of “special” socket files that are used by local clients to connect via the IPC protocol (sqlnet) to various Oracle processes, including the TNS listener, the CSS, CRS & EVM daemons, and even the database instance.
  • Add slewalways yes and disable pll in /etc/inet/ntp.conf to avoid a node reboot due to a leap second event. Refer to Metalink Note: 759143.1.
  • On Solaris, set the UDP buffer settings (udp_xmit_hiwat and udp_recv_hiwat) to 65536. These settings should improve cluster interconnect throughput. Refer to Metalink Note: 181489.1. Because ndd changes do not persist across a reboot, create an RC script that sets udp_xmit_hiwat and udp_recv_hiwat back to 65536 every time the server reboots.
        ndd -set /dev/udp udp_xmit_hiwat 65536
        ndd -set /dev/udp udp_recv_hiwat 65536
  • Ensure that both OCR and Voting disks are backed up periodically. The OCR backups are taken automatically, and stored on the Master RAC node. Voting disks can be backed up manually using dd with a 4k block size (hot backup).
  • Do not use crs_unregister to remove default CRS resources (nodeapps) unless instructed by Oracle Support.
  • Ensure that the CSS DIAGWAIT parameter is set to 13 seconds on all platforms. DIAGWAIT is a CSS parameter built into 10gR2 and 11g that allows enough time for a failed node to flush its final trace files, which helps in debugging a node failure. Always keep CSS misscount greater than the DIAGWAIT setting. Refer to Metalink Note: 559365.1.
  • Increase PARALLEL_EXECUTION_MESSAGE_SIZE from the default (2k) to 4k or 8k in order to provide better recovery slave performance.
  • Set PARALLEL_MIN_SERVERS to CPU_COUNT-1, in order to pre-spawn recovery slaves at startup, instead of when recovery is required.
  • On each RDBMS instance supported by ASM, add the following SHARED_POOL allocations, based on each disk group type in use:
     – For disk groups using external redundancy, every 100GB of space needs 1MB of extra shared pool, plus 2MB
     – For disk groups using normal redundancy, every 50GB of space needs 1MB of extra shared pool, plus 4MB
     – For disk groups using high redundancy, every 33GB of space needs 1MB of extra shared pool, plus 6MB

  • Avoid bad writes to ASM disks by avoiding RAID 5. See Note: 30286.1 – I/O Tuning with Different RAID Configurations.
  • Ensure the Network Listener is configured to use an address-list that specifies an IPC connection before the TCP connections. This will avoid fail-over delays associated with TCP timeouts. Refer to Metalink Note: 403743.1.
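The NTP recommendation above is a two-line change; a minimal sketch of the relevant lines in /etc/inet/ntp.conf (per Metalink Note 759143.1) would be:

```
# Slew the clock instead of stepping it, so a leap second
# does not trigger a CSS-induced node reboot
slewalways yes
disable pll
```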
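To make the UDP buffer settings survive a reboot, an RC script along these lines can be dropped into the rc2.d directory (the S99nddudp name is my assumption, not Oracle's):

```
#!/sbin/sh
# /etc/rc2.d/S99nddudp -- hypothetical script name
# Re-apply the UDP high-water marks at boot; ndd changes are
# not persistent across reboots on Solaris 10.
ndd -set /dev/udp udp_xmit_hiwat 65536
ndd -set /dev/udp udp_recv_hiwat 65536
```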
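For the OCR and voting disk backup recommendation above, the automatic OCR backups can be located with ocrconfig, and a voting disk can be hot-backed-up with dd. The device and destination paths below are placeholders, not values from our environment:

```
# List the automatic OCR backups kept on the master node
ocrconfig -showbackup

# Hot backup of a voting disk using a 4k block size
# (replace the paths with your actual voting disk device and backup target)
dd if=/dev/rdsk/c1t2d0s6 of=/backup/votedisk.bak bs=4k
```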
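As a sketch of the DIAGWAIT change, the setting is made as root with crsctl; Metalink Note 559365.1 has the full procedure, including stopping CRS on all nodes first:

```
# With CRS stopped on all nodes (see Note 559365.1 for the full steps)
crsctl set css diagwait 13 -force

# Verify the new value
crsctl get css diagwait
```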
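The two parallel-recovery settings above can be applied with ALTER SYSTEM. The values below are illustrative (an 8k message size, and a 16-CPU node where CPU_COUNT-1 = 15); note that PARALLEL_EXECUTION_MESSAGE_SIZE is a static parameter and only takes effect after an instance restart:

```sql
-- Static parameter: takes effect at the next restart
alter system set parallel_execution_message_size=8192 scope=spfile sid='*';

-- Example for a 16-CPU node (CPU_COUNT - 1 = 15)
alter system set parallel_min_servers=15 scope=both sid='*';
```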
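The IPC-before-TCP ordering above is an address-list detail; a hedged sketch of a listener alias with IPC listed first (the host name and IPC key are placeholders – see Metalink Note 403743.1 for the exact configuration) could look like:

```
LISTENER_NODE1 =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_NODE1))   # IPC first
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
  )
```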
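The shared-pool rule above reduces to a simple calculation. As a worked example (the 1 TB disk group size is purely illustrative), a 1024GB external-redundancy disk group needs 1MB per 100GB plus the fixed 2MB:

```shell
# Extra shared pool for a 1024 GB external-redundancy disk group:
# 1 MB per 100 GB (rounded up to be safe), plus a fixed 2 MB.
size_gb=1024
extra_mb=$(( (size_gb + 99) / 100 + 2 ))
echo "extra shared pool: ${extra_mb} MB"   # prints "extra shared pool: 13 MB"
```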

In conclusion, I’m very impressed with the information and the professionalism the Oracle RAC Assurance Team has provided. This has definitely given us (as well as management) more confidence in our new setup.
