September 12th, 2012 00:00

Oracle RAC Clusterware and VPLEX Witness Deployment

Often deployments of extended Oracle RAC emphasize deploying a third site for one of the Oracle Clusterware voting files (possibly based on NFS). To be clear, the use of Oracle Clusterware voting disks is still required for Oracle RAC on extended distance clusters with VPLEX Metro. However, the voting disks themselves reside on VPLEX virtual volumes. This guarantees alignment between Oracle voting disk access / Oracle RAC behavior and VPLEX Metro failover behavior. With VPLEX, it is the VPLEX Witness alone that is deployed in an independent fault domain (a third site in the case of a multi-site deployment).

  • In the case of an Oracle interconnect partition alone (not a true site failure, and no effect on the VPLEX interconnect), Oracle Clusterware reconfigures based on node majority and access to the voting disks.

  • In the case of a VPLEX interconnect partition (or a true site failure), VPLEX immediately allows I/Os to continue at one cluster based on site preference rules and Cluster Witness guidance. The Oracle cluster nodes therefore reconfigure the cluster accordingly. Although the voting disks are still required, they do not need to be deployed in an independent 3rd site, as VPLEX Witness provides split-brain protection and guaranteed behavior alignment between Metro and Oracle Clusterware. Further, since VPLEX Witness controls access to the voting files, consistent and deterministic behavior can be guaranteed across independent Oracle RAC deployments and dependent upstream user applications.
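To make the partition-handling logic above concrete, here is a small illustrative sketch in Python. The function name, site labels, and rule ordering are my own assumptions for illustration, not VPLEX internals:

```python
# Illustrative model (not EMC code): how VPLEX Metro picks a surviving
# cluster on an inter-cluster link partition, and why RAC follows it.
# Site names and the exact rule set here are assumptions for the sketch.

def surviving_site(vplex_link_up, site_a_up, site_b_up,
                   preferred="A", witness_sees_a=True, witness_sees_b=True):
    """Return which VPLEX cluster keeps serving I/O (and thus the
    voting disks), or None if I/O must be suspended everywhere."""
    if vplex_link_up:
        return "both"                      # no partition: both sites serve I/O
    # Link is down: the Witness, in its own fault domain, breaks the tie.
    if witness_sees_a and not witness_sees_b:
        return "A" if site_a_up else None  # site B is isolated or dead
    if witness_sees_b and not witness_sees_a:
        return "B" if site_b_up else None
    # Witness still sees both (pure link partition): preference rule decides.
    return preferred

# A true failure of site B: only site A keeps the voting disks online,
# so only the RAC nodes at site A can win the Clusterware reconfiguration.
print(surviving_site(vplex_link_up=False, site_a_up=True, site_b_up=False,
                     witness_sees_a=True, witness_sees_b=False))
```

Because the RAC nodes can only heartbeat to voting disks on the VPLEX volume that is still serving I/O, Clusterware's surviving sub-cluster is always the one VPLEX chose; that is the "guaranteed alignment" described above.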


September 17th, 2012 23:00

Hello!

I need some clarification of the requirement for a voting disk in a 3rd location.

I plan to set up extended RAC on VPLEX Metro for POC purposes.

However, I still doubt whether we can do without the voting disk in a 3rd location when using VPLEX Metro.

As I understand it, the VPLEX Witness is used to guarantee VPLEX survivability in case of a VPLEX interconnect failure.

How can it guarantee CRS availability?

Is there anyone who can clarify this?

If so, please explain it to me in detail.

Under extended RAC, a voting disk is placed in each of three locations: the 1st RAC node's site, the 2nd RAC node's site, and a node isolated from the CRS cluster.

In case of a storage channel failure to the remote node, the local node of the RAC cluster can still ping the 3rd voting disk and sustain its cluster membership. That's why the 3rd voting disk is required.
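As I understand it, the majority rule works roughly like this (an illustrative sketch only; the function name is my own):

```python
# Illustrative only: with 3 voting disks, a node stays in the cluster
# only while it can access a majority of them (2 of 3).

def keeps_quorum(reachable_voting_disks, total=3):
    """True if the node can still see a strict majority of voting disks."""
    return reachable_voting_disks > total // 2

# Local node loses the channel to the remote site's disk but still
# reaches its local disk and the 3rd-site disk: 2 of 3, so it survives.
print(keeps_quorum(2))   # majority held
print(keeps_quorum(1))   # majority lost
```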

Am I wrong?

Thanks.

September 18th, 2012 01:00

Hi, buddies

You are right: the VPLEX Witness is used to guarantee survivability in case of a VPLEX interconnect failure.

As you know, VPLEX is a storage virtualization solution: it makes two storage arrays appear as a SINGLE storage array to the hosts. Every block written to the VPLEX cluster goes to both storage arrays, so each array has an exact copy of the data. For the CRS and voting disk files, storage array 1 and storage array 2 therefore both maintain the same copy of data.

To clarify, for an Oracle RAC database the number of CRS and voting disk files does not change, no matter how many RAC nodes you add to the cluster. So in an extended RAC deployment, storage array 1 maintains a copy of the CRS and voting disk files, and storage array 2 maintains an exact copy of the same files. If one of the underlying storage arrays goes down, access to the CRS and voting disk files is redirected to the surviving array, access is not disrupted, and the instance is not aware of what is happening on the underlying storage.
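The mirroring behavior described above can be sketched roughly like this (an illustrative model only, not actual VPLEX code; the class and variable names are my own assumptions):

```python
# Illustrative sketch (not VPLEX internals): a distributed virtual volume
# writes every block to both back-end arrays, so either array alone holds
# a complete copy of the CRS/voting files.

class Array:
    def __init__(self, name):
        self.name, self.blocks, self.up = name, {}, True

class DistributedVolume:
    def __init__(self, leg_a, leg_b):
        self.legs = [leg_a, leg_b]

    def write(self, lba, data):
        for leg in self.legs:              # synchronous mirror: both legs
            if leg.up:
                leg.blocks[lba] = data

    def read(self, lba):
        for leg in self.legs:              # transparent redirect: the first
            if leg.up:                     # surviving leg services the read
                return leg.blocks[lba]
        raise IOError("no surviving mirror leg")

a, b = Array("array-1"), Array("array-2")
vol = DistributedVolume(a, b)
vol.write(0, b"voting-disk-block")
a.up = False                               # array-1 fails...
print(vol.read(0))                         # ...reads continue from array-2
```

The RAC instances only ever see the single virtual volume, which is why a back-end array failure is invisible to CRS.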


September 18th, 2012 19:00

First of all, thanks for your reply.

My question is whether, in reality, we really don't need to place a 3rd voting disk in a 3rd location.

Even if a VPLEX Witness server were added, I would guess a 3rd-location voting disk is still required.

Thanks.

YongDae.


September 19th, 2012 00:00

YongDae

One of the advantages of using the VPLEX Witness is that it removes the need for a voting disk at a 3rd site. It is explained in detail in the following whitepaper:

https://community.emc.com/servlet/JiveServlet/downloadBody/18796-102-1-66849/h8930-vplex-metro-oracle-rac-wp.pdf

The Cluster Synchronization Services component (CSS) of Oracle Clusterware, which synchronizes the Oracle RAC nodes, maintains two heartbeat mechanisms:

1) the disk heartbeat to the voting device, and

2) the network heartbeat across the interconnect.

Both of these heartbeat mechanisms have an associated timeout value.

The disk heartbeat has an internal I/O timeout interval (DTO, Disk TimeOut), in seconds, within which an I/O to the voting disk must complete: 200s.

The CSS misscount parameter (MC) is the maximum time, in seconds, that a network heartbeat can be missed; it defaults to 30s. On VPLEX it is recommended that CSS misscount be set to a value of 45 seconds. This is based on the worst-case scenario for a VPLEX Metro reconfiguration, and RAC will comply based on quorum accessibility only at the site VPLEX determined as surviving.
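Putting the two timeouts together, the eviction/reconfiguration decision can be sketched roughly as follows (a simplification using the numbers from this thread, not the actual CSS implementation):

```python
# Illustrative numbers from the thread (misscount = 45 s as recommended
# for VPLEX, disk timeout DTO = 200 s); the decision logic below is a
# simplification, not the real CSS code.

MISSCOUNT_S = 45    # max tolerated gap in the network heartbeat
DTO_S = 200         # a voting-disk I/O must complete within this window

def css_action(last_net_heartbeat_age_s, pending_votedisk_io_age_s):
    """Decide what CSS does on this node given the two heartbeat ages."""
    if pending_votedisk_io_age_s > DTO_S:
        return "evict: lost access to the voting disk"
    if last_net_heartbeat_age_s > MISSCOUNT_S:
        return "reconfigure: split sub-clusters race for the voting disk"
    return "healthy"

# During a VPLEX Metro reconfiguration the voting-disk I/O may stall
# briefly, but well under DTO, so no node is evicted on the disk path:
print(css_action(last_net_heartbeat_age_s=5, pending_votedisk_io_age_s=30))
```

Because the Metro reconfiguration completes well inside both windows at the surviving site, only the nodes at the losing site miss their heartbeats and leave the cluster.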

We have tested and validated this in a number of use cases, and through E-Lab we tested, validated, and certified these failures with Oracle, and more, all the way down to the component level in the attached network, arrays, and servers.

You can see this in action at the EMC Demo Center. The links are below.

The Chinese Language Version is here

The English Language Version is here
