
Dell EMC Metro node 7.0.1 Administrator Guide

Displaying consistency group properties

You can display the properties of a consistency group.

Use the ls command in the /clusters/*/consistency-groups context to display only the names of the consistency groups on all clusters:

VPlexcli:/> ls /clusters/*/consistency-groups/
/clusters/cluster-1/consistency-groups:
TestCG         local_test     test10      test11     test12  test13  test14
test15         test16         test5       test6      test7   test8   test9
vs_RAM_c1wins  vs_RAM_c2wins  vs_oban005  vs_sun190
/clusters/cluster-2/consistency-groups:
TestCG         local_test     test10      test11     test12  test13  test14
test15         test16         test5       test6      test7   test8   test9
vs_RAM_c1wins  vs_RAM_c2wins  vs_oban005  vs_sun190
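If you capture this listing for scripting, the per-cluster blocks can be turned into a mapping of cluster name to group names. The following is a minimal sketch, assuming the exact layout shown above (header lines ending in a colon, names separated by whitespace); the parse_listing helper is hypothetical:

```python
# Sketch: turn a captured 'ls /clusters/*/consistency-groups/' listing into
# a {cluster: [group names]} dict. Assumes header lines end with ':' and
# group names are whitespace-separated, as in the transcript above.
def parse_listing(text):
    groups = {}
    cluster = None
    for line in text.splitlines():
        line = line.strip()
        if line.endswith(":"):
            # e.g. '/clusters/cluster-1/consistency-groups:'
            cluster = line.split("/")[2]
            groups[cluster] = []
        elif line and cluster:
            groups[cluster].extend(line.split())
    return groups

sample = """/clusters/cluster-1/consistency-groups:
TestCG  local_test  test10
/clusters/cluster-2/consistency-groups:
TestCG  local_test"""
print(parse_listing(sample)["cluster-1"])   # ['TestCG', 'local_test', 'test10']
```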

Use the ls command in the /clusters/cluster-name/consistency-groups context to display the names of consistency groups only on the specified cluster:

VPlexcli:/> ls /clusters/cluster-1/consistency-groups/
/clusters/cluster-1/consistency-groups:
TestCG      test10     test11  test12  test13  test14  test15  test16  test5  test6  test7  test8  test9  vs_RAM_c1wins  vs_RAM_c2wins
vs_oban005  vs_sun190

Use the ll command in the /clusters/cluster-name/consistency-groups context to display an overview of the consistency groups.

Use this command to monitor the overall health of consistency groups and to identify poorly configured rules:

VPlexcli:/clusters/cluster-1/consistency-groups> ll
Name                 Operational Status            Active      Passive     Detach Rule          Cache Mode
-------------------  ----------------------------  Clusters    Clusters    -------------------  ------------
-------------------  ---------------------------  ----------  ----------  -------------------  ------------
D850-008_view1       (cluster-1,{ summary:: ok,    cluster-1   cluster-2   active-cluster-wins  synchronous
                     details:: [] }),
                     (cluster-2,{ summary:: ok,
                     details:: [] })
D850-008_view2       (cluster-1,{ summary:: ok,                cluster-1,  active-cluster-wins  synchronous
                     details:: [] }),                          cluster-2
                     (cluster-2,{ summary:: ok,
                     details:: [] })
RAM_LR_cluster-1     (cluster-1,{ summary:: ok,                            -                    synchronous
                     details:: [] }),
                     (cluster-2,{ summary::
                     unknown, details:: [] })
RAM_RR_cluster-2     (cluster-1,{ summary:: ok,                            no-automatic-winner  synchronous
                     details:: [] }),
                     (cluster-2,{ summary:: ok,
                     details:: [] })
.
.
.
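The per-cluster status tuples in this overview lend themselves to scripted health checks. The following is a minimal sketch, assuming the summary values have already been extracted from the ll output (parsing is omitted); the group_health helper is hypothetical:

```python
# Hypothetical helper: classify a consistency group's overall health from
# its per-cluster 'summary' values, mirroring the ll overview above.
def group_health(summaries):
    """summaries: per-cluster summary values, e.g. ['ok', 'unknown']."""
    if all(s == "ok" for s in summaries):
        return "healthy"
    if "suspended" in summaries:
        return "suspended"  # I/O has stopped on at least one cluster
    return "degraded-or-unknown"

print(group_health(["ok", "ok"]))       # healthy
print(group_health(["ok", "unknown"]))  # degraded-or-unknown
```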

Use the ls command in the /clusters/cluster-name/consistency-groups/consistency-group context to display the operational status of the groups.

In the following example, the command displays the operational status of a consistency group on a healthy metro node:

VPlexcli:/> ls /clusters/cluster-1/consistency-groups/cg1
/clusters/cluster-1/consistency-groups/cg1:
VPlexcli:/clusters/cluster-1/consistency-groups/cg1> ls
Attributes:
Name                 Value
-------------------  ----------------------------------------------------------
active-clusters      [cluster-1, cluster-2]
cache-mode           synchronous
detach-rule          no-automatic-winner
operational-status   [(cluster-1,{ summary:: ok, details:: [] }),
                      (cluster-2,{ summary:: ok, details:: [] })]
passive-clusters     []
read-only            false
storage-at-clusters  [cluster-1, cluster-2]
virtual-volumes      [dd1_vol, dd2_vol]
visibility           [cluster-1, cluster-2]
Contexts:
Name          Description
------------  -----------
advanced      -
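The two-column Name/Value listing above can also be captured and parsed for automation. The following is a minimal sketch, assuming the "Attributes:" banner and header/divider rows have been stripped first and that continuation lines (such as the second half of operational-status) are indented; the parse_attributes helper is hypothetical:

```python
# Sketch: parse the two-column 'Name  Value' attribute listing from a
# captured 'ls' transcript into a dict. Assumes header and divider lines
# have been removed, and that continuation lines are indented.
def parse_attributes(text):
    attrs = {}
    current = None
    for line in text.splitlines():
        if not line.strip():
            continue
        if not line.startswith(" "):
            # A new attribute starts in column 0; split at the first run
            # of two spaces separating name from value.
            name, _, value = line.partition("  ")
            current = name.strip()
            attrs[current] = value.strip()
        elif current:
            # Indented continuation of the previous attribute's value.
            attrs[current] += " " + line.strip()
    return attrs

sample = """active-clusters      [cluster-1, cluster-2]
cache-mode           synchronous
operational-status   [(cluster-1,{ summary:: ok, details:: [] }),
                      (cluster-2,{ summary:: ok, details:: [] })]"""
print(parse_attributes(sample)["cache-mode"])   # synchronous
```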
Use the ls command in the /advanced context of a consistency group to display the advanced properties of the specified consistency group:

VPlexcli:/clusters/cluster-1/consistency-groups/TestCG/advanced> ls
Name                       Value
-------------------------- --------
auto-resume-at-loser       true
current-queue-depth        -
current-rollback-data      -
default-closeout-time      -
delta-size                 -
local-read-override        true
max-possible-rollback-data -
maximum-queue-depth        -
potential-winner           -
write-pacing               disabled

The following example displays the output of the ls command in the /clusters/cluster-name/consistency-groups/consistency-group context during an inter-cluster link outage.

  • The detach-rule is no-automatic-winner, so I/O stops at both clusters. Metro node remains in this state until either the inter-cluster link is restored or you intervene using the consistency-group choose-winner command.
  • The status summary is suspended, showing that I/O has stopped.
  • The status details contain cluster-departure, indicating that the clusters can no longer communicate with one another.
    VPlexcli:/clusters/cluster-1/consistency-groups/cg1> ls
    Attributes:
    Name                 Value
    -------------------  ----------------------------------------------------------
    active-clusters      [cluster-1, cluster-2]
    cache-mode           synchronous
    detach-rule          no-automatic-winner
    operational-status   [(cluster-1,{ summary:: suspended, details:: [cluster-departure] }),
                          (cluster-2,{ summary:: suspended, details:: [cluster-departure] })]
    passive-clusters     []
    recoverpoint-enabled false
    storage-at-clusters  [cluster-1, cluster-2]
    virtual-volumes      [dd1_vol, dd2_vol]
    visibility           [cluster-1, cluster-2]
    Contexts:
    advanced  recoverpoint
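For monitoring scripts, the operational-status string shown above can be broken into per-cluster entries. The following is a minimal sketch, assuming the exact summary::/details:: layout printed by this release; the parse_status helper is hypothetical:

```python
import re

# Sketch: pull (summary, details) out of an operational-status string,
# keyed by cluster. Assumes the 'summary:: X, details:: [Y]' layout
# shown in the transcripts above.
def parse_status(value):
    pattern = r"\((cluster-\d+),\{ summary:: (\w[\w-]*), details:: \[([^\]]*)\] \}\)"
    out = {}
    for cluster, summary, details in re.findall(pattern, value):
        out[cluster] = (summary, [d.strip() for d in details.split(",") if d.strip()])
    return out

status = ("[(cluster-1,{ summary:: suspended, details:: [cluster-departure] }), "
          "(cluster-2,{ summary:: suspended, details:: [cluster-departure] })]")
print(parse_status(status)["cluster-1"])   # ('suspended', ['cluster-departure'])
```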
  • The ls command shows consistency group cg1 as suspended with requires-resume-at-loser on cluster-2, after cluster-2 is declared the losing cluster during an inter-cluster link outage.
  • The resume-at-loser command restarts I/O on cluster-2.
  • The ls command then displays the change in operational status:
    VPlexcli:/clusters/cluster-1/consistency-groups/cg1> ls
    Attributes:
    Name                 Value
    -------------------  ----------------------------------------------------------
    active-clusters      [cluster-1, cluster-2]
    cache-mode           synchronous
    detach-rule          no-automatic-winner
    operational-status   [(cluster-1,{ summary:: ok, details:: [] }),
                          (cluster-2,{ summary:: suspended, details:: [requires-resume-at-loser] })]
    passive-clusters     []
    recoverpoint-enabled false
    storage-at-clusters  [cluster-1, cluster-2]
    virtual-volumes      [dd1_vol, dd2_vol]
    visibility           [cluster-1, cluster-2]
    Contexts:
    advanced  recoverpoint
    VPlexcli:/clusters/cluster-1/consistency-groups/cg1> resume-at-loser -c cluster-2
    This may change the view of data presented to applications at cluster cluster-2. You should first stop applications at that cluster. Continue? (Yes/No) Yes
    VPlexcli:/clusters/cluster-1/consistency-groups/cg1> ls
    Attributes:
    Name                 Value
    -------------------  ----------------------------------------------------------
    active-clusters      [cluster-1, cluster-2]
    cache-mode           synchronous
    detach-rule          no-automatic-winner
    operational-status   [(cluster-1,{ summary:: ok, details:: [] }),
                          (cluster-2,{ summary:: ok, details:: [] })]
    passive-clusters     []
    recoverpoint-enabled false
    storage-at-clusters  [cluster-1, cluster-2]
    virtual-volumes      [dd1_vol, dd2_vol]
    visibility           [cluster-1, cluster-2]
    Contexts:
    advanced  recoverpoint
Table 1. Consistency group field descriptions

Standard properties

cache-mode
  synchronous (default) - Writes are done synchronously. Writes are not acknowledged to a host unless they have been sent to back-end storage at all clusters.

detach-rule
  Policy for automatically picking a winning cluster when there is an inter-cluster link outage. The winning cluster is intended to resume I/O operations when the link fails.
  • no-automatic-winner - The consistency group does not select a winning cluster.
  • winner - The cluster specified by cluster-name is declared the winner if an inter-cluster link outage lasts more than the number of seconds specified by delay.

storage-at-clusters
  The clusters where the physical storage associated with a consistency group is located.
  • Modifiable using the set command. If the cluster names are cluster-1 and cluster-2, valid values are:
    • cluster-1 - Storage associated with this consistency group is located only at cluster-1.
    • cluster-2 - Storage associated with this consistency group is located only at cluster-2.
    • cluster-1,cluster-2 - Storage associated with this consistency group is located at both cluster-1 and cluster-2.
  • When modified, the new value must be compatible with the volumes that are already in the consistency group. Change storage-at-clusters only when the consistency group has no member volumes.

visibility
  Lists the clusters at which this consistency group is visible.
  • Modifiable using the set command. If the cluster names are cluster-1 and cluster-2, valid values are:
    • cluster-1 - This consistency group is visible only at cluster-1.
    • cluster-2 - This consistency group is visible only at cluster-2.
    • cluster-1,cluster-2 - This consistency group is visible at both cluster-1 and cluster-2.
  • Changing this property changes where the consistency group is visible, and may cause contexts to appear or disappear in the context tree.

virtual-volumes
  Lists the virtual volumes that are members of the consistency group. Modifiable using the following commands:
  • consistency-group add-virtual-volumes - Add one or more virtual volumes to a consistency group.
  • consistency-group remove-virtual-volumes - Remove one or more virtual volumes from a consistency group.

Advanced properties

auto-resume-at-loser
  Determines whether I/O automatically resumes at the detached cluster for the volumes in a consistency group when the cluster regains connectivity with its peer cluster.
  • Relevant only for multi-cluster consistency groups that contain distributed volumes.
  • Modifiable using the set command. Set this property to true to allow the volumes to resume I/O without user intervention (that is, without the resume-at-loser command).
  • true - I/O automatically resumes on the losing cluster after the inter-cluster link has been restored.
  • false (default) - I/O must be resumed manually after the inter-cluster link has been restored.
  • Leave this property set to false to give administrators time to restart the application. Otherwise, dirty data in the host’s cache is not consistent with the image on disk to which the winning cluster has been actively writing. Setting this property to true can cause a spontaneous change of the view of data presented to applications at the losing cluster. Most applications cannot tolerate this data change, and if the host flushes those dirty pages out of sequence, the data image may be corrupted.

Display-only properties

active-clusters
  For synchronous consistency groups, this property is always empty ([ ]).

operational-status
  Current status of this consistency group with respect to each cluster on which it is visible.
  • ok - I/O can be serviced on the volumes in the consistency group.
  • suspended - I/O is suspended for the volumes in the consistency group. The reasons are described in operational-status: details.
  • degraded - I/O is continuing, but there are other problems, described in operational-status: details.
  • unknown - The status is unknown, usually because of lost management connectivity.

operational-status: details
  If operational-status is ok, this field is empty: [ ]. Otherwise, it displays additional information, which may be any of the following:
  • cluster-departure - Not all the visible clusters are in communication.
  • data-safe-failure - A single director has failed. The volumes are still crash-consistent, and remain so unless a second failure occurs before the first is recovered.
  • rebuilding-across-clusters - One or more distributed member volumes is being rebuilt. At least one volume in the group is out of date at that cluster and is resyncing. If the inter-cluster link fails at this time, the entire consistency group is suspended. Use the rebuild status command to display which volume is out of date at which cluster.
  • rebuilding-within-cluster - One or more local rebuilds is in progress at this cluster.
  • requires-resolve-conflicting-detach - After the inter-cluster link is restored, two clusters have discovered that they detached from one another and resumed I/O independently. The clusters continue to service I/O on their independent versions of the data. Use the consistency-group resolve-conflicting-detach command to make the view of data consistent again at both clusters.
  • requires-resume-after-rollback - A cluster has detached its peer cluster and rolled back the view of data, but is awaiting the consistency-group resume-after-rollback command before resuming I/O. Displayed when:
    • There is no detach-rule,
    • The detach-rule is no-automatic-winner, or
    • The detach-rule cannot fire because its conditions are not met.
  • unhealthy-devices - I/O has stopped in this consistency group because one or more volumes are unhealthy and cannot perform I/O.
  • will-rollback-on-link-down - If the link were to fail now, the winning cluster would have to roll back the view of data in order to resume I/O.

virtual-volumes
  Lists the virtual volumes that are members of the consistency group.
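The operational-status details in Table 1 pair naturally with the follow-up commands this guide names for each condition. The following is a minimal sketch of that mapping for use in a triage script; the command names come from this guide, but exact syntax and flags may vary by release, and the NEXT_STEP table and suggest helper are hypothetical:

```python
# Sketch: map operational-status 'details' values from Table 1 to the
# follow-up commands named in this guide for each condition. Exact
# command syntax may vary by release.
NEXT_STEP = {
    "cluster-departure": "consistency-group choose-winner (if no detach rule selects a winner)",
    "requires-resume-at-loser": "consistency-group resume-at-loser",
    "requires-resolve-conflicting-detach": "consistency-group resolve-conflicting-detach",
    "requires-resume-after-rollback": "consistency-group resume-after-rollback",
    "rebuilding-across-clusters": "rebuild status",
}

def suggest(details):
    """Return the commands to investigate, in the order the details are listed."""
    return [NEXT_STEP[d] for d in details if d in NEXT_STEP]

print(suggest(["requires-resume-at-loser"]))   # ['consistency-group resume-at-loser']
```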
