
VPLEX: How to detach and reattach a mirror (leg)

Summary: This article describes how to detach and reattach a mirror leg.

This article is not tied to any specific product. Not all product versions are identified in this article.

Instructions

Before detaching or attaching a mirror, ensure you identify the correct leg and cluster name.

  1. Run an array re-discover on the array to confirm that VPLEX is seeing the latest status of the array.
Example:
VPlexcli:/> cd /clusters/cluster-1/storage-elements/storage-arrays/EMC-CLARiiON-APM00111501539/
VPlexcli:/clusters/cluster-1/storage-elements/storage-arrays/EMC-CLARiiON-APM00111501539/> array re-discover
  2. Run show-use-hierarchy to determine whether the virtual-volume is within a consistency-group and must be removed from it. (No changes can be made to a virtual-volume while it is within a consistency-group.)
The example below shows that the virtual-volume is part of consistency-group (TEST01) and must be removed before the bad mirror leg can be detached:
VPlexcli:/> show-use-hierarchy /clusters/cluster-1/virtual-volumes/test_DR1_vol_vol
consistency-group: TEST01 (synchronous)
  virtual-volume: test_DR1_vol_vol (1G, major-failure, distributed @ cluster-1, unexported)
    distributed-device: test_DR1_vol (1G, raid-1, major-failure)
      distributed-device-component: local_device_C1_DR1_leg (1G, raid-0, cluster-1)
        extent: extent_C1_DR1_leg_1 (1G)
          storage-volume: C1_DR1_leg (1G)
            logical-unit: VPD83T3:60060160c9c02c00a49a72452eaae811
              storage-array: EMC-CLARiiON-APM00111501539
      distributed-device-component: local_device_C2_DR1_leg (1G, raid-0, critical-failure, cluster-2)
        extent: extent_C2_DR1_leg_1 (1G, critical-failure)
          storage-volume: C2_DR1_leg (1G, critical-failure)
            logical-unit: VPD83T3:60060160c9c02c00c6243381ffa5e811
              storage-array: EMC-CLARiiON-APM00111501539
  3. Remove the virtual-volume from the consistency-group using the command below.
Example:
VPlexcli:/> consistency-group remove-virtual-volumes --consistency-group TEST01 --virtual-volumes test_DR1_vol_vol

Note:
If the virtual-volume cannot be removed because its consistency-group is protected by RecoverPoint, you must first remove the volume from the RecoverPoint consistency-group and remove or destroy any replication sets (R-Sets) that involve it. This must be done on the RecoverPoint side. Once the volume has been removed from the RecoverPoint consistency-group and is no longer associated with any R-Sets, you can continue to remove the virtual-volume from the VPLEX consistency-group.
  4. Rerun show-use-hierarchy to confirm that the volume is no longer in a consistency-group before proceeding.
The example below shows that the virtual-volume is no longer part of the consistency-group (TEST01):
VPlexcli:/> show-use-hierarchy /clusters/cluster-1/virtual-volumes/test_DR1_vol_vol
virtual-volume: test_DR1_vol_vol (1G, major-failure, distributed @ cluster-1, unexported)
  distributed-device: test_DR1_vol (1G, raid-1, major-failure)
    distributed-device-component: local_device_C1_DR1_leg (1G, raid-0, cluster-1)
      extent: extent_C1_DR1_leg_1 (1G)
        storage-volume: C1_DR1_leg (1G)
          logical-unit: VPD83T3:60060160c9c02c00a49a72452eaae811
            storage-array: EMC-CLARiiON-APM00111501539
    distributed-device-component: local_device_C2_DR1_leg (1G, raid-0, critical-failure, cluster-2)
      extent: extent_C2_DR1_leg_1 (1G, critical-failure)
        storage-volume: C2_DR1_leg (1G, critical-failure)
          logical-unit: VPD83T3:60060160c9c02c00c6243381ffa5e811
            storage-array: EMC-CLARiiON-APM00111501539
  5. We can now remove the distributed-device-component/mirror leg (local_device_C2_DR1_leg) from the distributed-device (test_DR1_vol) using the command below. If the detach succeeds as shown, go to step 7. If it fails with the error "can't remove mirror as this would lead to conflict with existing loser settings," go to step 6.
Example:
VPlexcli:/> device detach-mirror --device test_DR1_vol --mirror local_device_C2_DR1_leg --discard --force
Detached mirror local_device_C2_DR1_leg.
Mirror local_device_C2_DR1_leg is below /clusters/cluster-2/devices.


Notes regarding the above command to detach the bad leg:
  • Always run the detach at the distributed-device level: specify the distributed-device with the flag --device and the distributed-device-component to detach with the flag --mirror.
  • Use the flag --discard to prevent a virtual-volume from being automatically created on top of the distributed-device-component/mirror being detached.
  • Use the flag --force to skip any prompts.
  6. If the detach failed due to a conflict with the existing loser settings, you must first change the detach rule-set-name so that the opposing cluster is the winner. Skip this step if the mirror leg detached successfully.
Example showing the error caused by conflicting loser settings:
VPlexcli:/> device detach-mirror --device test_DR1_vol --mirror local_device_C2_DR1_leg --discard --force
device detach-mirror:  Evaluation of <<device detach-mirror --device test_DR1_vol --mirror local_device_C2_DR1_leg --discard --force>> failed.
cause:                 Unable to detach 'local_device_C2_DR1_leg' from device 'test_DR1_vol'.
cause:                 Unable to detach mirror 'local_device_C2_DR1_leg' from distributed Device 'test_DR1_vol'.
cause:                 Can't remove the mirror, as this would lead to a conflict with the existing loser settings.


To bypass the conflicting loser settings, you must first set the distributed-device rule-set to have the opposite leg as winner.

Example of checking the current rule-set-name attribute:
VPlexcli:/> cd /distributed-storage/distributed-devices/test_DR1_vol/
VPlexcli:/distributed-storage/distributed-devices/test_DR1_vol> ll

Attributes:
Name                    Value
----------------------  ----------------------
application-consistent  false
auto-resume             true
block-count             262144
block-size              4K
capacity                1G
clusters-involved       [cluster-1, cluster-2]
consistency-group       -
geometry                raid-1
health-indications      []
health-state            ok
locality                distributed
operational-status      ok
rebuild-allowed         true
rebuild-eta             -
rebuild-progress        -
rebuild-status          done
rebuild-type            full
rule-set-name           cluster-2-detaches <--
service-status          running
storage-array-family    clariion
stripe-depth            -
system-id               test_DR1_vol
thin-capable            true
transfer-size           128K
virtual-volume          test_DR1_vol_vol

Contexts:
Name                           Description
-----------------------------  ------------------------------------------------
at-cluster                     Contains cluster-specific information on the
                               enclosing distributed-device.
distributed-device-components  Contains information about one cluster-local leg
                               of the enclosing distributed-device.


Example of setting the rule-set-name attribute to the opposite cluster:
VPlexcli:/distributed-storage/distributed-devices/test_DR1_vol> set rule-set-name cluster-1-detaches
VPlexcli:/distributed-storage/distributed-devices/test_DR1_vol> ll

Attributes:
Name                    Value
----------------------  ----------------------
application-consistent  false
auto-resume             true
block-count             262144
block-size              4K
capacity                1G
clusters-involved       [cluster-1, cluster-2]
consistency-group       -
geometry                raid-1
health-indications      []
health-state            ok
locality                distributed
operational-status      ok
rebuild-allowed         true
rebuild-eta             -
rebuild-progress        -
rebuild-status          done
rebuild-type            full
rule-set-name           cluster-1-detaches  <--
service-status          running
storage-array-family    clariion
stripe-depth            -
system-id               test_DR1_vol
thin-capable            true
transfer-size           128K
virtual-volume          test_DR1_vol_vol

Contexts:
Name                           Description
-----------------------------  ------------------------------------------------
at-cluster                     Contains cluster-specific information on the
                               enclosing distributed-device.
distributed-device-components  Contains information about one cluster-local leg
                               of the enclosing distributed-device.


Now you can detach the mirror leg without the conflicting-loser-settings error.
VPlexcli:/> device detach-mirror --device test_DR1_vol --mirror local_device_C2_DR1_leg --discard --force
Detached mirror local_device_C2_DR1_leg.
Mirror local_device_C2_DR1_leg is below /clusters/cluster-2/devices.
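For scripted checks, the Attributes table printed by `ll` can be parsed before changing the rule-set. Below is a minimal sketch in Python (an assumed helper, not part of VPlexcli) based on the table layout shown above:

```python
# Assumed helper (not a VPLEX tool): parse the "Attributes" table printed
# by the VPlexcli `ll` command into a dict, so a script can check a value
# such as rule-set-name before deciding whether it must be flipped.
def parse_ll_attributes(output):
    attrs = {}
    in_table = False
    for line in output.splitlines():
        stripped = line.strip()
        if stripped == "Attributes:":
            in_table = True
            continue
        if not in_table:
            continue
        # The table ends at the blank line before "Contexts:".
        if not stripped or stripped.startswith("Contexts:"):
            break
        if stripped.startswith(("Name", "----")):
            continue  # skip the header and separator rows
        name, _, value = stripped.partition("  ")
        value = value.strip()
        if value.endswith("<--"):  # drop the highlight arrow used in examples
            value = value[:-3].strip()
        attrs[name.strip()] = value
    return attrs
```

For example, `parse_ll_attributes(output)["rule-set-name"]` returns the current winner rule for the distributed-device.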
  7. Rerun show-use-hierarchy to verify that the distributed-device-component/mirror was removed successfully.
The example below shows that distributed-device-component (local_device_C2_DR1_leg) is no longer attached:
VPlexcli:/> show-use-hierarchy /clusters/cluster-1/virtual-volumes/test_DR1_vol_vol
virtual-volume: test_DR1_vol_vol (1G, distributed @ cluster-1, unexported)
  distributed-device: test_DR1_vol (1G, raid-1)
    distributed-device-component: local_device_C1_DR1_leg (1G, raid-0, cluster-1)
      extent: extent_C1_DR1_leg_1 (1G)
        storage-volume: C1_DR1_leg (1G)
          logical-unit: VPD83T3:60060160c9c02c00a49a72452eaae811
            storage-array: EMC-CLARiiON-APM00111501539
  8. We can now reattach the previously detached leg to the distributed-device (this automatically triggers a full rebuild).
Example:
VPlexcli:/> device attach-mirror --device test_DR1_vol --mirror local_device_C2_DR1_leg

Note regarding this step:
Attaching the mirror leg triggers a full rebuild. Always run the device attach at the distributed-device level using the flag --device and then specify the distributed-device-component to attach using the flag --mirror.
  9. Rerun show-use-hierarchy to verify that the distributed-device-component/mirror was reattached successfully.
The example below shows that distributed-device-component (local_device_C2_DR1_leg) is now reattached to the distributed-device (test_DR1_vol).
VPlexcli:/> show-use-hierarchy /clusters/cluster-1/virtual-volumes/test_DR1_vol_vol
virtual-volume: test_DR1_vol_vol (1G, minor-failure, distributed @ cluster-1, unexported)
  distributed-device: test_DR1_vol (1G, raid-1, minor-failure)
    distributed-device-component: local_device_C1_DR1_leg (1G, raid-0, cluster-1)
      extent: extent_C1_DR1_leg_1 (1G)
        storage-volume: C1_DR1_leg (1G)
          logical-unit: VPD83T3:60060160c9c02c00a49a72452eaae811
            storage-array: EMC-CLARiiON-APM00111501539
    distributed-device-component: local_device_C2_DR1_leg (1G, raid-0, critical-failure, cluster-2)
      extent: extent_C2_DR1_leg_1 (1G, critical-failure)
        storage-volume: C2_DR1_leg (1G, critical-failure)
          logical-unit: VPD83T3:60060160c9c02c00c6243381ffa5e811
            storage-array: EMC-CLARiiON-APM00111501539


Note regarding this step:
The minor-failure status shown on the virtual-volume is expected while rebuilds are running on the critical-failure leg. Once the rebuilds have finished, the status changes.
  10. Check the rebuild status. The output displays the estimated time for the rebuild to complete, the percentage finished, and the throughput (M/s).
Example:
VPlexcli:/> rebuild status
[1] storage_volumes marked for rebuild

Global rebuilds:
device                rebuild type  rebuilder director  rebuilt/total  percent finished  throughput  ETA
--------------------  ------------  ------------------  -------------  ----------------  ----------  ----------
C2_DR1_leg            full          s2_1ce9_spa               0.5G/1G            50.00%     87.5M/s       8.0hr

Local rebuilds:
  No active local rebuilds.
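The ETA column is essentially the data still to copy divided by the throughput. A back-of-the-envelope sketch (a hypothetical helper, assuming sizes in GiB and throughput in MiB/s as in the columns above; the ETA that VPLEX itself reports may differ as throughput varies over the rebuild):

```python
# Hypothetical helper (not VPLEX code): estimate remaining rebuild time
# from the rebuilt/total and throughput columns of `rebuild status`.
def rebuild_eta_seconds(rebuilt_gib, total_gib, throughput_mib_s):
    remaining_mib = (total_gib - rebuilt_gib) * 1024  # GiB -> MiB
    return remaining_mib / throughput_mib_s
```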
  11. Re-add the virtual-volume to the original consistency-group as shown below.
Example:
VPlexcli:/> consistency-group add-virtual-volumes --consistency-group TEST01 --virtual-volumes test_DR1_vol_vol
  12. Once rebuilds have finished, run show-use-hierarchy one last time to confirm that both mirror legs are healthy again. (Healthy volumes do not report minor-failure, major-failure, or critical-failure.)
Example:
VPlexcli:/> show-use-hierarchy /clusters/cluster-1/virtual-volumes/test_DR1_vol_vol
consistency-group: TEST01 (synchronous)
 virtual-volume: test_DR1_vol_vol (1G, distributed @ cluster-1, unexported)
   distributed-device: test_DR1_vol (1G, raid-1)
     distributed-device-component: local_device_C1_DR1_leg (1G, raid-0, cluster-1)
       extent: extent_C1_DR1_leg_1 (1G)
         storage-volume: C1_DR1_leg (1G)
           logical-unit: VPD83T3:60060160c9c02c00a49a72452eaae811
             storage-array: EMC-CLARiiON-APM00111501539
     distributed-device-component: local_device_C2_DR1_leg (1G, raid-0, cluster-2)
       extent: extent_C2_DR1_leg_1 (1G)
         storage-volume: C2_DR1_leg (1G)
           logical-unit: VPD83T3:60060160c9c02c00c6243381ffa5e811
             storage-array: EMC-CLARiiON-APM00111501539
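If you capture show-use-hierarchy output in a script, the health check described above (no minor-failure, major-failure, or critical-failure markers in the output) can be automated with a small sketch like the following (an assumed helper, not a VPLEX tool):

```python
# Assumed helper (not a VPLEX tool): scan captured show-use-hierarchy
# output for the degraded states named in this article. Per the article,
# a healthy hierarchy contains none of these markers.
FAILURE_STATES = ("minor-failure", "major-failure", "critical-failure")

def unhealthy_lines(hierarchy_output):
    """Return the lines that report a degraded component, if any."""
    return [line.strip() for line in hierarchy_output.splitlines()
            if any(state in line for state in FAILURE_STATES)]
```

An empty return value means both legs are healthy; a non-empty one lists the components still degraded.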

Additional Information

You can view the 'bad-leg' in the VPLEX UI from the following context:

  • Provision Storage
  • Distributed
  • Distributed devices
  • From the provision storage pane > virtualized storage > distributed devices, choose the device name.
  • There is an icon to view a graphical map for a visual representation of the distributed device.

Affected Products

VPLEX Series

Products

VPLEX VS2
Article Properties
Article Number: 000158230
Article Type: How To
Last Modified: 02 Sep 2021
Version:  5