Dell FluidFS NAS Solutions Administrator's Guide

Phase 3 — Restore Cluster A: Fail Back From Cluster B To Cluster A

  1. Fix the problem that caused cluster A to fail (replace hardware, replace disks, and so on) and, if required, reinstall FluidFS.
  2. Rebuild the cluster (use the settings for cluster A that you saved earlier), format the NAS reserve, and set up the network (client, SAN, and IC) as before.
  3. Log on to cluster B and set up the replication partnership between cluster B and cluster A. For more information on setting up replication partners, see Setting Up A Replication Partner.
  4. Create a replication policy for each source volume in cluster B to its target volume in cluster A. For more information on creating replication policies, see Adding A Replication Policy.
    • NOTE: Replication policies are a one-to-one match at the volume level, for example:

      Source volume B1 (cluster B) to target volume A1 (cluster A)

      Source volume B2 (cluster B) to target volume A2 (cluster A)

      …

      Source volume Bn (cluster B) to target volume An (cluster A)

    • NOTE: FluidFS v2 supports automatically generating the target volume when the replication policy is added. For FluidFS 1.0, you must create the target volumes in cluster A and make sure that each volume is large enough to accommodate the corresponding source volume data in cluster B.
  5. In the NAS Manager web interface, select Data Protection > Replication > NAS Replication and click Replicate Now for all the volumes in cluster B (B1, B2, …, Bn). If a replication fails, fix the problems encountered and restart the replication process. Ensure that all the volumes are successfully replicated to cluster A.
  6. Delete the replication policy for each volume (B1, B2, …, Bn) and apply the source volume configuration from cluster B to cluster A. Repeat this procedure until all the replication policies are deleted and all target volumes in cluster A are standalone volumes.
    • When deleting the replication policy from the destination cluster A — the FluidFS replication manager tries to contact the source cluster B. If this fails, the volume on destination cluster A must have its configuration restored using Cluster Management > Restore NAS Volume Configuration.
    • When deleting the replication policy from the source cluster B — you are given the option to apply the source volume configuration to the destination volume. If you do not select this option, or it fails, the configuration of the source volume from cluster B can be restored onto the destination volume on cluster A using Cluster Management > Restore NAS Volume Configuration.
  7. Log on to cluster A.
  8. From the NAS Manager web interface, restore the NAS system configuration from cluster B.

    For more information on restoring the NAS system configuration, see Restoring Cluster Configuration.

    This changes cluster A's global configuration settings (such as protocol settings, time settings, and authentication parameters) to cluster B's settings.
    • NOTE: If the system configuration restore fails, manually set the settings back to the original values (use the cluster A settings that you saved earlier).
    Cluster A is restored to its original settings.
  9. Start using cluster A to serve client requests. Administrators must perform the following steps to set up DNS and authentication:
    1. Point the DNS names on the customer's DNS server to cluster A instead of cluster B. Ensure that the DNS server used by cluster A is the same as, or in the same DNS farm as, the DNS server of cluster B. Existing client connections might break and need to be re-established during this process.
      • NOTE: Complete steps b, c, and d only for single volume failovers.
    2. On the DNS server, manually update the DNS entry for the NAS volume that was failed over. This repoints the end users who were accessing this volume on cluster B to cluster A, while they continue to access it using the same DNS name.
      • NOTE: Client systems might need to refresh their DNS cache.
    3. To force CIFS and NFS clients over to cluster A, you must also delete the CIFS shares and NFS exports on cluster B. This forces the CIFS and NFS clients to reconnect, at which point they connect to cluster A. After the source volume's configuration is restored on cluster A, all of the shares and exports are present on the destination volume (on cluster A), so no share or export configuration information is lost.
    4. The failed-over volume can now be accessed using the same DNS name and share name as when it was hosted on cluster B, except that it is now hosted on cluster A.
      • NOTE: NFS mounts must be unmounted and mounted again. Active CIFS transfers fail during this process, but if CIFS shares are mapped as local drives, they reconnect automatically after the replication policy is deleted, DNS is updated, and the NFS exports and CIFS shares are deleted on cluster B.
    5. Join the AD server or LDAP/NIS. Ensure that the AD or LDAP server is the same server, or in the same AD/LDAP farm, as before.
  10. Rebuild the replication structure between source cluster A and backup cluster B: set up replication policies between cluster A and cluster B, using the cluster B volumes as replication targets, to prepare for the next disaster recovery.
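The one-to-one volume pairing required in step 4 can be sketched as a small shell script. This is a minimal sketch only: the volume names and sizes below are hypothetical placeholders, not values read from a real FluidFS cluster.

```shell
#!/bin/sh
# Sketch of step 4's one-to-one pairing for FluidFS 1.0, where the target
# volumes on cluster A must be created by hand. Names and sizes are
# hypothetical placeholders.

# Each entry: <source volume on cluster B>:<its data size in GB>
SRC_VOLUMES="B1:500 B2:1200 B3:250"

for entry in $SRC_VOLUMES; do
    src=${entry%%:*}        # e.g. B1
    size_gb=${entry##*:}    # e.g. 500
    tgt="A${src#B}"         # one-to-one match: B1 -> A1, B2 -> A2, ...
    # The target volume on cluster A must be at least as large as the
    # source volume data on cluster B.
    echo "pair source ${src} (cluster B) -> target ${tgt} (cluster A), >= ${size_gb} GB"
done
```

The loop only prints the required pairing and minimum target sizes; the actual volume creation is done in the NAS Manager web interface as described in step 4.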
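On a BIND-style DNS server, the manual record update described in step 9-b can be scripted with nsupdate. This is a sketch under assumptions: the volume FQDN, the cluster A client VIP, the TTL, and the key file path are all hypothetical placeholders.

```shell
#!/bin/sh
# Sketch: repoint the failed-over volume's DNS record from cluster B to
# cluster A (step 9-b). All names, addresses, and paths are hypothetical.
VOLUME_FQDN="nasvol1.example.com"   # hypothetical DNS name of the NAS volume
CLUSTER_A_VIP="192.0.2.10"          # hypothetical client VIP on cluster A
TTL=300

# Build an nsupdate batch file: delete the old A record, add the new one.
cat > /tmp/repoint.nsupdate <<EOF
update delete ${VOLUME_FQDN} A
update add ${VOLUME_FQDN} ${TTL} A ${CLUSTER_A_VIP}
send
EOF

# On the DNS admin host, apply the batch (key path is hypothetical):
# nsupdate -k /etc/bind/ddns.key /tmp/repoint.nsupdate
```

Clients that cached the old record pick up the change only after the TTL expires, which is why step 9-b notes that client systems might need to refresh their DNS cache.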
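The client-side NFS remount called out in the note under step 9 can be sketched as follows. The mount point and export path are hypothetical placeholders, and the privileged mount commands are left commented out because they must be run as root on each NFS client.

```shell
#!/bin/sh
# Sketch: NFS remount on a client after failback (note under step 9).
# Mount point and export path are hypothetical placeholders.
MOUNT_POINT="/mnt/nasvol1"
EXPORT="nasvol1.example.com:/exports/vol1"  # resolves to cluster A after the DNS update

# File handles from cluster B are stale, so a full unmount/remount cycle
# is required (run as root on each client):
# umount -l "$MOUNT_POINT"               # lazy unmount of the stale mount
# mount -t nfs "$EXPORT" "$MOUNT_POINT"  # remount; DNS now resolves to cluster A
echo "remount plan: ${EXPORT} -> ${MOUNT_POINT}"
```

Because the DNS name is unchanged, the mount command itself is identical to the one the client used before the failover; only the cluster answering it has changed.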
