Use this procedure to redistribute the MDM cluster manually.
It is critical that the MDM cluster is distributed across access switches and physical cabinets to ensure maximum resiliency and availability of the cluster. The location of the MDM components should be checked and validated during every engagement, and adjusted if found noncompliant with the published guidelines. If an expansion includes adding physical cabinets and access switches, you should relocate the MDM cluster components.
When adding new MDM or tiebreaker nodes to a cluster, place the PowerFlex storage-only nodes first (if available), followed by the PowerFlex hyperconverged nodes.
Prerequisites
Identify new nodes to use as MDM or tiebreaker.
Identify the management IP address, data1 IP address, and data2 IP address of each new node (log in to each new node or SVM and enter the ip addr command).
Gather the virtual interfaces for the nodes being used as the new MDM or tiebreaker, and note the data1 and data2 interfaces. For example, on a PowerFlex storage-only node the interfaces are bond0.152 and bond1.160. On an SVM, they are eth3 and eth4.
Identify the primary MDM.
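The prerequisite checks above can be sketched as a short shell session. The hostname below is a hypothetical example; the interface names are the samples from the prerequisites.

```shell
# Run against each new node or SVM (hostname is a hypothetical example).
# List all IP addresses and their interfaces; note the management,
# data1, and data2 addresses for later use.
ssh root@node-01 'ip addr show'

# One line per IPv4 address makes the interface-to-IP mapping easy to
# record. On a PowerFlex storage-only node the data interfaces are
# typically VLAN-tagged bonds (for example bond0.152 and bond1.160);
# on an SVM they are eth3 and eth4.
ssh root@node-01 'ip -o -4 addr show' | awk '{print $2, $4}'
```

The -o flag of the ip command prints one record per line, which is convenient when copying addresses into the scli commands later in this procedure.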
Steps
SSH to each new node or SVM and assign the proper role (MDM or tiebreaker) to each.
Transfer the MDM and LIA packages to the newly identified MDM cluster nodes.
NOTE: The following steps contain sample versions of PowerFlex files as examples only. Use the appropriate PowerFlex files for your deployment.
To install the LIA, enter TOKEN=<flexos password> rpm -ivh EMC-ScaleIO-lia-3.x-x.xxx.el7.x86_64.rpm.
To install the MDM service:
For the MDM role, enter MDM_ROLE_IS_MANAGER=1 rpm -ivh EMC-ScaleIO-mdm-3.x-x.xxx.el7.x86_64.rpm.
For the tiebreaker role, enter MDM_ROLE_IS_MANAGER=0 rpm -ivh EMC-ScaleIO-mdm-3.x-x.xxx.el7.x86_64.rpm.
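Taken together, the transfer and install steps might look like the following session on a new manager node. The hostname is hypothetical, and the package file names are the sample versions from the note above.

```shell
# Copy the LIA and MDM packages to the new node (sample file names).
scp EMC-ScaleIO-lia-3.x-x.xxx.el7.x86_64.rpm \
    EMC-ScaleIO-mdm-3.x-x.xxx.el7.x86_64.rpm root@node-01:/tmp/

# Log in to the node and install from the staging directory.
ssh root@node-01
cd /tmp

# Install the LIA; TOKEN is the flexos password for this deployment.
TOKEN='<flexos password>' rpm -ivh EMC-ScaleIO-lia-3.x-x.xxx.el7.x86_64.rpm

# Install the MDM service as a manager.
# Use MDM_ROLE_IS_MANAGER=0 instead on a tiebreaker node.
MDM_ROLE_IS_MANAGER=1 rpm -ivh EMC-ScaleIO-mdm-3.x-x.xxx.el7.x86_64.rpm
```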
Open an SSH terminal to the primary MDM and log in to the operating system.
Log in to PowerFlex by entering scli --login --username admin --password <powerflex password>.
Add a new standby MDM by entering scli --add_standby_mdm --mdm_role manager --new_mdm_ip <new MDM data1,data2 IPs> --new_mdm_management_ip <MDM management IP> --new_mdm_virtual_ip_interfaces <both interfaces, comma-separated> --new_mdm_name <new MDM name>.
Add a new standby tiebreaker by entering scli --add_standby_mdm --mdm_role tb --new_mdm_ip <new tiebreaker data1,data2 IPs> --new_mdm_name <new tiebreaker name>.
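With hypothetical values filled in, the two add commands might look like this. The IP addresses, interface names, and MDM names below are examples only; substitute the values gathered in the prerequisites.

```shell
# Add a standby manager MDM: data1/data2 IPs, management IP,
# the data1 and data2 interfaces, and a descriptive name.
scli --add_standby_mdm --mdm_role manager \
    --new_mdm_ip 192.168.152.21,192.168.160.21 \
    --new_mdm_management_ip 10.10.10.21 \
    --new_mdm_virtual_ip_interfaces bond0.152,bond1.160 \
    --new_mdm_name mdm-node-03

# Add a standby tiebreaker: only the data IPs and a name are required.
scli --add_standby_mdm --mdm_role tb \
    --new_mdm_ip 192.168.152.22,192.168.160.22 \
    --new_mdm_name tb-node-04
```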
Repeat the previous two steps for each new MDM and tiebreaker that you are adding to the cluster.
Enter scli --query_cluster to find the IDs of the current MDMs and tiebreakers. Note the IDs of the MDM and tiebreaker being replaced.
To replace an MDM, enter scli --replace_cluster_mdm --add_slave_mdm_id <mdm id to add> --remove_slave_mdm_id <mdm id to remove>. Repeat this step for each MDM.
To replace a tiebreaker, enter scli --replace_cluster_mdm --add_tb_id <tb id to add> --remove_tb_id <tb id to remove>. Repeat this step for each tiebreaker.
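Using hypothetical IDs noted from the cluster query, the replacement commands might look like the following. The hexadecimal IDs are placeholders; use the actual IDs reported by scli --query_cluster.

```shell
# Swap a secondary (slave) MDM: add the new standby, remove the old one.
scli --replace_cluster_mdm \
    --add_slave_mdm_id 0x1a2b3c4d00000001 \
    --remove_slave_mdm_id 0x1a2b3c4d00000002

# Swap a tiebreaker the same way, using the tiebreaker ID options.
scli --replace_cluster_mdm \
    --add_tb_id 0x1a2b3c4d00000003 \
    --remove_tb_id 0x1a2b3c4d00000004
```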
Enter scli --query_cluster to find the IDs of the MDMs and tiebreakers being removed.
Using the ID, remove the old MDM by entering scli --remove_standby_mdm --remove_mdm_id <mdm id to remove>.
NOTE: This step might not be necessary if this MDM remains in service as a standby.
To remove the old tiebreaker, enter scli --remove_standby_mdm --remove_mdm_id <mdm id to remove>.
NOTE: This step might not be necessary if this tiebreaker remains in service as a standby.
Repeat these steps as needed.
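If the old members are not being kept as standbys, the cleanup might look like this. The IDs are hypothetical placeholders for the values reported by the cluster query.

```shell
# Remove the old MDM and the old tiebreaker from the standby list once
# they are no longer part of the active cluster. Both removals use the
# same command, keyed by ID.
scli --remove_standby_mdm --remove_mdm_id 0x1a2b3c4d00000002
scli --remove_standby_mdm --remove_mdm_id 0x1a2b3c4d00000004

# Confirm the final cluster layout before logging out.
scli --query_cluster
```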