
Dell ObjectScale 1.3 Administration Guide

Perform a node replacement service procedure (ObjectScale on OpenShift)

Complete the following steps to manually replace a node. Use this procedure for ObjectScale instances running on a Red Hat OpenShift cluster.

Prerequisites

  • Ensure that the replacement node has the same name and IP address as the node being replaced.
  • If the replacement process takes longer than one hour (which is likely), recovery begins to run for the data on the node being replaced. Recovery stops once the replacement node is installed and operational.

Steps

  1. Prepare the node for removal by cordoning and draining it, so that no new pods are scheduled on it and existing workloads are evicted. See the sketch below.
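    A typical preparation sequence, assuming standard kubectl cordon and drain semantics (verify the exact flags required in your environment):
    kubectl cordon <NODE_NAME>
    kubectl drain <NODE_NAME> --ignore-daemonsets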
  2. Remove the node from the cluster:
    kubectl delete node <NODE_NAME>
  3. Physically remove and replace the failed node hardware. As you do so, ensure that:
    • You move all the drives from the failed node into the new compute node, then install the new node and join it back to the OpenShift cluster, following the steps outlined in the OpenShift documentation.
    • The new node satisfies the requirements listed in the "Deployment prerequisites for ObjectScale on OpenShift" section of the Dell ObjectScale Application Installation Guide for Red Hat OpenShift.
    All PVC bindings remain intact, and all stateful pods start on the new node. You can spot-check this as shown below.
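    A quick spot-check (a sketch using standard kubectl commands; substitute your ObjectScale namespace for <NAMESPACE>):
    kubectl get pvc -n <NAMESPACE>
    kubectl get pods -n <NAMESPACE> -o wide | grep <NODE_NAME>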
  4. Ensure that the new node has been added to the cluster and that all nodes are Ready. For example:
     kubectl get nodes
    NAME                   STATUS   ROLES    AGE   VERSION
    master0.ocp4.cmo.com   Ready    master   15d   v1.19.0+e49167a
    master1.ocp4.cmo.com   Ready    master   15d   v1.19.0+e49167a
    master2.ocp4.cmo.com   Ready    master   15d   v1.19.0+e49167a
    worker0.ocp4.cmo.com   Ready    worker   15d   v1.19.0+e49167a
    worker1.ocp4.cmo.com   Ready    worker   46m   v1.19.0+e49167a
    worker2.ocp4.cmo.com   Ready    worker   15d   v1.19.0+e49167a
  5. Verify that CSI recognizes the node and that it appears in the bare-metal node list. For example:
    kubectl get csibmnodes
    NAME                               UUID                     ADDRESSES
    csibmnode-4f19a3e9-9c9b-40a8-...   4f19a3e9-9c9b-40a8-...   {"Hostname":"master0.ocp4.cmo.com","InternalIP":"10.236.224.60"}
    csibmnode-a0dba2b4-5eab-4c34-...   a0dba2b4-5eab-4c34-...   {"Hostname":"worker0.ocp4.cmo.com","InternalIP":"10.236.224.66"}
    csibmnode-bb7dcedc-139b-4d8f-...   bb7dcedc-139b-4d8f-...   {"Hostname":"master2.ocp4.cmo.com","InternalIP":"10.236.224.64"}
    csibmnode-bdb9f0b8-f52d-4aaf-...   bdb9f0b8-f52d-4aaf-...   {"Hostname":"worker1.ocp4.cmo.com","InternalIP":"10.236.224.68"}
    csibmnode-de3eebf0-dfcd-41e9-...   de3eebf0-dfcd-41e9-...   {"Hostname":"worker3.ocp4.cmo.com","InternalIP":"10.236.224.72"}
    csibmnode-e820eea6-3145-4fb8-...   e820eea6-3145-4fb8-...   {"Hostname":"master1.ocp4.cmo.com","InternalIP":"10.236.224.62"}
    csibmnode-f21e396b-2d91-43d5-...   f21e396b-2d91-43d5-...   {"Hostname":"worker2.ocp4.cmo.com","InternalIP":"10.236.224.70"}
    
  6. Verify that the cluster is available. For example:
     kubectl get ecs
    NAME          PHASE       READY COMPONENTS   S3 ENDPOINT         MGMT API
    ecs-cluster   Available   23/23              10.236.228.53:443   10.236.228.52:4443
  7. Verify other features and components, including:
    • S3 I/O
    • The ObjectScale Portal
    • kubectl command output
    • All pods, including any previously in the Pending state, are now running (see the sketch after this list)
    • Pod restart counts have not increased
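    One way to confirm pod health and restart counts (a sketch using standard kubectl commands; filter by your ObjectScale namespace if you prefer):
    kubectl get pods -A
    kubectl get pods -A --field-selector=status.phase=Pending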
