Dell ObjectScale 1.3 Administration Guide

Proactive Disk Removal Service Procedure (for Appliance and Software Bundle)

About this task

You can proactively trigger a disk removal procedure within ObjectScale for Appliance and Software Bundle deployments. This is useful for removing a disk that is still reporting as healthy but is suspected to be failing.

Steps

  1. From the ObjectScale Portal user interface, click Disks.
    The list of disks that you are authorized to view is displayed.
  2. Select the disk to be removed, and click Remove.
    A dialog box is displayed asking you to acknowledge the risks and confirm the disk removal.
  3. Click Remove disk.
    The health of the disk changes to "BAD".
    admin@lehi-dirt:~> kubectl get drive 69d56273-b15b-49d5-bd65-b75b525b3425 -o yaml
    apiVersion: csi-baremetal.dell.com/v1
    kind: Drive
    metadata:
      annotations:
        health: bad
      creationTimestamp: "2023-07-24T13:14:34Z"
      generation: 10
      labels:
        app: csi-baremetal
        app.kubernetes.io/name: csi-baremetal
      name: 69d56273-b15b-49d5-bd65-b75b525b3425
      resourceVersion: "775292"
      uid: 35edbfa5-9e69-4a21-8504-97d22d88e485
    spec:
      Firmware: 1.1.1
      Health: BAD
      IsClean: true
      NodeId: 8a4f12a5-e4b4-406f-b272-f920f1e826e4
    ......
      Status: ONLINE
      Type: NVME
      UUID: 69d56273-b15b-49d5-bd65-b75b525b3425
      Usage: IN_USE
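
    For a quick check of just the health and usage fields rather than the full YAML, a jsonpath query such as the following can be used (shown here with the drive UUID from the example above):
    kubectl get drive 69d56273-b15b-49d5-bd65-b75b525b3425 \
      -o jsonpath='Health={.spec.Health} Usage={.spec.Usage}{"\n"}'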
    
  4. The cluster operation (CO) custom resource is created.
    admin@lehi-dirt:~> kubectl get co -A
    NAMESPACE     NAME                                               TYPE          STATUS           OBJECT NAME                            AGE
    objectscale   diskremoval-cfb0478d-cacb-4c3c-b612-659701949847   DiskRemoval   TriggerRemoval   69d56273-b15b-49d5-bd65-b75b525b3425   19s
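
    Because the OBJECT NAME column carries the drive UUID, the cluster operation for a specific disk can also be located by filtering on that UUID, for example:
    kubectl get co -A | grep 69d56273-b15b-49d5-bd65-b75b525b3425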
    
  5. The object store status moves to "ReplacingPV".
    NOTE: This change is applicable only for SS or NVMeEngine pods. For non-SS pods, the object store status does not change.
    admin@lehi-dirt:~> kubectl get ecs -A
    NAMESPACE     NAME               PHASE         READY COMPONENTS   S3 ENDPOINT         MGMT API
    objectscale   dirt-objectstore   ReplacingPV   21/21              10.249.248.25:443   10.249.248.27:4443
    
  6. The recovery of the disk starts.
    admin@lehi-dirt:~> kubectl get serviceprocedure -A
    NAMESPACE     NAME                                              TYPE       STATUS       OBJECT NAME                                  AGE
    objectscale   recovery-dirt-objectstore-nvmeengine-1-541fc74e   Recovery   Recovering   objectscale/dirt-objectstore-nvmeengine-1   17m
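
    Rather than rerunning the command, the recovery can be followed with kubectl's watch flag until the status changes, for example:
    kubectl get serviceprocedure -n objectscale -w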
    
  7. Once the recovery is complete, the status changes to "ReadyToEject".
    admin@lehi-dirt:~> kubectl get co -A
    NAMESPACE     NAME                                               TYPE          STATUS         OBJECT NAME                            AGE
    objectscale   diskremoval-cfb0478d-cacb-4c3c-b612-659701949847   DiskRemoval   ReadyToEject   69d56273-b15b-49d5-bd65-b75b525b3425   128m
    
  8. When the status changes to "ReadyToEject", click Eject to blink the disk's LED on the rack so that the disk can be identified.
    A dialog box is displayed to confirm the disk eject procedure.
  9. Click Eject disk.
    The disk is ready to be physically removed.
  10. Remove the disk physically.
    You can physically locate the correct disk in two ways:
    1. By checking the CSI-01 alert; see the Monitoring Events, Audits, and Alerts section for more details.
    2. By using the following commands:
    First, define the drive name (driveName=<drive name>), and then get the node and slot information using the following commands:
    # Node that hosts the drive
    nodeUUID=$(kubectl get drive ${driveName} -o yaml | grep NodeId | awk -F ': ' '{print $2}')
    kubectl get csibmnode | grep ${nodeUUID} | awk '{print $3}'
    # Slot that holds the drive
    kubectl get drive ${driveName} -o yaml | grep Slot | awk -F ': ' '{print $2}'
    The Usage of the disk changes to "REMOVED", and the procedure is complete.
    admin@lehi-dirt:~> kubectl get co -A
    NAMESPACE     NAME                                               TYPE          STATUS    OBJECT NAME                            AGE
    objectscale   diskremoval-cfb0478d-cacb-4c3c-b612-659701949847   DiskRemoval   Success   69d56273-b15b-49d5-bd65-b75b525b3425   138m
    
    admin@lehi-dirt:~> kubectl get drive 69d56273-b15b-49d5-bd65-b75b525b3425 -o yaml
    apiVersion: csi-baremetal.dell.com/v1
    kind: Drive
    metadata:
      annotations:
        health: bad
        removal: ready
      creationTimestamp: "2023-07-24T13:14:34Z"
      generation: 10
      labels:
    .....
      name: 69d56273-b15b-49d5-bd65-b75b525b3425
    spec:
      Firmware: 1.1.1
      Health: BAD
      IsClean: true
      Status: ONLINE
      Type: NVME
      UUID: 69d56273-b15b-49d5-bd65-b75b525b3425
      Usage: REMOVED
      VID: "0x1179"
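
    As a final check, the removal annotation and the disk usage can be read directly from the Drive CR, for example (using the UUID from this procedure):
    kubectl get drive 69d56273-b15b-49d5-bd65-b75b525b3425 \
      -o jsonpath='removal={.metadata.annotations.removal} Usage={.spec.Usage}{"\n"}'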
    
