Use the following procedure to power on the PowerFlex management controller 2.0.
Steps
Log in to iDRAC and power on all the PowerFlex controller nodes. Monitor the virtual console and wait for the VMware ESXi server to appear.
Log in to each VMware ESXi server and verify that all the SVMs have started with the host.
Power on all the PowerFlex management platform nodes (management virtual machines from the VMware ESXi host client):
Log in to VMware ESXi using the host client.
Click Virtual Machines, select the management virtual machine, and click Power on.
Repeat step 3b to power on all the management virtual machines.
Log in to the nodes running PowerFlex management platform processes (all three management virtual machines):
Run the following command to check the status of the rke2-server:
#systemctl status rke2-server
Do the following depending on the rke2-server status:
Table 1. Status of the rke2-server

Status of the rke2-server    Do the following
active                       Go to the next step.
activating                   Repeat the command to check the rke2-server status until it is active.
failed                       Attempt to start the service by running the following command:
                             #systemctl start rke2-server
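The decision logic in the table above can be sketched as a small shell helper. This is illustrative only; the function name and retry loop are not part of the product, and it assumes `systemctl` is available on the node:

```shell
# Illustrative helper (not a PowerFlex command): map a systemctl state
# string to the action that Table 1 prescribes.
unit_action() {
  case "$1" in
    active)     echo "proceed" ;;      # go to the next step
    activating) echo "recheck" ;;      # repeat the status check
    failed)     echo "start" ;;        # run: systemctl start rke2-server
    *)          echo "investigate" ;;  # any other state needs a closer look
  esac
}

# Example polling loop (run as root on each management platform node):
# while [ "$(unit_action "$(systemctl is-active rke2-server)")" != "proceed" ]; do
#   [ "$(unit_action "$(systemctl is-active rke2-server)")" = "start" ] \
#     && systemctl start rke2-server
#   sleep 10
# done
```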
Once the rke2-server is running on all three PowerFlex management platform nodes, ensure that all nodes are in the Ready state:
Log in to the PowerFlex management platform primary node using SSH and run the following command:
#kubectl get nodes
If you see an error message, wait a few minutes and try again. Once the nodes are in the Ready state, go to the next step.
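The "wait until Ready" check can be scripted. The sketch below is an assumption-laden example: it relies on the standard `kubectl get nodes --no-headers` output layout (node name in column 1, status in column 2):

```shell
# Illustrative check: given `kubectl get nodes --no-headers` output on stdin,
# succeed only when every node reports the Ready status.
all_nodes_ready() {
  awk 'NF && $2 != "Ready" { bad = 1 } END { exit bad }'
}

# Usage (on the primary node):
# kubectl get nodes --no-headers | all_nodes_ready && echo "all nodes Ready"
```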
Restore the cluster monitoring operator (CMO) database:
#alias k="kubectl -n $(kubectl get pods -A | grep -m 1 -E 'platform|pgo|helmrepo' | cut -d' ' -f1)"
#kubectl config set-context default --namespace=$(kubectl get pods -A | grep -m 1 -E 'platform|pgo|helmrepo|docker' | cut -d' ' -f1)
#k patch $(k get postgrescluster -o name) --type merge --patch '{"spec":{"shutdown": false}}'
Verify the CMO database:
#echo $(kubectl get pods -l="postgres-operator.crunchydata.com/control-plane=pgo" --no-headers -o name && kubectl get pods -l="postgres-operator.crunchydata.com/instance" --no-headers -o name) | xargs kubectl get -o wide
Monitor the PowerFlex management platform status:
Run the following command to identify the port number for the PowerFlex management platform monitor utility:
#kubectl get services monitor-app -n powerflex -o jsonpath="{.spec.ports[0].nodePort}{\"\n\"}"
Wait for 20-30 minutes and check the overall health status of the PowerFlex management platform.
Go to http://<node IP>:port/, where the node IP address is a management IP address that is configured on any of the management virtual machines (not the Ingress or PowerFlex Manager IP address).
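The port lookup and URL above can be combined into one shell sketch. The `monitor_url` helper and the example IP are hypothetical placeholders, not PowerFlex commands or values:

```shell
# Illustrative helper: build the monitor utility URL from a management
# node IP and the nodePort reported by the kubectl command above.
monitor_url() {
  node_ip="$1"
  port="$2"
  echo "http://${node_ip}:${port}/"
}

# Usage (the IP below is a placeholder for a management VM address):
# PORT=$(kubectl get services monitor-app -n powerflex \
#   -o jsonpath='{.spec.ports[0].nodePort}')
# monitor_url 192.0.2.10 "$PORT"
```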
Click PFMP status and wait for all entries to turn green.
Contact Dell Technical Support if the PowerFlex management platform status remains red or the main UI is not accessible after 20-30 minutes.
Wait for the rebalance to complete before resuming workloads. Contact Dell Technical Support if the cluster status is not in the Normal state.
To check the rebuild or rebalance status, run the following:
#scli --query_all | grep -i reb
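Waiting for the rebalance can be automated with a generic polling wrapper. The helper below is a sketch; the exact scli output text varies by release, so the pattern in the usage example is an assumption you must adapt:

```shell
# Illustrative helper: rerun a command until its output matches a pattern,
# or give up after a fixed number of tries.
poll_until() {
  pattern="$1"; shift
  tries=60
  while [ "$tries" -gt 0 ]; do
    "$@" 2>/dev/null | grep -q "$pattern" && return 0
    tries=$((tries - 1))
    sleep 10
  done
  return 1
}

# Usage (hypothetical pattern -- match it to the rebuild/rebalance text
# your scli release prints when no jobs remain):
# poll_until '<done-pattern>' sh -c 'scli --query_all | grep -i reb'
```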
Verify that all volumes are available:
#scli --query_all_volumes
Power on the vCenter VM from the VMware ESXi host client. If vCenter high availability is configured, power on the active, passive, and witness nodes (the nodes can start in any order):
Log in to VMware ESXi using the host client.
Click Virtual Machines, select the vCenter virtual machine, and click Power on.
Modify the startup order of the SVMs to manual to enable vSphere high availability. This applies to all SVMs in the PowerFlex management controller cluster:
In the vSphere Client, select the host where the VM is located.
Click the Configure tab.
Under Virtual Machines, select VM Startup/Shutdown, and click Edit.
The Edit VM Startup and Shutdown window opens.
To modify the startup order of the virtual machines, select the SVM in the Automatic Startup category and use the arrow button to move it to the Manual Startup category.
Clear the Automatically start and stop the virtual machines with the system check box and click OK.
Repeat the steps for all the SVMs.
Enable vSphere high availability:
Log in to the VMware vSphere Client.
Click vSphere Client > Shortcuts > Hosts and Clusters.
Browse to the cluster.
Click the Configure tab.
Select vSphere Availability and click Edit.
Click the toggle button to enable vSphere HA.
Click OK.
Power on all other VMs, such as CloudLink and secure connect gateway:
Log in to the VMware vSphere Client.
Go to vSphere Client > Shortcuts > Hosts and Clusters.
Browse to the VM, right-click it, and click Power > Power on.
Verify that all the VMs are up and running.