EKS Anywhere, Part-1 Dell EMC PowerStore CSI 2.2.0
This article is part of the series: EKS Anywhere, extending the Hybrid cloud momentum.
The article is a deep-dive and is hence split into two parts. This is Part-1 of the PowerStore CSI and EKS Anywhere article. In this article, we will explore the implementation, integration, and testing of the Dell PowerStore CSI driver with EKS Anywhere for an iSCSI-based use-case.
As introductory context, one can review Dell EMC's CSI coverage at CSI Drivers | Dell Technologies. In addition to the CSI drivers, Dell EMC has also developed CSM (Container Storage Modules), which extend the experience further by adding authentication, authorization, observability, replication, and more.
A short note on Dell PowerStore is in order; you can easily find more technical detail online. In summary, PowerStore is Dell Technologies' award-winning storage appliance designed for the data era. It provides customers with a data-centric, intelligent, and adaptable infrastructure solution. In tandem with its CSI support and a unified storage platform for Block, File, and vVols, it packs a strong punch for traditional and cloud-native persistent workloads.
Our use-case:
We will be leveraging an iSCSI-based implementation pattern, where the persistence layer for our stateful workload is provided by the Dell EMC PowerStore CSI driver. The visual below presents a high-level summary of the setup.
Goals:
- Implement PowerStore CSI drivers on EKS Anywhere cluster
- Implement snapshotting capabilities via external-snapshotter
- Deploy a MySQL workload with its persistence layer on the iSCSI-backed PowerStore CSI, plus a web front-end based on Adminer
- Test various use-cases around persistence, snapshotting (backup & restore)
Pre-requisites:
- EKS Administrative machine is set up as per the earlier article
- The Ubuntu OS template for the clusters has been created as per this article
- One can do this on a new or an existing cluster, in either standalone or dedicated management cluster mode
- At least one static IP address from the same range as the EKS Anywhere cluster network. This will be used to expose the Adminer web application via a load balancer, acting as a front-end for the MySQL database
- A PowerStore array configured appropriately to be used for persistent storage via iSCSI.
Let's begin
I am starting with a fresh standalone workload cluster (you can use an existing one or create a new one). My cluster's name will be eksa1 and the static IP for the API server will be 172.24.165.11.
CLUSTER_NAME=eksa1
API_SERVER_IP=172.24.165.11
cd $HOME
cp $HOME/eks-anywhere/cluster-samples/cluster-sample.yaml $CLUSTER_NAME-eks-a-cluster.yaml
sed -i "s/workload-cluster-name/$CLUSTER_NAME/g" $HOME/$CLUSTER_NAME-eks-a-cluster.yaml
sed -i "s/management-cluster-name/$CLUSTER_NAME/g" $HOME/$CLUSTER_NAME-eks-a-cluster.yaml
sed -i "s/api-server-ip/$API_SERVER_IP/g" $HOME/$CLUSTER_NAME-eks-a-cluster.yaml
eksctl anywhere create cluster -f $HOME/$CLUSTER_NAME-eks-a-cluster.yaml
The respective output is shown below as the cluster gets created
Once the cluster is created, we can observe the storage classes loaded. As one can see from the kubectl output below for my EKS Anywhere workload cluster, EKS Anywhere ships with the standard storage class, which is mapped to VMware's CNS CSI. This default CSI is also covered in the other article.
kubectl get storageclass
NAME                 PROVISIONER              RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard (default)   csi.vsphere.vmware.com   Delete          Immediate           false                  7d4h
Automating the entire CSI installation for PowerStore
A handy script was created and placed while the EKS Anywhere administrative machine was set up using Terraform. This script is called "install-powerstore-csi-driver.sh" and is located under the $HOME/eks-anywhere/powerstore sub-directory.
The script automates the entire process of CSI driver installation for iSCSI-based scenarios. In addition, it also creates the necessary storage class that can be used for persistent volumes rendered on PowerStore.
You will need to gather the following details for your PowerStore array before proceeding: IP address, username, password, and Global Array ID. Retrieve the globalID of the PowerStore array by logging into your PowerStore console > Settings > Properties.
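Under the hood, these parameters are consumed by the driver as an array-connection config. Purely as an illustrative sketch (the exact schema is defined by the csi-powerstore Helm chart for your driver version and may differ; the install script handles this for you), the details gathered above typically land in something similar to:

```yaml
# Illustrative sketch only -- exact keys are defined by the csi-powerstore
# chart for your driver version. Values mirror the script prompts above.
arrays:
  - endpoint: "https://172.24.185.106/api/rest"  # IP or FQDN of the array
    globalID: "PS4ebb8d4e8488"                   # Global Array Id
    username: "XXXXXXXXX"
    password: "XXXXXXXXX"
    skipCertificateValidation: true
    isDefault: true
    blockProtocol: "ISCSI"                       # our iSCSI-based use-case
```

This is why the script prompts for exactly these five values and nothing more.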
Once you have gathered the PowerStore parameters, SSH into the EKS Administrative machine and execute the below script
chmod +x $HOME/eks-anywhere/powerstore/*.sh
source $HOME/eks-anywhere/powerstore/install-powerstore-csi-driver.sh

Enter Cluster Name on which CSI driver needs to be installed
clusterName: eksa1
Enter IP or FQDN of the PowerStore array
ipOrFqdnOfPowerStoreArray: 172.24.185.106
Enter Global Id of the PowerStore Array
globalIdOfPowerStoreArray: PS4ebb8d4e8488
Enter username of the PowerStore Array
userNameOfPowerStoreArray: XXXXXXXXX
Enter password of the PowerStore Array
passwordOfPowerStoreArray: XXXXXXXXX
What follows is a series of logs and the last snippet is shown below, wherein the CSI driver is successfully installed and a storage class is created
------------------------------------------------------
> Installing CSI Driver: csi-powerstore on 1.21
------------------------------------------------------
------------------------------------------------------
> Checking to see if CSI Driver is already installed
------------------------------------------------------
Skipping verification at user request
|
|- Installing Driver                                            Success
|
|--> Waiting for Deployment powerstore-controller to be ready   Success
|
|--> Waiting for DaemonSet powerstore-node to be ready          Success
------------------------------------------------------
> Operation complete
------------------------------------------------------
storageclass.storage.k8s.io/powerstore-ext4 created
Time for observations
As part of the installation process, the script deploys the iSCSI packages and initiators on the cluster nodes. It also deploys the external-snapshotter resources along with a Helm chart that creates the below resources:
- Deployment named powerstore-controller (2 pods with 5 containers each)
- DaemonSet named powerstore-node (2 pods with 2 containers each)
Finally, it creates a storageclass
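For reference, a storage class equivalent to the one the script creates would look roughly like the following. This is a sketch, not the script's exact manifest: the provisioner, reclaim policy, binding mode, and expansion flag match the kubectl output further below, while the parameters block (arrayID, fstype) is an assumption based on the values gathered earlier.

```yaml
# Sketch of the powerstore-ext4 storage class created by the script.
# Parameter names may vary slightly across csi-powerstore versions.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: powerstore-ext4
provisioner: csi-powerstore.dellemc.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
  arrayID: "PS4ebb8d4e8488"           # assumed: Global Array Id entered earlier
  csi.storage.k8s.io/fstype: "ext4"   # assumed: gives the class its -ext4 suffix
```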
One can use the below commands for a detailed look:
# CSI POWERSTORE PODS
kubectl get pods -n csi-powerstore
NAME                                     READY   STATUS    RESTARTS   AGE
powerstore-controller-75b885cb6b-lc8ql   5/5     Running   0          5h47m
powerstore-controller-75b885cb6b-wt4fm   5/5     Running   0          5h47m
powerstore-node-bfl2q                    2/2     Running   0          5h47m
powerstore-node-p87jd                    2/2     Running   0          5h47m

# STORAGE CLASS
kubectl get sc
NAME                 PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
powerstore-ext4      csi-powerstore.dellemc.com   Delete          Immediate           true                   5h46m
standard (default)   csi.vsphere.vmware.com       Delete          Immediate           false                  5h59m

# EXTERNAL SNAPSHOTTER
kubectl get pods -n kube-system -l app=snapshot-controller
NAME                                   READY   STATUS    RESTARTS   AGE
snapshot-controller-75fd799dc8-77ttw   1/1     Running   0          5h52m
snapshot-controller-75fd799dc8-jkqgk   1/1     Running   0          5h52m
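As a quick smoke test ahead of the full MySQL deployment in Part-2, one can create a small PVC against the new storage class and confirm that a volume gets provisioned on the array. This manifest is a minimal sketch of my own (the name pstore-smoke-test is arbitrary), not something the installation script creates:

```yaml
# Minimal PVC sketch to verify dynamic provisioning via powerstore-ext4.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pstore-smoke-test   # hypothetical name for this test
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: powerstore-ext4
  resources:
    requests:
      storage: 1Gi
```

Apply it with kubectl apply -f, and kubectl get pvc should show the claim reach the Bound state, with a matching volume visible in the PowerStore console. Delete the PVC afterwards to clean up.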
We can also view our EKS Anywhere cluster nodes registered in the PowerStore console. Notice that the node names are prefixed with eksa so that they are easy to identify (the prefix is defined as a custom value inside the installation script).
One can further inspect detailed information by running the below commands.
# POWERSTORE CONTROLLER PODS
# Observe the logs of the controller pods for powerstore CSI
pstorecontrollerpod1=$(kubectl get pods --selector=name=powerstore-controller -n csi-powerstore -o=jsonpath='{.items[0].metadata.name}')
pstorecontrollerpod2=$(kubectl get pods --selector=name=powerstore-controller -n csi-powerstore -o=jsonpath='{.items[1].metadata.name}')
kubectl logs $pstorecontrollerpod1 -n csi-powerstore -c driver
kubectl logs $pstorecontrollerpod2 -n csi-powerstore -c driver

# You can also run the logs command with -c set to any of the below
# mentioned 5 containers:
#   attacher, provisioner, snapshotter, resizer, driver
# Alternatively, one can describe the deployment or the pods:
kubectl describe pod $pstorecontrollerpod1 -n csi-powerstore
kubectl describe pod $pstorecontrollerpod2 -n csi-powerstore

# POWERSTORE NODE PODS
pstorenodepod1=$(kubectl get pods --selector=app=powerstore-node -n csi-powerstore -o=jsonpath='{.items[0].metadata.name}')
pstorenodepod2=$(kubectl get pods --selector=app=powerstore-node -n csi-powerstore -o=jsonpath='{.items[1].metadata.name}')
kubectl logs $pstorenodepod1 -n csi-powerstore -c driver
kubectl logs $pstorenodepod2 -n csi-powerstore -c driver

# EXTERNAL SNAPSHOTTER PODS
kubectl get pods -n kube-system -l app=snapshot-controller
The below snippet is an interesting look at how host registration occurs for the EKS Anywhere cluster nodes over iSCSI:
kubectl logs $pstorecontrollerpod1 -n csi-powerstore -c driver
{"level":"info","msg":"iSCSI Protocol is requested","time":"2022-05-25T04:45:03.244839368Z"}
{"level":"info","msg":"setting up host on 172.24.185.106","time":"2022-05-25T04:45:03.244846314Z"}
{"level":"debug","msg":"REQUEST: GET /api/rest/host?name=eq.eksa-node-7257e0d1896f409d9a35c7c09e38eeab-172.24.167.33\u0026select=id%2Cname%2Cdescription%2Chost_group_id%2Cos_type%2Chost_initiators HTTP/1.1 Host: 172.24.185.106 Application-Type: CSI Driver for Dell EMC PowerStore/2.2.0 Authorization: ****** ","time":"2022-05-25T04:45:03.245021954Z"}
{"level":"debug","msg":"acquire a lock","time":"2022-05-25T04:45:03.245044727Z"}
{"level":"debug","msg":"RESPONSE: HTTP/1.1 200 OK Content-Length: 2 Cache-Control: no-cache Cache-Control: no-store Cache-Control: must-revalidate Cache-Control: max-age=0 Content-Language: en-US Content-Type: application/json Dell-Emc-Token: ****** Expires: -1 Set-Cookie: auth_cookie=******; Path=/; Secure; HTTPOnly X-Content-Type-Options: nosniff []\n","time":"2022-05-25T04:45:03.30190787Z"}
{"level":"debug","msg":"release a lock","time":"2022-05-25T04:45:03.302284994Z"}
{"level":"debug","msg":"REQUEST: GET /api/rest/host?limit=1000\u0026offset=0\u0026order=name\u0026select=id%2Cname%2Cdescription%2Chost_group_id%2Cos_type%2Chost_initiators HTTP/1.1 Host: 172.24.185.106 Application-Type: CSI Driver for Dell EMC PowerStore/2.2.0 Authorization: ****** ","time":"2022-05-25T04:45:03.302521065Z"}
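Since the driver emits one JSON object per log line, a simple grep can narrow the output down to the host-registration activity. A minimal sketch follows; the sample lines are taken from the excerpt above, and in practice you would pipe the output of the kubectl logs command into the same grep:

```shell
# Sample driver log lines (stand-in for: kubectl logs $pstorecontrollerpod1
# -n csi-powerstore -c driver). In practice, pipe kubectl logs into grep.
cat > driver.log <<'EOF'
{"level":"info","msg":"iSCSI Protocol is requested","time":"2022-05-25T04:45:03.244839368Z"}
{"level":"debug","msg":"acquire a lock","time":"2022-05-25T04:45:03.245044727Z"}
{"level":"info","msg":"setting up host on 172.24.185.106","time":"2022-05-25T04:45:03.244846314Z"}
EOF

# Keep only the host-setup / protocol-selection messages.
grep -E '"msg":"(setting up host|iSCSI Protocol)' driver.log
```

The msg prefixes in the grep pattern are taken verbatim from the log excerpt above; adjust them to taste for other driver events.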
OK, we are all done implementing the PowerStore CSI on our EKS Anywhere cluster. With this step completed, you can move on to Part-2, where we will deploy and test use-cases leveraging the PowerStore CSI.
cheers,
Ambar Hassani
#iwork4dell