October 23rd, 2022 20:00

EKS Anywhere, and the default storage class (VMware CSI/CNS)


dellambarhassani_1-1666579954463.png

This article is part of the EKS Anywhere series, extending the hybrid cloud momentum.

EKS Anywhere clusters, when deployed, ship with a default storage class based on VMware’s vSphere CNS/CSI (Cloud Native Storage). The main goal of CNS is to enable vSphere and vSphere storage, including vSAN, as a platform to run persistent, stateful Kubernetes workloads.

As a default choice, this is a viable option when customers consolidate all their resources on Hyper-Converged Infrastructure such as VxRail and do not wish to host separate physical infrastructure for external storage.

In such cases, the persistence requirements can be satisfied using the abstracted Cloud Native Storage provided by VMware’s CNS and CSI implementation.

dellambarhassani_2-1666580046479.png

Image credit: Feel the AWS Kubernetes love — Dell EMC adds EKS Anywhere to VxRail using VMware on-ramp — Blocks and Files

In a nutshell, if you have deployed EKS Anywhere on Hyper-Converged Infrastructure, you can start spinning up persistent volumes without installing vendor-specific CSI drivers and their associated dependencies.

Certain workloads, however, may need to consume an external storage array via iSCSI or NFS for high performance or IO-intensive scale. In addition, if you find the existing limitations in VMware CNS a nuisance, you still have the option to deploy non-CNS persistent volumes based on an individual vendor’s CSI implementation.

For now, let’s just observe the default shipped standard storage class:

kubectl get sc
NAME                 PROVISIONER              RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard (default)   csi.vsphere.vmware.com   Delete          Immediate           false                  7h32m

kubectl describe sc
Name:            standard
IsDefaultClass:  Yes
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"name":"standard"},"parameters":{"storagePolicyName":"vSAN Default Storage Policy"},"provisioner":"csi.vsphere.vmware.com"},storageclass.kubernetes.io/is-default-class=true
Provisioner:           csi.vsphere.vmware.com
Parameters:            storagePolicyName=vSAN Default Storage Policy
AllowVolumeExpansion:
MountOptions:
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:

As you can see from the above outputs, the storage class is associated with the vSAN Default Storage Policy. You can read more on it here: About the vSAN Default Storage Policy (vmware.com).
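If you want the default class name in a script, it can be parsed from this output. A minimal sketch, assuming the usual `(default)` marker that kubectl prints next to the default class; the sample below is trimmed from the output above, and in a live cluster you would pipe the real `kubectl get sc` output instead:

```shell
# Sample captured from "kubectl get sc" (trimmed).
sample='NAME                 PROVISIONER              RECLAIMPOLICY
standard (default)   csi.vsphere.vmware.com   Delete'

# Print the name of the class marked "(default)".
default_sc=$(printf '%s\n' "$sample" | awk '/\(default\)/{print $1}')
echo "$default_sc"   # prints: standard
```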

If you have a vSAN implementation, then nothing else is required to spin up the persistent volumes via the respective claims that are defined in your cluster.

VMFS Datastores (Non-VSAN)

Now, in my case, I do not have a vSAN datastore. In other words, I do not have a compatible datastore that can be associated with the “vSAN Default Storage Policy”. The VMware CNS CSI does support VMFS; however, this cannot be done via the “vSAN Default Storage Policy” referenced in the default shipped standard storage class.

So, what’s the resolution? There is no need to worry, as the same CNS CSI can be leveraged by altering the default storage class. Some additional configuration is required: we create a new storage policy in vCenter and associate it with the compatible datastore via a tag-based mechanism.

To do so, simply follow the procedure below.

To create a storage policy for local storage, apply a tag to the storage and create a storage policy based on the tag as follows:

  1. From the top-level vSphere menu, select Tags & Custom Attributes.
  2. In the Tags pane, select Categories and click New.
  3. Enter a category name, such as eksa. Use the checkboxes to associate it with Datacenter and the storage objects, Folder and Datastore. Click Create.
  4. From the top-level Storage view, select your VMFS volume, and in its Summary pane, click Tags > Assign.
  5. From the Assign Tag popup, click Add Tag.
  6. From the Create Tag popup, give the tag a name, such as eksa, and assign it the Category you created. Click OK.
  7. From Assign Tag, select the tag and click Assign.
  8. From top-level vSphere, select VM Storage Policies > Create a Storage Policy. A configuration wizard starts.
  9. In the Name and description pane, enter a name for your storage policy. Record the storage policy name for reference as the storagePolicyName value in StorageClass objects.
  10. In the Policy structure pane, under Datastore specific rules, select Enable tag-based placement rules.
  11. In the Tag based placement pane, click Add Tag Rule and configure:
  • Tag category: Select your category name
  • Usage option: Use storage tagged with
  • Tags: Browse and select your tag name

Confirm or accept defaults in the remaining panes as needed, then click Review and finish, followed by Finish, to create the storage policy.

The snapshots below validate the above procedure.

dellambarhassani_3-1666580440880.png

Click on New Category, enter the details as shown below, and save.

dellambarhassani_4-1666580483622.png

Switch to Tags and click on New. Add a tag as shown below, selecting the category created above.

dellambarhassani_5-1666580538853.png

dellambarhassani_6-1666580581508.png

Navigate to the top-level menu > Storage and select your VMFS datastore. In my case, it is called CommonDS.

dellambarhassani_7-1666580621286.png

Scroll down to the Tags section for the datastore and click on Assign. Select the tag created above and click on Assign to save.

dellambarhassani_8-1666580659479.png

Navigate through the top-level menu > Policies and Profiles > VM Storage Policies > Create VM Storage Policy.

dellambarhassani_9-1666580703558.png

Click Next, and under the policy structure select the option “Enable tag-based placement rules”.

dellambarhassani_10-1666580752403.png

Under Rule-1, select the category and browse for the tag that we created earlier.

dellambarhassani_11-1666580851706.png

The compatible datastore(s) are listed. In my case there is just one, “CommonDS”, so I click Next.

dellambarhassani_12-1666580897921.png

Review and Finish to complete the procedure at the vCenter end.

dellambarhassani_13-1666580943833.png

Once done, we move over to our EKS Anywhere administrative machine and set the kubectl context to target our testwk01 workload cluster.

source /home/ubuntu/eks-anywhere/cluster-ops/switch-cluster.sh
clusterName: testwk01

Next, we delete the default storage class:

kubectl delete sc standard

Then we apply a new YAML file to recreate the default storage class. This file, called vmfs-default-storage-class.yaml, is already placed inside the $HOME/eks-anywhere/vmfs-persistence sub-directory.

You can see it below, and notice the presence of storagePolicyName set to “eksa”. This is the same name we defined in the steps above. Now, every time a persistent volume claim is raised against the standard storage class, it will leverage the “eksa” storage policy in vSphere, which in turn is associated via the tag-based mechanism with my VMFS datastore named “CommonDS”.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: standard
parameters:
  storagePolicyName: eksa
provisioner: csi.vsphere.vmware.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
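If you manage several clusters with different vCenter storage policy names, the same manifest can be templated from the shell. A minimal sketch; the `render_storage_class` function is my own helper, not part of EKS Anywhere tooling:

```shell
# Hypothetical helper: emit the StorageClass manifest for a given
# vSphere storage policy name (mirrors the YAML above).
render_storage_class() {
  policy="$1"
  cat <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: standard
parameters:
  storagePolicyName: ${policy}
provisioner: csi.vsphere.vmware.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
EOF
}

# Render for the "eksa" policy; pipe to "kubectl create -f -" to apply.
render_storage_class eksa
```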

The above is just for reference. Execute the commands below to create the new standard storage class, which is also set as the default.

cd $HOME/eks-anywhere/vmfs-persistence
kubectl create -f vmfs-default-storage-class.yaml
storageclass.storage.k8s.io/standard created

kubectl describe sc standard
Name:                  standard
IsDefaultClass:        Yes
Annotations:           storageclass.kubernetes.io/is-default-class=true
Provisioner:           csi.vsphere.vmware.com
Parameters:            storagePolicyName=eksa
AllowVolumeExpansion:
MountOptions:
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:

Next, let’s create a persistent volume claim to test the default storage class against the VMFS datastore. A sample PVC YAML file named demo-pvc-vmfs.yaml has already been placed inside the same vmfs-persistence sub-directory.
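The contents of demo-pvc-vmfs.yaml are not reproduced in this post; based on the outputs that follow (2Gi capacity, ReadWriteOnce access mode, the standard class), a claim of the following shape would match. Treat it as a sketch rather than the exact file:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc-vmfs
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: standard
```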

cd $HOME/eks-anywhere/vmfs-persistence
kubectl create -f demo-pvc-vmfs.yaml
persistentvolumeclaim/demo-pvc-vmfs created

kubectl get pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
demo-pvc-vmfs   Bound    pvc-9908630b-dd17-41ce-991b-160d4f8622a6   2Gi        RWO            standard       4s

kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS   REASON   AGE
pvc-9908630b-dd17-41ce-991b-160d4f8622a6   2Gi        RWO            Delete           Bound    default/demo-pvc-vmfs   standard                24s

kubectl describe pvc
Name:          demo-pvc-vmfs
Namespace:     default
StorageClass:  standard
Status:        Bound
Volume:        pvc-9908630b-dd17-41ce-991b-160d4f8622a6
Labels:
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: csi.vsphere.vmware.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      2Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       demo-pod-vmfs-persistence
Events:
  Type    Reason                 Age                    From                                                                                                 Message
  ----    ------                 ----                   ----                                                                                                 -------
  Normal  ExternalProvisioning   6m36s (x2 over 6m36s)  persistentvolume-controller                                                                          waiting for a volume to be created, either by external provisioner "csi.vsphere.vmware.com" or manually created by system administrator
  Normal  Provisioning           6m36s                  csi.vsphere.vmware.com_vsphere-csi-controller-65d86cf8d7-4xfmw_b35a3648-9e49-45ed-973e-7b0876c010f5  External provisioner is provisioning volume for claim "default/demo-pvc-vmfs"
  Normal  ProvisioningSucceeded  6m34s                  csi.vsphere.vmware.com_vsphere-csi-controller-65d86cf8d7-4xfmw_b35a3648-9e49-45ed-973e-7b0876c010f5  Successfully provisioned volume pvc-9908630b-dd17-41ce-991b-160d4f8622a6

kubectl describe pv
Name:            pvc-9908630b-dd17-41ce-991b-160d4f8622a6
Labels:
Annotations:     pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com
Finalizers:      [kubernetes.io/pv-protection external-attacher/csi-vsphere-vmware-com]
StorageClass:    standard
Status:          Bound
Claim:           default/demo-pvc-vmfs
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        2Gi
Node Affinity:
Message:
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            csi.vsphere.vmware.com
    FSType:            ext4
    VolumeHandle:      3e530339-2b13-4226-aaa7-de58a0f18cbf
    ReadOnly:          false
    VolumeAttributes:  storage.kubernetes.io/csiProvisionerIdentity=1652072029164-8081-csi.vsphere.vmware.com
                       type=vSphere CNS Block Volume
Events:

As you can see, the persistent volume has been created through the standard (default) storage class, which is now based on the VMFS datastore. As the above output shows, the volume was created by the csi.vsphere.vmware.com provisioner.

You can use this persistent volume in a test pod by leveraging a manifest named demo-pod-vmfs-persistence.yaml, placed in the same vmfs-persistence sub-directory.

A snapshot of this file, referencing the persistent volume claim named demo-pvc-vmfs, is shown below.

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod-vmfs-persistence
spec:
  containers:
  - name: busybox
    image: "k8s.gcr.io/busybox"
    volumeMounts:
    - name: demo-vol
      mountPath: "/demo"
    command: [ "sleep", "1000000" ]
  volumes:
  - name: demo-vol
    persistentVolumeClaim:
      claimName: demo-pvc-vmfs

Simply apply the manifest for this demo pod:

cd $HOME/eks-anywhere/vmfs-persistence
kubectl apply -f demo-pod-vmfs-persistence.yaml

kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
demo-pod-vmfs-persistence   1/1     Running   0          26s

kubectl describe pod demo-pod-vmfs-persistence
Name:         demo-pod-vmfs-persistence
Namespace:    default
Priority:     0
Node:         testworkload01-md-0-79cc6b47bf-g6s7n/172.24.167.77
Start Time:   Mon, 09 May 2022 17:13:21 +0000
Labels:
Annotations:
Status:       Running
IP:           192.168.2.213
IPs:
  IP:  192.168.2.213
Containers:
  busybox:
    Container ID:  containerd://521d4fc2c018d80299b20bca10c0a48fb8fc44be8cb319cc6afedc9f176fdef7
    Image:         k8s.gcr.io/busybox
    Image ID:      sha256:36a4dca0fe6fb2a5133dc11a6c8907a97aea122613fa3e98be033959a0821a1f
    Port:
    Host Port:
    Command:
      sleep
      1000000
    State:          Running
      Started:      Mon, 09 May 2022 17:13:44 +0000
    Ready:          True
    Restart Count:  0
    Environment:
    Mounts:
      /demo from demo-vol (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k8nt5 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  demo-vol:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  demo-pvc-vmfs
    ReadOnly:   false
  kube-api-access-k8nt5:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:
    DownwardAPI:             true
QoS Class:       BestEffort
Node-Selectors:
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason                  Age    From                     Message
  ----    ------                  ----   ----                     -------
  Normal  Scheduled               6m52s  default-scheduler        Successfully assigned default/demo-pod-vmfs-persistence to testworkload01-md-0-79cc6b47bf-g6s7n
  Normal  SuccessfulAttachVolume  6m49s  attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-9908630b-dd17-41ce-991b-160d4f8622a6"
  Normal  Pulling                 6m34s  kubelet                  Pulling image "k8s.gcr.io/busybox"
  Normal  Pulled                  6m29s  kubelet                  Successfully pulled image "k8s.gcr.io/busybox" in 4.503541247s
  Normal  Created                 6m29s  kubelet                  Created container busybox
  Normal  Started                 6m29s  kubelet                  Started container busybox

As you can see from the Mounts section of the above output, the pod’s volume has been mounted via the persistent volume created earlier.

That’s it for now! Hopefully this brings insight into how the default shipped CSI and storage class are implemented in EKS Anywhere clusters, and how to steer them toward a VMFS-based datastore.

cheers

Ambar Hassani

#iwork4dell

