
EKS Anywhere, Part-2 Dell EMC PowerStore CSI 2.2.0



This article is part of the EKS Anywhere series, extending the hybrid cloud momentum.

Recall our use case from Part-1 of this article. We will be leveraging an iSCSI-based implementation pattern, where the persistence layer for our stateful workload is implemented over the Dell EMC PowerStore CSI driver. The visual below provides a high-level summary.

[Image: high-level summary of the iSCSI-based persistence pattern]

The below visual explains the end-to-end approach, since we will be validating quite a few scenarios for persistence testing.

[Image: end-to-end validation approach for the persistence scenarios]
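Before deploying anything, it can help to confirm that the iSCSI data path is healthy. The commands below are standard open-iscsi tooling and are an optional sanity check, not part of the original walkthrough; they assume SSH access to an EKS Anywhere worker node with the iSCSI initiator packages installed.

# On a worker node: the initiator daemon should be active
sudo systemctl status iscsid

# List the active iSCSI sessions (the PowerStore targets appear once volumes are attached)
sudo iscsiadm -m session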

Let’s begin the deployment of the sample MySQL workload that will persist over CSI volumes via iSCSI on the Dell PowerStore array.

Step-1 Deploy Persistent Volume based on PowerStore CSI

In Part-1, we already implemented a storage class named powerstore-ext4 in our EKS Anywhere cluster. We will leverage this storage class to create a persistent volume.
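The pvc.yaml manifest itself is not reproduced in this post. As a reference, here is a minimal sketch of a claim against the powerstore-ext4 storage class; the name and labels are assumptions inferred from the selectors and outputs used later in this article.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim-powerstore-ext4   # assumed, matches the outputs below
  labels:
    csi: powerstore-ext4
    name: mysql-pv-claim-powerstore-ext4 # the label used by the --selector queries below
spec:
  storageClassName: powerstore-ext4
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi                       # matches the 8Gi capacity shown below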

 

# CREATE THE PERSISTENT VOLUME CLAIM
kubectl create -f $HOME/eks-anywhere/mysql/standalone/powerstore-iscsi/pvc.yaml

Once applied, we can see a new persistent volume being created with the storage class powerstore-ext4. Interestingly, the volume name is also prefixed with our cluster name. Recall that we set up the prefixes in the my-powerstore-settings.yaml file (Part-1).

# OBSERVE THE CREATED PVC AND PV
mysqlpvc=$(kubectl get pvc --selector=name=mysql-pv-claim-powerstore-ext4 -o=jsonpath='{.items[0].metadata.name}')
mysqlpv=$(kubectl get pvc --selector=name=mysql-pv-claim-powerstore-ext4 -o=jsonpath='{.items[0].spec.volumeName}')

kubectl get pvc $mysqlpvc
NAME                             STATUS   VOLUME                 CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mysql-pv-claim-powerstore-ext4   Bound    eksa1-vol-37f85727b0   8Gi        RWO            powerstore-ext4   118m

kubectl get pv $mysqlpv
NAME                   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                    STORAGECLASS      REASON   AGE
eksa1-vol-37f85727b0   8Gi        RWO            Delete           Bound    default/mysql-pv-claim-powerstore-ext4   powerstore-ext4            118m

kubectl describe pvc $mysqlpvc
Name:          mysql-pv-claim-powerstore-ext4
Namespace:     default
StorageClass:  powerstore-ext4
Status:        Bound
Volume:        eksa1-vol-37f85727b0
Labels:        csi=powerstore-ext4
               name=mysql-pv-claim-powerstore-ext4
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: csi-powerstore.dellemc.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      8Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:
Events:

kubectl describe pv $mysqlpv
Name:            eksa1-vol-37f85727b0
Labels:
Annotations:     pv.kubernetes.io/provisioned-by: csi-powerstore.dellemc.com
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    powerstore-ext4
Status:          Bound
Claim:           default/mysql-pv-claim-powerstore-ext4
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        8Gi
Node Affinity:
  Required Terms:
    Term 0:  csi-powerstore.dellemc.com/172.24.185.106-nfs in [true]
             csi-powerstore.dellemc.com/172.24.185.106-iscsi in [true]
Message:
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            csi-powerstore.dellemc.com
    FSType:            ext4
    VolumeHandle:      4c9b7271-ee18-4bb0-b904-279770e7d2d2/PS4ebb8d4e8488/scsi
    ReadOnly:          false
    VolumeAttributes:  Name=eksa1-vol-37f85727b0
                       Protocol=scsi
                       arrayID=PS4ebb8d4e8488
                       csi.storage.k8s.io/pv/name=eksa1-vol-37f85727b0
                       csi.storage.k8s.io/pvc/name=mysql-pv-claim-powerstore-ext4
                       csi.storage.k8s.io/pvc/namespace=default
                       storage.kubernetes.io/csiProvisionerIdentity=1656399266529-8081-csi-powerstore.dellemc.com
Events:

 

Now that the persistent volume has been created, we can verify the same in our PowerStore console. Note that our volume can be easily identified by the cluster name prefixed to the volume name.

In addition, you can observe the Host Mapping is still at 0 since no pod is using the volume.

[Image: PowerStore console showing the new volume with Host Mapping 0]
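On the Kubernetes side, a roughly equivalent check (an optional kubectl view, not from the original post) is to list VolumeAttachment objects; at this point none should reference our PV, mirroring the Host Mapping of 0.

# No pod consumes the PVC yet, so no VolumeAttachment should reference eksa1-vol-37f85727b0
kubectl get volumeattachment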

Step-2 Deploy a sample MySQL workload with persistent volume

With the above persistent volume in place, we can now define a sample persistent workload and exercise various operations around persistence, backup and recovery, etc.

 

# CREATE THE MYSQL DEPLOYMENT AND SERVICE
kubectl create -f $HOME/eks-anywhere/mysql/standalone/powerstore-iscsi/mysql.yaml
service/mysql created
deployment.apps/mysql created

# OBSERVE THE MYSQL DEPLOYMENT
kubectl get pods --selector=app=mysql
NAME                     READY   STATUS    RESTARTS   AGE
mysql-59bd49b967-zhcsm   1/1     Running   0          2m8s

Once the pod is in Running status, we can see that the persistent volume has been successfully mounted via the claim name defined in the mysql yaml file.

kubectl describe pods --selector=app=mysql
Events:
  Type    Reason                  Age    From                     Message
  ----    ------                  ----   ----                     -------
  Normal  Scheduled               3m10s  default-scheduler        Successfully assigned default/mysql-59bd49b967-zhcsm to eksa1-md-0-d684f49cd-xqj74
  Normal  SuccessfulAttachVolume  3m9s   attachdetach-controller  AttachVolume.Attach succeeded for volume "eksa1-vol-37f85727b0"
  Normal  Pulling                 2m57s  kubelet                  Pulling image "mysql:5.6"
  Normal  Pulled                  2m39s  kubelet                  Successfully pulled image "mysql:5.6" in 18.950450539s
  Normal  Created                 2m38s  kubelet                  Created container mysql
  Normal  Started                 2m37s  kubelet                  Started container mysql
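mysql.yaml is not shown in full in this post. The sketch below illustrates the shape such a manifest could take, assuming the standard Kubernetes single-instance MySQL pattern; only the claimName reference at the end is load-bearing for this article, as it is the line we will edit later in the restore scenario.

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  selector:
    app: mysql
  ports:
    - port: 3306
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.6
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password                  # matches the adminer login used below
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql        # MySQL data lives on the CSI volume
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim-powerstore-ext4   # the claimName edited in Step-5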

 

At the same time, we can observe that the Host Mapping against our volume in the PowerStore console has changed from 0 to 1.

[Image: PowerStore console showing Host Mapping changed from 0 to 1]

By clicking on the Host Mapping value, we can see that the volume is mapped to the respective node.

[Image: host mapping details for the worker node]

Step-3 Create MetalLB load balancer

The MetalLB load balancer will be required to front-end the adminer web client, through which we will manage the MySQL database. As a prerequisite, we should have collected at least one static IP address for the exposed load-balanced service of the adminer pod.

 

Keep the static IP as mentioned in the pre-requisites handy! In my case, the static IP range is 172.24.165.42 to 172.24.165.42, i.e. just a single IP for testing purposes. You can define a larger range per your comfort.

helm upgrade --install --wait --timeout 15m \
  --namespace metallb-system --create-namespace \
  --repo https://metallb.github.io/metallb metallb metallb

Once installed, MetalLB is configured via its Custom Resources; please refer to the MetalLB official docs on how to use the CRs. Next, we use the Custom Resources to create the IP address pool and advertise it (the CR bodies below are reconstructed to match the resource names in the creation logs that follow):

cat <<EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 172.24.165.42-172.24.165.42
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
EOF

 

You should see a flurry of logs similar to the below snip

 

 Release "metallb" does not exist. Installing it now. W0711 06:55:51.507363 23991 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ W0711 06:55:51.510195 23991 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ W0711 06:55:51.606228 23991 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ W0711 06:55:51.607914 23991 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ NAME: metallb LAST DEPLOYED: Mon Jul 11 06:55:48 2022 NAMESPACE: metallb-system STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: MetalLB is now running in the cluster. ipaddresspool.metallb.io/first-pool created l2advertisement.metallb.io/example created # Verify if Metal LB is running properly kubectl get pods -n metallb-system NAME READY STATUS RESTARTS AGE metallb-controller-69bbb4669c-jtzks 1/1 Running 0 4m3s metallb-speaker-7xn5n 1/1 Running 0 4m3s metallb-speaker-gz5bw 1/1 Running 0 4m3s metallb-speaker-n4llr 1/1 Running 0 4m3s metallb-speaker-wgwwv 1/1 Running 0 4m3s 

 

Step-4 Deploy Adminer application

The adminer application is a web-based front end for the MySQL database. It runs as a PHP web application, and we will use it to create and update our database in MySQL for persistence testing.
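The adminer manifests are not reproduced in this post either. A minimal sketch of what adminer-deployment.yaml and adminer-service.yaml could contain is shown below; the image name is an assumption (the public adminer image from Docker Hub, which listens on port 8080), and the Service type LoadBalancer is what triggers MetalLB to allocate the external IP seen in the outputs that follow.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: adminer
spec:
  selector:
    matchLabels:
      app: adminer
  template:
    metadata:
      labels:
        app: adminer
    spec:
      containers:
        - name: adminer
          image: adminer          # assumed public image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: adminer
  labels:
    app: adminer
spec:
  type: LoadBalancer              # MetalLB assigns the external IP
  selector:
    app: adminer
  ports:
    - port: 80
      targetPort: 8080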

 

# CREATE THE ADMINER DEPLOYMENT AND SERVICE
kubectl create -f $HOME/eks-anywhere/adminer/adminer-deployment.yaml
kubectl create -f $HOME/eks-anywhere/adminer/adminer-service.yaml

Give it some time, and MetalLB will perform the background magic to finally allocate the external static IP for the LoadBalancer service.

kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
adminer-5f64885f68-dww7g   1/1     Running   0          13m
mysql-59bd49b967-zhcsm     1/1     Running   0          36m

kubectl get svc --selector=app=adminer
NAME      TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
adminer   LoadBalancer   10.107.80.186   172.24.165.42   80:31209/TCP   80s

 

As you can see, the External IP supplied by MetalLB for the adminer application is 172.24.165.42. Open the browser, hit the External IP as seen in the adminer service, and enter the following values:

  • server: mysql
  • username: root
  • password: password

[Image: adminer login screen]

Click on Create Database and we will name the DB as csi-powerstore-test

[Images: creating the csi-powerstore-test database in adminer]

Click on SQL command and paste the statement below to populate csi-powerstore-test with sample data. This will result in a table named “car” populated with dummy data.

 

SET SQL_MODE = "NO_AUTO_VALUE_ON_ZERO";
START TRANSACTION;
SET time_zone = "+00:00";

/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8mb4 */;

CREATE TABLE `car` (
  `id` int(11) NOT NULL,
  `type` text NOT NULL,
  `country` text NOT NULL,
  `manufacturer` text NOT NULL,
  `create_date` date NOT NULL,
  `model` text NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

INSERT INTO `car` (`id`, `type`, `country`, `manufacturer`, `create_date`, `model`) VALUES
(1, 'Small', 'Japon', 'Acura', '1931-02-01', 'Integra'),
(2, 'Midsize', 'Japon', 'Acura', '1959-07-30', 'Legend'),
(3, 'Compact', 'Germany', 'Audi', '1970-07-30', '90'),
(4, 'Midsize', 'Germany', 'Audi', '1963-10-04', '100'),
(5, 'Midsize', 'Germany', 'BMW', '1931-09-08', '535i'),
(6, 'Midsize', 'USA', 'Buick', '1957-02-20', 'Century'),
(7, 'Large', 'USA', 'Buick', '1968-10-23', 'LeSabre'),
(8, 'Large', 'USA', 'Buick', '1970-08-17', 'Roadmaster'),
(9, 'Midsize', 'USA', 'Buick', '1962-08-02', 'Riviera'),
(10, 'Large', 'USA', 'Cadillac', '1956-12-01', 'DeVille'),
(11, 'Midsize', 'USA', 'Cadillac', '1957-07-30', 'Seville'),
(12, 'Compact', 'USA', 'Chevrolet', '1952-06-18', 'Cavalier'),
(13, 'Compact', 'USA', 'Chevrolet', '1947-06-26', 'Corsica'),
(14, 'Sporty', 'USA', 'Chevrolet', '1940-05-27', 'Camaro'),
(15, 'Midsize', 'USA', 'Chevrolet', '1949-02-21', 'Lumina'),
(16, 'Van', 'USA', 'Chevrolet', '1944-11-02', 'Lumina_APV'),
(17, 'Van', 'USA', 'Chevrolet', '1962-06-07', 'Astro'),
(18, 'Large', 'USA', 'Chevrolet', '1951-01-11', 'Caprice'),
(19, 'Sporty', 'USA', 'Chevrolet', '1966-11-01', 'Corvette'),
(20, 'Large', 'USA', 'Chrysler', '1964-07-10', 'Concorde'),
(21, 'Compact', 'USA', 'Chrysler', '1938-05-06', 'LeBaron'),
(22, 'Large', 'USA', 'Chrysler', '1960-07-07', 'Imperial'),
(23, 'Small', 'USA', 'Dodge', '1943-06-02', 'Colt'),
(24, 'Small', 'USA', 'Dodge', '1934-02-27', 'Shadow'),
(25, 'Compact', 'USA', 'Dodge', '1932-02-26', 'Spirit'),
(26, 'Van', 'USA', 'Dodge', '1946-06-12', 'Caravan'),
(27, 'Midsize', 'USA', 'Dodge', '1928-03-02', 'Dynasty'),
(28, 'Sporty', 'USA', 'Dodge', '1966-05-20', 'Stealth'),
(29, 'Small', 'USA', 'Eagle', '1941-05-12', 'Summit'),
(30, 'Large', 'USA', 'Eagle', '1963-09-17', 'Vision'),
(31, 'Small', 'USA', 'Ford', '1964-10-22', 'Festiva'),
(32, 'Small', 'USA', 'Ford', '1930-12-02', 'Escort'),
(33, 'Compact', 'USA', 'Ford', '1950-04-19', 'Tempo'),
(34, 'Sporty', 'USA', 'Ford', '1940-06-18', 'Mustang'),
(35, 'Sporty', 'USA', 'Ford', '1941-05-24', 'Probe'),
(36, 'Van', 'USA', 'Ford', '1935-01-27', 'Aerostar'),
(37, 'Midsize', 'USA', 'Ford', '1947-10-08', 'Taurus'),
(38, 'Large', 'USA', 'Ford', '1962-02-28', 'Crown_Victoria'),
(39, 'Small', 'USA', 'Geo', '1965-10-30', 'Metro'),
(40, 'Sporty', 'USA', 'Geo', '1955-07-07', 'Storm'),
(41, 'Sporty', 'Japon', 'Honda', '1955-06-08', 'Prelude'),
(42, 'Small', 'Japon', 'Honda', '1967-09-16', 'Civic'),
(43, 'Compact', 'Japon', 'Honda', '1938-06-26', 'Accord'),
(44, 'Small', 'South Korea', 'Hyundai', '1940-02-25', 'Excel');

ALTER TABLE `car` ADD PRIMARY KEY (`id`);
ALTER TABLE `car` MODIFY `id` int(11) NOT NULL AUTO_INCREMENT, AUTO_INCREMENT=45;
COMMIT;

/*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
/*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
/*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;

 

[Image: SQL statement pasted into the adminer SQL command window]

And hit the EXECUTE button.

[Image: adminer showing the executed SQL result]

Once done, the csi-powerstore-test database is populated, and we can click on the top blue-ribbon link for csi-powerstore-test to view the table “car”. Click on the table “car” and then select the data.

[Image: table “car” data in adminer]

At this stage, we have a populated MySQL database running as a pod with a persistent volume on the EKS Anywhere cluster, backed by the PowerStore CSI over iSCSI.
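If you prefer a CLI spot check over the adminer UI, you can also query the database directly from the pod. This is an optional verification, not part of the original walkthrough; it assumes the root password "password" used for the adminer login.

# Count the rows in the car table from inside the MySQL pod; expect 44
kubectl exec deploy/mysql -- \
  mysql -uroot -ppassword -D csi-powerstore-test -e 'SELECT COUNT(*) FROM car;'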

Step-5 Testing persistence for various scenarios

MySQL Pod Deletion: Herein, we will delete the MySQL pod and verify reattachment of the persistent volume on the restored pod.

 

To test the persistence, delete the mysql pod. Kubernetes will restore the pod, and our persistent volume keeps the data intact.

kubectl delete pod --selector=app=mysql
pod "mysql-59bd49b967-zhcsm" deleted

kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
adminer-5f64885f68-dww7g   1/1     Running   0          18m
mysql-59bd49b967-jhxhx     1/1     Running   0          55s

 

Close the browser session for the adminer application. Open a new session via the External IP, and verify whether the table “car” within the csi-powerstore-test database that we populated earlier still exists with all the entries.

Follow the navigation to database csi-powerstore-test, table car and select data

[Image: table “car” data intact after pod deletion]

We can confirm that our restored MySQL pod has successfully remounted the persistent volume by looking at the data.

MySQL Deployment Deletion: We will delete the deployment mysql itself and not just the pod.

 

We will delete the mysql deployment itself and not just the pod. Note that deleting the deployment has no effect on the persistent volume claim, as PVCs have an independent lifecycle outside of deployments. Also note that our persistent volume is set up with a reclaim policy of Delete, so unless we delete the PVC itself, our PV should remain intact, and upon recreation of the mysql deployment the re-attachment should happen automatically with the underlying data present (table car in the csi-powerstore-test db).

kubectl delete -f $HOME/eks-anywhere/mysql/standalone/powerstore-iscsi/mysql.yaml
service "mysql" deleted
deployment.apps "mysql" deleted

mysqlpvc=$(kubectl get pvc --selector=name=mysql-pv-claim-powerstore-ext4 -o=jsonpath='{.items[0].metadata.name}')
mysqlpv=$(kubectl get pvc --selector=name=mysql-pv-claim-powerstore-ext4 -o=jsonpath='{.items[0].spec.volumeName}')

kubectl get pvc $mysqlpvc
NAME                             STATUS   VOLUME                 CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mysql-pv-claim-powerstore-ext4   Bound    eksa1-vol-37f85727b0   8Gi        RWO            powerstore-ext4   171m

The volume is still intact. Let's recreate the MySQL instance.

kubectl create -f $HOME/eks-anywhere/mysql/standalone/powerstore-iscsi/mysql.yaml
service/mysql created
deployment.apps/mysql created

kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
adminer-5f64885f68-dww7g   1/1     Running   0          23m
mysql-59bd49b967-c8g52     1/1     Running   0          12s

kubectl describe pod --selector=app=mysql
Events:
  Type    Reason                  Age   From                     Message
  ----    ------                  ----  ----                     -------
  Normal  Scheduled               49s   default-scheduler        Successfully assigned default/mysql-59bd49b967-c8g52 to eksa1-md-0-d684f49cd-xqj74
  Normal  SuccessfulAttachVolume  49s   attachdetach-controller  AttachVolume.Attach succeeded for volume "eksa1-vol-37f85727b0"
  Normal  Pulled                  43s   kubelet                  Container image "mysql:5.6" already present on machine
  Normal  Created                 43s   kubelet                  Created container mysql
  Normal  Started                 43s   kubelet                  Started container mysql

Our volume has been successfully attached to the new MySQL deployment. You can repeat the validation via the adminer interface to confirm that the database csi-powerstore-test, the table "car" and the data itself still exist.

 

Volume Snapshots: Next, we move on to the snapshot capabilities introduced in Kubernetes via the external-snapshotter project (now GA). In this scenario, we will observe the creation of snapshots for the persistent volume created above.
Before we begin, let’s understand some key terms that are used in combination to create the snapshots:

  • VolumeSnapshotClass (storage class for creating snapshots)
  • VolumeSnapshot (Snapshots that will target the above snapshot class)
  • VolumeSnapshotContent (The actual snapshot content)

The above concepts are explained in further detail at Volume Snapshots | Kubernetes.
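Note that volume snapshots require the external-snapshotter CRDs and the snapshot controller to be present in the cluster. As an optional pre-flight check (standard kubectl, not from the original post):

# The three snapshot CRDs should be installed
kubectl get crd | grep snapshot.storage.k8s.io

# The snapshot controller should be running (its namespace/name may vary by installer)
kubectl get pods -A | grep snapshot-controller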

And now on to the important files that are used for creating these snapshots

  • powerstore-ext4-iscsi-snap-class.yaml (template to create the volume snapshot class)
  • snapshot-sample.yaml (template for volume snapshots)
  • create-snapshot.sh (handy script to create unique datetime based snapshots)

Volume Snapshot Class

 

# LET'S CREATE THE VOLUME SNAPSHOT CLASS
# First, let's observe the snapshot class file
more $HOME/eks-anywhere/powerstore/powerstore-ext4-iscsi-snap-class.yaml

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: powerstore-ext4-iscsi-snapclass
driver: csi-powerstore.dellemc.com
# Configure what happens to a VolumeSnapshotContent when the VolumeSnapshot object
# it is bound to is to be deleted
# Allowed values:
#   Delete: the underlying storage snapshot will be deleted along with the VolumeSnapshotContent object.
#   Retain: both the underlying snapshot and VolumeSnapshotContent remain.
# Default value: None
# Optional: false
# Examples: Delete
deletionPolicy: Delete

As you can see, this volume snapshot class leverages the PowerStore CSI driver to provision all the snapshots.

kubectl create -f $HOME/eks-anywhere/powerstore/powerstore-ext4-iscsi-snap-class.yaml
volumesnapshotclass.snapshot.storage.k8s.io/powerstore-ext4-iscsi-snapclass created

kubectl get volumesnapshotclass --field-selector \
metadata.name=powerstore-ext4-iscsi-snapclass
NAME                              DRIVER                       DELETIONPOLICY   AGE
powerstore-ext4-iscsi-snapclass   csi-powerstore.dellemc.com   Delete           6m46s

The command below helps in describing the volumesnapshotclass. Note that kubectl describe does not take --field-selector as an argument, so we use a handy variable to extract the name and pass it to kubectl describe.

volumesnapshotclass=$(kubectl get volumesnapshotclass --field-selector \
metadata.name=powerstore-ext4-iscsi-snapclass -o=jsonpath='{.items[0].metadata.name}')

kubectl describe volumesnapshotclass $volumesnapshotclass
Name:             powerstore-ext4-iscsi-snapclass
Namespace:
Labels:
Annotations:
API Version:      snapshot.storage.k8s.io/v1
Deletion Policy:  Delete
Driver:           csi-powerstore.dellemc.com
Kind:             VolumeSnapshotClass
Metadata:
  Creation Timestamp:  2022-06-29T05:42:19Z
  Generation:          1
  Managed Fields:
    API Version:  snapshot.storage.k8s.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:deletionPolicy:
      f:driver:
    Manager:         kubectl-create
    Operation:       Update
    Time:            2022-06-29T05:42:19Z
  Resource Version:  909939
  UID:               f15f8a65-1103-42d5-9df2-c5dc4dbfb46b
Events:

At this stage, we have the VolumeSnapshotClass created that can be used to create PowerStore CSI based snapshots.

 

Volume Snapshots and Snapshot contents

It is also important to understand the interaction of the below mentioned files, which are used to create the snapshots

 

more $HOME/eks-anywhere/mysql/standalone/powerstore-iscsi/snapshot-sample.yaml

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mysql-snapshot-powerstore-ext4-iscsi-datetime
  labels:
    name: mysql-snapshot-powerstore-ext4-iscsi-datetime
spec:
  volumeSnapshotClassName: powerstore-ext4-iscsi-snapclass
  source:
    persistentVolumeClaimName: mysql-pv-claim-powerstore-ext4

# The datetime text is automatically replaced by the below script to create unique snapshots
# The template has two references:
#   a) The volume snapshot class that will be used to create the snapshots
#   b) The persistent volume claim with which the snapshot is associated

more $HOME/eks-anywhere/mysql/standalone/powerstore-iscsi/create-snapshot.sh

#!/bin/bash
NOW=$(date "+%Y%m%d%H%M%S")
rm -rf snapshot.yaml
cp snapshot-sample.yaml snapshot.yaml
sed -i "s/datetime/$NOW/g" snapshot.yaml
kubectl create -f snapshot.yaml

As you can see, the above script inserts a unique datetime reference into the snapshot template and then uses kubectl to create a uniquely named snapshot.

 

Next, let’s begin creating the baseline snapshot. Note that at this stage, we have a database called csi-powerstore-test with a table car populated with dummy data.

[Image: baseline data in adminer before the snapshot]

We will create the first baseline snapshot such that all the existing data is preserved in the VolumeSnapshotContent object via the VolumeSnapshot

 

# Navigate to the correct directory and make the script executable
chmod +x $HOME/eks-anywhere/mysql/standalone/powerstore-iscsi/create-snapshot.sh

Next, we will execute the below script three to four times so that it creates multiple snapshots at different time intervals.

source $HOME/eks-anywhere/mysql/standalone/powerstore-iscsi/create-snapshot.sh

# Declare a variable to find all snapshots whose name includes the string mysql-snapshot-powerstore-ext4-iscsi
volumesnapshotlist=$(kubectl get volumesnapshot --no-headers=true | awk '/mysql-snapshot-powerstore-ext4-iscsi/{print $1}' | xargs)

# Retrieve a nice table of all snapshots created by the above script
kubectl get volumesnapshot $volumesnapshotlist | awk {'print $1" " $3'} | column -t
NAME                                                 SOURCEPVC
mysql-snapshot-powerstore-ext4-iscsi-20220629070207  mysql-pv-claim-powerstore-ext4
mysql-snapshot-powerstore-ext4-iscsi-20220629092634  mysql-pv-claim-powerstore-ext4
mysql-snapshot-powerstore-ext4-iscsi-20220629092637  mysql-pv-claim-powerstore-ext4
mysql-snapshot-powerstore-ext4-iscsi-20220629092639  mysql-pv-claim-powerstore-ext4
mysql-snapshot-powerstore-ext4-iscsi-20220629092711  mysql-pv-claim-powerstore-ext4

The volume snapshot creation has also resulted in VolumeSnapshotContent objects that hold the actual data. Let's declare another variable to find all snapshot contents that reference the string mysql-snapshot-powerstore-ext4-iscsi.

volumesnapshotcontentlist=$(kubectl get volumesnapshotcontent --no-headers=true | awk '/mysql-snapshot-powerstore-ext4-iscsi/{print $1}' | xargs)

# Retrieve a nice table of all snapshot content created
kubectl get volumesnapshotcontent $volumesnapshotcontentlist | awk {'print $1" " $7'} | column -t
NAME                                              VOLUMESNAPSHOT
snapcontent-6d94b8eb-44d0-4ff8-b992-3121dd66d765  mysql-snapshot-powerstore-ext4-iscsi-20220629092637
snapcontent-acbe6c83-6e8c-4d3f-a66d-b8e98ec9c633  mysql-snapshot-powerstore-ext4-iscsi-20220629070207
snapcontent-b0392419-90a1-4547-811f-8e3b5df7554d  mysql-snapshot-powerstore-ext4-iscsi-20220629092634
snapcontent-b0fe2c6c-e564-47a0-a040-b63b64dd64f4  mysql-snapshot-powerstore-ext4-iscsi-20220629092639
snapcontent-de66f7c8-a772-4bfe-968a-df67c1225f65  mysql-snapshot-powerstore-ext4-iscsi-20220629092711
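Before restoring from a snapshot, it is worth confirming that it is ready for use. This optional check (not part of the original walkthrough) reads the standard status.readyToUse field of the VolumeSnapshot API:

# A snapshot is safe to restore from once readyToUse reports true
kubectl get volumesnapshot mysql-snapshot-powerstore-ext4-iscsi-20220629092711 \
  -o jsonpath='{.status.readyToUse}'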

 

Next, we can verify that this snapshot is actually reflected in the PowerStore console. Let's visit the Monitoring > Jobs tab, where we can observe that a successful snapshot creation job has been completed against our volume.

[Image: PowerStore console Monitoring > Jobs showing the snapshot creation job]

Next, the actual snapshot is also visible if we navigate to Dashboard > Volumes > select our volume > Protection. Most importantly, we can also see the cluster prefix in the snapshot name (recall our prefix settings in the my-powerstore-settings.yaml while installing the CSI drivers).

[Images: PowerStore console showing the snapshot with the cluster prefix under the volume's Protection tab]

The baseline snapshot is in place, with the database “csi-powerstore-test”, the table “car” and its dummy data. We can now start a chaos test wherein we drop the table “car” and restore it via the baseline snapshot.

Visit the adminer application, select the database and the table “car” within it.

[Image: selecting the table “car” in adminer]

Drop the table “car”

[Images: dropping the table “car” in adminer]

The above should delete the table “car” and all the associated data within it.

Now for the obvious act of restoring our data by leveraging the baseline snapshot created above. To do so, we have to create a new persistent volume claim that targets an existing VolumeSnapshot object as a dataSource.

To do so, we already have a template called restore-pvc-sample.yaml, which is executed via a small bash script called restore-pvc.sh. The template is shown below, followed by a sketch of the script.

 

more $HOME/eks-anywhere/mysql/standalone/powerstore-iscsi/restore-pvc-sample.yaml

# DO NOT CHANGE DATASOURCE NAME, AS IT IS SET AUTOMATICALLY VIA THE SCRIPT
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-restored-pv-claim-powerstore-ext4
spec:
  storageClassName: powerstore-ext4
  dataSource:
    name: volumeSnapshotName   # placeholder, replaced by restore-pvc.sh
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
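The restore-pvc.sh script itself is not reproduced in this post. Below is a minimal sketch of what such a script could look like, assuming it mirrors create-snapshot.sh: it prompts for the snapshot name (the volumeSnapshotName: prompt matches the session output further below) and substitutes it into the template. The actual script in the repository may differ.

#!/bin/bash
# Prompt for the VolumeSnapshot to restore from
read -p "volumeSnapshotName: " SNAPNAME
# Render the template into a concrete PVC manifest and apply it
rm -rf restore-pvc.yaml
cp restore-pvc-sample.yaml restore-pvc.yaml
sed -i "s/volumeSnapshotName/$SNAPNAME/g" restore-pvc.yaml
kubectl create -f restore-pvc.yaml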

 

Now let’s recover the data for the deleted table “car” in the database “csi-powerstore-test”. This is done by resurrecting a new persistent volume claim from the previously created baseline snapshot.

 

# FIRST STEP: IDENTIFY THE SNAPSHOT NAME THAT YOU WANT TO RECOVER FROM
volumesnapshotlist=$(kubectl get volumesnapshot --no-headers=true | awk '/mysql-snapshot-powerstore-ext4-iscsi/{print $1}' | xargs)

kubectl get volumesnapshot $volumesnapshotlist | awk {'print $1" " $3'} | column -t
NAME                                                 SOURCEPVC
mysql-snapshot-powerstore-ext4-iscsi-20220629070207  mysql-pv-claim-powerstore-ext4
mysql-snapshot-powerstore-ext4-iscsi-20220629092634  mysql-pv-claim-powerstore-ext4
mysql-snapshot-powerstore-ext4-iscsi-20220629092637  mysql-pv-claim-powerstore-ext4
mysql-snapshot-powerstore-ext4-iscsi-20220629092639  mysql-pv-claim-powerstore-ext4
mysql-snapshot-powerstore-ext4-iscsi-20220629092711  mysql-pv-claim-powerstore-ext4
mysql-snapshot-powerstore-ext4-iscsi-20220629094426  mysql-pv-claim-powerstore-ext4
mysql-snapshot-powerstore-ext4-iscsi-20220629123004  mysql-pv-claim-powerstore-ext4

 

The above table of snapshots is sorted from oldest to latest, since the datetime suffix sorts lexicographically.
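As an aside (an optional tip, not from the original post), kubectl can also sort by the actual creation timestamp instead of relying on the name suffix:

# List snapshots ordered by creation time
kubectl get volumesnapshot --sort-by=.metadata.creationTimestamp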

 

# SECOND STEP: RUN THE PVC RESTORATION SCRIPT
# It will prompt you for the snapshot name. Provide the value from the step above
source $HOME/eks-anywhere/mysql/standalone/powerstore-iscsi/restore-pvc.sh
volumeSnapshotName: mysql-snapshot-powerstore-ext4-iscsi-20220629123004
persistentvolumeclaim/mysql-restored-pv-claim-powerstore-ext4 created

kubectl get pvc
NAME                                      STATUS   VOLUME                 CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mysql-pv-claim-powerstore-ext4            Bound    eksa1-vol-37f85727b0   8Gi        RWO            powerstore-ext4   10h
mysql-restored-pv-claim-powerstore-ext4   Bound    eksa1-vol-d64b451184   8Gi        RWO            powerstore-ext4   3m39s

 

As you can see from the above log, we have reconstructed a new persistent volume via the baseline snapshot. Next, it’s time to use this restored persistent volume for our MySQL deployment and verify that the data is intact. To do so, we will need to alter the persistent volume claim name in the mysql.yaml file as shown below.

 

The persistent volume claim name is referenced as claimName in the last line of the mysql.yaml file.

vi $HOME/eks-anywhere/mysql/standalone/powerstore-iscsi/mysql.yaml

THE CLAIM NAME IS THE LAST LINE IN THE YAML FILE. PLEASE BE CAREFUL WITH YAML FORMATTING WHILE EDITING THE VALUES

# Original claimName
claimName: mysql-pv-claim-powerstore-ext4
# New claimName
claimName: mysql-restored-pv-claim-powerstore-ext4

kubectl delete deployment mysql
deployment.apps "mysql" deleted

kubectl create -f $HOME/eks-anywhere/mysql/standalone/powerstore-iscsi/mysql.yaml
deployment.apps/mysql created   <<< MySQL has been recreated
Error from server (AlreadyExists): error when creating "mysql.yaml": services "mysql" already exists   <<< IGNORE THIS ERROR AS WE ONLY DELETED THE DEPLOYMENT

kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
adminer-5f64885f68-dww7g   1/1     Running   0          7h59m
mysql-7f475d48d-bhgtj      1/1     Running   0          33s

kubectl describe pod --selector=app=mysql
Events:
  Type    Reason                  Age   From                     Message
  ----    ------                  ----  ----                     -------
  Normal  Scheduled               91s   default-scheduler        Successfully assigned default/mysql-7f475d48d-bhgtj to eksa1-md-0-d684f49cd-xqj74
  Normal  SuccessfulAttachVolume  89s   attachdetach-controller  AttachVolume.Attach succeeded for volume "eksa1-vol-d64b451184"
  Normal  Pulled                  78s   kubelet                  Container image "mysql:5.6" already present on machine
  Normal  Created                 78s   kubelet                  Created container mysql
  Normal  Started                 78s   kubelet                  Started container mysql

 

From the above logs in kubectl describe pod, you can see the persistent volume created from the snapshot has been attached to the newly created MySQL pod.

In addition, we can also verify the creation of the new persistent volume in the PowerStore console. You can observe that our original volume now has a Host Mapping of “0”, while the newly created volume from the snapshot has a Host Mapping of “1”.

[Image: PowerStore console showing Host Mapping 0 on the original volume and 1 on the restored volume]

Let’s verify the data by logging into the adminer web interface and navigating to the database csi-powerstore-test and selecting the table “car”

[Image: restored table “car” data in adminer]

The database csi-powerstore-test hosted via the MySQL pod has been restored based on the baseline snapshot and the associated persistent volume.

With this, we come to a close for this article, hoping that you have been able to follow the installation of the PowerStore CSI driver, the workload deployment, the scenario testing, etc.

cheers,

Ambar Hassani

#iwork4dell
