October 30th, 2022 23:00

EKS Anywhere, Part-2 Dell EMC Unity-XT CSI 2.2.0


dellambarhassani_0-1667187850694.png

This article is part of the EKS Anywhere series, extending the hybrid cloud momentum.

In Part-1, we reached the point of successfully deploying the CSI drivers for Unity-XT. In this Part-2, we will deploy test use-cases and observe various functionality, especially around snapshots, recovery, and so on. The test use-case is a MySQL deployment wherein the data volume is persisted via the Unity-XT CSI over the NFS protocol. To begin...

Step-1 Create NFS Storage Class

First, we start off by creating the Unity-XT storage class that will be used by our persistent volumes over the NFS protocol.

 

cd $HOME/csi-unity
cp $HOME/eks-anywhere/unity/unity-xt-nas-storage-class.yaml .

 

Now if you open this unity-xt-nas-storage-class.yaml file, its contents will look like this.

 

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: unity-nas
  annotations:
    description: unity storage class for eksa testing
provisioner: csi-unity.dellemc.com
parameters:
  arrayId: VIRT2148DRW2V6
  isDataReductionEnabled: 'false'
  nasServer: 'nas_2'
  protocol: NFS
  storagePool: 'pool_1'
  thinProvisioned: 'true'
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: Immediate

 

You will need to edit the highlighted portions of the file, namely the arrayId, nasServer, and storagePool values.

The most important thing to note is that these are CLI ID references, as you can see from the underscored values, e.g. nas_2 and pool_1.

How do you get these values? Simply log in to the Unity-XT console and navigate as shown in the visuals below to get the arrayId, the nasServer CLI ID, and the storagePool CLI ID.

dellambarhassani_1-1667188210728.png

dellambarhassani_2-1667188223466.png

dellambarhassani_4-1667188248378.png

Replace these values in the unity-xt-nas-storage-class.yaml file and then apply it to create the storage class

 

cd $HOME/csi-unity
kubectl create -f unity-xt-nas-storage-class.yaml
Once applied, we can observe the Storage class created with the provisioner as csi-unity.dellemc.com
kubectl get sc
NAME                 PROVISIONER              RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard (default)   csi.vsphere.vmware.com   Delete          Immediate           false                  4h7m
unity-nas            csi-unity.dellemc.com    Delete          Immediate           true                   21m

 

The next sections will focus on workload and persistence testing. Nothing too fancy, as the intent is to mainly validate all of the above implementation in a workload scenario

Step-2 Create Persistent Volume Claim

Once the storage class is created, it’s time to create the Persistent Volume Claim. We will evaluate a test MySQL workload that uses persistent volumes created on the Unity-XT system.
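The pvc.yaml file itself is not reproduced in this article. Based on the claim name, storage class, capacity, and access mode visible in the kubectl output for this step, it would look roughly like the following sketch (the actual file in the repository may differ):

```yaml
# Hypothetical reconstruction of pvc.yaml; the field values are taken
# from the kubectl get pvc/pv output shown in this step.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim-unity-nas
spec:
  storageClassName: unity-nas
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
```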

 

cd $HOME/csi-unity
cp -r $HOME/eks-anywhere/mysql .
cd $HOME/csi-unity/mysql/standalone/unity-nas/
kubectl create -f pvc.yaml
Once applied, we can see the PVC transition from a Pending to a Bound state
kubectl get pvc
NAME                       STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql-pv-claim-unity-nas   Pending                                      unity-nas      15s
After some time
kubectl get pvc
NAME                       STATUS   VOLUME                                 CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql-pv-claim-unity-nas   Bound    testunitycsicluster01-vol-2e295adfce   8Gi        RWO            unity-nas      28s
kubectl get pv
NAME                                   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                              STORAGECLASS   REASON   AGE
testunitycsicluster01-vol-2e295adfce   8Gi        RWO            Delete           Bound    default/mysql-pv-claim-unity-nas   unity-nas               19m
kubectl describe pv testunitycsicluster01-vol-2e295adfce
Name:              testunitycsicluster01-vol-2e295adfce
Labels:            
Annotations:       pv.kubernetes.io/provisioned-by: csi-unity.dellemc.com
Finalizers:        [kubernetes.io/pv-protection external-attacher/csi-unity-dellemc-com]
StorageClass:      unity-nas
Status:            Bound
Claim:             default/mysql-pv-claim-unity-nas
Reclaim Policy:    Delete
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          8Gi
Node Affinity:
  Required Terms:
    Term 0:        csi-unity.dellemc.com/virt2148drw2v6-nfs in [true]
Message:
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            csi-unity.dellemc.com
    FSType:            ext4
    VolumeHandle:      testunitycsicluster01-vol-2e295adfce-NFS-virt2148drw2v6-fs_20
    ReadOnly:          false
    VolumeAttributes:      arrayId=virt2148drw2v6
                           protocol=NFS
                           storage.kubernetes.io/csiProvisionerIdentity=1653024108543-8081-csi-unity.dellemc.com
                           volumeId=fs_20
Events:                

 

We can see a volume named testunitycsicluster01-vol-2e295adfce of 8 GiB has been successfully provisioned. As you can note, the prefix that was configured in myvalues.yaml helps here to identify the volume.

Let’s see this volume in the Unity-XT console

dellambarhassani_5-1667188648042.png

With the above persistent volume in place, we can now start defining a sample persistent workload and other operations around persistence, backup and recovery, etc.

Step-3 Create MySQL instance with persistence
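The mysql.yaml file is pulled from the repository and not reproduced here. Based on the pod description that follows (image, port, environment variable, and volume mount), the deployment portion would look roughly like this sketch (the actual file also contains the mysql Service, and its details may differ):

```yaml
# Hypothetical reconstruction of the Deployment in mysql.yaml; the image,
# env, port, mount path, and claimName are taken from the pod description below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.6
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim-unity-nas
```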

 

cd $HOME/csi-unity/mysql/standalone/unity-nas/
kubectl create -f mysql.yaml
kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
mysql-86586c7497-w7qj2   1/1     Running   0          18m
Once the pod is in Running status, we can see that the persistent volume has been successfully mounted via the claim name defined in the mysql.yaml file
kubectl describe pod mysql-86586c7497-w7qj2
Name:         mysql-86586c7497-w7qj2
Namespace:    default
Priority:     0
Node:         testunitycsicluster01-md-0-59f9568584-sfhf9/172.24.167.23
Start Time:   Fri, 20 May 2022 05:31:29 +0000
Labels:       app=mysql
              pod-template-hash=86586c7497
Annotations:  
Status:       Running
IP:           192.168.1.64
IPs:
  IP:           192.168.1.64
Controlled By:  ReplicaSet/mysql-86586c7497
Containers:
  mysql:
    Container ID:   containerd://89151c03b9aa25e2af21c71be663c4c405d226465ce85e854462be8214d0f650
    Image:          mysql:5.6
    Image ID:       docker.io/library/mysql@sha256:20575ecebe6216036d25dab5903808211f1e9ba63dc7825ac20cb975e34cfcae
    Port:           3306/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 20 May 2022 05:32:12 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/lib/mysql from mysql-persistent-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p7wcw (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  mysql-persistent-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mysql-pv-claim-unity-nas
    ReadOnly:   false
  kube-api-access-p7wcw:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason                  Age   From                     Message
  ----    ------                  ----  ----                     -------
  Normal  Scheduled               18m   default-scheduler        Successfully assigned default/mysql-86586c7497-w7qj2 to testunitycsicluster01-md-0-59f9568584-sfhf9
  Normal  SuccessfulAttachVolume  18m   attachdetach-controller  AttachVolume.Attach succeeded for volume "testunitycsicluster01-vol-2e295adfce"
  Normal  Pulling                 18m   kubelet                  Pulling image "mysql:5.6"
  Normal  Pulled                  18m   kubelet                  Successfully pulled image "mysql:5.6" in 31.013962761s
  Normal  Created                 18m   kubelet                  Created container mysql
  Normal  Started                 18m   kubelet                  Started container mysql

 

Step-4 Create MetalLB load balancer

The MetalLB load balancer is required to front-end the adminer web client, through which we will manage the MySQL database. As a prerequisite, we should have collected at least one static IP address for the exposed load-balanced adminer service.

 

Keep the static IP mentioned in the prerequisites handy!
In my case the static IP range is 172.24.165.41 to 172.24.165.41, i.e. just a single IP for testing purposes. You can define a wider range to suit your needs.
helm upgrade --install --wait --timeout 15m   --namespace metallb-system --create-namespace   --repo https://metallb.github.io/metallb metallb metallb
Now you can configure it via its CRs. Please refer to the official MetalLB docs on how to use the CRs. Next, we will use the Custom Resources to create the IP address pool and advertise it.
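A minimal sketch of the two MetalLB Custom Resources for the single-IP pool described above would look like the following (the names eksa-pool and eksa-l2 are illustrative; the manifest actually applied may differ):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: eksa-pool            # illustrative name
  namespace: metallb-system
spec:
  addresses:
    - 172.24.165.41-172.24.165.41   # the single static IP from the prerequisites
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: eksa-l2              # illustrative name
  namespace: metallb-system
spec:
  ipAddressPools:
    - eksa-pool
```

Save the manifest to a file and apply it with kubectl apply -f.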

 

Step-5 Deploy Adminer application

The adminer application is a web-based front end for the MySQL database. It runs as a PHP web application, and we will use it to create and update our database in MySQL for persistence testing.
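The adminer manifests are pulled from the repository and not reproduced here. A minimal sketch of what they would contain, assuming the standard adminer image (which serves on port 8080, matching the 80:32302/TCP mapping shown in the service output later), would be:

```yaml
# Hypothetical sketch of adminer-deployment.yaml and adminer-service.yaml;
# the actual files in the repository may differ.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: adminer
spec:
  selector:
    matchLabels:
      app: adminer
  template:
    metadata:
      labels:
        app: adminer
    spec:
      containers:
        - name: adminer
          image: adminer        # official image serves PHP on port 8080
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: adminer
spec:
  type: LoadBalancer           # MetalLB assigns the external static IP
  selector:
    app: adminer
  ports:
    - port: 80
      targetPort: 8080
```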

 

cd $HOME/csi-unity/
cp -r $HOME/eks-anywhere/adminer .
cd $HOME/csi-unity/adminer/
kubectl create -f adminer-deployment.yaml
kubectl create -f adminer-service.yaml
Give it some time, and MetalLB will perform the background magic to finally allocate the external static IP used by the LoadBalancer service
kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
adminer-7fcdb7c8d-856gq   1/1     Running   0          25s
mysql-86586c7497-w7qj2    1/1     Running   0          27m
kubectl get svc
NAME         TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)        AGE
adminer      LoadBalancer   10.97.105.4   172.24.165.41   80:32302/TCP   57s
kubernetes   ClusterIP      10.96.0.1               443/TCP        17h
mysql        ClusterIP      None                    3306/TCP       28m

 

Open the browser, hit the External IP shown for the adminer service, and enter the following values:

  • server: mysql
  • username: root
  • password: password

dellambarhassani_6-1667194103182.png

dellambarhassani_7-1667194115495.png

Click on Create Database and, for simplicity's sake, name the DB csi-unity-test

dellambarhassani_8-1667194139425.png

Click on SQL command and paste the statement below to populate csi-unity-test with sample data

 

SET SQL_MODE = "NO_AUTO_VALUE_ON_ZERO";
START TRANSACTION;
SET time_zone = "+00:00";
/*!40101 SET @old_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @old_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @old_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8mb4 */;
--
-- Database: `DB`
--
-- --------------------------------------------------------
--
-- Table structure for table `car`
--
CREATE TABLE `car` (
  `id` int(11) NOT NULL,
  `type` text NOT NULL,
  `country` text NOT NULL,
  `manufacturer` text NOT NULL,
  `create_date` date NOT NULL,
  `model` text NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
--
-- Dumping data for table `car`
--
INSERT INTO `car` (`id`, `type`, `country`, `manufacturer`, `create_date`, `model`) VALUES
(1, 'Small', 'Japon', 'Acura', '1931-02-01', 'Integra'),
(2, 'Midsize', 'Japon', 'Acura', '1959-07-30', 'Legend'),
(3, 'Compact', 'Germany', 'Audi', '1970-07-30', '90'),
(4, 'Midsize', 'Germany', 'Audi', '1963-10-04', '100'),
(5, 'Midsize', 'Germany', 'BMW', '1931-09-08', '535i'),
(6, 'Midsize', 'USA', 'Buick', '1957-02-20', 'Century'),
(7, 'Large', 'USA', 'Buick', '1968-10-23', 'LeSabre'),
(8, 'Large', 'USA', 'Buick', '1970-08-17', 'Roadmaster'),
(9, 'Midsize', 'USA', 'Buick', '1962-08-02', 'Riviera'),
(10, 'Large', 'USA', 'Cadillac', '1956-12-01', 'DeVille'),
(11, 'Midsize', 'USA', 'Cadillac', '1957-07-30', 'Seville'),
(12, 'Compact', 'USA', 'Chevrolet', '1952-06-18', 'Cavalier'),
(13, 'Compact', 'USA', 'Chevrolet', '1947-06-26', 'Corsica'),
(14, 'Sporty', 'USA', 'Chevrolet', '1940-05-27', 'Camaro'),
(15, 'Midsize', 'USA', 'Chevrolet', '1949-02-21', 'Lumina'),
(16, 'Van', 'USA', 'Chevrolet', '1944-11-02', 'Lumina_APV'),
(17, 'Van', 'USA', 'Chevrolet', '1962-06-07', 'Astro'),
(18, 'Large', 'USA', 'Chevrolet', '1951-01-11', 'Caprice'),
(19, 'Sporty', 'USA', 'Chevrolet', '1966-11-01', 'Corvette'),
(20, 'Large', 'USA', 'Chrysler', '1964-07-10', 'Concorde'),
(21, 'Compact', 'USA', 'Chrysler', '1938-05-06', 'LeBaron'),
(22, 'Large', 'USA', 'Chrysler', '1960-07-07', 'Imperial'),
(23, 'Small', 'USA', 'Dodge', '1943-06-02', 'Colt'),
(24, 'Small', 'USA', 'Dodge', '1934-02-27', 'Shadow'),
(25, 'Compact', 'USA', 'Dodge', '1932-02-26', 'Spirit'),
(26, 'Van', 'USA', 'Dodge', '1946-06-12', 'Caravan'),
(27, 'Midsize', 'USA', 'Dodge', '1928-03-02', 'Dynasty'),
(28, 'Sporty', 'USA', 'Dodge', '1966-05-20', 'Stealth'),
(29, 'Small', 'USA', 'Eagle', '1941-05-12', 'Summit'),
(30, 'Large', 'USA', 'Eagle', '1963-09-17', 'Vision'),
(31, 'Small', 'USA', 'Ford', '1964-10-22', 'Festiva'),
(32, 'Small', 'USA', 'Ford', '1930-12-02', 'Escort'),
(33, 'Compact', 'USA', 'Ford', '1950-04-19', 'Tempo'),
(34, 'Sporty', 'USA', 'Ford', '1940-06-18', 'Mustang'),
(35, 'Sporty', 'USA', 'Ford', '1941-05-24', 'Probe'),
(36, 'Van', 'USA', 'Ford', '1935-01-27', 'Aerostar'),
(37, 'Midsize', 'USA', 'Ford', '1947-10-08', 'Taurus'),
(38, 'Large', 'USA', 'Ford', '1962-02-28', 'Crown_Victoria'),
(39, 'Small', 'USA', 'Geo', '1965-10-30', 'Metro'),
(40, 'Sporty', 'USA', 'Geo', '1955-07-07', 'Storm'),
(41, 'Sporty', 'Japon', 'Honda', '1955-06-08', 'Prelude'),
(42, 'Small', 'Japon', 'Honda', '1967-09-16', 'Civic'),
(43, 'Compact', 'Japon', 'Honda', '1938-06-26', 'Accord'),
(44, 'Small', 'South Korea', 'Hyundai', '1940-02-25', 'Excel');
--
-- Indexes for dumped tables
--
--
-- Indexes for table `car`
--
ALTER TABLE `car`
  ADD PRIMARY KEY (`id`);
--
-- AUTO_INCREMENT for dumped tables
--
--
-- AUTO_INCREMENT for table `car`
--
ALTER TABLE `car`
  MODIFY `id` int(11) NOT NULL AUTO_INCREMENT, AUTO_INCREMENT=45;
COMMIT;
/*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
/*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
/*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;

 

dellambarhassani_9-1667194190663.png

And hit the EXECUTE button.

dellambarhassani_10-1667194211618.png

Once done, the csi-unity-test database is populated, and we can click the blue ribbon link at the top for csi-unity-test to view the table “car”. Click the table “car” and then select Data

dellambarhassani_11-1667194236781.png

dellambarhassani_13-1667194275621.png

dellambarhassani_14-1667194288315.png

Step-6 Testing persistence for various scenarios

MySQL POD Deletion: Herein we will delete the MySQL pod and verify reattachment of the persistent volume on restored pod

 

kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
adminer-5f64885f68-9wh7b   1/1     Running   0          6m33s
mysql-86586c7497-w7qj2     1/1     Running   0          78m
To test persistence, delete the mysql pod. Kubernetes will recreate the pod, and the persistent volume keeps the data intact.
kubectl delete pod mysql-86586c7497-w7qj2
pod "mysql-86586c7497-w7qj2" deleted
kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
adminer-5f64885f68-9wh7b   1/1     Running   0          8m29s
mysql-86586c7497-smts8     1/1     Running   0          58s
It takes approximately 60 seconds or slightly more to recreate the pod and also attach the persistent volume
kubectl describe pod mysql-86586c7497-smts8

  Normal   SuccessfulAttachVolume  2m7s   attachdetach-controller  AttachVolume.Attach succeeded for volume "testunitycsicluster01-vol-2e295adfce"

We can see our persistent volume has been attached to the restored pod

 

Close the browser session for the adminer application. Open a new browser session via the External IP, and verify that the table “car” within the csi-unity-test database that we populated earlier still exists with all its entries.

Follow the navigation to database csi-unity-test, table car and select data

dellambarhassani_15-1667194377892.png
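If you prefer the command line to the adminer UI, the same check can be run from inside the cluster. This sketch assumes the deployment name, database, and root password used throughout this walkthrough; the query should count the 44 rows inserted earlier:

```shell
# Run a row count against the restored database inside the mysql pod
kubectl exec deploy/mysql -- \
  mysql -uroot -ppassword csi-unity-test -e 'SELECT COUNT(*) FROM car;'
```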

MySQL Deployment Deletion: We will delete the deployment mysql itself and not just the pod.

 

We will delete the deployment mysql itself and not just the pod. Note that deleting the deployment has no effect on the persistent volume claim, as PVCs have a lifecycle independent of deployments.
Also note that our persistent volume is set up with a reclaim policy of Delete. So unless we delete the PVC itself, our PV should remain intact, and upon recreation of the mysql deployment the re-attachment should happen automatically, with the underlying data present (table car in the csi-unity-test DB).
cd $HOME/csi-unity/mysql/standalone/unity-nas/
kubectl delete -f mysql.yaml
service "mysql" deleted
deployment.apps "mysql" deleted
kubectl get pvc
NAME                       STATUS   VOLUME                                 CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql-pv-claim-unity-nas   Bound    testunitycsicluster01-vol-2e295adfce   8Gi        RWO            unity-nas      103m
The volume is still intact
Let's recreate MySQL instance
cd $HOME/csi-unity/mysql/standalone/unity-nas/
kubectl create -f mysql.yaml
service/mysql created
deployment.apps/mysql created
kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
adminer-5f64885f68-9wh7b   1/1     Running   0          30m
mysql-86586c7497-jpfcc     1/1     Running   0          42s
kubectl describe pod mysql-86586c7497-jpfcc

 Normal  SuccessfulAttachVolume  63s   attachdetach-controller  AttachVolume.Attach succeeded for volume "testunitycsicluster01-vol-2e295adfce"
Our volume has been successfully attached to the new deployment for MySQL instance.
And you can repeat validation via the adminer interface as to whether the database csi-unity-test along with the table "car" and the data itself exists or not

 

Snapshots: Next, we move on to the snapshot capabilities introduced in Kubernetes via the external-snapshotter project (now GA). In this scenario, we will observe the creation of snapshots for the persistent volume created above

Before we begin, let’s understand some key terms that are used in combination to create the snapshots

  • VolumeSnapshotClass (storage class for creating snapshots)
  • VolumeSnapshot (Snapshots that will target the above snapshot class)
  • VolumeSnapshotContent (The actual snapshot content)

These concepts are described in further detail at Volume Snapshots | Kubernetes

And now on to the important files that are used for creating these snapshots

  • unity-xt-nas-snapclass.yaml (template to create the volume snapshot class)
  • snapshot-sample.yaml (template for volume snapshots)
  • create-snapshot.sh (handy script to create unique datetime based snapshots)

 

#LET'S CREATE VOLUME SNAPSHOT CLASS
cd $HOME/csi-unity
cp $HOME/eks-anywhere/unity/unity-xt-nas-snapclass.yaml .
more unity-xt-nas-snapclass.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: unity-nas-snapclass
driver: csi-unity.dellemc.com
# Configure what happens to a VolumeSnapshotContent when the VolumeSnapshot object
# it is bound to is to be deleted
# Allowed values:
#   Delete: the underlying storage snapshot will be deleted along with the VolumeSnapshotContent object.
#   Retain: both the underlying snapshot and VolumeSnapshotContent remain.
# Default value: None
# Optional: false
# Examples: Delete
deletionPolicy: Delete
As you can see this volume snapshot class leverages the unity csi driver to provision all the snapshots
kubectl create -f unity-xt-nas-snapclass.yaml
volumesnapshotclass.snapshot.storage.k8s.io/unity-nas-snapclass created
kubectl get volumesnapshotclass
NAME                  DRIVER                  DELETIONPOLICY   AGE
unity-nas-snapclass   csi-unity.dellemc.com   Delete           38s
kubectl describe volumesnapshotclass unity-nas-snapclass
Name:             unity-nas-snapclass
Namespace:
Labels:           
Annotations:      
API Version:      snapshot.storage.k8s.io/v1
Deletion Policy:  Delete
Driver:           csi-unity.dellemc.com
Kind:             VolumeSnapshotClass
Metadata:
  Creation Timestamp:  2022-05-20T08:31:21Z
  Generation:          1
  Managed Fields:
    API Version:  snapshot.storage.k8s.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:deletionPolicy:
      f:driver:
    Manager:         kubectl-create
    Operation:       Update
    Time:            2022-05-20T08:31:21Z
  Resource Version:  816587
  UID:               9c8bacf5-7bda-4e65-b6f4-c2aba02a6aed
Events:              
At this stage we have the Volume Snapshot class created that can be used to create the Unity XT based snapshots

 

It is also important to understand how the files mentioned below interact to create the snapshots

 

# Navigate to the correct directory
cd $HOME/csi-unity/mysql/standalone/unity-nas/
more snapshot-sample.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mysql-snapshot-unity-nas-datetime
spec:
  volumeSnapshotClassName: unity-nas-snapclass
  source:
    persistentVolumeClaimName: mysql-pv-claim-unity-nas
# The datetime text is automatically replaced by the below script to create unique snapshots
# The template has two references 
a) The volume snapshot class will be used to create the snapshots
b) The persistent volume claim with which the snapshot is associated
more create-snapshot.sh
#!/bin/bash
NOW=$(date "+%Y%m%d%H%M%S")
rm -rf snapshot.yaml
cp snapshot-sample.yaml snapshot.yaml
sed -i "s/datetime/$NOW/g" snapshot.yaml
kubectl create -f snapshot.yaml
As you can see, the above script inserts a unique datetime reference into the snapshot template and then uses kubectl to create a uniquely named snapshot
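The templating trick is easy to verify offline. Here is a minimal, self-contained sketch of what create-snapshot.sh does, using a scratch file in /tmp instead of the real snapshot-sample.yaml and skipping the kubectl call:

```shell
#!/bin/bash
# Demonstrates the datetime templating used by create-snapshot.sh,
# without touching kubectl or the real snapshot-sample.yaml.
NOW=$(date "+%Y%m%d%H%M%S")

# Scratch template standing in for snapshot-sample.yaml
printf 'metadata:\n  name: mysql-snapshot-unity-nas-datetime\n' > /tmp/snapshot-sample-demo.yaml

cp /tmp/snapshot-sample-demo.yaml /tmp/snapshot-demo.yaml
sed -i "s/datetime/$NOW/g" /tmp/snapshot-demo.yaml

# The placeholder is now a unique timestamp,
# e.g. name: mysql-snapshot-unity-nas-20220520134942
cat /tmp/snapshot-demo.yaml
```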

 

Next let’s begin to create the baseline snapshot. Note that at this stage, we have a database called csi-unity-test with a table car populated with dummy data

dellambarhassani_16-1667194627804.png

We will create the first baseline snapshot such that all the existing data is preserved in the VolumeSnapshotContent object via the VolumeSnapshot

 

# Navigate to the correct directory
cd $HOME/csi-unity/mysql/standalone/unity-nas/
chmod +x create-snapshot.sh
Let's verify if there are any existing snapshots
kubectl get volumesnapshot
No resources found in default namespace.
Next, we will execute the script
./create-snapshot.sh
volumesnapshot.snapshot.storage.k8s.io/mysql-snapshot-unity-nas-20220520134942 created
As you can see the snapshot is successfully created
kubectl get volumesnapshot
NAME                                      READYTOUSE   SOURCEPVC                  SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS         SNAPSHOTCONTENT                                    CREATIONTIME   AGE
mysql-snapshot-unity-nas-20220520134942   true         mysql-pv-claim-unity-nas                           8Gi           unity-nas-snapclass   snapcontent-2ff09247-07d4-4eab-b9ed-fe4ac187ce59   65s            38s
The volume snapshot creation has also resulted into a volume snapshot content that holds the actual data
kubectl get volumesnapshotcontent
NAME                                               READYTOUSE   RESTORESIZE   DELETIONPOLICY   DRIVER                  VOLUMESNAPSHOTCLASS   VOLUMESNAPSHOT                            VOLUMESNAPSHOTNAMESPACE   AGE
snapcontent-2ff09247-07d4-4eab-b9ed-fe4ac187ce59   true         8589934592    Delete           csi-unity.dellemc.com   unity-nas-snapclass   mysql-snapshot-unity-nas-20220520134942   default                   98s

 

Let’s describe the volume snapshot content; an interesting observation is that the Snapshot Handle carries the prefix of our cluster name. If you recall, we altered our myvalues.yaml file during the initial CSI installation steps to include a specific prefix for our volumes and snapshots.

 

kubectl describe volumesnapshotcontent snapcontent-2ff09247-07d4-4eab-b9ed-fe4ac187ce59
Name:         snapcontent-2ff09247-07d4-4eab-b9ed-fe4ac187ce59
Namespace:
Labels:       
Annotations:  
API Version:  snapshot.storage.k8s.io/v1
Kind:         VolumeSnapshotContent
Metadata:
  Creation Timestamp:  2022-05-20T13:49:42Z
  Finalizers:
    snapshot.storage.kubernetes.io/volumesnapshotcontent-bound-protection
  Generation:  1
  Managed Fields:
    API Version:  snapshot.storage.k8s.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .:
          v:"snapshot.storage.kubernetes.io/volumesnapshotcontent-bound-protection":
      f:spec:
        .:
        f:deletionPolicy:
        f:driver:
        f:source:
          .:
          f:volumeHandle:
        f:volumeSnapshotClassName:
        f:volumeSnapshotRef:
          .:
          f:apiVersion:
          f:kind:
          f:name:
          f:namespace:
          f:resourceVersion:
          f:uid:
    Manager:      snapshot-controller
    Operation:    Update
    Time:         2022-05-20T13:49:42Z
    API Version:  snapshot.storage.k8s.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:creationTime:
        f:readyToUse:
        f:restoreSize:
        f:snapshotHandle:
    Manager:         csi-snapshotter
    Operation:       Update
    Time:            2022-05-20T13:49:46Z
  Resource Version:  1067690
  UID:               0804c76e-5040-4870-9b0c-2c140e4b7d15
Spec:
  Deletion Policy:  Delete
  Driver:           csi-unity.dellemc.com
  Source:
    Volume Handle:             testunitycsicluster01-vol-2e295adfce-NFS-virt2148drw2v6-fs_20
  Volume Snapshot Class Name:  unity-nas-snapclass
  Volume Snapshot Ref:
    API Version:       snapshot.storage.k8s.io/v1
    Kind:              VolumeSnapshot
    Name:              mysql-snapshot-unity-nas-20220520134942
    Namespace:         default
    Resource Version:  1067639
    UID:               2ff09247-07d4-4eab-b9ed-fe4ac187ce59
Status:
  Creation Time:    1653054555024000000
  Ready To Use:     true
  Restore Size:     8589934592
  Snapshot Handle:  testunitycsicluster01-snap-2ff0924707-NFS-virt2148drw2v6-171798691875
Events:             
# Notice that the cluster name is prefixed in the Snapshot Handle

 

Next, we can see if this snapshot is actually reflected in the Unity-XT console

dellambarhassani_17-1667194738966.png

We can see 1 snapshot created against our persistent volume. Let’s see more details by clicking the count hyperlink. The Snapshot Handle with our cluster prefix is visible against the snapshot, which helps us track our snapshots appropriately.

dellambarhassani_18-1667194769531.png

Now we will test the restore capability via our baseline snapshot. To do so we will make some changes in our database named csi-unity-test. Head back to the adminer application via the External IP exposed and navigate to

dellambarhassani_19-1667194786030.png

Next select the “car” table and drop it to delete the data

dellambarhassani_20-1667194805412.png

Once this operation is executed, all our data in the database is deleted, simulating data loss or corruption.

dellambarhassani_21-1667194827269.png

Now for the obvious act of restoring our data by leveraging the baseline snapshot created above. To do so, we have to create a new persistent volume claim that targets the existing VolumeSnapshot object as a dataSource.

To do so, we already have a template located in the existing mysql sub-directory called restore-pvc-sample.yaml, which is applied via a small script called restore-pvc.sh. The contents of the files are shown below.

 

# Navigate to the correct directory
cd $HOME/csi-unity/mysql/standalone/unity-nas/
more restore-pvc-sample.yaml
# DO NOT CHANGE DATASOURCE NAME, AS IT IS SET AUTOMATICALLY VIA THE SCRIPT
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-restored-pv-claim-unity-nas
spec:
  storageClassName: unity-nas
  dataSource:
    name: volumeSnapshotName <<< Script will change this value
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
more restore-pvc.sh
#!/bin/bash
read -p 'volumeSnapshotName: ' volumeSnapshotName
rm -rf restore-pvc.yaml
cp restore-pvc-sample.yaml restore-pvc.yaml
sed -i "s/volumeSnapshotName/$volumeSnapshotName/g" restore-pvc.yaml
kubectl create -f restore-pvc.yaml

 

Now let’s recover the data for the deleted table “car” in the database “csi-unity-test”. This is done by creating a new persistent volume claim from the previously created baseline snapshot.

 

# FIRST STEP: GET THE SNAPSHOT NAME
kubectl get volumesnapshot
NAME                                      READYTOUSE   SOURCEPVC                  SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS         SNAPSHOTCONTENT                                    CREATIONTIME   AGE
mysql-snapshot-unity-nas-20220520134942   true         mysql-pv-claim-unity-nas                           8Gi           unity-nas-snapclass   snapcontent-2ff09247-07d4-4eab-b9ed-fe4ac187ce59   15h            15h
# SECOND STEP: RUN THE PVC RESTORATION SCRIPT. It will prompt you for the snapshot name. Provide the value from the above step
./restore-pvc.sh
volumeSnapshotName: mysql-snapshot-unity-nas-20220520134942
kubectl get pvc
NAME                                STATUS   VOLUME                                 CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql-pv-claim-unity-nas            Bound    testunitycsicluster01-vol-2e295adfce   8Gi        RWO            unity-nas      23h
mysql-restored-pv-claim-unity-nas   Bound    testunitycsicluster01-vol-a0faa7f465   8Gi        RWO            unity-nas      5s

 

As you can see from the above log, we have reconstructed a new persistent volume via the baseline snapshot.

Next, it’s time to use this persistent volume for our MySQL deployment and verify that the data restored via the baseline snapshot is intact. To do so, we will need to alter the persistent volume claim name in the mysql.yaml file as shown below

 

The persistent volume claim name is referenced as claimName in the last line of the mysql.yaml file. Edit the file and change it to the value below
PLEASE BE CAREFUL WITH YAML FORMATTING WHILE EDITING THE VALUES
cd $HOME/csi-unity/mysql/standalone/unity-nas/
Edit mysql.yaml
# Original claimName
claimName: mysql-pv-claim-unity-nas
# New claimName
claimName: mysql-restored-pv-claim-unity-nas
kubectl get deployment
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
adminer   1/1     1            1           29h
mysql     1/1     1            1           28h
kubectl delete deployment mysql
deployment.apps "mysql" deleted
kubectl create -f mysql.yaml
deployment.apps/mysql created
Error from server (AlreadyExists): error when creating "mysql.yaml": services "mysql" already exists <<< IGNORE THIS ERROR AS WE ONLY DELETED THE DEPLOYMENT
kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
adminer-5f64885f68-9wh7b   1/1     Running   0          29h
mysql-6c4598fccd-9wsc2     1/1     Running   0          2m22s
kubectl describe pod mysql-6c4598fccd-9wsc2

Volumes:
  mysql-persistent-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mysql-restored-pv-claim-unity-nas
    ReadOnly:   false
Normal  SuccessfulAttachVolume  2m55s  attachdetach-controller  AttachVolume.Attach succeeded for volume "testunitycsicluster01-vol-a0faa7f465"

 

 

As you can see the persistent volume created from the snapshot has been attached to the newly created MySQL pod. Let’s verify the data by logging into the adminer web interface and navigating to the database csi-unity-test and selecting the table “car”

dellambarhassani_22-1667195068803.png

That’s it. I hope you enjoyed and understood the end-to-end methodology of working with Dell EMC’s Unity-XT CSI implementation on EKS Anywhere clusters.

cheers,

Ambar Hassani

#iwork4dell
