
EKS Anywhere, Standalone workload clusters

This article is part of the series EKS Anywhere, extending the Hybrid cloud momentum.

This article describes an implementation model in which we deploy standalone EKS Anywhere clusters. The visual below represents the deployment model for these standalone clusters. Because there is no management cluster, each workload cluster embeds its own Cluster API components.

This video is a must-watch if you want to understand the standalone workload topology used in EKS Anywhere.

The visual below highlights how the EKS Anywhere administrative machine bootstraps a temporary KinD cluster to create the target workload cluster. The KinD bootstrap cluster hosts the Cluster API resources that communicate with vSphere and create the necessary resources for the cluster. Once the workload cluster is ready, the Cluster API resources are moved to the target workload cluster and the KinD cluster is deleted.

[Image: EKS Anywhere administrative machine bootstrapping a temporary KinD cluster to create the workload cluster]
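If you want to watch this hand-off happen, a create run leaves a kubeconfig for the temporary KinD cluster on the admin machine. A minimal sketch of peeking at the bootstrap cluster's Cluster API objects mid-run; note the kubeconfig path below is an assumption based on where EKS Anywhere typically writes its generated files, not a documented contract:

# while the cluster creation is still running, point kubectl at the bootstrap KinD cluster
# NOTE: the kubeconfig path is an assumption; adjust it to what you find on your admin machine
kubectl --kubeconfig $HOME/c4-eksa1/generated/c4-eksa1.kind.kubeconfig get clusters,machines -A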

Create your standalone workload clusters

  • Ensure you have an Ubuntu OS template, created/customized for Kubernetes version 1.22, located under the Templates folder in vSphere
  • SSH into the EKS Anywhere administrative machine as the ubuntu user
  • Run the EKS Anywhere cluster creation script named create-eksa-cluster located in the home directory
  • The script can take 15 to 20 minutes per run; a sketch of the commands such a script typically wraps follows this list
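For context, here is a minimal sketch of the eksctl anywhere commands a helper script like this typically wraps. The variable and file names are illustrative, not the actual script contents:

# illustrative sketch of what a create-eksa-cluster helper might wrap
CLUSTER_NAME=c4-eksa1
eksctl anywhere generate clusterconfig $CLUSTER_NAME --provider vsphere > $CLUSTER_NAME.yaml
# edit $CLUSTER_NAME.yaml (endpoint IP, Kubernetes version, template, datastore, ...), then:
eksctl anywhere create cluster -f $CLUSTER_NAME.yaml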

First standalone cluster:

  • Provide c4-eksa1 as both the workload cluster name and the management cluster name
  • Provide the static IP address for the API server endpoint; in my case it is 172.24.165.12
  • Provide 1.22 as the Kubernetes version input to the bash script

Second standalone cluster:

  • Provide c4-eksa2 as both the workload cluster name and the management cluster name
  • Provide the static IP address for the API server endpoint; in my case it is 172.24.165.13
  • Provide 1.22 as the Kubernetes version input to the bash script (the abridged spec below shows how these inputs map into the cluster's YAML file)
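For reference, an abridged and illustrative view of where these three inputs land in the cluster's YAML file, shown here for c4-eksa1; the control plane count is a placeholder, not the value from my environment:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: c4-eksa1                # workload/management cluster name
spec:
  kubernetesVersion: "1.22"     # Kubernetes version input
  controlPlaneConfiguration:
    count: 2                    # placeholder count
    endpoint:
      host: "172.24.165.12"     # static API server endpoint IP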

Make these observations while the cluster creation script runs for c4-eksa1 and c4-eksa2:

  • Open a separate terminal session to the EKS-A administrative machine and list the Docker containers (an example command follows this list)
  • You will notice two Docker containers spun up
  • One of the containers provides the CLI tools used by EKS Anywhere to conduct all the necessary build tooling
  • The other container is a node of the KinD bootstrap cluster that hosts the Cluster API and EKS-A controllers along with all the CRDs
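For example, from the second terminal (the container names vary per run, so the output columns here are just a convenient view):

docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.Status}}'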

[Image: Docker containers on the EKS-A administrative machine during cluster creation]

Observe the bootstrap container

The narrative below is discussed in the video located at the top and in the article "Decoding the architecture". Exec into the bootstrap container while the cluster creation script is still in progress.
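A sketch of the exec, assuming the KinD node container follows the usual <cluster-name>-control-plane naming; the container name below is an assumption, so verify the actual name with docker ps first:

# container name is illustrative; confirm it via 'docker ps' before running
docker exec -it c4-eksa1-eks-a-cluster-control-plane bash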

[Image: exec session into the KinD bootstrap container]

Run the commands below to observe the Cluster API resources and the other important artifacts related to the EKS-A system (note that CRDs are cluster-scoped, so no namespace flag is needed):

kubectl get pods -A
kubectl get crd

  • Observe the presence of the Cluster API and related pods, namely the capi and capv controllers
  • Observe the presence of several CRDs that make up the entire Cluster API based resourcing methodology
  • You can dig further into each CRD to observe the makeup of the resourcing and lifecycle management techniques, e.g., with the commands sketched below
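For example, to focus on the Cluster API and EKS-A API groups (the CRD names shown are the standard upstream names):

kubectl get crd | grep -E 'cluster.x-k8s.io|anywhere.eks.amazonaws.com'
kubectl describe crd clusters.cluster.x-k8s.io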

Cluster creation exhibits

Logs for c4-eksa1

[Image: cluster creation logs for c4-eksa1]

Logs for c4-eksa2

[Image: cluster creation logs for c4-eksa2]

Observe the locally created cluster files once the c4-eksa1 and c4-eksa2 clusters are fully created

  • One can observe two directories, namely c4-eksa1 and c4-eksa2, created in the home directory of the EKS-A admin machine
  • These are created automatically by EKS Anywhere
  • These sub-directories contain the important files for each cluster, including the kubeconfig, SSH keys, and the cluster's YAML file
  • Issue the tree command while in the $HOME directory and observe the files created, as sketched below
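For example (the file names in the comments are illustrative of EKS Anywhere's usual layout and may differ by version):

cd $HOME && tree c4-eksa1
# expect entries along the lines of:
#   c4-eksa1/c4-eksa1-eks-a-cluster.kubeconfig
#   c4-eksa1/c4-eksa1-eks-a-cluster.yaml
#   c4-eksa1/eks-a-id_rsa and eks-a-id_rsa.pub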

[Image: tree output of the locally created cluster directories]

Accessing the clusters c4-eksa1 and c4-eksa2 via kubectl

  • You are in the terminal session of the EKS-A admin machine
  • Use the switch-cluster bash script already placed on the EKS-A admin machine to alternate kubectl access between the clusters
  • The script basically sets the kubeconfig file path and the context for kubectl based on the cluster name provided as input; a minimal sketch follows this list
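A minimal sketch of what such a switch script might do; the script body here is illustrative, not the actual script contents. Because it exports KUBECONFIG, a script like this would need to be sourced (e.g., source ./switch-cluster.sh c4-eksa1) for the change to persist in your shell:

# switch-cluster.sh (illustrative)
CLUSTER_NAME=$1
export KUBECONFIG=$HOME/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
kubectl config current-context
kubectl get nodes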

[Image: switching kubectl contexts between c4-eksa1 and c4-eksa2]

The visuals above show how to switch access between clusters using the bash script. The script replaces the manual commands otherwise needed to set and change kubectl contexts.

Once kubectl access is verified, one can proceed to create additional user accounts and distribute them within the organization to deploy Kubernetes workloads on these target clusters. If SSO-based access integration is required, it is discussed in subsequent articles with a use case for a Keycloak-based OIDC provider.

Observe the management capabilities present on the standalone workload clusters

As described in the video associated with this article, once the cluster is fully created, the Cluster API resources along with the eksa-system resources are moved from the bootstrap cluster to the target standalone or management cluster. In this case, c4-eksa1 and c4-eksa2 serve as standalone workload clusters. In addition, the cluster creation process also places the default CNI (Cilium) and the VMware CSI driver on the cluster.

  • Switch the kubectl context to the respective c4-eksa1 or c4-eksa2 cluster using the bash script as shown above
  • Run "kubectl get pods -A" and "kubectl get crd" to observe the resources that were moved from the bootstrap cluster to the respective standalone workload cluster; a couple of additional commands worth trying are sketched below
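For example, a couple of illustrative commands against the eksa-system namespace, where the moved Cluster API objects such as the Machine resources live:

kubectl get pods -n eksa-system
kubectl get machines -n eksa-system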

c4-eksa1

[Image: pods and CRDs on the standalone workload cluster c4-eksa1]

Review the custom resources created in the standalone workload cluster c4-eksa1, especially those related to vSphere. You can observe further details on a CRD by issuing kubectl describe crd <crd-name> (CRDs are cluster-scoped), and you can list the corresponding custom resources with -n eksa-system as the namespace. These custom resource definitions are essential to maintaining the state of the cluster and the underlying resources associated with it.
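For instance (the CRD and resource names below are the standard Cluster API Provider vSphere names):

kubectl get crd | grep vsphere
kubectl describe crd vspheremachines.infrastructure.cluster.x-k8s.io
kubectl get vspheremachines -n eksa-system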

[Image: vSphere-related custom resources in c4-eksa1]

c4-eksa2

[Image: pods and CRDs on the standalone workload cluster c4-eksa2]

Review the custom resources created in the standalone workload cluster c4-eksa2 in the same way, using the commands shown above for c4-eksa1. These custom resource definitions are likewise essential to maintaining the state of the cluster and the underlying resources associated with it.

[Image: vSphere-related custom resources in c4-eksa2]

Observing things from a vSphere console

Log in to the vSphere cluster via the web console and observe the machines created under the test-eks-anywhere folder, which is used as a default value in the cluster's YAML file.

[Image: vSphere console showing the cluster virtual machines under the test-eks-anywhere folder]

Baseline utilization for the virtual machines within a cluster

The YAML file used to create the clusters has a resource specification of 2 vCPUs and 8192 MB (8 GB) RAM per node. One can observe the baseline utilization for each of the node roles within the standalone workload cluster.

Additionally, one can notice that all the other parameters used in the YAML file, e.g., network name and datastore name, are applied to each of the cluster nodes. An abridged machine config illustrating where these fields sit is sketched below.
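An abridged, illustrative VSphereMachineConfig; the metadata name and the datastore, folder, and template values are examples, not the exact contents of my cluster file:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: VSphereMachineConfig
metadata:
  name: c4-eksa1-cp             # illustrative name
spec:
  numCPUs: 2
  memoryMiB: 8192               # 8 GB RAM
  diskGiB: 25
  datastore: "datastore1"       # example value
  folder: "test-eks-anywhere"
  template: "ubuntu-kube-v1.22" # example template name
  osFamily: ubuntu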

etcd node

[Image: etcd node baseline utilization]

control plane node

[Image: control plane node baseline utilization]

worker node

[Image: worker node baseline utilization]

Deleting the standalone workload clusters

Simply run the delete-eksa-cluster bash script located in the home directory of the EKS-A admin machine, providing the same name for the workload and management cluster as input. It will clean up all the resources deployed as part of the cluster; a sketch of what such a script typically wraps follows below.
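For context, a sketch of the eksctl anywhere command such a delete script might wrap; the file path follows the per-cluster directory layout shown earlier and is an assumption about your environment:

CLUSTER_NAME=c4-eksa1
eksctl anywhere delete cluster -f $HOME/$CLUSTER_NAME/$CLUSTER_NAME-eks-a-cluster.yaml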

[Image: cluster deletion output]

Hopefully, this article provided you with an adequate understanding of the standalone cluster topology used in EKS Anywhere.

cheers,

Ambar Hassani

#iwork4dell
