October 22nd, 2022 19:00

EKS Anywhere, Dedicated management cluster model

This article is part of the EKS Anywhere series: EKS Anywhere, extending the Hybrid cloud momentum

In this article, we will describe the implementation of a scalable, distributed model of EKS Anywhere, where the workload clusters are created and managed via a long-lived, dedicated management cluster. This is a very different topology from the standalone workload clusters covered in the previous article.

In a must-watch video, we unravel the details of this dedicated management cluster topology of EKS Anywhere.

The schematic below represents the implementation model:

dellambarhassani_7-1666490126875.png

The high-level interactions of the workload cluster creation process are shown below:

 

dellambarhassani_4-1666490037081.png

The most important aspect of this implementation model is that, unlike a standalone workload cluster, EKS Anywhere does not create a KinD-based bootstrap cluster for each of the workload clusters. Instead, the temporary KinD cluster on the EKS-A administrative machine is created only to render the management cluster; from there on, all the remaining activities, i.e., creation and lifecycle management of the workload clusters, are conducted by the management cluster.
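To put this in concrete terms, the flow can be sketched with the eksctl anywhere CLI as follows (the file and kubeconfig paths are illustrative; the scripts used later in this article wrap the equivalent steps):

# Create the management cluster once; this is the only step that spins up a temporary KinD bootstrap cluster
eksctl anywhere create cluster -f c4-eksa.yaml

# Create workload clusters against the long-lived management cluster by pointing at its kubeconfig;
# no KinD bootstrap cluster is created for these
eksctl anywhere create cluster -f c4-eksa1.yaml --kubeconfig ./c4-eksa/c4-eksa-eks-a-cluster.kubeconfig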

Creating the clusters

  • We will be creating three clusters: one management cluster and two workload clusters.
  • Ensure you have an Ubuntu OS template created/customized for Kubernetes version 1.22
  • This Ubuntu OS template should be located under the Templates folder in vSphere
  • SSH into the EKS Anywhere administrative machine as the ubuntu user
  • Run the EKS Anywhere cluster creation script named create-eksa-cluster under the home directory

Management cluster

  • Provide both the workload cluster name and the management cluster name as c4-eksa (for the management cluster, the two are the same)
  • Provide the static IP address for the API server endpoint. In my case it is 172.24.165.1
  • Provide the Kubernetes version input to the bash script as 1.22

Note that the clusters below are managed workload clusters, which means they will reference c4-eksa as their management cluster (a sketch of the key YAML fields these inputs map to follows the two lists below).

First workload cluster:

  • Provide the workload cluster name as c4-eksa1
  • Provide the management cluster name as c4-eksa
  • Provide the static IP address for the API server endpoint. In my case it is 172.24.165.12
  • Provide the Kubernetes version input to the bash script as 1.22

Second workload cluster:

  • Provide the workload cluster name as c4-eksa2
  • Provide the management cluster name as c4-eksa
  • Provide the static IP address for the API server endpoint. In my case it is 172.24.165.13
  • Provide the Kubernetes version input to the bash script as 1.22
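For reference, the inputs above map onto a handful of fields in each cluster's YAML. Below is a minimal sketch (written as a shell heredoc so the exact fields are visible), using the EKS Anywhere Cluster spec field names and the values of the second workload cluster; the snippet file name is purely illustrative:

cat <<'EOF' > c4-eksa2-snippet.yaml
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: c4-eksa2                  # workload cluster name
spec:
  kubernetesVersion: "1.22"       # Kubernetes version input
  managementCluster:
    name: c4-eksa                 # management cluster name
  controlPlaneConfiguration:
    endpoint:
      host: "172.24.165.13"       # static IP for the API server endpoint
EOF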

Cluster creation exhibits

Management cluster c4-eksa 

dellambarhassani_0-1669780848638.png

Workload cluster c4-eksa1

dellambarhassani_2-1669780951469.png

Workload cluster c4-eksa2

dellambarhassani_30-1669792089421.png

Accessing the clusters c4-eksa, c4-eksa1 and c4-eksa2 via kubectl

As described in the standalone workload clusters article, use the switch cluster script to toggle the kubectl context for individual clusters. 

 

# change to the home directory and source the cluster-switch helper script to set the kubectl context
cd $HOME
source $HOME/cluster-ops/eks-anywhere/switch-cluster.sh

 

To start with:

  • Repeat the script for each of the clusters
  • While switching from one cluster to another, verify kubectl access via "kubectl get nodes" for each of the clusters (a one-shot sketch covering all three clusters follows the example below)
  • You should be able to retrieve the control plane and worker nodes for each cluster
  • This confirms that your kubectl context is set correctly for each of the clusters

Example: Validate kubectl access and get node information for the c4-eksa2 workload cluster (the same can be done individually for c4-eksa and c4-eksa1)

dellambarhassani_12-1669787660195.png
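If you want to check all three clusters in one pass without switching contexts interactively, a small sketch along these lines also works (it assumes the default per-cluster directories and kubeconfig naming that EKS Anywhere creates under the home directory, shown in the tree output later in this article):

for c in c4-eksa c4-eksa1 c4-eksa2; do
  echo "--- $c ---"
  kubectl get nodes --kubeconfig "$HOME/$c/$c-eks-a-cluster.kubeconfig"
done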

From here on, one can start deploying workloads on clusters c4-eksa1 and c4-eksa2. While c4-eksa is deemed a management cluster, it is possible to deploy any Kubernetes workload on it if the underlying capacity exists.

However, since c4-eksa is intended to be used as a management cluster from a role standpoint, it's best to avoid deploying any workloads on it and use it solely for workload cluster creation and lifecycle management of those clusters.

FURTHER OBSERVATIONS 

From the cluster creation logs shown above:

  • The temporary bootstrap cluster is created only for the management cluster c4-eksa
  • Workload clusters c4-eksa1 and c4-eksa2 do not involve any bootstrap cluster or related processes
  • c4-eksa1 and c4-eksa2 are deployed using the long-lived management capabilities embedded in the c4-eksa cluster
  • This is explained in greater detail in the video located at the top of the blog and also in the "decoding the architecture" blog

Temporary bootstrap cluster for c4-eksa management cluster

  • Open a separate terminal session to the EKS-A administrative machine while c4-eksa is being created via the cluster creation script
  • List the Docker containers using "docker ps"
  • You will notice two Docker containers spun up, namely cli-tools and the KinD bootstrap cluster running EKS Distro

dellambarhassani_0-1669771303550.png

You can exec into the Docker container running the KinD cluster and perform the same observations that we did in the previous article for the standalone workload cluster.
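A minimal sketch of that inspection (the container name is whatever docker ps reports for the KinD node; kind node images bundle kubectl, and the admin kubeconfig path below is the usual kind default):

docker ps --format 'table {{.Names}}\t{{.Image}}'
docker exec -it <kind-bootstrap-container> bash
# inside the container:
kubectl --kubeconfig /etc/kubernetes/admin.conf get pods -A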

No bootstrap cluster for c4-eksa1 and c4-eksa2 workload clusters

  • Open a separate terminal session to the EKS-A administrative machine while c4-eksa1 and c4-eksa2 are being created via the cluster creation script
  • Unlike c4-eksa, you will notice only one Docker container (cli-tools) for c4-eksa1 and c4-eksa2
  • This confirms that the EKS Anywhere software uses the long-running cluster-api resources on the c4-eksa management cluster for creating c4-eksa1 and c4-eksa2
  • As a result, they do not need a bootstrapping process as in the case of a management or standalone workload cluster

Exhibit: no bootstrap cluster docker container during c4-eksa1 cluster creation

dellambarhassani_1-1669771348426.png

Observe the cluster directories created by EKS Anywhere

Issue the tree command in the home directory of the EKS-A admin machine as shown below. You can observe the cluster directories and files created for c4-eksa, c4-eksa1 and c4-eksa2.

  • These are automatically created by EKS Anywhere
  • These sub-directories contain the important files for each cluster, including the kubeconfig, SSH keys and the cluster's YAML file

dellambarhassani_4-1669781674014.png
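For reference, the invocation is simply the following (assuming the tree utility is installed on the admin machine; the depth is limited only to keep the output readable):

cd $HOME
tree -L 2 c4-eksa c4-eksa1 c4-eksa2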

Validate the cluster nodes in vSphere

dellambarhassani_6-1669783039353.png

Observe the cluster pod differences

Earlier, we spoke about the presence of cluster-api and eksa-controller pods, along with other management-related resources, on a management or standalone cluster. We can observe this via the logs below.

While SSH'd into the EKS-A admin machine, switch the cluster contexts and observe the pods as shown below.

Management cluster: Notice that the management cluster has the cluster-api and eksa-system related pods, indicating its capability to service requests for creating and managing the lifecycle of its own and other workload clusters.

dellambarhassani_8-1669783894064.png

Workload clusters: In the case of c4-eksa2, as an example of a workload cluster, one can see there are no cluster-api or eksa-system related pods.

dellambarhassani_11-1669787531426.png
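A quick way to see this difference side by side is to run the same filtered pod listing in each context; a minimal sketch (the namespace names follow the usual Cluster API / EKS Anywhere conventions such as capi-system, capv-system and eksa-system):

# with the kubectl context set to c4-eksa (management cluster), this returns the management pods:
kubectl get pods -A | grep -E 'capi|capv|eksa-system'
# with the context switched to c4-eksa2 (workload cluster), the same filter returns nothing:
kubectl get pods -A | grep -E 'capi|capv|eksa-system'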

Observe the presence and role of cluster-api and eksa-controller in Management cluster

Use the bash script to switch to the c4-eksa management cluster and issue the command "kubectl get clusters".

dellambarhassani_15-1669788895239.png

Wondering how we are able to retrieve the clusters with the kubectl context set to c4-eksa, i.e., the management cluster? As explained in the videos, the information about the management cluster itself and the associated workload clusters is stored as Custom Resource Definitions (CRDs).

In the above command, we are retrieving information from a custom resource definition called "cluster"; that's how the CRDs get polled, and we can see all the cluster names. Similarly, all the other CRDs, e.g., machine templates, node references, etc., can be retrieved or edited via kubectl.
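A few illustrative queries along those lines (a sketch; the resource names follow upstream Cluster API and its vSphere provider, and namespaces can vary, so -A is used throughout):

kubectl get clusters -A
kubectl get machines -A
kubectl get vspheremachinetemplates -A
kubectl get crd | grep -i -E 'cluster.x-k8s.io|anywhere.eks.amazonaws.com|vsphere'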

This validates the fact that the YAML used to build each cluster was translated into these CRDs, which are then stored on the management cluster, giving it a full view of its own resources and those of the managed workload clusters.

We can retrieve the same information from the actual CRD representing the cluster information

dellambarhassani_16-1669788964968.png

The command below retrieves all the CRDs deployed on the c4-eksa management cluster.

dellambarhassani_20-1669789663747.png

The CRDs with the term vsphere in their names are of particular interest. The CRD below captures the data center configuration for each of the clusters. Note that this information about the vSphere data center configuration was also passed in the respective cluster YAML.

dellambarhassani_17-1669789127540.png

The next one refers to the machine templates for each of the node roles (etcd, control plane, worker) for the individual clusters.

dellambarhassani_18-1669789235437.png

Next, we can observe that the information about the individual nodes, i.e., 21 virtual machines (3 etcd, 2 control plane and 2 worker nodes per cluster, including the c4-eksa management cluster), is stored in CRDs.

dellambarhassani_19-1669789555967.png

Likewise, we can continue to retrieve information about other CRDs that, in combination, make up the management cluster's ability to create and manage the workload clusters associated with it. In fact, a more detailed set of information can be obtained via the kubectl describe command if one is interested in digging deeper.
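For example (a sketch; substitute whichever resource and namespace kubectl get reported in your setup):

kubectl describe clusters --all-namespaces
kubectl describe machines --all-namespaces | less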

The other point to note is that as the state of the clusters changes, the CRDs get altered via cluster-api and the other associated EKS-A components. Any state change will trigger various forms of reconciliation until the cluster state has fully reconverged.

Another important point to highlight is that additional CRDs will be added to the management cluster as and when more workload clusters are created.

Let's also observe the role of the EKS-A controller that gets deployed in the management cluster after the bootstrap process is complete.

Here we can observe logs from the EKS-A controller pod. Without getting overwhelmed, understand that the EKS-A controller listens via an API endpoint for any incoming request. In this case, the logs shown are the typical logs for creating a new workload cluster, which happens to be c4-eksa1. If you follow the log pattern and read through it, it narrates the story of the incoming request for c4-eksa1 cluster creation and the typical follow-on routines. I have snipped only a part of those logs. In case of further interest, feel free to inspect the logs of the eksa-controller in your own setup to get a comprehensive understanding of the workflow.

While SSH'd into the EKS-A admin machine, switch the kubectl context to c4-eksa using the switch-cluster bash script and then run "kubectl get pods -A". Locate the eksa-controller pod and output its logs as shown below.

dellambarhassani_1-1669793995928.png

dellambarhassani_0-1669793944181.png
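The commands behind those screenshots look roughly like this (a sketch; the exact pod and deployment names can vary by EKS Anywhere version, so list the pods first and adjust accordingly):

kubectl get pods -n eksa-system
kubectl logs -n eksa-system deploy/eksa-controller-manager -f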

What about the workload clusters: do they have any CRDs deployed on them by EKS Anywhere?

Note that the workload clusters c4-eksa1 and c4-eksa2 are NOT standalone workload clusters; they are managed via the dedicated management cluster c4-eksa.

So, the entire management capability for c4-eksa1 and c4-eksa2 resides on c4-eksa. We have already observed those CRDs in the section above. Let's take a look at the CRDs within one of the workload clusters, i.e., c4-eksa2.

dellambarhassani_22-1669791259231.png

As one can see, there are no management-related CRDs on the c4-eksa2 workload cluster. The only CRDs present are those of the default Cilium CNI, which is installed as part of the cluster creation process.
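You can reproduce that check with something along these lines (a sketch, with the kubectl context switched to c4-eksa2):

kubectl get crd
kubectl get crd | grep -i -E 'cluster.x-k8s.io|anywhere.eks.amazonaws.com'   # expect no matches
kubectl get crd | grep -i cilium                                             # only the Cilium CRDs remain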

Baseline utilization of the cluster nodes

Let's observe the utilization of the various nodes in the c4-eksa and c4-eksa1 clusters. This was captured after all three clusters had been created and before any additional workloads were deployed. Also note that all three clusters and their respective nodes (etcd, control plane, worker) are configured with 2 vCPUs and 8 GB RAM.

Management cluster

c4-eksa control plane node

dellambarhassani_23-1669791436014.png

c4-eksa worker node

dellambarhassani_24-1669791483163.png

c4-eksa etcd node

dellambarhassani_25-1669791511298.png

Workload cluster c4-eksa1

c4-eksa1 control plane node

dellambarhassani_26-1669791544762.png

c4-eksa1 worker node

dellambarhassani_27-1669791565554.png

c4-eksa1 etcd node

dellambarhassani_28-1669791591472.png

Deleting the clusters

A simple script that automates the process of deleting the clusters has been created

SSH into the EKS Anywhere administrative machine and execute the script below to delete the respective clusters.

First delete the workload clusters c4-eksa1 and c4-eksa2, and then use the same script to delete the c4-eksa management cluster.
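Under the hood, the deletion for this topology boils down to something like the following sketch of the equivalent eksctl anywhere commands (the file and kubeconfig paths are illustrative; the script referenced above automates these steps):

# delete the managed workload clusters first, pointing at the management cluster's kubeconfig
eksctl anywhere delete cluster -f c4-eksa1.yaml --kubeconfig ./c4-eksa/c4-eksa-eks-a-cluster.kubeconfig
eksctl anywhere delete cluster -f c4-eksa2.yaml --kubeconfig ./c4-eksa/c4-eksa-eks-a-cluster.kubeconfig
# then delete the management cluster itself
eksctl anywhere delete cluster -f c4-eksa.yaml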

 

 

dellambarhassani_0-1669794403407.png

That’s it! Hopefully, with the above details, you feel comfortable deploying a scalable pattern of EKS Anywhere clusters.

cheers

Ambar Hassani

#iwork4dell
