Monday, August 10, 2020

Deploy a .NET microservice on AWS Kubernetes using the AWS Management Console

Hi there, I am excited to share my second blog, on deploying an app to AWS Kubernetes, which is nothing but EKS (Elastic Kubernetes Service). In this particular blog, I have taken the approach of manually creating an EKS cluster using the AWS Management Console, just to learn how it works, which I feel is very important.

In my last blog, which was the very first blog I created, I talked about containerizing an application. If you haven't seen it, please go to this link:

Before we move on to the hands-on part, I would like to highlight the prerequisites and the steps we are going to follow, along with a bit of understanding of Kubernetes.

To know about Kubernetes, please see my blog here:


1. An AWS Management Console free-tier account.

2. AWS CLI installed and configured.
    Test using the command: aws --version
    Once installed, configure your local machine using the command: aws configure

3. kubectl installed and configured.
    Test if it is working using the command: kubectl version --short --client

4. We need a containerized application to deploy to the EKS cluster. I will be using the same application as in my first blog here:

5. (Optional) Postman to test the application deployed in the EKS cluster. You can test via browser as well.

6. (Optional) AWS Documentation.
Link:  I have referred to the AWS documentation to create this blog, but with a lot more detailed explanation of each and every step.

To get started with EKS, I have listed the steps below in detail to give you an overview of what we will be doing:

Now, let us go ahead and do some hands-on.

Provision an Amazon EKS Cluster

Create an IAM Role for EKS Cluster

First, let's create an IAM role that grants the Kubernetes control plane permission to manage AWS resources on your behalf.
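For reference, what makes this role usable by EKS is its trust relationship, which allows the EKS service to assume the role. When you pick the EKS use case in the console (as we do below), the console generates a trust policy for you that looks roughly like this sketch:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

You never have to type this in for the console flow; it is only shown here so the "Select EKS" step below makes sense.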

Log in to the AWS Management Console with a user account that has EKS privileges. You should follow the principle of least privilege.

Go to IAM in the AWS Management Console

Click on Roles, then the Create Role button

Select EKS

Then select EKS - Cluster as the use case

Click on next and you will see this role:

Click on Next until the last screen and create the role.

The new role will be created as below:

Create VPC for EKS Cluster

Now we will use a CloudFormation template to create a cluster VPC with public and private subnets. AWS already provides a template for creating a VPC for EKS, which we will be using. The template URL is:

This template will create:

  • A VPC and its necessary network resources
  • 2 public subnets and 2 private subnets
  • A security group
If you want more visibility into what resources will be created, please read the link.
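To give a rough idea of what the stack hands back, the Outputs section of an EKS VPC template of this kind exposes the IDs you will need when creating the cluster. The exact output and resource names vary by template version, so treat this as an illustrative sketch, not the real template:

```yaml
# Sketch of the kind of Outputs an EKS VPC template exposes
Outputs:
  VpcId:
    Description: The VPC Id
    Value: !Ref VPC
  SubnetIds:
    Description: All subnets in the VPC
    Value: !Join
      - ","
      - [!Ref PublicSubnet01, !Ref PublicSubnet02, !Ref PrivateSubnet01, !Ref PrivateSubnet02]
  SecurityGroups:
    Description: Security group for communication with the cluster control plane
    Value: !Ref ControlPlaneSecurityGroup
```

These are exactly the values we note down after the stack is created and feed into the cluster creation screen later.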

Go to CloudFormation in the Management Console,

Click the Create Stack button and use the above YAML template as below:

Click on Next, give the stack a name, then continue through the remaining screens to create the stack.

You can go to the VPC in the management console to see the details of the resources created. 

Under CloudFormation, please make a note of the output values, as we will need them later.

Creating EKS Cluster

Now go to EKS in the Management Console, and once you land on the default page,

Click on Next step, enter the name of the cluster, leave everything else as default, and click on Next.

On the next screen, choose the VPC, subnets, and security group that were created by the CloudFormation template.

Choose the Public and private endpoint option for cluster endpoint access and click Next.

Leave logging disabled for now, as this is just a demo; click on Next and then click on Review and Create.

The status field shows CREATING until the cluster provisioning completes.

Once the cluster is created, the status changes to Active.

Connect to EKS (using kubectl)

Next, as mentioned before, we need to point the kubectl tool at the cluster so that we can access the Kubernetes cluster.

We will run the update-kubeconfig command as below, and the configuration file will be created or updated in the default location. This file is used to manage access to the EKS cluster.

aws eks --region <your-region> update-kubeconfig --name <name of your cluster>

You can also verify that the config file is created under C:\Users\<your user name>\.kube
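If you open that file, you can see how kubectl is wired to the cluster and how it authenticates through the AWS CLI. Below is a trimmed sketch of what update-kubeconfig writes; the ARNs, endpoint, and certificate data are placeholders, and the exact layout can differ slightly between CLI versions:

```yaml
# Sketch of the kubeconfig entries generated by aws eks update-kubeconfig
apiVersion: v1
kind: Config
clusters:
- name: <your-cluster-arn>
  cluster:
    server: https://<your-cluster-endpoint>.eks.amazonaws.com
    certificate-authority-data: <base64-encoded-ca-cert>
contexts:
- name: <your-cluster-arn>
  context:
    cluster: <your-cluster-arn>
    user: <your-cluster-arn>
current-context: <your-cluster-arn>
users:
- name: <your-cluster-arn>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws
      args: ["eks", "get-token", "--cluster-name", "<name of your cluster>"]
```

The exec section is the interesting part: every kubectl call shells out to the AWS CLI to fetch a short-lived token, so your IAM credentials are what actually authenticate you to the cluster.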

To verify that kubectl is able to talk to your cluster, test your configuration using the command:

 kubectl get svc

If the above command returns the default kubernetes service, the Kubernetes master is running without any issues.

To know the cluster information, type kubectl cluster-info

Optional - I am also listing here other useful commands that you may like to run for your own understanding:

To check the status of EKS cluster:

aws eks describe-cluster --name <your cluster name> --query cluster.status --output text

To see the details of EKS Cluster:

aws eks describe-cluster --name <your cluster name>

To find out how many contexts you have running:

kubectl config get-contexts

There can be one or more contexts. A context is essentially a reference to a cluster (together with the user and namespace used to access it). The one marked with * is the current context.

Or to know the current context (cluster) running: 

kubectl config current-context

To list the namespaces created for the cluster:

kubectl get ns

Deploy worker nodes

Now we need to create worker nodes, which are actually EC2 instances.

If you run the command kubectl get nodes, you will see "No resources found".

This means the cluster is created and the master is up and running, but no nodes have been created yet. Note that pods can only be scheduled on nodes.

Create an IAM role for Worker nodes

First, create an IAM role for the worker nodes as below (using the same steps as we did for the EKS cluster role, but this time selecting EC2 as the trusted service):
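The key difference from the cluster role is that the worker-node role trusts EC2 rather than EKS, since it is the EC2 instances that assume it. Per the AWS documentation, this role also needs the AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy, and AmazonEC2ContainerRegistryReadOnly managed policies attached. A sketch of the trust relationship the console generates when you pick EC2:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```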

Create Worker Node and add them to the cluster

Go to the EKS console and choose the cluster that you created. Go to the Compute tab.

Click on Add Node Group

Configure the node group as below; some of the sections will be pre-selected.

NodeInstanceRole is the IAM role for the worker nodes that we just created above.

Select all subnets that were created for VPC.

Allow remote access will let you SSH into the nodes.

Go to Step 2, where you can define the AMI, instance type, and disk size.

Step 3 defines the scaling policies for the worker nodes.
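Under the hood, what this step configures boils down to three numbers on the node group. As a sketch (the values here are just illustrative assumptions, pick what suits your workload), this is the shape of the scaling configuration in the EKS node group API:

```json
{
  "scalingConfig": {
    "minSize": 1,
    "maxSize": 3,
    "desiredSize": 2
  }
}
```

desiredSize is how many EC2 instances the group launches to begin with; the group will never shrink below minSize or grow beyond maxSize.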