Monday, August 10, 2020

Deploy a .NET microservice on AWS Kubernetes using the AWS management console


Hi there, I am excited to share my second blog, on deploying an app to AWS Kubernetes, which is nothing but EKS. In this particular blog, I have taken the approach of manually creating the EKS cluster using the AWS management console, just to learn how it works, which I feel is very important.

In my last blog, which was the very first blog I wrote, I talked about containerizing an application. If you haven't read it, please go to this link:

https://leogether.blogspot.com/2020/08/containerizing-net-microservice.html?zx=5b6e13926dfa506

Before we move on to the hands-on part, I would like to highlight the prerequisites and the steps we are going to follow, as well as give a bit of an understanding of Kubernetes.

To learn about Kubernetes, please see my blog here:

 https://leogether.blogspot.com/2020/08/kubernetes-container-orchestration.html

Pre-requisites

1. An AWS account (the free tier is sufficient) with access to the management console.

2. AWS CLI installed and configured.
    Test using the command: aws --version
    
    Once installed, configure your local machine using the command: aws configure
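
    For reference, aws configure prompts you for four values; the region and output format shown here are just example choices:

    AWS Access Key ID [None]: <your access key id>
    AWS Secret Access Key [None]: <your secret access key>
    Default region name [None]: ap-southeast-2
    Default output format [None]: json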

3. Install and configure kubectl.
    Test that it is working using the command: kubectl version --short --client
    

4. We need a containerized application to deploy to the EKS cluster. I will be using the same application as in my first blog here: https://leogether.blogspot.com/2020/08/containerizing-net-microservice.html?zx=5b6e13926dfa506

5. (Optional) Postman to test the application deployed in the EKS cluster. You can test via browser as well.

6. (Optional) AWS Documentation.
Link: https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html  I referred to the AWS documentation while creating this blog, but here I explain each and every step in a lot more detail.

To get started with EKS, here is an overview of the steps we will be performing:

1. Create an IAM role for the EKS cluster
2. Create a VPC for the cluster using a CloudFormation template
3. Create the EKS cluster
4. Connect to the cluster using kubectl
5. Create an IAM role for the worker nodes and add a node group
6. Deploy the application to the cluster (deployment and service)

Now, let us go ahead and do some hands-on.

Provision an Amazon EKS Cluster

Create an IAM Role for EKS Cluster

First, let's create an IAM role that grants the Kubernetes control plane permission to manage AWS resources on your behalf.

Log in to the AWS management console with a user account that has EKS privileges. You should follow the principle of least privilege.

Go to IAM under the AWS management console.

Click on Roles, then Create Role button

Select EKS as the service.

Then select EKS - Cluster as the use case.

Click on Next and you will see the AmazonEKSClusterPolicy attached to the role:

Click on Next till the last screen, give the role a name, and create it.

The new role will be created as below:
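
As a side note, if you prefer the command line over the console, the same role can be created with the AWS CLI. This is just a sketch; the role name eksClusterRole and the file name trust-policy.json are my own choices. First, save a trust policy that allows the EKS service to assume the role as trust-policy.json:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}

Then create the role and attach the AmazonEKSClusterPolicy managed policy:

aws iam create-role --role-name eksClusterRole --assume-role-policy-document file://trust-policy.json

aws iam attach-role-policy --role-name eksClusterRole --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy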


Create VPC for EKS Cluster

Now we will use a CloudFormation template to create a cluster VPC with public and private subnets. AWS already provides a template for creating an EKS VPC, which is what we will use. The template URL is:

https://amazon-eks.s3.us-west-2.amazonaws.com/cloudformation/2020-07-23/amazon-eks-vpc-private-subnets.yaml

This template will create:

  • A VPC and its necessary network resources
  • 2 public subnets and 2 private subnets
  • A security group

If you want more visibility into what resources will be created, please read https://docs.aws.amazon.com/eks/latest/userguide/create-public-private-vpc.html

Go to CloudFormation under the management console.

Click the Create Stack button and use the above YAML template URL as below:

Click on Next, give the stack a name, then continue through the remaining screens to create the stack.
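
If you would rather script this step, the same stack can be created with the AWS CLI; the stack name here is just an example:

aws cloudformation create-stack \
  --stack-name my-eks-vpc-stack \
  --template-url https://amazon-eks.s3.us-west-2.amazonaws.com/cloudformation/2020-07-23/amazon-eks-vpc-private-subnets.yaml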

You can go to the VPC in the management console to see the details of the resources created. 

Under CloudFormation, please make a note of the stack's output values (SecurityGroups, VpcId, and SubnetIds), as we will need them later.


Creating EKS Cluster

Now go to EKS under the management console and, once you land on the default page, click on Next step, enter the name of the cluster, leave everything else as default, and click Next.

On the next screen, choose the VPC, subnets, and security group that were created by the CloudFormation template.

Choose the Public and private option for Cluster endpoint access and click Next.

Leave logging disabled for now, as this is just a demo; click Next and then click Review and Create.
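
For completeness, the cluster can also be created from the CLI. This is a sketch; substitute the role ARN, subnet IDs, and security group ID you noted from the earlier steps:

aws eks create-cluster \
  --name <name of your cluster> \
  --role-arn arn:aws:iam::<account id>:role/eksClusterRole \
  --resources-vpc-config subnetIds=<subnet ids from the stack outputs>,securityGroupIds=<security group id from the stack outputs>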

The status field shows CREATING until the cluster provisioning completes.

Once the cluster is created, the status changes to Active.


Connect to EKS (using kubectl)

Next, as mentioned before, we need to point the kubectl tool at the cluster so that we can access the Kubernetes cluster.

We will run the update-kubeconfig command as below; it creates or updates the kubeconfig file that kubectl uses to manage the EKS cluster.

aws eks update-kubeconfig --region <your region> --name <name of your cluster>

You can also verify that the config file was created under C:\Users\<your user name>\.kube on Windows (or ~/.kube/config on Linux and macOS).

To verify that kubectl can talk to the cluster, test your configuration using the command:

 kubectl get svc

If the above command returns the default kubernetes service, the Kubernetes master is running without any issues.

To see the cluster information, type: kubectl cluster-info


Optional - I am also listing some other useful commands here that you may like to run for your own understanding:

To check the status of EKS cluster:

aws eks describe-cluster --name <your cluster name> --query cluster.status --output text

To see the details of EKS Cluster:

aws eks describe-cluster --name <your cluster name>

To find out how many contexts you have running:

kubectl config get-contexts

There can be one or more contexts. A context is essentially a named pointer to a cluster (along with a user and an optional namespace). The one marked with * is the current context.

Or to know the current context (cluster) running: 

kubectl config current-context

To list the namespaces created for the cluster:

kubectl get ns


Deploy worker nodes

Now we need to create worker nodes which are actually EC2 instances. 

If you run the command: kubectl get nodes, you will see "No resources found".

This means the cluster is created and the master is up and running, but no worker nodes have been created yet. Note that pods can only be scheduled on nodes.

Create an IAM role for Worker nodes

First, create an IAM role for the worker nodes using the same steps as for the EKS cluster role, except that this role trusts the EC2 service and needs the AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy, and AmazonEC2ContainerRegistryReadOnly policies attached, as shown in the sketch below:
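
Here is the equivalent CLI sketch; eksNodeRole and node-trust-policy.json are my own names. Save the following as node-trust-policy.json so that EC2 instances can assume the role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}

Then create the role and attach the three managed policies:

aws iam create-role --role-name eksNodeRole --assume-role-policy-document file://node-trust-policy.json

aws iam attach-role-policy --role-name eksNodeRole --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy

aws iam attach-role-policy --role-name eksNodeRole --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy

aws iam attach-role-policy --role-name eksNodeRole --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly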

Create Worker Nodes and add them to the cluster

Go to the EKS console and choose the cluster that you created. Go to the Compute tab.

Click on Add Node Group

Configure the node group as below; some of the sections will be pre-selected.

NodeInstanceRole is the IAM role for the worker nodes that we just created above.

Select all the subnets that were created for the VPC.

Allow remote access will let you SSH into the nodes.


Go to Step 2, where you can define the AMI, instance type, and disk size.

Step 3 defines scaling policies for worker nodes

Once done, click on Create.
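
The same node group can be created from the CLI if you prefer; the node group name, instance type, and scaling numbers below are just example values:

aws eks create-nodegroup \
  --cluster-name <name of your cluster> \
  --nodegroup-name my-node-group \
  --node-role arn:aws:iam::<account id>:role/eksNodeRole \
  --subnets <subnet ids from the stack outputs> \
  --instance-types t3.medium \
  --scaling-config minSize=1,maxSize=3,desiredSize=2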

It will take a few minutes for the worker nodes to get up and running.

You can run the kubectl get nodes --watch command to watch the status of your nodes and wait for them to reach the Ready status.

Once ready, you will see them reported in the output, and the worker nodes are good to go.

But there are no pods or deployments yet, which means no application is running.

Run these commands to check pods and deployment status:

kubectl get pods

kubectl get deploy

So let's go to the last step.

Deploy Application on EKS

Remember, in my first blog https://leogether.blogspot.com/2020/08/containerizing-net-microservice.html?zx=5b6e13926dfa506, we containerized the microservice. I am using the same microservice to deploy in the cluster. 

We need to create a YAML file of kind "Deployment" to describe our Kubernetes artifacts, or in other words, to describe the deployment of our container in Kubernetes. It contains the definition of the desired state of our application, which Kubernetes reads and then makes happen.

So now go back to the application we created in the last blog and open it in VS Code.

First, install the Kubernetes VS Code extension if you do not already have it, so that it can generate the scaffolding we need. By default this will also install the YAML extension.

Next, create a new file in the project directory called deployment.yml.

Now open deployment.yml, type deploy, and select Kubernetes Deployment from the suggestions.

This will create a scaffold as below:

I have updated the scaffold for our microservice as below, with an explanation against each field:
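
Since the original screenshots are not reproduced here, below is a minimal sketch of what the finished deployment.yml might look like. The name sample-microservice, the image name, the replica count, and the container port are assumptions; replace them with the values from your own container image:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-microservice
spec:
  replicas: 2                          # number of pod copies Kubernetes keeps running
  selector:
    matchLabels:
      app: sample-microservice         # must match the pod template labels below
  template:
    metadata:
      labels:
        app: sample-microservice
    spec:
      containers:
        - name: sample-microservice
          image: <your docker id>/sample-microservice:latest   # hypothetical image name
          ports:
            - containerPort: 80        # ASP.NET Core listens on port 80 inside the container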

Now run this command to apply the deployment.yml file:

kubectl apply -f deployment.yml

Now if you run kubectl get deploy and kubectl get pods you will see the output as below:


To check whether the pods are alive, run kubectl get pods again and confirm that each pod shows a Running status.

Even though the container is now running inside the Kubernetes cluster, that is not enough to query the API inside it. The containers are alive, but there is no way to access them from outside the cluster.

We need another YAML file, of kind "Service", that exposes the deployment's pods outside the cluster. This will create a load balancer service that gives our pods a DNS name.

Create a service.yml file and generate a scaffold the same way, by typing service and selecting the Kubernetes Service snippet:

The file is updated as below:
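
For reference, here is a minimal sketch of the service.yml. The service name matches the one we describe later; the selector must match the deployment labels; and the ports are assumptions based on the endpoint we call at the end (port 8080 on the load balancer forwarding to port 80 in the container):

apiVersion: v1
kind: Service
metadata:
  name: sample-microservice-service
spec:
  type: LoadBalancer                   # provisions an AWS ELB with a public DNS name
  selector:
    app: sample-microservice           # routes traffic to pods with this label
  ports:
    - port: 8080                       # port exposed on the load balancer
      targetPort: 80                   # port the container listens on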

Deploy service.yml using: kubectl apply -f service.yml

Now if you run the command kubectl get svc, you will see the new service along with its external load balancer address.

To get the details of the application:

Run the command: kubectl describe svc sample-microservice-service

 

You can also check the load balancer's inbound rules in the EC2 console:

Go to Postman and call your load balancer endpoint, in my case it is:

http://a0db56ec8526349e48e7738ab829b619-435976258.ap-southeast-2.elb.amazonaws.com:8080/weatherforecast
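
If you prefer the command line over Postman, curl works just as well (the host below is my load balancer; use your own):

curl http://a0db56ec8526349e48e7738ab829b619-435976258.ap-southeast-2.elb.amazonaws.com:8080/weatherforecast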

Now the service is up and running and is accessible via the load balancer endpoint.

So this is it. I hope my blog is helpful. In my next blog, I will create all of the required resources to get started with Amazon EKS using eksctl, a simple command-line utility for creating and managing Kubernetes clusters on Amazon EKS.
