Kubernetes is open-source software that helps you deploy and manage your containerized applications. A Kubernetes cluster consists of two major components: the control plane (which decides where to run your pods) and worker nodes (where your workloads run). Kubernetes is a complex system, and managing these components is challenging; this is where a managed Kubernetes solution like Amazon Elastic Kubernetes Service (EKS) comes in. In this blog, we will see how to set up and manage EKS and take advantage of its native integration with other AWS services (Amazon CloudWatch, Amazon VPC, etc.).
Amazon EKS is a managed Kubernetes service that makes it easy to run Kubernetes on AWS without installing and operating your own cluster. AWS takes care of the heavy lifting, such as provisioning the cluster, performing upgrades, and patching. EKS runs upstream Kubernetes, so you can migrate an existing Kubernetes workload to AWS without changing your codebase. EKS also runs your infrastructure across multiple Availability Zones, eliminating any single point of failure.
An Amazon EKS cluster consists of two primary components: the control plane and the data plane (the worker nodes).
Amazon EKS completely manages your control plane, but how much of the data plane you manage yourself depends on your requirements. AWS gives you three options for running your data plane nodes: self-managed nodes, managed node groups, and AWS Fargate.
In today's competitive cloud market, numerous providers offer managed Kubernetes services. Here is a comparison to help you pick the provider that best meets your needs.
Before selecting a managed Kubernetes service, it's vital to know the strengths and weaknesses of each. All of them solve the same core problem: easily deploying and running a Kubernetes cluster. The first consideration is where your existing workload runs, as it is often easier to stay with the cloud provider you already use.
Some comparisons between the three managed services:
Those are the major differences in how the three cloud providers offer their Kubernetes solutions.
Here are the primary reasons to use EKS:
This section shows how to set up an EKS cluster using eksctl, a simple command-line utility for creating and managing EKS clusters. For more information, check the documentation at https://github.com/weaveworks/eksctl.
You must fulfill a few prerequisites before installing and setting up the EKS cluster using eksctl.
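In addition to the tools installed below, eksctl needs valid AWS credentials with permissions to create EKS and CloudFormation resources. Assuming the AWS CLI is already installed, a quick sanity check of your credentials looks like this:
# aws configure
# aws sts get-caller-identity
The aws sts get-caller-identity command prints the account ID and ARN of the identity your credentials resolve to, confirming the CLI is configured correctly. With credentials in place, the first tool to install is kubectl, the Kubernetes command-line client: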
# curl -o kubectl https://s3.us-west-2.amazonaws.com/amazon-eks/1.22.6/2022-03-09/bin/linux/amd64/kubectl
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 44.7M  100 44.7M    0     0  20.1M      0  0:00:02  0:00:02 --:--:-- 20.1M
# chmod +x ./kubectl
# mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin
# echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
# kubectl version --short --client
Client Version: v1.22.6-eks-7d68063
NOTE: To install kubectl on other platforms like Windows or macOS, check the following documentation: https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html
With kubectl installed, the next prerequisite is eksctl itself:
# curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
# sudo mv /tmp/eksctl /usr/local/bin
# eksctl version
0.96.0
With all the prerequisites in place, the next step is to create the EKS cluster. Run the eksctl create cluster command and pass the following options:
eksctl create cluster --name demo-cluster --version 1.22 --region us-west-2 --nodegroup-name standard-workers --node-type t3.medium --nodes 3 --nodes-min 1
2022-05-13 02:06:30 [ℹ] eksctl version 0.96.0
2022-05-13 02:06:30 [ℹ] using region us-west-2
2022-05-13 02:06:30 [ℹ] setting availability zones to [us-west-2d us-west-2c us-west-2b]
2022-05-13 02:06:30 [ℹ] subnets for us-west-2d - public:192.168.0.0/19 private:192.168.96.0/19
2022-05-13 02:06:30 [ℹ] subnets for us-west-2c - public:192.168.32.0/19 private:192.168.128.0/19
2022-05-13 02:06:30 [ℹ] subnets for us-west-2b - public:192.168.64.0/19 private:192.168.160.0/19
2022-05-13 02:06:30 [ℹ] nodegroup "standard-workers" will use "" [AmazonLinux2/1.22]
2022-05-13 02:06:30 [ℹ] using Kubernetes version 1.22
2022-05-13 02:06:30 [ℹ] creating EKS cluster "demo-cluster" in "us-west-2" region with managed nodes
2022-05-13 02:06:30 [ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
2022-05-13 02:06:30 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-2 --cluster=demo-cluster'
2022-05-13 02:06:30 [ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "demo-cluster" in "us-west-2"
2022-05-13 02:06:30 [ℹ] CloudWatch logging will not be enabled for cluster "demo-cluster" in "us-west-2"
2022-05-13 02:06:30 [ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-west-2 --cluster=demo-cluster'
2022-05-13 02:06:30 [ℹ]
2 sequential tasks: { create cluster control plane "demo-cluster",
    2 sequential sub-tasks: {
        wait for control plane to become ready,
        create managed nodegroup "standard-workers",
    }
}
2022-05-13 02:06:30 [ℹ] building cluster stack "eksctl-demo-cluster-cluster"
2022-05-13 02:06:31 [ℹ] deploying stack "eksctl-demo-cluster-cluster"
2022-05-13 02:07:01 [ℹ] waiting for CloudFormation stack "eksctl-demo-cluster-cluster"
2022-05-13 02:07:31 [ℹ] waiting for CloudFormation stack "eksctl-demo-cluster-cluster"
2022-05-13 02:08:31 [ℹ] waiting for CloudFormation stack "eksctl-demo-cluster-cluster"
2022-05-13 02:09:32 [ℹ] waiting for CloudFormation stack "eksctl-demo-cluster-cluster"
2022-05-13 02:10:32 [ℹ] waiting for CloudFormation stack "eksctl-demo-cluster-cluster"
2022-05-13 02:11:32 [ℹ] waiting for CloudFormation stack "eksctl-demo-cluster-cluster"
2022-05-13 02:12:33 [ℹ] waiting for CloudFormation stack "eksctl-demo-cluster-cluster"
2022-05-13 02:13:33 [ℹ] waiting for CloudFormation stack "eksctl-demo-cluster-cluster"
2022-05-13 02:14:33 [ℹ] waiting for CloudFormation stack "eksctl-demo-cluster-cluster"
2022-05-13 02:15:34 [ℹ] waiting for CloudFormation stack "eksctl-demo-cluster-cluster"
2022-05-13 02:16:34 [ℹ] waiting for CloudFormation stack "eksctl-demo-cluster-cluster"
2022-05-13 02:17:34 [ℹ] waiting for CloudFormation stack "eksctl-demo-cluster-cluster"
2022-05-13 02:18:35 [ℹ] waiting for CloudFormation stack "eksctl-demo-cluster-cluster"
2022-05-13 02:20:37 [ℹ] building managed nodegroup stack "eksctl-demo-cluster-nodegroup-standard-workers"
2022-05-13 02:20:38 [ℹ] deploying stack "eksctl-demo-cluster-nodegroup-standard-workers"
2022-05-13 02:20:38 [ℹ] waiting for CloudFormation stack "eksctl-demo-cluster-nodegroup-standard-workers"
2022-05-13 02:21:08 [ℹ] waiting for CloudFormation stack "eksctl-demo-cluster-nodegroup-standard-workers"
2022-05-13 02:21:49 [ℹ] waiting for CloudFormation stack "eksctl-demo-cluster-nodegroup-standard-workers"
2022-05-13 02:22:48 [ℹ] waiting for CloudFormation stack "eksctl-demo-cluster-nodegroup-standard-workers"
2022-05-13 02:24:35 [ℹ] waiting for CloudFormation stack "eksctl-demo-cluster-nodegroup-standard-workers"
2022-05-13 02:24:35 [ℹ] waiting for the control plane availability...
2022-05-13 02:24:35 [✔] saved kubeconfig as "/root/.kube/config"
2022-05-13 02:24:35 [ℹ] no tasks
2022-05-13 02:24:35 [✔] all EKS cluster resources for "demo-cluster" have been created
2022-05-13 02:24:35 [ℹ] nodegroup "standard-workers" has 3 node(s)
2022-05-13 02:24:35 [ℹ] node "ip-192-168-19-59.us-west-2.compute.internal" is ready
2022-05-13 02:24:35 [ℹ] node "ip-192-168-47-155.us-west-2.compute.internal" is ready
2022-05-13 02:24:35 [ℹ] node "ip-192-168-92-182.us-west-2.compute.internal" is ready
2022-05-13 02:24:35 [ℹ] waiting for at least 1 node(s) to become ready in "standard-workers"
2022-05-13 02:24:35 [ℹ] nodegroup "standard-workers" has 3 node(s)
2022-05-13 02:24:35 [ℹ] node "ip-192-168-19-59.us-west-2.compute.internal" is ready
2022-05-13 02:24:35 [ℹ] node "ip-192-168-47-155.us-west-2.compute.internal" is ready
2022-05-13 02:24:35 [ℹ] node "ip-192-168-92-182.us-west-2.compute.internal" is ready
2022-05-13 02:24:38 [ℹ] kubectl command should work with "/root/.kube/config", try 'kubectl get nodes'
2022-05-13 02:24:38 [✔] EKS cluster "demo-cluster" in "us-west-2" region is ready
# eksctl get cluster -r us-west-2
2022-05-13 02:26:00 [ℹ] eksctl version 0.96.0
2022-05-13 02:26:00 [ℹ] using region us-west-2
NAME            REGION          EKSCTL CREATED
demo-cluster    us-west-2       True
# aws eks update-kubeconfig --name demo-cluster --region us-west-2
Added new context arn:aws:eks:us-west-2:123456789:cluster/demo-cluster to /root/.kube/config
# kubectl get nodes
NAME                                           STATUS   ROLES    AGE     VERSION
ip-192-168-19-59.us-west-2.compute.internal    Ready    <none>   7m24s   v1.22.6-eks-7d68063
ip-192-168-47-155.us-west-2.compute.internal   Ready    <none>   7m27s   v1.22.6-eks-7d68063
ip-192-168-92-182.us-west-2.compute.internal   Ready    <none>   7m25s   v1.22.6-eks-7d68063
# kubectl create deployment my-demo-deploy --image=nginx --replicas=3
deployment.apps/my-demo-deploy created
# kubectl get pods -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP               NODE                                           NOMINATED NODE   READINESS GATES
my-demo-deploy-85d855f586-d9chq   1/1     Running   0          29s   192.168.92.101   ip-192-168-92-182.us-west-2.compute.internal   <none>           <none>
my-demo-deploy-85d855f586-kqr8n   1/1     Running   0          29s   192.168.53.46    ip-192-168-47-155.us-west-2.compute.internal   <none>           <none>
my-demo-deploy-85d855f586-x7bjj   1/1     Running   0          29s   192.168.18.111   ip-192-168-19-59.us-west-2.compute.internal    <none>           <none>
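Optionally, you can expose the deployment with a Kubernetes Service of type LoadBalancer; on EKS this automatically provisions an AWS load balancer in front of your pods (shown here as an illustration, not a required step):
# kubectl expose deployment my-demo-deploy --port=80 --type=LoadBalancer
# kubectl get service my-demo-deploy
Once the load balancer is ready, the EXTERNAL-IP column of the second command shows its DNS name.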
At this stage, your EKS cluster is up and running. The next step is to install CloudWatch Container Insights to collect metrics from it. But first, ensure that the right identity and access management (IAM) policy is attached to your worker nodes' instance role; in this case, the nodes need the CloudWatch Full Access policy so they can push metrics to CloudWatch.
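One way to attach the policy is with the AWS CLI (a sketch; eksctl generates a unique node instance role per nodegroup, so the role name below is a placeholder — it is the final segment of the ARN returned by the first command):
# aws eks describe-nodegroup --cluster-name demo-cluster --nodegroup-name standard-workers --region us-west-2 --query nodegroup.nodeRole --output text
# aws iam attach-role-policy --role-name <your-node-instance-role> --policy-arn arn:aws:iam::aws:policy/CloudWatchFullAccess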
# curl https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/quickstart/cwagent-fluentd-quickstart.yaml | sed "s/{{cluster_name}}/demo-cluster/;s/{{region_name}}/us-west-2/" | kubectl apply -f -
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 15896  100 15896    0     0   320k      0 --:--:-- --:--:-- --:--:--  323k
namespace/amazon-cloudwatch created
serviceaccount/cloudwatch-agent created
clusterrole.rbac.authorization.k8s.io/cloudwatch-agent-role created
clusterrolebinding.rbac.authorization.k8s.io/cloudwatch-agent-role-binding created
configmap/cwagentconfig created
daemonset.apps/cloudwatch-agent created
configmap/cluster-info created
serviceaccount/fluentd created
clusterrole.rbac.authorization.k8s.io/fluentd-role created
clusterrolebinding.rbac.authorization.k8s.io/fluentd-role-binding created
configmap/fluentd-config created
daemonset.apps/fluentd-cloudwatch created
# kubectl get all -n amazon-cloudwatch
NAME                           READY   STATUS    RESTARTS   AGE
pod/cloudwatch-agent-5295c     1/1     Running   0          4m54s
pod/cloudwatch-agent-jvxsl     1/1     Running   0          4m54s
pod/cloudwatch-agent-nncjk     1/1     Running   0          4m54s
pod/fluentd-cloudwatch-6q5m5   1/1     Running   0          4m51s
pod/fluentd-cloudwatch-9qp6f   1/1     Running   0          4m51s
pod/fluentd-cloudwatch-f8kqd   1/1     Running   0          4m51s

NAME                                 DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/cloudwatch-agent     3         3         3       3            3           <none>          4m54s
daemonset.apps/fluentd-cloudwatch   3         3         3       3            3           <none>          4m51s
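Once the agent and fluentd pods are running, Container Insights metrics should begin appearing in CloudWatch under the ContainerInsights namespace. A quick way to verify from the CLI (assuming a few minutes have passed for the first metrics to arrive):
# aws cloudwatch list-metrics --namespace ContainerInsights --region us-west-2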
AWS EKS supports virtual private cloud (VPC) networking using the Amazon VPC Container Network Interface (CNI) plugin for Kubernetes. With this plugin, Kubernetes pods have the same IP address inside the pod as they do on the VPC network. For more information, check the following link: https://github.com/aws/amazon-vpc-cni-k8s
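The plugin runs as the aws-node DaemonSet in the kube-system namespace on every node. As an optional check, you can confirm it is healthy and see which plugin version your cluster runs (the grep simply surfaces the image tag, which encodes the version):
# kubectl get daemonset aws-node -n kube-system
# kubectl describe daemonset aws-node -n kube-system | grep Image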
For EKS, Kubernetes role-based access control (RBAC) handles authorization for Kubernetes commands, while AWS identity and access management (IAM) handles both authentication and authorization for AWS commands. EKS integrates tightly with the AWS IAM Authenticator, which uses IAM credentials to authenticate to the Kubernetes cluster, so you avoid managing a separate set of credentials for Kubernetes access. Once an identity is authenticated, RBAC takes over for authorization.
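For example, to grant an additional IAM user access to the cluster, you map that identity to a Kubernetes RBAC group in the cluster's aws-auth ConfigMap. eksctl can manage this mapping in one command (a minimal sketch; the user ARN and username are placeholders, and system:masters grants full admin rights, so choose a more restrictive group in production):
# eksctl create iamidentitymapping --cluster demo-cluster --region us-west-2 --arn arn:aws:iam::1234567890:user/demo-user --username demo-user --group system:masters
# eksctl get iamidentitymapping --cluster demo-cluster --region us-west-2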
Amazon Elastic Container Registry (ECR) is a fully managed registry for storing container images. Every AWS account comes with a single default private ECR registry, in which you can create one or more repositories to store container images. ECR is well integrated with other AWS services like identity and access management (IAM), which you can use to set permissions and control access. You can also use ECR to store other artifacts, such as Helm charts.
To push your image to an ECR repository, follow these steps:
aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin 1234567890.dkr.ecr.us-west-2.amazonaws.com
NOTE: The authentication token is valid only for 12 hours from when it is issued.
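The steps below assume an ECR repository named my-eks-repo already exists in your account. If it does not, create it first:
# aws ecr create-repository --repository-name my-eks-repo --region us-west-2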
docker build -t my-eks-repo .
docker tag my-eks-repo:latest 1234567890.dkr.ecr.us-west-2.amazonaws.com/my-eks-repo:latest
NOTE: Replace 1234567890 with your own AWS account ID.
docker push 1234567890.dkr.ecr.us-west-2.amazonaws.com/my-eks-repo:latest
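Once the image is in ECR, your EKS worker nodes can pull it directly, since node roles created by eksctl include read-only ECR permissions by default. As a quick illustration (my-ecr-app is a hypothetical deployment name):
# kubectl create deployment my-ecr-app --image=1234567890.dkr.ecr.us-west-2.amazonaws.com/my-eks-repo:latest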
Amazon EKS is one of the most widely used managed Kubernetes services. In this blog, you have learned how to set up an EKS cluster and deploy a workload to it. One of the primary advantages of EKS is its integration with AWS services like identity and access management and Amazon VPC. AWS also takes care of heavy lifting like patching, upgrading, and provisioning your cluster, and if you use offerings like Fargate, AWS manages your worker nodes as well.
Squadcast is an Incident Management tool that’s purpose-built for SRE. Get rid of unwanted alerts, receive relevant notifications and integrate with popular ChatOps tools. Work in collaboration using virtual incident war rooms and use automation to eliminate toil.