First look at Amazon Web Services’ EKS Service - By Dejan Gregor



The purpose of this exercise was to assess the user experience of setting up an AWS EKS cluster in Ireland (eu-west-1) compared to provisioning a cluster with KOPS.

The AWS EKS managed control plane is charged at $0.30 per hour, which totals $216.00 per month. It delivers the functionality of the API server and etcd across several Availability Zones in the region. It took at least 5 minutes to get the control plane created.






Running my first AWS CLI commands to interrogate the freshly created EKS control plane. Hooray!

aws eks describe-cluster --name test-demo-eks --query cluster.status 
"ACTIVE" 

aws eks describe-cluster --name test-demo-eks --query cluster.endpoint --output text 


The next step was to download the AWS custom client binaries, which include kubectl and the IAM authenticator; kubectl invokes the IAM authenticator on every request. I would have expected AWS to rename its kubectl build (e.g. eksctl).
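For reference, the wiring between the two binaries lives in kubeconfig rather than in kubectl itself. A sketch of the users stanza, assuming the cluster name test-demo-eks from above and the exec-plugin API version EKS documented at the time:

```yaml
# Sketch of a kubeconfig "users" entry: a stock kubectl calls out to
# aws-iam-authenticator to mint a short-lived token for each request.
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - token
        - -i
        - test-demo-eks   # EKS cluster name
```

Because authentication is an exec plugin, any recent stock kubectl works; only the authenticator binary is AWS-specific.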

The provisioning of the worker nodes with the CloudFormation template was not intuitive. This should be integrated with the process of creating the cluster (the EKS managed control plane). It took approximately 10 minutes to create a single-instance worker node, and deleting the stack also took about ten minutes.
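A hedged sketch of that worker-node step, using the stack name from the bootstrap script below; the template filename and parameter key are assumptions based on the nodegroup template AWS published in its getting-started guide:

```shell
# Create the worker-node stack from AWS's sample nodegroup template
# (file name and parameters are illustrative, not from this exercise).
aws cloudformation create-stack \
  --stack-name test-demo-eks-nodes-cf \
  --template-body file://amazon-eks-nodegroup.yaml \
  --capabilities CAPABILITY_IAM \
  --parameters ParameterKey=ClusterName,ParameterValue=test-demo-eks
```

The cluster name, security group and subnet IDs all have to be pasted in as parameters by hand, which is exactly the friction described above.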



Installing Tiller was difficult, and getting the Helm client binary to communicate with the EKS cluster failed with authentication errors. It is not integrated with the IAM authenticator.
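For anyone hitting the same wall: the conventional sequence at the time was to give Tiller its own service account before initialising Helm. A sketch (the names are the conventional ones, not taken from this exercise):

```shell
# Give Tiller a dedicated service account with cluster-admin rights,
# then install it referencing that account (Helm v2 era).
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller
helm init --service-account tiller
```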

After worker nodes were created and joined, they became available quickly.

kubectl get nodes --watch 
NAME                                        STATUS     ROLES     AGE       VERSION 
ip-172-30-2-48.eu-west-1.compute.internal   NotReady   <none>    3s        v1.10.3 
ip-172-30-3-16.eu-west-1.compute.internal   NotReady   <none>    1s        v1.10.3 

kubectl get nodes 
NAME                                        STATUS    ROLES     AGE       VERSION 
ip-172-30-2-48.eu-west-1.compute.internal   Ready     <none>    36s       v1.10.3 
ip-172-30-3-16.eu-west-1.compute.internal   Ready     <none>    34s       v1.10.3 

I was initially surprised at how many private IP addresses were assigned to the EC2 worker node instance: the AWS VPC CNI plugin pre-allocates a pool of secondary VPC addresses on the node's network interfaces and hands them out to pods. The advantage is that connectivity between applications on Kubernetes can be managed natively with AWS Security Groups. Impressive!
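Those addresses are visible from the AWS CLI as well; a sketch (the instance ID is a placeholder):

```shell
# List every private IP attached to the worker node's network interfaces,
# including the secondary addresses the VPC CNI hands out to pods.
aws ec2 describe-instances \
  --instance-ids i-0123456789abcdef0 \
  --query 'Reservations[].Instances[].NetworkInterfaces[].PrivateIpAddresses[].PrivateIpAddress' \
  --output text
```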


The user data on the EKS worker node instance is very basic. In contrast, the user data KOPS generates is comprehensive:


#!/bin/bash
set -o xtrace
/etc/eks/bootstrap.sh test-demo-eks
/opt/aws/bin/cfn-signal --exit-code $? \
        --stack test-demo-eks-nodes-cf \
        --resource NodeGroup \
        --region eu-west-1

I deployed a replica set (rs) of two pods running the open-source GeoServer application. Deployment was straightforward, as I could easily take a prebuilt image from Docker Hub and successfully run it in Kubernetes.
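On Kubernetes 1.10 that deployment is a one-liner; a sketch, where the image name kartoza/geoserver is an assumption (any prebuilt GeoServer image from Docker Hub behaves the same way):

```shell
# kubectl run on 1.10 creates a Deployment, which owns a ReplicaSet.
kubectl run geoserver --image=kartoza/geoserver --replicas=2 --port=8080

# Verify the ReplicaSet and its two pods.
kubectl get rs
kubectl get pods -l run=geoserver
```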



Summary 

Linking authentication with IAM and providing a dashboard for the cluster are to be welcomed. However, not being able to choose the Kubernetes version, together with the lack of some standard deployment packages (e.g. an ingress controller, Helm, a network CLI), was limiting.


The new Kubernetes cluster setup lacked the user-friendliness of an end-to-end workflow. For example, I had to deploy a CloudFormation template and copy and paste numerous values (e.g. cluster name, IAM role ARN and security group).

In addition, I had to run kubectl commands to link the worker nodes to the cluster control plane.
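Concretely, that linking step means applying the aws-auth ConfigMap so the kubelets' IAM role is allowed to register nodes. A sketch, with the role ARN left as a placeholder:

```yaml
# aws-auth ConfigMap: maps the worker nodes' IAM role to Kubernetes
# groups so the kubelets can join the cluster (applied with kubectl apply -f).
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of the worker node instance role>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
```

Until this ConfigMap exists, the nodes never appear in kubectl get nodes, which is easy to mistake for a provisioning failure.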

Overall, I still find KOPS much easier and more user-friendly for deploying a new cluster today. However, I expect that AWS will deliver a superior Kubernetes service in the near future.