Monitoring Kubernetes Cluster — Part I
Quick and Easy way to Monitor your Kubernetes cluster.
In my previous blogs, we covered “Getting started with k8s” and a “Hello World” application on K8s. In this blog, we will cover how to monitor our cluster and how to get some basic resource metrics from our Kubernetes cluster.
First, let's clear our understanding of the resource types and units of measure.
In Kubernetes, we can specify the maximum amount of a resource a container may use with the “limits” field, and the minimum amount it needs with the “requests” field, in our definition YAML file.
Limits and requests for CPU resources are measured in cpu units. One cpu, in Kubernetes, is equivalent to 1 AWS vCPU or 1 GCP core. We can specify a value for CPU resources as a fraction. The expression 0.1 is equivalent to the expression 100m, which can be read as “one hundred millicpu”. Some people say “one hundred millicores”, and this is understood to mean the same thing.
Limits and requests for memory are measured in bytes. You can express memory as a plain integer or as a fixed-point number using one of these suffixes: E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki.
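To see how these units fit together, here is a minimal container spec sketch. The pod name, image, and values are illustrative, not from any specific deployment:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo        # illustrative name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:              # minimum guaranteed to the container
        cpu: 100m            # 0.1 CPU ("one hundred millicpu")
        memory: 128Mi        # 128 mebibytes
      limits:                # maximum the container may use
        cpu: "0.5"           # same as 500m
        memory: 256Mi
```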
Difference between MiB and MB?
The megabyte is a multiple of the unit byte for digital information. Its recommended unit symbol is MB. The unit prefix mega is a multiplier of 1,000,000 (10^6) in the International System of Units (SI). Therefore, one megabyte is one million bytes of information. This definition has been incorporated into the International System of Quantities.
The mebibyte is a multiple of the unit byte for digital information. The binary prefix mebi means 2^20; therefore one mebibyte is equal to 1,048,576 bytes, i.e., 1024 kibibytes. The unit symbol for the mebibyte is MiB.
Local ephemeral storage
FEATURE STATE: Kubernetes v1.16 beta
Kubernetes version 1.8 introduced a new resource, ephemeral-storage, for managing local ephemeral storage. On each Kubernetes node, the kubelet’s root directory (/var/lib/kubelet by default) and log directory (/var/log) are stored on the root partition of the node. This partition is also shared and consumed by Pods via emptyDir volumes, container logs, image layers, and container writable layers.
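Ephemeral storage can be requested and limited in the same way as CPU and memory. A minimal sketch, with illustrative names and values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ephemeral-demo       # illustrative name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        ephemeral-storage: 2Gi   # minimum local scratch space needed
      limits:
        ephemeral-storage: 4Gi   # exceeding this can get the Pod evicted
```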
What happens when we exceed limits?
- If a Container exceeds its memory limit, it might be terminated. If it is restartable, the kubelet will restart it, as with any other type of runtime failure.
- If a Container exceeds its memory request, it is likely that its Pod will be evicted whenever the node runs out of memory.
- A Container might or might not be allowed to exceed its CPU limit for extended periods of time. However, it will not be killed for excessive CPU usage.
Now that our understanding of resource types is clear, let's implement two of the many methods to monitor our cluster.
1. Dashboard (Web UI)
The dashboard is a web-based Kubernetes user interface. You can use Dashboard to deploy containerized applications to a Kubernetes cluster, troubleshoot your containerized application, and manage the cluster resources. You can use Dashboard to get an overview of applications running on your cluster, as well as for creating or modifying individual Kubernetes resources (such as Deployments, Jobs, DaemonSets, etc).
Deploying the Dashboard UI:
The Dashboard UI is not deployed by default. To deploy it, run the following command:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml
Next, run kubectl proxy in a terminal; this will make the Dashboard available at http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/.
Note: The UI can only be accessed from the machine where the command is executed.
To access the web UI we need a bearer token. Run the following (in another terminal tab):
kubectl -n kubernetes-dashboard describe secret
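If the namespace contains several secrets, one way to describe just the Dashboard token secret is to combine get and grep. The kubernetes-dashboard-token name prefix is an assumption based on the default Dashboard manifest; adjust it if your secret is named differently:

```shell
$ kubectl -n kubernetes-dashboard describe secret \
    $(kubectl -n kubernetes-dashboard get secret | grep kubernetes-dashboard-token | awk '{print $1}')
```

Copy the value after token: from the output and paste it into the Dashboard login screen.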
Now you have access to the “Overview” of your cluster. Keep in mind that we have selected “default” in the Namespace dropdown.
You can see the Pods in your Namespace and the details of each Pod.
You can check the detailed logs of a particular Pod.
You can check the status of the resources on each Node.
2. Kubernetes Metrics Server
Metrics Server is a cluster-wide aggregator of resource usage data. It collects metrics from the Summary API, exposed by the Kubelet on each node, and is registered in the main API server through the Kubernetes aggregator. This API doesn’t store metric values, so it’s not possible, for example, to get the amount of resources used by a given node 10 minutes ago.
Let’s clone the code base and apply the manifest:
$ git clone https://github.com/skynet86/hello-world-k8s.git
$ cd hello-world-k8s
$ kubectl create -f metrics-server.yaml
Give it a couple of minutes and then you will be able to access the metrics:
$ kubectl top nodes
$ kubectl top pods
# Different namespace
$ kubectl top pods -n kube-system