How to monitor a Kubernetes environment - Part 3
Mark Bakker· 3 min read
This post is part 3 in a 4-part series about Container Monitoring. Post 1 dives into some of the new challenges containers and microservices create and the information you should focus on. Post 2 describes how you can monitor your Mesos cluster. This article describes the challenges of monitoring Kubernetes, how it works, and what this means for your monitoring strategy.
What is Kubernetes?
Kubernetes is a powerful orchestration system, developed by Google, for managing containerized applications in a (private) cloud environment. Kubernetes automates the deployment, management, and scaling of containerized applications and services, and provides the infrastructure to build a truly container-centric development and operations environment.
How it works
Pods
Kubernetes introduces a new level of abstraction to your containerized environment: pods. A pod is a group of one or more containers that are located on the same host and share that node's resources, such as network, memory, and storage. Each pod in Kubernetes gets its own IP address, which is shared by all the containers inside it.
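To make the pod abstraction concrete, here is a minimal, purely illustrative Python sketch (the class and pod names are invented for this example, not part of any Kubernetes API): every container in a pod shares the pod's single IP address.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Container:
    name: str
    image: str


@dataclass
class Pod:
    """Illustrative model: all containers in a pod share one IP
    and the resources of the node the pod is scheduled on."""
    name: str
    ip: str
    containers: List[Container] = field(default_factory=list)


# A hypothetical pod running an application container plus a sidecar.
web = Pod(
    name="web-frontend",
    ip="10.1.0.7",
    containers=[
        Container("nginx", "nginx:1.25"),
        Container("log-shipper", "fluentd:v1.16"),
    ],
)

# Every container is reachable on the same pod IP, so containers in
# the pod can also talk to each other over localhost.
for c in web.containers:
    print(f"{c.name} shares pod IP {web.ip}")
```

This is one reason monitoring changes with Kubernetes: a metric tied to an IP address now describes a pod, not a single container or host.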
Nodes and clusters
In Kubernetes, servers that perform work are known as nodes. A node is a worker machine on which pods run; it may be a VM or a physical machine, depending on the cluster. Each node has the services necessary to run pods and is managed by Kubernetes.
In short, a Kubernetes environment consists of the following components: clusters, nodes, pods, and the containers running inside them.
What it means for your monitoring strategy
To ensure good performance of your business services, it is critical to monitor Kubernetes itself as well as the health of your deployed applications, the containers, and the dependencies between them. The new abstraction introduced by Kubernetes requires you to rethink your monitoring strategy, especially if you are used to traditional monitoring tools and traditional hosts such as physical machines or VMs. Microservices changed the way we think about running services on VMs, and Kubernetes has changed the way we manage and scale containers.
What does this mean for you?
Monitoring Kubernetes differs from traditional monitoring in multiple ways:
More components (between hosts and applications) to monitor
You need monitoring capabilities that can track the dynamic behavior of containers and applications inside them
As the number of containers scales, the number of dependencies increases
Alerts must be redefined to focus on the service level. These alerts are the first line of defense in assessing whether something is impacting the application, but getting to them is challenging unless your monitoring system is container-native.
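The last point can be sketched in a few lines of Python. This is a hypothetical, simplified alert rule (the function, statuses, and threshold are invented for illustration): instead of alerting on every individual container failure, it fires only when the fraction of healthy pods backing a service drops below a threshold.

```python
def service_level_alert(pod_statuses, min_healthy_ratio=0.8):
    """Fire an alert only when the service as a whole degrades,
    not when a single ephemeral container restarts.

    pod_statuses: list of status strings for the pods backing one service.
    Returns True when an alert should fire.
    """
    if not pod_statuses:
        return True  # no pods backing the service at all
    healthy = sum(1 for s in pod_statuses if s == "Running")
    return healthy / len(pod_statuses) < min_healthy_ratio


# One crashed pod out of five: still at the threshold, no alert --
# Kubernetes will reschedule the failed pod on its own.
print(service_level_alert(["Running"] * 4 + ["CrashLoopBackOff"]))   # False

# Three of five pods down: the service level itself is at risk, alert.
print(service_level_alert(["Running"] * 2 + ["CrashLoopBackOff"] * 3))  # True
```

The design point is that container churn is normal in Kubernetes; alerting on individual containers produces noise, while alerting on the aggregate service level reflects actual user impact.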
How to monitor Kubernetes with StackState
Now you know that it's critical to monitor the different layers and components of your Kubernetes environment. StackState integrates with all of them to provide a holistic view of your Kubernetes cluster's performance, health, and dependencies:
All services, clusters, nodes, and pods including their dependencies are automatically synchronized
StackState automatically keeps track of what is running where thanks to its service discovery capability. Whenever you spin up a container, the StackState agent identifies which application is running inside it and automatically starts collecting and reporting the right metrics. If you destroy or stop a container, StackState understands that too. You can define configuration templates for specific images in a distributed configuration store, and the StackState Agent uses them to dynamically reconfigure its checks when your container ecosystem changes.
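The template mechanism described above can be sketched as follows. This is an assumed, simplified model, not StackState's actual implementation: the image name of a newly discovered container is matched against check templates in a (here: in-memory) configuration store to produce a concrete check configuration. All names and metric lists are illustrative.

```python
# Hypothetical configuration store mapping image names to check templates.
CHECK_TEMPLATES = {
    "redis": {"check": "redis_check", "metrics": ["connected_clients", "used_memory"]},
    "nginx": {"check": "nginx_status", "metrics": ["requests_per_s", "active_connections"]},
}


def configure_check(image, container_ip):
    """On container start, look up the template for its image and
    instantiate a concrete check targeting that container."""
    base_image = image.split(":")[0]   # strip the tag, e.g. "redis:7" -> "redis"
    template = CHECK_TEMPLATES.get(base_image)
    if template is None:
        return None                    # unknown image: nothing to monitor yet
    return {**template, "target": container_ip}


# A redis container appears; its check is configured automatically.
print(configure_check("redis:7", "10.1.0.12"))
# An unrecognized image yields no check.
print(configure_check("custom-app:1.0", "10.1.0.13"))  # None
```

The point of the pattern is that monitoring configuration follows the containers: no manual reconfiguration is needed as containers come and go.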
In this post, we’ve walked through the challenges of monitoring Kubernetes, how it works, and what it means for your monitoring strategy. Request a free trial of StackState and start monitoring your Kubernetes cluster to gain greater visibility into the health, performance, and dependencies of your clusters and be better prepared to address potential issues.