3 Key Questions to Ask Before Getting Started with Kubernetes

Dmitry Maximov
9 min read

Kubernetes has become the de facto standard for container orchestration. For many developers it is now the preferred way of deploying and maintaining applications. You can even see questions about using Kubernetes for side projects popping up on Indiehackers.com and reddit.com. At the same time, it’s not the right choice for everything, as evidenced by a lengthy Hacker News discussion on ‘Don’t use Kubernetes yet’. And the StackOverflow question "Should I use Serverless or Kubernetes or Docker Swarm?", started a few years ago, is still relevant.

To ensure long-term success for your organization, engineering teams need to consider the following questions before getting started with Kubernetes:

  1. Is Kubernetes the right platform for your application?

  2. Does your team have enough expertise in Kubernetes to be successful?

  3. How are you going to troubleshoot apps running on Kubernetes?

Let's discuss each question in further depth.

1. Is Kubernetes the right platform for your app?

Today there are more ways than ever to deploy your application. Kubernetes certainly made a big splash in the industry. But is it really the best option for your next project? Kubernetes is not something you set up once and forget. It fundamentally changes how you architect and build your application. Beyond that, your company’s size and stage might influence your decision on whether or not to use Kubernetes. For example, if you are a three-person startup that has yet to find product-market fit, Kubernetes may just slow you down. In this situation, there are alternatives you should consider:

Using serverless functions

The less infrastructure you have, the easier it is to manage, so you can quickly iterate as you strive to find product-market fit. It makes sense to avoid adding infrastructure complexity before you really need it. Using serverless functions allows you to have as little infrastructure as possible, since you don’t need to worry about maintaining servers. This approach also saves costs, since the code only runs in response to events or requests and you don’t pay for idle time. At the same time, it limits the languages and libraries you can use and may require code changes if your application was not designed for serverless functions from the start.
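To make this concrete: a serverless function is usually just a single handler that the platform invokes per event, with no server process of your own to run or pay for while idle. A minimal AWS Lambda-style sketch in Python (the handler name and event shape here are illustrative assumptions, not from this article):

```python
import json

def handler(event, context=None):
    """Hypothetical Lambda-style handler: the platform calls it once per
    request/event, so there is no always-on server to maintain."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The trade-off mentioned above shows up here: the function's shape is dictated by the platform's event contract, so code not written this way needs restructuring.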

Orchestrating containers yourself

If you have just a few microservices and fairly predictable traffic, you would probably be better off just running containers in the cloud or on-premise and then orchestrating them yourself. Without automated container orchestration, if you experience unexpectedly high traffic you would need to run commands manually on your servers to run more instances. But if your traffic is predictable enough, you probably don’t even need autoscaling.
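When you orchestrate containers yourself, capacity planning often reduces to back-of-the-envelope arithmetic. A hypothetical sketch (all numbers and names are made up for illustration):

```python
import math

def instances_needed(requests_per_sec: float, capacity_per_instance: float,
                     headroom: float = 0.2) -> int:
    """How many container instances to run by hand for a given load.
    `headroom` reserves spare capacity for small spikes; 20% is an
    illustrative default, not a recommendation."""
    required = requests_per_sec * (1 + headroom) / capacity_per_instance
    return max(1, math.ceil(required))
```

If the answer rarely changes, a runbook of `docker run` commands sized by this kind of calculation may be all the "orchestration" you need.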

Deploying containers to Google Cloud Run or AWS Fargate

If your product user base is growing and the number of teams and microservices is growing with it, you may reach a point when manually orchestrating your microservices, especially in moments of unexpectedly high traffic, becomes a real headache. If you want autoscaling and a simple way to deploy containerized apps without the infrastructure management hassle – and you need relatively little control or customization over how your workloads are deployed and run – take a look at Cloud Run or AWS Fargate. Both options provide you with sensible defaults and you won’t need to worry about the complexity of configuring Kubernetes networking resources, storage plugins, RBAC and so on.
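As a sketch of how little ceremony this involves, deploying a containerized app to Cloud Run can be as short as two commands (the project, service and image names are hypothetical, and an authenticated gcloud CLI is assumed):

```shell
# Build and push the container image, then deploy it to Cloud Run.
gcloud builds submit --tag gcr.io/my-project/my-api
gcloud run deploy my-api \
  --image gcr.io/my-project/my-api \
  --region us-central1 \
  --allow-unauthenticated
# Cloud Run scales instances up with traffic and down to zero when idle.
```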

Adopting Kubernetes for container orchestration

For larger scale projects, you may want more flexibility, such as:

  • Extensive control over workload configuration

  • OS-level access to the host infrastructure

  • Ability to use third-party monitoring, logging or other management tooling

  • Option to deploy workloads anywhere and to move them between cloud and on-premises environments when necessary

In these situations, Kubernetes may indeed be the best solution. Managing self-hosted clusters that run vanilla Kubernetes gives you maximum control, but keep in mind that it requires expertise, time and resources that many businesses lack.
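That level of control is visible even in a basic workload definition. A minimal Deployment manifest (the names and resource numbers are illustrative) pins the replica count and per-container resource requests and limits explicitly:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
spec:
  replicas: 3            # exact instance count, under your control
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
        - name: my-api
          image: registry.example.com/my-api:1.0   # hypothetical image
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi
```

Everything here, and far more (scheduling, networking, storage, security context), is configurable; that flexibility is exactly what the managed alternatives above abstract away.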

Instead, consider a managed Kubernetes service, which most public cloud providers offer. In this case, the provider does the heavy lifting of supporting and maintaining your Kubernetes clusters. This approach gives you a hassle-free control plane and easy deployment options, so you can focus on developing your apps. The control plane handles the details of orchestrating which containers run on which nodes, and it abstracts away the hardware infrastructure so your workloads simply run on the worker nodes.

If you are a startup that is looking to scale or a big company with a bunch of infrastructure, Kubernetes may indeed be a great choice for you. But there are still two more questions to consider, in order to ensure a smooth journey.

2. Does your team have enough expertise in Kubernetes?

Even if you choose the managed option, Kubernetes is still complex to set up. Correct infrastructure and network setup, installation and configuration of all Kubernetes components are not that straightforward, even though there are multiple tools created to streamline the process.

Adopting Kubernetes without having a member of your team who knows it well will only lead to wasted time and resources, since poor implementation can end up messing things up in your infrastructure. Here are some areas of expertise required to successfully deploy and run your application on Kubernetes:

  • Operational knowledge of Kubernetes itself

  • How to architect and optimize your application to run on Kubernetes

  • How to deploy to Kubernetes

  • How to set SLIs, SLAs, SLOs

  • How to monitor your application and underlying infrastructure

  • Security/compliance requirements and how to implement them
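One of the items above, setting SLIs and SLOs, comes down to comparing a measured indicator against a target and watching the remaining error budget. A sketch with illustrative thresholds (this is not an official formula from any tool):

```python
def availability_sli(good_requests: int, total_requests: int) -> float:
    """SLI: the fraction of requests served successfully."""
    return good_requests / total_requests

def error_budget_left(sli: float, slo: float = 0.999) -> float:
    """Fraction of the error budget still unspent; negative means the
    budget is burned. A 99.9% SLO allows 0.1% of requests to fail."""
    allowed = 1 - slo
    actual = 1 - sli
    return (allowed - actual) / allowed
```

A team might, for example, freeze risky deploys once `error_budget_left` drops near zero.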

If you don't have the required knowledge within your team, start acquiring it by hiring an experienced Kubernetes engineer or growing the expertise within the team. You can also work through the official Kubernetes documentation and subscribe to the KubeWeekly newsletter from the Cloud Native Computing Foundation. In the meantime, you can use Kubernetes in test environments during your learning period, before moving to production.

3. How are you going to troubleshoot apps running on Kubernetes?

Kubernetes might lure you with a simple start, but the tricky part is that when you have a problem to fix, you need to go much deeper than high-level abstractions. Often people do not expect this challenge unless they are already experts in Kubernetes.

Without logging and monitoring, it’s difficult to understand where the issue is within your containers. And Kubernetes native logging and monitoring functionality is rather limited in the ways you can collect, store and view logs and metrics.

Monitoring allows for easier management of Kubernetes clusters through detailed reporting of memory, CPU usage and storage. You need to monitor your app in any case, but with Kubernetes it can be more complex than expected and requires a different approach. Containers are short-lived and get deployed and redeployed to adjust to usage demand. How do you troubleshoot something that may not even exist anymore?

First, you need to think about how to collect and visualize metrics from your applications. You also need a logging system that you can use to collect and store your logs, so you can search them easily. Setting up logging for Kubernetes allows you to track errors and refine the performance of your containers.
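One low-effort step in that direction is emitting structured logs, so whatever log collector you choose (Fluent Bit, for example) can index fields instead of grepping free text. A sketch using only the Python standard library; the service name is hypothetical:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line so a collector can parse
    level, logger and message as separate searchable fields."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("my-api")   # hypothetical service name
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("order created")           # -> {"level": "INFO", ...}
```

Because containers come and go, logs written this way to stdout/stderr survive the container that produced them once a collector ships them off the node.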

You could also benefit from adding distributed tracing to understand how requests spread across your system. It is more complex to set up and use, but without it you would still have blind spots.
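In practice you would adopt a standard such as OpenTelemetry rather than rolling your own, but the core idea is small: every request carries a trace ID that each service forwards downstream. A sketch of W3C Trace Context-style propagation (header layout per the `traceparent` format):

```python
import secrets

def new_traceparent() -> str:
    """Start a trace: a W3C `traceparent` value, version-traceid-spanid-flags."""
    trace_id = secrets.token_hex(16)   # 32 hex chars, shared by every span
    span_id = secrets.token_hex(8)     # 16 hex chars, unique per operation
    return f"00-{trace_id}-{span_id}-01"

def child_traceparent(parent: str) -> str:
    """Propagate to a downstream service: keep the trace id, mint a new span id."""
    version, trace_id, _parent_span, flags = parent.split("-")
    return f"{version}-{trace_id}-{secrets.token_hex(8)}-{flags}"
```

Each service forwards `child_traceparent(...)` in its outgoing HTTP headers, which is what lets a tracing backend stitch one request's path through many services back together.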

There are plenty of commercial and open source solutions available, but most address only one piece of the observability puzzle. While building your own observability stack gives you a great level of flexibility, it also poses some challenges:

  • Setting up monitoring using open source tools is complicated and there is a lack of good guidelines or defaults

  • Having multiple tools to collect and view metrics, logs, events and traces forces developers to do a lot of context-switching during troubleshooting

  • Developers are usually not Kubernetes experts, yet there is a lot of complexity in Kubernetes they need to deal with when troubleshooting issues. Often they stumble ahead inefficiently or have to call in other team members to help.

  • A single application often consists of many interacting services. Troubleshooting with Kubernetes-native and existing open source tools requires developers to already know these interactions. To troubleshoot effectively, developers need intimate knowledge of how all the services are connected, even the services they don't own.

To address these challenges and help your developers overcome Kubernetes complexity during troubleshooting, you can utilize a solution like StackState.

Simplified Kubernetes Troubleshooting

StackState is purpose-built to address the inherent challenges in troubleshooting Kubernetes:

  • Includes Kubernetes monitors you can use out of the box without the need for additional setup, reducing the need for developer expertise in Kubernetes or knowledge of the environment.

  • Combines logs, metrics, events and traces within a single tool, so the whole troubleshooting journey can be completed efficiently and effectively, without the need for constant context switching.

  • Guides remediation with hints and visual assistance using smart problem clustering and probable cause recommendation to help you fix issues as quickly as possible.

  • Automatically discovers and visualizes all Kubernetes service and resource dependencies to help you keep track of all changes in your dynamic environment.

We bring together all of the Kubernetes observability and monitoring capabilities you need, so you can quickly find and remediate problems and maintain a reliable Kubernetes environment.

Conclusion

When you have a large production deployment that also needs to scale, Kubernetes might be a great strategic choice for your business. You can change your deployment platform as your business grows, so don’t over-engineer it from the start. If you have a smaller project, you can start with using serverless functions or orchestrating microservices yourself and adopt Kubernetes later, when you really need it.

Before moving to Kubernetes, assess if your team has the required expertise with Kubernetes itself, containerization and container deployments as well as the architecture expertise for scaling containerized apps and services. If not, then make it a priority to acquire this expertise.

To make your apps running on Kubernetes reliable, make sure that you have monitoring and logging both for application and infrastructure in place.

If you want to empower all of your engineers with the knowledge they need to troubleshoot Kubernetes applications, try the first deep observability tool that aggregates metrics, logs, events and traces, shows connections and dependencies across services and takes engineers straight to the change that caused an issue.

Get our free trial and become a design partner for some amazing new features.