Restarting Kubernetes Pods: A Detailed Guide

Mark Bakker, Product Owner & Co-Founder
12 min read

This blog will help you learn all about restarting Kubernetes pods and give you some tips on troubleshooting issues you may encounter.

Kubernetes pods are one of the most commonly used Kubernetes resources. Since all of your applications running on your cluster live in a pod, the sooner you learn all about pods, the better.

One of the things you’ll probably need to do from time to time is to restart your Kubernetes pods, either to reload their configuration or to recover from a crashed application situation. We’ll guide you through exactly how to do that in this post, but before we dive into restarting pods, let’s do a quick recap.

What Are Kubernetes Pods, and Why Would You Want to Restart Them?

When you want to deploy your application on Kubernetes, you have a few options for doing that depending on your needs.

Kubernetes pods are the simplest and smallest things you can deploy on Kubernetes. You can’t deploy a container on its own in Kubernetes. Instead, you need to encapsulate it in a pod. Therefore, when you want to effectively restart your application, you basically need to restart the pod that it’s running in.
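To make that concrete, here is a minimal sketch of a standalone pod manifest (the pod and container names are just placeholders); in practice you'll usually wrap pods in a higher-level resource such as a Deployment, as we do below:

```
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: example-container   # a single nginx container wrapped in a pod
    image: nginx
```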

But restarting a pod is not actually as simple and straightforward as one would think. It’s not difficult either, but there are some things you need to know about restarting a pod in order to avoid some unexpected issues.

For example, depending on the restart method you choose, you may or may not experience downtime of your application.

You should also know that it’s possible your pod won’t start correctly after a restart. This can happen, for example, if a Kubernetes secret that the pod was using has been deleted in the meantime. In rare cases, your pod may also get stuck in a Pending state, waiting for resources, for quite a while if your cluster is running at full capacity.

Let’s Restart Some Pods

Enough theory—let’s dive into practice. If we want to restart some pods, we need to deploy some first. Let’s create a simple deployment with the following YAML definition:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
  labels:
    app: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: example-pod
        image: nginx
```

So, in order to create a pod, save the above code snippet into a YAML file and apply it with kubectl apply -f [filename.yaml]:

```
$ kubectl apply -f deploy.yaml
deployment.apps/example-deployment created
```

Now, our deployment has been created, and the deployment resource should create one pod for us.

Before we can restart our pod, we need to verify a few things. First of all, we need to check if our pod is in fact running. We can do that with the following command:

```
$ kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
example-deployment-5dd497cf49-6pk78   1/1     Running   0          3s
```

OK, it looks like our pod does indeed exist. The second thing we need to check is the state of the pod. If your pod is in a terminal state (Succeeded, shown as Completed in the kubectl output), then restarting won’t really work. You’ll just delete the pod, but a new one won’t be created.

Technically, you can also restart a pod that’s in a Pending state, for example, when you decide to make a last-minute configuration change. Normally, though, you would restart a pod that’s in either a Running or Failed state. You can check the state of a pod with the same command we used above; in that output, we can see that our pod is in the Running state.
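If you’d rather query the phase directly instead of reading the whole table, a jsonpath query along these lines should do the trick (shown here with the pod name from our example; your output may differ):

```
$ kubectl get pod example-deployment-5dd497cf49-6pk78 -o jsonpath='{.status.phase}'
Running
```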

We are all set then, so it’s time to restart our innocent pod.

You might expect it to be as easy as executing a command such as kubectl restart pod followed by a pod name. But hold your horses. It may come as a surprise, but there’s no kubectl restart pod command. In fact, there isn’t any built-in command to restart a pod.

Wait, what?


Well, in the Kubernetes world, you actually can’t restart a pod in a typical sense. What you can do, however, is delete the pod. And once you do that, Kubernetes will realize that the number of pods running doesn’t match its desired state and, therefore, will quickly create a new one.

The outcome basically will be as if you’d restarted the pod. So, long story short, the easiest way to “restart” a pod in Kubernetes is to execute kubectl delete pod [pod_name]:

```
$ kubectl delete pod example-deployment-5dd497cf49-6pk78
pod "example-deployment-5dd497cf49-6pk78" deleted
```

Kubernetes says that our pod has been deleted. Let’s list the pods again then:

```
$ kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
example-deployment-5dd497cf49-pdxc2   1/1     Running   0          4s
```

It looks like Kubernetes lied to us. It seems like the pod hasn’t been deleted. But if you look closely, you’ll see that the pod’s random suffix (the letters and numbers after the pod name) is different. Also, if you check the age of the pod, you’ll see it’s only four seconds old. All of that tells us that the pod we see now is not the same pod that we originally deployed.

Kubernetes did its job. As soon as we deleted one pod, Kubernetes created a new one to bring the cluster state back to its desired state. In other words, we effectively restarted our pod. 

Great! Now you know how to restart a pod.   
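By the way, if your deployment runs more than one replica, you don’t have to delete the pods one by one. As a quick sketch, deleting by label (using the app: example label from our deployment) restarts them all in one go:

```
# Delete every pod carrying the app=example label; the deployment recreates them
$ kubectl delete pods -l app=example
```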

But there’s something else you should know. First of all, that’s not the only method of restarting a pod. And second, this method introduces downtime for your application, since Kubernetes only creates the new pod after the old one has been deleted.

Let’s take a look at other methods to restart a pod. 

A slightly more sophisticated method of restarting a pod is to perform a rollout restart of the deployment. Our pod is managed by a Kubernetes Deployment resource, and with deployments it’s possible to do the opposite of what we just did.

Instead of deleting a pod and then creating a new one, rollout restart will first create a new pod and, once that new pod is ready, only then will it terminate the old one.  

Let’s see that in action. To perform a rollout restart, execute kubectl rollout restart deployment [deployment_name]:

```
$ kubectl rollout restart deployment example-deployment
deployment.apps/example-deployment restarted
```

And if you’re quick enough in executing kubectl get pods, you may be able to see the whole process:

```
$ kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
example-deployment-6d46c6b5c8-f45jw   1/1     Running   0          3m42s
$ kubectl get pods
NAME                                  READY   STATUS              RESTARTS   AGE
example-deployment-6d46c6b5c8-f45jw   1/1     Running             0          3m54s
example-deployment-74b49db54d-p2rdx   0/1     ContainerCreating   0          1s
$ kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
example-deployment-6d46c6b5c8-f45jw   1/1     Running   0          3m56s
example-deployment-74b49db54d-p2rdx   1/1     Running   0          3s
$ kubectl get pods
NAME                                  READY   STATUS        RESTARTS   AGE
example-deployment-74b49db54d-p2rdx   1/1     Running       0          3s
example-deployment-6d46c6b5c8-f45jw   1/1     Terminating   0          3m56s
$ kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
example-deployment-74b49db54d-p2rdx   1/1     Running   0          4s
```

As you can see, with this method we always have at least one pod running. Therefore, there’s no downtime like there was with the previous method.  
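If you don’t want to keep polling kubectl get pods, you can also let kubectl wait for the rollout to finish, something like this:

```
$ kubectl rollout status deployment example-deployment
deployment "example-deployment" successfully rolled out
```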

But there’s a catch. With this method, the old and new pods may briefly run at the same time. Depending on how your application works, this may be an issue, so it’s something to be aware of.
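If running two copies at once is a problem for your application, you can trade the overlap back for a short downtime by tightening the deployment’s rolling update strategy. Here is a sketch of the relevant part of the spec; the values are only an illustration and are not part of our earlier manifest:

```
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0        # never create an extra pod on top of the desired count
      maxUnavailable: 1  # allow the old pod to be removed before the new one is ready
```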

How to Stop and Start a Pod

Both methods shown above have two things in common: the pod restart happens quickly, and you have no control over its timing.

But there are times when you may want to restart a pod more slowly, leaving a few seconds or even minutes to do something in between.

In such cases, you can use the kubectl scale command to stop the pod first, then do whatever you need to do before bringing the pod up again.

Normally, the kubectl scale command is used to increase or decrease the number of pods to deal with the load—for example, from 2 to 5. But you can also scale the number of pods to 0 and then scale it back to at least 1. This will effectively restart your pods.

Let’s see an example: 

```
$ kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
example-deployment-74b49db54d-p2rdx   1/1     Running   0          13m
$ kubectl scale deployment example-deployment --replicas=0
deployment.apps/example-deployment scaled
$ kubectl get pods
No resources found in default namespace.
$ # Now you have time to do something
$ kubectl scale deployment example-deployment --replicas=1
deployment.apps/example-deployment scaled
$ kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
example-deployment-74b49db54d-jlxgt   1/1     Running   0          4s
```

If you’ve worked with Kubernetes for a bit already, you may be wondering why you can’t just delete the deployment and recreate it later instead.

Sure, you could do that as well with a similar result, but there’s a key difference. You should use the scaling method when you don’t want to lose the history of the deployment. You’ll still be able to see when it was originally created and when it was scaled. Sometimes this information can be useful.

If you delete the deployment and recreate it later, from Kubernetes' perspective these will be two completely separate deployments that have nothing in common. That delete-and-recreate approach can be useful, for example, when you are debugging an issue and want to be 100% sure that new pods are starting from a “clean state.”
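If you want to check that history yourself, the deployment’s events and revision list are a good place to look; as a sketch (the exact output depends on your cluster):

```
$ kubectl describe deployment example-deployment   # Events show the scale up/down actions
$ kubectl rollout history deployment example-deployment
```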


What If Something Goes Wrong? 

Now you know a few ways to restart a pod in Kubernetes. But a restart can sometimes leave the pod failing, for example, when you restart it to change its configuration or to add a new ConfigMap or volume. Any of these changes could introduce a bug that prevents the pod from starting again properly.

Let’s see how we can troubleshoot issues in such cases. 

Imagine that your pod uses some Kubernetes secrets. You wanted to update the secret, so you ran your CI/CD process for that, and then you restarted the pod using the kubectl delete pod method to test the change quickly.  

```
$ kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
example-deployment-6679dc8d4c-vnddk   1/1     Running   0          3s
$ kubectl delete pod example-deployment-6679dc8d4c-vnddk
pod "example-deployment-6679dc8d4c-vnddk" deleted
$ kubectl get pods
NAME                                  READY   STATUS              RESTARTS   AGE
example-deployment-6679dc8d4c-hccdr   0/1     ContainerCreating   0          2s
$ kubectl get pods
NAME                                  READY   STATUS                       RESTARTS   AGE
example-deployment-6679dc8d4c-hccdr   0/1     CreateContainerConfigError   0          3s
```

Oh, no! Instead of a shiny new pod, you see some errors. 

What now? 

The first and easiest thing to do in such cases is to execute the kubectl describe pod command followed by a pod name. There, in the Events section of the output, you should see some indication of what’s going on: 

```
$ kubectl describe pod example-deployment-6679dc8d4c-hccdr
Name:         example-deployment-6679dc8d4c-hccdr
(...)
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  3m47s                  default-scheduler  Successfully assigned default/example-deployment-6679dc8d4c-hccdr to mint-vbox
  (...)
  Warning  Failed     2m20s (x8 over 3m46s)  kubelet            Error: secret "example-secrets" not found
```

Pretty straightforward, isn’t it? It seems like there was a problem with the secret update, and it was deleted instead. So, to fix our failed pod restart in this case, we need to restore the secret that the pod wants to load.

This is just one example, but as a general rule, kubectl describe pod can usually tell you what the problem with your pod is.

However, sometimes it won’t. Imagine that you restored the secret on the cluster and restarted your pod again but it still fails. This time, however, kubectl describe pod doesn’t say anything about the missing secret. This could indicate that now we have an issue not with the pod configuration but with the application running in the pod.

If that happens to you, the next debugging step is to check the logs of your application. You can do that by executing kubectl logs [pod_name]:

```
$ kubectl logs example-deployment-6679dc8d4c-977zv
[INFO] Application starting...
[ERROR] Can't connect to DB, authentication error...
```

Aha! It seems like the secret is not missing anymore, but it’s simply the wrong one now. Updating the secret to the correct one and restarting the pod one more time will fix your problem.
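As a rough sketch of that fix, you could recreate the secret with the right value and then trigger a restart; note that the key name and value below are made up purely for illustration:

```
# Recreate the example-secrets secret with the corrected value (key/value are hypothetical)
$ kubectl create secret generic example-secrets \
    --from-literal=DB_PASSWORD='the-correct-password' \
    --dry-run=client -o yaml | kubectl apply -f -
secret/example-secrets configured
# Restart the pods so they pick up the updated secret
$ kubectl rollout restart deployment example-deployment
deployment.apps/example-deployment restarted
```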

Summary

As you can see, something as simple as restarting a pod is worth a whole blog post. Not only is there no way to directly restart a pod in Kubernetes, but you also have a few different ways of achieving the same effect.

If you want some more tips on troubleshooting Kubernetes, check out this blog. And if you want to avoid failures in your cluster caused by issues with configurations, typos or any other reasons, take a look at our guided Kubernetes troubleshooting and remediation solution.

About StackState 

StackState is designed to help engineers and developers quickly identify and resolve issues in their Kubernetes-based applications. With features like real-time visualization, integrated monitors, guided remediation and time travel capabilities, StackState provides a comprehensive solution to streamline troubleshooting and improve system reliability. See for yourself in our live playground environment.