6 Ways to Benefit from the SUSE StackState Integration

Jeroen van Erp, Product Manager
15 min read

With the recent integration between SUSE and StackState, SUSE customers will benefit from the enhanced observability StackState offers for their applications running on SUSE's diverse Kubernetes distributions. As businesses increasingly rely on Kubernetes, ensuring the stability and performance of applications becomes critically important. 

StackState, with its advanced observability features, empowers engineers to gain deep insights into their applications, whether they're deployed in traditional data centers, air-gapped environments or at the edge, where processing data close to its source reduces bandwidth requirements and improves application performance.  

In this post, we'll cover six transformative benefits of this integration, showcasing how the SUSE and StackState integration can transform the way you deploy, manage, observe and optimize your Kubernetes applications.  

Benefit #1 - Compatibility with all SUSE Kubernetes distributions

RKE and K3s — two Kubernetes distributions from the SUSE Rancher container platform — stand out for how easy they make it to get started rapidly deploying workloads and running applications. When applications become vital to the business and reliability is non-negotiable, gaining insight into how those applications operate becomes paramount.  

StackState supports the different SUSE Kubernetes distributions. Before diving deeper into what StackState delivers for each of them, let's look at each independently. 

Rancher Kubernetes Engine (RKE) 

RKE is a robust solution designed for managing Kubernetes clusters. It streamlines the process of setting up and maintaining Kubernetes, ensuring that teams can focus on deploying applications rather than wrestling with infrastructure challenges. Some of the key benefits of RKE include its flexibility, scalability and security features, making it an ideal choice for enterprises. Learn more about RKE.

Kubernetes at the edge (K3s) 

K3s, on the other hand, offers a lightweight and simplified approach to Kubernetes, specifically tailored for edge computing scenarios. With its reduced resource requirements and optimized performance, K3s is perfect for environments where resources are at a premium, such as IoT devices and edge servers. Learn more about K3s.  
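As an illustration of that lightweight footprint, a single-node K3s cluster can be bootstrapped with one command. The sketch below uses the official K3s install script on a Linux host with root privileges (as always, review a script before piping it to a shell):

```shell
# Install K3s (server and agent in a single binary) via the official installer
curl -sfL https://get.k3s.io | sh -

# The installer registers a systemd service; confirm the node is ready
sudo k3s kubectl get nodes

# The embedded kubeconfig lives at the default K3s location and can be
# reused by a standalone kubectl or by an observability agent chart
sudo cat /etc/rancher/k3s/k3s.yaml
```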

The benefit of StackState observability 

One of the standout features of both RKE and K3s lies in their effortless integration with StackState. With the simple installation of the StackState agent, teams can unlock robust observability and troubleshooting capabilities. This ensures that whether in traditional data centers or on the edge, teams maintain absolute visibility and control over their applications. The ability to monitor, diagnose and resolve issues in real-time guarantees that applications remain both performant and resilient, regardless of their deployment location. 

Benefit #2 - Augmenting RKE2 security strengths 

RKE2, often referred to as RKE Government, stands as Rancher's next-generation Kubernetes distribution. RKE2 stands out by blending elements from both RKE (also known as RKE1) and K3s, but it also brings its own distinctive qualities to the table.  For example: 

  • RKE2 inherits the user-friendly essence, operational simplicity and deployment model from K3s. 

  • Conversely, RKE2 closely aligns with upstream Kubernetes, drawing inspiration from RKE1. 

Notably, unlike RKE1 — which depended on Docker — RKE2 sidesteps this reliance and follows in the footsteps of upstream Kubernetes by deploying control plane components as static pods managed by the kubelet, utilizing containerd as the embedded container runtime. 
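This design is straightforward to verify on an RKE2 server node. The paths below are the RKE2 defaults at the time of writing; adjust them if your installation uses a custom data directory:

```shell
# Control plane components run as static pods: RKE2 writes their manifests
# here and the kubelet picks them up directly
ls /var/lib/rancher/rke2/agent/pod-manifests/
# expect manifests such as etcd.yaml, kube-apiserver.yaml,
# kube-controller-manager.yaml and kube-scheduler.yaml

# Containers run under the embedded containerd, not Docker; inspect them
# with the crictl binary and configuration that RKE2 ships
sudo /var/lib/rancher/rke2/bin/crictl \
  --config /var/lib/rancher/rke2/agent/etc/crictl.yaml ps
```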

A significant emphasis of RKE2 is its focus on security and compliance, especially catering to the U.S. Federal Government sector. It's designed to meet stringent security benchmarks and is FIPS 140-2 compliant.

However, security alone isn't enough. To ensure applications run reliably, observability becomes crucial. This is where StackState shines. Certified for RKE2, StackState seamlessly integrates with it, providing insights into application reliability and ensuring teams always have a clear picture of their application's health. 

Benefit #3 - Hassle-free startup through Rancher Marketplace

Let’s talk about Helm. Helm is the package manager for Kubernetes; its packages, called charts, bundle pre-configured Kubernetes resources. Rancher's Apps & Marketplace serves as a centralized hub, managing a curated set of Helm charts and providing users with a catalog-like system for installing them on their clusters.  

Helm charts can be utilized to deploy partner Helm applications or Rancher’s tools, such as Monitoring or Istio. The preloaded repositories deploy as stand-alone Helm charts, and any additional repositories are added exclusively to the current cluster. 

The StackState platform is conveniently accessible via the marketplace, ensuring a quick and easy start in just minutes. When the StackState Helm chart is used, the agent will be installed in the appropriate cluster and data can start flowing into the StackState platform. With this simple action, you can easily ensure that all of your clusters immediately come under the observability oversight of StackState.
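The marketplace drives this through the Rancher UI, but the equivalent CLI flow looks roughly like the sketch below. The repository URL and chart, release and value names are illustrative assumptions; use the exact values shown in Rancher's Apps & Marketplace or in your StackState instance, which also supplies the endpoint and API key for your account:

```shell
# Add the StackState chart repository (URL is an assumption; confirm it
# against the Rancher Apps & Marketplace entry for StackState)
helm repo add stackstate https://helm.stackstate.io
helm repo update

# Install the agent into its own namespace; cluster name, endpoint and
# API key are placeholders for values from your StackState instance
helm install stackstate-agent stackstate/stackstate-k8s-agent \
  --namespace stackstate --create-namespace \
  --set stackstate.cluster.name=my-rke2-cluster \
  --set stackstate.url=https://my-instance.stackstate.io/receiver/stsAgent \
  --set stackstate.apiKey=<your-api-key>
```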

Please note that the StackState Helm chart is curated by both StackState and SUSE to ensure a deployment that conforms to all SUSE standards.  

Demo video: StackState observability in action 

Before we continue with the benefits of using StackState in combination with various SUSE Kubernetes distributions, take a look at the video below. In it, Jeroen van Erp, Product Manager at StackState, demonstrates how to observe a multicluster application.  

Benefit #4 - Modern observability for air-gapped environments 

Air-gapped environments, by design, lack internet connectivity, which poses unique challenges for observability. Because they are physically isolated from unsecured networks, traditional monitoring tools that rely on external connections are rendered ineffective. Additionally, the limited access to real-time logs, metrics and events can hamper effective troubleshooting and performance analysis. 

Best practices for observability in air-gapped RKE2 environments 

Efficient observability within air-gapped RKE2 environments demands a customized approach: effective troubleshooting requires a combination of real-time monitoring, historical data analysis and predictive analytics. 

Here's the structured plan to achieve comprehensive insights: 

  • Local Monitoring Agents: Deploy monitoring agents within the environment to capture and analyze data locally. 

  • Containerized Observability Tools: Leverage tools designed for containerized deployments, ensuring they operate effectively within the air-gapped constraints. 

  • Log Management: Implement robust log management solutions to capture, store and analyze logs efficiently. 

  • Custom Metrics and Event Monitoring: Integrate custom metrics to monitor specific aspects of the environment, complemented by event monitoring for real-time insights. 

  • Proactive Maintenance: Regularly update and patch systems, monitor for anomalies and establish incident response protocols.  
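The first two points above imply that every chart and container image must be brought inside the air gap. A common pattern, sketched here with hypothetical image, version and registry names, is to mirror artifacts through a private registry, moving them across the boundary on approved media:

```shell
# On a connected host: fetch the chart and enumerate the images it references
helm pull stackstate/stackstate-k8s-agent --version 1.0.0   # version is illustrative
helm template stackstate-k8s-agent-1.0.0.tgz | grep 'image:' | sort -u

# Save each referenced image to an archive for transfer across the air gap
# (image name below is hypothetical)
docker pull quay.io/stackstate/stackstate-k8s-agent:latest
docker save -o agent-images.tar quay.io/stackstate/stackstate-k8s-agent:latest

# Inside the air gap: load the archive, push it to the private registry,
# then install the chart with image references repointed at that registry
docker load -i agent-images.tar
docker tag quay.io/stackstate/stackstate-k8s-agent:latest \
  registry.internal:5000/stackstate/stackstate-k8s-agent:latest
docker push registry.internal:5000/stackstate/stackstate-k8s-agent:latest
```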

Observability in air-gapped environments isn't just a technical necessity; it's a strategic imperative. StackState takes center stage in this scenario, and our position here is a key differentiator. When running an air-gapped environment, SUSE's RKE2 is the natural choice, providing a highly secure and reliable cluster. Combined with the same out-of-the-box content StackState provides to cloud-enabled environments, you get a complete, fully air-gapped observability solution for all of your applications. In this way, StackState helps organizations unleash the full potential of their air-gapped RKE2 deployments, resulting in optimal performance, heightened security and better overall business results. Learn more about building a strong observability strategy.  

Benefit #5 - Broaden understanding of RKE2 beyond the platform team 

Enterprises leveraging these modern platforms depend on the expertise of platform engineers, whose role is indispensable in ensuring the seamless operation of these systems. Their deep understanding of best practices, configurations and optimizations ensures that Kubernetes clusters run efficiently and securely. 

However, this knowledge shouldn't remain siloed. Sharing the knowledge as "policies" across your software development teams is a game-changer. When developers deploy applications to RKE2, having these policies automatically applied means they benefit from the collective wisdom of the platform team from the get-go. This not only ensures optimal configurations but also accelerates the deployment process throughout the enterprise. 

StackState fortifies this approach with an expert, sharable knowledge base, offering tested monitors and remediation guides. These features encapsulate best practices and troubleshooting steps, guaranteeing that, upon deployment, applications seamlessly align with the platform team's recommendations. In essence, StackState bridges the gap between platform knowledge and application deployment, so that best practices are not just recommended but are baked into every deployment. Learn more about capturing and creating an expert knowledge base.

Benefit #6 - Full visibility into resource dependencies across the cluster 

As Kubernetes clusters become more intricate, the demand for a transparent depiction of interactions among components intensifies. The automated dependency mapping of all resources within Kubernetes clusters is pivotal, and this is another area where StackState stands out. 

Troubleshooting with dependency mapping  

Kubernetes is composed of various components, each playing a crucial role in the cluster's operation. From the API server, which serves as the entry point for commands, to worker nodes that run the containers, understanding these components, and knowing which runs where and how they interact is crucial for efficient cluster management. 

Application dependency mapping, in the context of Kubernetes, refers to the visual representation of how different services and resources are interconnected within a cluster. Dependency maps serve as a compass for Kubernetes troubleshooting; managing clusters without a comprehensive view of dependencies can lead to inefficiencies, performance problems and potential pitfalls. In fact, inaccurate or incomplete dependency maps make troubleshooting a daunting task. By visualizing the interconnections and health of various resources, IT teams can:

  • preemptively identify potential issues, understand their impact and pinpoint the root cause of service failures and bottlenecks 

  • streamline enterprise-wide troubleshooting and resolution processes and quickly implement solutions  

  • optimize overall performance and resource allocation, minimize downtime and ensure a seamless user experience 
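Kubernetes itself records one slice of this graph in each object's ownerReferences, which dependency-mapping tools build on alongside network and service topology. As a rough sketch, the built-in ownership chain can be walked by hand, assuming a standard Deployment-managed pod in the current namespace:

```shell
# Pick any pod and read the controller that owns it (typically a ReplicaSet)
POD=$(kubectl get pods -o jsonpath='{.items[0].metadata.name}')
kubectl get pod "$POD" \
  -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}{"\n"}'

# Follow the chain one level up: the ReplicaSet's owner is the Deployment
RS=$(kubectl get pod "$POD" -o jsonpath='{.metadata.ownerReferences[0].name}')
kubectl get replicaset "$RS" \
  -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}{"\n"}'
```

Doing this manually for every resource is exactly the tedium that automated dependency mapping removes, and it captures only ownership, not runtime communication between services.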

Full application dependency mapping in Kubernetes clusters is not just a luxury; it's a necessity. It provides clarity, aids in troubleshooting, and ensures the optimal performance of the cluster. With tools like StackState offering advanced mapping capabilities, organizations can harness the full potential of Kubernetes, ensuring stability, performance and security. Learn more about the power of application dependency maps.

SUSE & StackState: An important integration 

In the rapidly maturing landscape of Kubernetes, the integration between SUSE and StackState offers a great way to increase reliability and observability. As we've explored, the six key benefits of this collaboration can significantly enhance the way businesses deploy and manage their applications. 

But why settle for just reading about the power of this integration when you can experience it firsthand? We invite you to take a closer look into the world of advanced observability by exploring the StackState playground and unlocking a new dimension of Kubernetes management.

Visit the StackState Playground now