Kubernetes Security: Secrets from the Trenches



Kubernetes is a popular open-source container orchestration platform. It is highly configurable and feature-rich, but it also requires a deep understanding of containerization.

    When you are running Kubernetes in production, you need to account for cluster monitoring and logging, governance, and security. In this article, you will learn about Kubernetes security, including pro tips to help you handle architecture concerns, dependencies, and container vulnerabilities.

    If you want to read more about Kubernetes, check out these articles: Kubernetes Secrets Management and Tips for Monitoring Kubernetes Applications. Also, if you would like to know how MetricFire can help you monitor Kubernetes, please book a demo with us or sign up for the free trial.


    Considerations for Running Kubernetes in Production

    When first getting started with Kubernetes (k8s), it is common for teams to experiment with small, bare-bones deployments. These are often run on local servers and used as a proof of concept. This is fine as a demo but does not necessarily indicate that a team is ready for production-grade deployments.

    Production deployments are typically much larger, with many more resources and moving parts to manage. Before you get started with a deployment to production, you should take the following into consideration.

    Cluster Monitoring and Logging

    Production deployments can scale to hundreds or thousands of pods that are continuously communicating with each other and with services. To ensure that this communication occurs smoothly, you need to implement continuous monitoring and logging. 

    Without this oversight, you are left to rely on Kubernetes self-healing to identify and correct issues. You also have little to no ability to measure your deployment's performance or audit for issues and optimizations. Kubernetes exposes metrics that Prometheus can scrape, and Prometheus is the de facto monitoring tool for clusters, but for many production configurations this isn't enough.
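    As a starting point, Prometheus can discover and scrape pods through the Kubernetes API. The sketch below is an illustrative excerpt from a `prometheus.yml` scrape configuration; the job name and the opt-in annotation convention (`prometheus.io/scrape`) are common practice rather than anything Kubernetes mandates.

```yaml
# Illustrative prometheus.yml excerpt: discover pods via the Kubernetes API
# and keep only those annotated with prometheus.io/scrape: "true".
scrape_configs:
  - job_name: "kubernetes-pods"
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```

    Opting pods in via an annotation keeps scrape traffic limited to workloads that actually expose metrics, rather than probing every pod in the cluster.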


    Governance

    Production deployments require the ability to audit systems for compliance and manage access. This often requires integrating with tools that can analyze your log data and provide reporting. It may also include integrating outside permissions and identity management tools. 

    You need to be able to directly see and smoothly manipulate configurations as needed. Meanwhile, you need to ensure that processes are operating as designed and infrastructure is carefully managed. Often this means deploying additional tools such as service meshes, which can grant you greater control and flexibility in your deployment.


    Security

    Ensuring Kubernetes security is another significant concern. Production deployments typically contain significant amounts of sensitive and valuable data. Deployments host mission-critical processes and consist of large numbers of resources that can be abused. Because of this, finding the right Kubernetes security tooling is key.

    You need tools and practices that can help you integrate security down to the container level, including image registries and code scanners. Additionally, security should be layered, leveraging the isolated nature of Kubernetes components to implement strict access controls. This includes strict authentication, authorization, and encryption.
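    On the authorization side, Kubernetes' built-in RBAC is the main tool for layered access control. The minimal sketch below (namespace, role, and service account names are placeholders) grants one service account read-only access to pods in a single namespace instead of cluster-wide rights.

```yaml
# Minimal RBAC sketch: read-only pod access, scoped to one namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: app-team
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: app-team
  name: read-pods
subjects:
  - kind: ServiceAccount
    name: app-sa
    namespace: app-team
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

    Preferring namespaced Roles over ClusterRoles wherever possible keeps the blast radius of any one compromised credential small.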

    Kubernetes Security Challenges

    There are several common security challenges faced in any Kubernetes deployment. Below are some to be aware of in particular. 

    Architecture Concerns

    The most significant challenge for many deployments is the architecture of Kubernetes itself. Each container added to your deployment adds more attack surface. Additionally, because the components are ephemeral, this surface area is dynamic, making it difficult to monitor and protect. This ephemeral nature also makes it difficult to ensure compliance across your Kubernetes environment. 

    The only real solution for this is service discovery and automation. There are also benchmarks you can use to verify the use of security best practices, such as the CIS Kubernetes Benchmarks. However, these are only a guide and it is up to your DevOps or DevSecOps team to ensure that recommendations are followed.
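    One practical way to check your cluster against the CIS Kubernetes Benchmarks is the open-source kube-bench tool, which can run as a Kubernetes Job. The manifest below is a trimmed sketch of the fuller job manifest the upstream project publishes; mount paths and the image tag may need adjusting for your distribution.

```yaml
# Trimmed sketch of a kube-bench Job (see the upstream project for the
# complete manifest). Runs the CIS benchmark checks on the node it lands on.
apiVersion: batch/v1
kind: Job
metadata:
  name: kube-bench
spec:
  template:
    spec:
      hostPID: true
      restartPolicy: Never
      containers:
        - name: kube-bench
          image: aquasec/kube-bench:latest
          command: ["kube-bench"]
          volumeMounts:
            - name: etc-kubernetes
              mountPath: /etc/kubernetes
              readOnly: true
      volumes:
        - name: etc-kubernetes
          hostPath:
            path: /etc/kubernetes
```

    The results still need a human: kube-bench reports pass/fail/warn findings, and it is up to your DevOps or DevSecOps team to act on them.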


    Dependencies

    When deployed, Kubernetes relies on and integrates with many other components. Each of these dependencies can introduce vulnerabilities into your deployment and represents a risk to your productivity. 

    For example, k8s relies on a container runtime (such as containerd or CRI-O) to run containers on each host. If an attacker were to gain access to these tools, they could deploy resources at will, completely bypassing Kubernetes and your built-in controls.

    Container Vulnerabilities

    Although containers are isolated, these resources are still vulnerable. The use of infected or corrupt images and a lack of restrictions on communications can enable attackers to manipulate your container from within. This is particularly problematic if containers are running as root. In these cases, attackers can gain access to and possibly control over your wider systems. 

    There is no tooling in Kubernetes that automatically detects or protects from container attacks. Instead, it is up to you to secure containers from the start and to actively scan for vulnerabilities and concerns. 
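    One control Kubernetes does give you is the pod `securityContext`, which you can use to refuse root and strip privileges at the container level. The sketch below uses placeholder names and an assumed private registry image; the field values are the commonly recommended hardening defaults.

```yaml
# Hardened pod sketch: refuse to run as root, block privilege escalation,
# and drop all Linux capabilities. Names and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0
      securityContext:
        runAsNonRoot: true
        runAsUser: 10001
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
```

    With `runAsNonRoot: true`, the kubelet refuses to start the container at all if the image would run as UID 0, which closes off the root-escape scenario described above.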

    Kubernetes Security Secrets from the Trenches

    Properly securing your Kubernetes deployment requires understanding where you are vulnerable and taking steps to patch or otherwise mitigate that risk. It also involves actively monitoring your deployment at all times. Assuming that you have handled a concern is not enough; you need to continuously verify if you want to ensure that everything is properly secured. 

    Below are a few tips for ensuring that you're secure to start with and that you stay that way.

    Monitor Network Traffic to Limit Unnecessary or Insecure Communication

    As indicated, monitoring plays a significant role in the security of your deployment. In particular, you should be carefully monitoring your network traffic and comparing it to expected volumes, types, and sources. All communications between containers, pods, and services should be monitored. 

    You can use the information you gain about your traffic in two ways. One, you can identify communications that shouldn't be occurring, such as unexpectedly large transfers of data, and stop them. Two, you can identify what should be happening but isn't, for example, a service going offline or a pod not responding to requests.
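    Once monitoring tells you which flows are legitimate, you can encode that knowledge as NetworkPolicies. A common pattern, sketched below with placeholder namespace, labels, and port, is to deny all ingress by default and then explicitly allow only the traffic you expect.

```yaml
# Default-deny ingress for the namespace, then allow only frontend -> backend
# traffic on TCP 8080. Namespace, labels, and port are assumptions.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: app-team
spec:
  podSelector: {}
  policyTypes: ["Ingress"]
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: app-team
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

    Note that NetworkPolicies are only enforced if your cluster's network plugin supports them, so verify enforcement rather than assuming it.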

    Isolate Kubernetes Nodes

    When deploying, you should never expose your Kubernetes nodes to the public. Rather, nodes should be deployed to an isolated network and restricted from access as much as possible. This requires isolating your Kubernetes data traffic and control plane.

    Without isolation, access to one means access to the other. This can present a significant threat if either is compromised. To avoid this, try to configure your nodes to accept connections only on specific ports and only from your control plane nodes.

    Kubernetes Namespace Resource Quota

    You should make sure to carefully define resource quotas for any namespaces you have deployed. If you do not restrict resources, you may find all of your hardware resources being consumed by a single namespace. This can make your entire cluster unavailable, leading to downtime and service disruption, for example if an attacker performs a denial-of-service (DoS) attack.

    To avoid this, familiarize yourself with the resource quota configurations that k8s makes available. You have options to define controls for both Kubernetes resources (including pods and services) as well as for "classic" resources (including memory, disk space, and CPU).
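    A single ResourceQuota object can cap both kinds of resource at once. The sketch below uses placeholder names and limits that you would tune to your own cluster's capacity.

```yaml
# Per-namespace quota sketch: caps object counts (pods, services) and
# compute resources (CPU, memory). Name, namespace, and limits are placeholders.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: app-team
spec:
  hard:
    pods: "50"
    services: "10"
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
```

    Once a compute quota like this is active, pods in the namespace must declare resource requests and limits, or the API server will reject them, so pair the quota with a LimitRange that supplies sensible defaults.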

    Use Third-Party Authentication for API Server

    When setting up your authentication protocols, you should strongly consider a third-party provider, for example, GitHub via a connector like Dex. Third-party providers can offer access to more secure features, such as multi-factor authentication (MFA). They also keep user management out of the kube-apiserver itself, making it harder for attackers to add or remove users. 

    Another consideration is to ensure that users are not managed at the API server level. This level can be more difficult to secure and can provide more extensive access to attackers if breached. 
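    Wiring the API server to an external OIDC provider such as Dex comes down to a handful of kube-apiserver flags. The excerpt below is a sketch from a static pod manifest; the issuer URL, client ID, and claim names are placeholders that depend on your provider's configuration.

```yaml
# Static pod manifest excerpt: kube-apiserver OIDC flags for an external
# identity provider. Issuer URL, client ID, and claims are placeholders.
spec:
  containers:
    - name: kube-apiserver
      command:
        - kube-apiserver
        - --oidc-issuer-url=https://dex.example.com
        - --oidc-client-id=kubernetes
        - --oidc-username-claim=email
        - --oidc-groups-claim=groups
```

    With this in place, the provider issues short-lived ID tokens and group claims, and RBAC bindings can target those groups instead of locally managed users. On a managed service, the equivalent settings are usually exposed through the provider's own configuration rather than apiserver flags.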

    Rotate Encryption Keys

    You should be using encryption wherever possible as a fallback security measure. When used properly, it can help ensure that even if your deployment is compromised your data is not. When setting up encryption you can ensure it remains secure by making sure to periodically rotate your encryption certificates and keys. This reduces the chances that attackers are able to intercept keys or forge certificates.

    In your deployment, the kubelet can automatically rotate its client certificates (and, if enabled, its serving certificates). However, other certificates, such as those used by the kube-apiserver, must be rotated manually unless you are using a managed service. 
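    Key rotation also applies to encryption of Secrets at rest, which is configured through an EncryptionConfiguration file passed to the kube-apiserver. In the sketch below the key material is a placeholder; to rotate, you add a new key as the first entry, restart the API server, rewrite existing secrets so they are re-encrypted, and only then remove the old key.

```yaml
# Sketch of an EncryptionConfiguration for Secrets at rest. The first key in
# the list is used for writes; later keys remain available for reads, which
# is what makes gradual rotation possible. Key material is a placeholder.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources: ["secrets"]
    providers:
      - aescbc:
          keys:
            - name: key2   # new key: all writes use this
              secret: <base64-encoded 32-byte key>
            - name: key1   # old key: kept until all secrets are rewritten
              secret: <base64-encoded 32-byte key>
      - identity: {}
```

    A typical rewrite step is forcing every secret through the API server again, for example with `kubectl get secrets --all-namespaces -o json | kubectl replace -f -`, after which the old key can be dropped from the file.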


    Conclusion

    When working with Kubernetes, you need to carefully monitor and log your cluster to ensure proper health and security. However, this isn't enough for running large-scale operations. When running Kubernetes in production, you will run into security challenges concerning architecture, dependencies, and container vulnerabilities. You need to ensure that all of these aspects are secure. 

    There are certain practices you can adopt to ensure your Kubernetes operations are secure. You can monitor network traffic to limit unnecessary or insecure communication. You should isolate Kubernetes nodes to prevent unauthorized access. To prevent cluster availability issues, you should define Kubernetes namespace resource quotas. 

    You should also consider using third-party authentication, such as GitHub via Dex, for your API server. As a final measure, you should use encryption as much as possible and rotate your keys. Encryption can ensure that your data remains protected even if it is accessed by unauthorized users. 

    When you need to monitor your Kubernetes setup, try out MetricFire's free trial or book a demo and talk to us directly. MetricFire can help you monitor your applications across various environments.
