Ensuring the security of your containers is essential to avoid service disruption and maintain user trust. This checklist walks through crucial questions, touching the areas where you could be getting container security wrong.
Containers provide an abstraction to run applications separately from their environment, but they are not immune to security threats. Attackers adopt ever more sophisticated techniques to target container technologies like Docker and Kubernetes, carrying out malicious activities and stealing private information.
This post is heavily inspired by this checklist, which covers questions you should ask to determine whether there is more you can do to improve the security of your deployment.
The following six questions span your container lifecycle, from building the environment and container images through to deployment and runtime.
By thinking through these questions and answering them honestly, you will identify actions you can take to make your deployments more secure.
Are you running your builds separately from your production cluster?
If your build processes run on the same machine as your application workloads, they share the same kernel, and any threat that escapes a build process onto the host has access to your applications. Your Dockerfile contains instructions that run during the build phase; if a threat actor manages to compromise those instructions, they can make the build execute malicious activities. This can be even more damaging if your Docker build process uses the Docker socket, because the Docker socket has root access to the host. It is safer, and highly recommended, to keep your build process in a separate cluster to minimize the risk of compromise.
However, if you are confident in your setup, you could run builds in production using sandboxing such as gVisor, or schedule your build jobs onto specific nodes in the cluster where application workloads don't run. You could also use rootless build tools like Docker BuildKit in rootless mode, since they do not require the Docker daemon.
Running builds this way is safe only if you are extremely careful about it; otherwise, keep your builds away from your production cluster.
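As a sketch of the rootless approach, BuildKit can build images without root privileges or the Docker socket. The socket path and image name below are illustrative placeholders; the exact socket location depends on your user ID and BuildKit configuration:

```shell
# Start a rootless BuildKit daemon (no root, no Docker socket needed).
rootlesskit buildkitd &

# Build from the Dockerfile in the current directory.
# The address and image name here are placeholders for your environment.
buildctl --addr unix:///run/user/1000/buildkit/buildkitd.sock \
  build \
  --frontend dockerfile.v0 \
  --local context=. \
  --local dockerfile=. \
  --output type=image,name=registry.example.com/myapp:latest
```

Because the daemon runs unprivileged, a compromised build instruction cannot use the Docker socket's root access to reach the host.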
Is all executable code added to a container image at build time?
Once you build your application image, it is recommended that you scan it for potential vulnerabilities that attackers could exploit. If you then use package managers like yum or apt, or tools like cURL, to add additional executable code to your already-scanned image, it would be best not to deploy it directly into production. Best practice is never to deploy code into production that has not been scanned for vulnerabilities. If you have any reason to grab additional code through a package manager, go back, rebuild your application's container image, and scan it for vulnerabilities again before moving ahead to deploy it into production. This gives you confidence in the security of the code you are running.
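The rebuild-then-scan workflow can be sketched as below. Trivy is just one example of an open source image scanner, and the registry and image names are placeholders:

```shell
# Rebuild the image so that all executable code is added at build time.
docker build -t registry.example.com/myapp:1.2.3 .

# Scan the rebuilt image; exit non-zero on high/critical findings so a
# CI pipeline stops before anything unscanned reaches production.
trivy image --severity HIGH,CRITICAL --exit-code 1 registry.example.com/myapp:1.2.3

# Only push (and deploy) the image if the scan passed.
docker push registry.example.com/myapp:1.2.3
```

Wiring the scan into CI with a failing exit code is what turns "we scan images" into "unscanned code cannot ship".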
Are you avoiding the --privileged flag?
The --privileged flag is often called one of the most dangerous flags in computing. It was introduced to allow Docker to run inside Docker, so that developers at Docker could work on Docker itself. While others might have different use cases, the majority use it for running builds. When added to a container, the --privileged flag grants the container all possible permissions and capabilities, effectively giving it unrestricted access to the host, including its devices. Threat actors could leverage this to perform dangerous activities like erasing a partition on the host's hard drive. Unless it is truly indispensable, avoid using the --privileged flag with your Docker containers. A popular misconception is that you have to enable the --privileged flag to get enough permission to run as the root user. This isn't true: a Docker container runs as root by default, and you do not need the privileged flag for that.
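When a workload genuinely needs an elevated permission, the safer pattern is to drop all capabilities and add back only the specific one required, rather than reaching for --privileged. A sketch, with a placeholder image name and a hypothetical need to bind a port below 1024:

```shell
# Start from zero capabilities, then grant only what the workload needs.
# no-new-privileges stops processes from gaining privileges via setuid binaries.
docker run --cap-drop ALL --cap-add NET_BIND_SERVICE \
  --security-opt no-new-privileges \
  registry.example.com/myapp:1.2.3
```

This keeps the blast radius of a container compromise to the single capability you granted instead of the entire host.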
Are you keeping your hosts up to date with the latest security releases?
Keeping the container runtime, orchestration, monitoring, logging, and all other software you run on your hosts up to date improves the security of your containers. Older versions of software tend to contain bugs and security issues that are continuously fixed in newer releases. Keeping your software up to date keeps your host systems current with the latest security patches.
It can be challenging to keep all the software on your host machines updated, especially when there are many smaller components. This is one of the advantages of using managed distributions: they collect updates to the essential software running on the machine and help you easily keep up with security releases.
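On hosts you manage yourself, automating patching is usually more reliable than remembering to do it. As one hedged example, Debian and Ubuntu hosts can apply security updates automatically with the unattended-upgrades package:

```shell
# Debian/Ubuntu hosts: install and enable automatic security updates.
sudo apt-get update
sudo apt-get install -y unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades

# Preview which packages would be upgraded without applying anything.
sudo unattended-upgrade --dry-run --debug
```

Other distributions have equivalents (for example, dnf-automatic on Fedora-family systems); the point is that security patching should not depend on a human running it by hand.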
Are your secrets encrypted at rest and in transit?
Secrets like passwords, tokens, and so on need to be safely encrypted wherever they are, at rest or in transit. If you store secrets natively using Kubernetes Secrets, they are kept, along with all your other state information, in the etcd database, and etcd is not encrypted by default. If you look up the etcd database file on the host and search for a secret's value, you can find it right there with no encryption. Anyone who has access to the database file on the host machine can read your secrets.
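A quick way to see why this matters: by default, Kubernetes Secret values are only base64-encoded, and base64 is an encoding, not encryption; anyone can reverse it with standard tools (the secret value below is a made-up example):

```shell
# base64 is a reversible encoding, not encryption.
printf 'supersecret' | base64
# -> c3VwZXJzZWNyZXQ=

# Anyone holding the encoded value can decode it instantly.
printf 'c3VwZXJzZWNyZXQ=' | base64 -d
# -> supersecret
```

To close this gap, Kubernetes supports encryption at rest for Secrets: you supply an EncryptionConfiguration file to the API server via its --encryption-provider-config flag so that secret values are encrypted before they are written to etcd.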
Can you prevent your container from drifting?
Do you have the ability to spot when a container attempts to run something that was not in the initially scanned container image? In the second question above, we explained that you shouldn't run a container image to which additional executable code has been added without rebuilding and scanning it. Even if you treat your containers as immutable, a threat actor who makes it through every other security layer could still add executable code at runtime, so you need a capability that detects this and prevents the code from running. A common container compromise these days involves exploiting container resources for malicious cryptocurrency mining.
To avoid this, you can adopt open source tools like driftctl to help you detect such activity. Security platforms such as Aquasec also provide features that detect executable code added at runtime and prevent it from running by denying it permission.
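A complementary control that is built into Docker itself is to make the container's root filesystem immutable, so new executables simply cannot be written at runtime. A minimal sketch, with a placeholder image name:

```shell
# Mount the root filesystem read-only; give the app writable scratch
# space only on a tmpfs at /tmp, which is wiped when the container stops.
docker run --read-only --tmpfs /tmp registry.example.com/myapp:1.2.3
```

Kubernetes offers the same idea via the readOnlyRootFilesystem setting in a pod's securityContext. Any attempt to drop a cryptominer binary into a read-only filesystem fails outright, regardless of whether a detection tool notices it.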