Configuration management: Do you really need it?
If DevOps is about automation and smoothing the relationship between the different teams responsible for the same application, configuration management is one of its pillars. Wikipedia defines CM as "a systems engineering process for establishing and maintaining consistency of a product's performance, functional, and physical attributes with its requirements, design, and operational information throughout its life." Last week, I was talking with someone who asked me some questions about CM, and I wasn't able to give him a proper introduction because I kept diving into complex use cases. So I decided to write this post.
In this post, we are going to use a very basic example: a simple use case, probably not the one you will find in large teams and corporations, but consider it an introductory example.
One of the principal use cases of configuration management is managing different environments, or many versions of the same environment. The goals differ from one team to another, but it's mainly, as said, about managing the configurations of your development, testing, and production pipelines and workloads. In DevOps, configuration management is, and should be, automated.
Let's take one or two very simple scenarios. The goal here is not exploring the advanced features of configuration management, but walking through a simple use case through which you can understand whether you need it or not.
You work with a team of developers responsible for building a set of services. In most cases, developers will use a local development environment that should be similar to the testing and production ones. If you use a given version of MySQL in production, the developer should use the same version. It takes some work to keep these environments similar. This problem can be solved by using containers, since a container by definition packages an application with its dependencies; developers can therefore use the same container shipped with the same software, the same dependencies, and the same versions of those dependencies used in production. This solves the dependency hell.
"Dependency hell is a colloquial term for the frustration of some software users who have installed software packages which have dependencies on specific versions of other software packages (Wikipedia)"
However, even if containers solve many problems, they don't address configuration management. When your developers test regularly, they need self-service machines or containers, and clusters if you use Kubernetes. Reproducing the environment can be done using tools like Terraform.
"I worked once with a company that has 12 environments for one application: The same application has 3 variants and 4 different environments for each variant.. imagine the jumble!"
In this case, a developer should have 12 Terraform configuration files, with variables specific to each cloud service you create. Each application also has variables that change from one environment to another (database connection string, credentials, etc.). You will find yourself managing different configurations without a single source of truth. Imagine the mess in environments shared between developers and ops teams.
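To see how quickly the combinations add up, here is a minimal sketch (the variant and environment names are hypothetical, not from the story above) that enumerates every variant/environment pair a team would have to maintain a configuration file for:

```python
from itertools import product

# Hypothetical names: 3 variants of the application, 4 environments each.
variants = ["free", "pro", "enterprise"]
environments = ["dev", "test", "staging", "prod"]

# One Terraform variables file per (variant, environment) combination.
config_files = [f"{v}-{e}.tfvars" for v, e in product(variants, environments)]

print(len(config_files))       # 12 files to keep in sync by hand
print(config_files[0])         # free-dev.tfvars
```

Twelve files is already painful; add a new environment and every variant needs another file, which is exactly the drift a single source of truth prevents.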
Say your developers use external services like S3; each developer should have their own bucket so they don't interfere with each other's work.
If you don't have a configuration management process, each developer will manually change the bucket name hardcoded in the application code, and will probably commit it to the version control system by mistake. One of the solutions is storing the S3 bucket name in an environment variable. The code then reads the bucket name at runtime. If you're using Python, for instance, you will use os.environ['BUCKET_NAME'] and export the BUCKET_NAME variable in your shell.
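As a minimal sketch of this pattern (the bucket names here are made up for illustration):

```python
import os

# In a real setup each developer would run, e.g.:
#   export BUCKET_NAME=alice-dev-bucket
# Here we set a demo value so the snippet is self-contained.
os.environ.setdefault("BUCKET_NAME", "alice-dev-bucket")

# os.environ["BUCKET_NAME"] raises KeyError when the variable is unset;
# .get() lets you supply a local-development fallback instead.
bucket_name = os.environ.get("BUCKET_NAME", "local-fallback-bucket")

print(bucket_name)
```

The fallback in `.get()` is a design choice: it keeps local development working even before a developer has exported anything, at the cost of possibly hiding a missing variable in production.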
If you do this, you have already started employing configuration management in your development process. The only problem is that this solution does not solve most of the problems related to managing your configurations.
First of all, you will probably need to store hundreds of variables in your OS, and some of them may collide by name. When your team introduces new variables in the code, each team member has to set them manually on their local system… Managing configurations becomes harder and harder over time.
A solution you may think of is a centralized source of truth: a single script that you maintain and adjust according to your evolving requirements. Each of your developers pulls it and adapts it to their needs. If you thought of this solution, you have already started imagining a configuration management tool. However, compared to dedicated configuration management tools, scripting has many drawbacks. For example, a script is not reversible, while some configuration management tools are, because they focus on the final goal, not the steps required to achieve it. A tool like SaltStack or Ansible takes as input a description of the desired state, not the steps to follow, whereas in a bash script you must describe every step yourself. A configuration management tool also makes remote configuration management possible; if you are writing scripts to do that, you're probably wasting time. Using an existing solution, or building an abstraction layer to manage configuration remotely, is safer and even more interesting… but why reinvent the wheel?
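To make the declarative idea concrete, here is a toy sketch (package names and states are invented for illustration, and this is nothing like the real SaltStack or Ansible internals): you describe the desired state, and the tool computes the steps needed to converge to it.

```python
def converge(current: dict, desired: dict) -> list:
    """Return the actions needed to move `current` state to `desired` state.

    This is the declarative model in miniature: the user writes `desired`,
    never the step-by-step actions, and re-running on an already-converged
    system yields no actions (idempotence).
    """
    actions = []
    for package, state in desired.items():
        if state == "installed" and current.get(package) != "installed":
            actions.append(f"install {package}")
        elif state == "absent" and current.get(package) == "installed":
            actions.append(f"remove {package}")
    return actions


# Hypothetical desired state and current machine state.
desired = {"nginx": "installed", "mysql": "absent"}
current = {"mysql": "installed"}

print(converge(current, desired))  # → ['install nginx', 'remove mysql']
print(converge({"nginx": "installed"}, {"nginx": "installed"}))  # → [] (nothing to do)
```

In a bash script, you would write the `install`/`remove` commands yourself and guard each one against being run twice; here the "tool" derives them from the desired state, which is why declarative runs are safe to repeat.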
The use case here is a very basic one; configuration management tools do much more than this. Consider it "use case zero" of CM.
If you think this simple use case applies to you, you should consider using SaltStack, Chef, Ansible, or one of their alternatives. However, if you have a small team and you don't feel that this will save you time, don't over-engineer your work and overwhelm your team with tooling: keep things simple.