What Is the Most Developer-Friendly Way to Run Kubernetes?



DIY Kubernetes deployments, while seemingly cheaper, can cause catastrophes in production if you were not careful enough. Imagine the HRMS you developed for an upcoming start-up running into serious performance issues at the time of employee enrollment.


Organizations have huge expectations from Kubernetes, ranging from developer velocity to streamlined delivery, and from quickly fixing bugs and security issues to shipping timely software updates in response to user feedback. A sub-standard Kubernetes implementation will hurt those goals. Therefore, DIY Kubernetes is a big no-no if you're developing a business-critical application.

What about managed Kubernetes?

While installing and managing Kubernetes is of little concern to a business, quickly deploying new applications is a priority. For us developers, Kubernetes' end users, platform availability is everything. We want it to be fast and to work as expected. Getting into the implementation details of a Kubernetes cluster, or constantly worrying about its operational state, is the last thing we want. Consequently, it is best to opt for a managed Kubernetes solution such as Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), or Azure Kubernetes Service (AKS).

Are they any better than DIY Kubernetes?

Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), and Azure Kubernetes Service (AKS) are the three leading managed Kubernetes services, from Google, Amazon, and Microsoft respectively. Google, being the original developer of Kubernetes, was first to market with GKE, which still carries the distinction of being the most advanced solution available today. Nevertheless, Amazon and Microsoft are not far behind with EKS and AKS. All three offer, more or less, the same feature set: they are available globally, are priced similarly, offer solid documentation, tend to run recent versions of Kubernetes, support a CLI, promise near-100% SLAs, allow on-demand upgrades, support GPU nodes, run both Windows and Linux containers, provide RBAC, load balancers, and global load balancing, and comply with HIPAA, SOC, ISO, and PCI DSS.

Automated Cluster Upgrades

While on-demand upgrades are standard across the board, only GKE offers automatic upgrades of the control plane. AKS is relatively easy to upgrade too, with a single command to upgrade a cluster. Upgrading EKS requires several commands and may take a while.
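As an illustration of the difference, here is roughly what each upgrade looks like from the CLI. The cluster, resource-group, nodegroup, and version names below are placeholders, not values from any real environment:

```shell
# AKS: control plane and nodes upgraded with a single command
az aks upgrade --resource-group my-rg --name my-cluster --kubernetes-version 1.27.3

# GKE: the control plane can be upgraded explicitly (or left to auto-upgrade)
gcloud container clusters upgrade my-cluster --master --cluster-version 1.27.3-gke.100

# EKS: upgrade the control plane, then upgrade each nodegroup separately
eksctl upgrade cluster --name my-cluster --approve
eksctl upgrade nodegroup --cluster my-cluster --name my-nodegroup
```

The EKS path involves the most steps because the control plane and nodegroups are versioned and upgraded independently.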

Resource Monitoring

GKE's integrated monitoring platform, Stackdriver, is the gold standard of Kubernetes monitoring. In addition to monitoring masters and nodes, it can add logging for all Kubernetes components inside the platform without extra manual steps.

AKS's resource monitoring tool, Azure Monitor, is on par with Stackdriver if not better. In addition, there is another monitoring tool, Application Insights. You can evaluate the health of each container using Azure Monitor and monitor the Kubernetes components using Application Insights, provided you have configured Istio.

EKS integrates with AWS CloudWatch for logging and monitoring, and with CloudTrail, which captures all API calls to Amazon EKS as events.

Auto-Scaling

Kubernetes can scale nodes up and down, on demand or automatically, to preserve service availability.

GKE makes setting up auto-scaling a lot simpler than the other two: from the interface or the CLI, you just set the VM size and the minimum and maximum number of nodes, and GKE takes care of the rest. AWS also provides auto-scaling, with a few manual configuration steps. Microsoft Azure implements this feature as well; you can set up auto-scaling by defining an autoscaler profile and running a few CLI commands.
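For instance, enabling the GKE cluster autoscaler is just a matter of setting node bounds at create time, and pod-level auto-scaling can be added on any cluster with one kubectl command. Cluster, deployment, and machine-type names below are illustrative:

```shell
# GKE: create a cluster whose node pool auto-scales between 1 and 5 nodes
gcloud container clusters create my-cluster \
  --machine-type e2-standard-4 \
  --enable-autoscaling --min-nodes 1 --max-nodes 5

# Any cluster: scale a deployment's pods based on CPU utilization
kubectl autoscale deployment my-app --cpu-percent=70 --min=2 --max=10
```

Node auto-scaling (adding VMs) and pod auto-scaling (adding replicas) are separate mechanisms; in practice you usually want both.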

Role-Based Access Control (RBAC)

RBAC allows admins to dynamically configure access policies through the Kubernetes API. All three providers, and most providers in general, offer RBAC.
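Because RBAC is core Kubernetes, the same policy works unchanged on GKE, EKS, and AKS. A minimal example, granting a user read-only access to pods in one namespace (the namespace and user name are illustrative):

```shell
# Define a Role with read-only pod permissions and bind it to a user
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: staging
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: staging
  name: read-pods
subjects:
- kind: User
  name: jane@example.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
EOF
```

The difference between providers lies mainly in how cloud identities (IAM users, Azure AD accounts) are mapped to the `User` subjects referenced here.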

Managed Kubernetes services aren’t developer-friendly

These managed Kubernetes services from the tech giants might make installing and managing a Kubernetes cluster easy, but they are of no help when you have to develop an incremental feature, deploy multiple builds over a continuous pipeline, monitor applications in a production-like environment, or secure a cluster.

When you opt for a managed Kubernetes cluster, you expect it to support the rapid development of microservices and containerized applications with minimal effort. As a developer, however, that is not the case when you have to produce multiple YAML files by hand. The time you should spend coding and bringing the application to life, you instead spend endlessly creating YAML manifests and config files, a process that is not only tedious, complex, and error-prone, but also bad for business.
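To see the scale of the boilerplate, consider what even a trivial service requires: at minimum a Deployment and a Service, each with its own nested structure that must agree on labels and ports. The image name and ports below are placeholders:

```shell
# Minimal manifests for a single stateless service
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-app      # must match the pod template labels below
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        image: registry.example.com/hello-app:1.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-app
spec:
  selector:
    app: hello-app        # must match the Deployment's labels
  ports:
  - port: 80
    targetPort: 8080
EOF
```

Multiply this by every microservice, environment, and config change, and the hand-maintenance burden becomes clear.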

Kubernetes, unlike other tools, is not natively integrated with the local development environment you write code in. In DevOps, you expect code to be deployed and tested on each commit in the development cloud. This incoherence breaks your usual workflows around continuous integration and stalls the automated delivery pipeline. Kubernetes, managed or DIY, is of no help in this regard, and integrating your IDE with Kubernetes yourself is unthinkable when you're running on tight deadlines and fixed schedules.

When you're developing an application, your toolkit goes beyond Kubernetes. You need Jenkins, for instance, and a host of other CI/CD tools. At a large scale, observability tools are also required. Under DevOps, you also need logging, analytics, metrics, tracing, and dashboards, plus support for all sorts of databases and key/value stores. You also have to take care of pub/sub messaging and queueing systems.

However, debugging, diagnosis, and troubleshooting are not natively integrated into a managed Kubernetes cloud; you have to go to all sorts of places to deploy tools like Prometheus, Grafana, Jaeger, and Kiali.
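For example, just the monitoring and tracing half of that stack typically means adding Helm repositories and installing community charts by hand; the release names below are arbitrary:

```shell
# Prometheus + Grafana via the community kube-prometheus-stack chart
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack

# Jaeger tracing is a separate repository and a separate install again
helm repo add jaegertracing https://jaegertracing.github.io/helm-charts
helm install jaeger jaegertracing/jaeger
```

Each tool then needs its own configuration, upgrades, and dashboards wired together, none of which is the application work you set out to do.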

As a developer, you are supposed to code and build applications, not worry about setting up production environments or deploying additional tools.

CloudPlex to your rescue

CloudPlex builds upon your existing cloud-managed solutions for Kubernetes in order to enhance the experience and make Kubernetes approachable.

CloudPlex promises to end the mess of generating and managing YAML manifest and config files once and for all. Think of CloudPlex as a visual drag-and-drop canvas on which you design, develop, test, configure, deploy, monitor, and manage Kubernetes applications, finally saying goodbye to the horrors of producing YAML manifests and config files.

You just configure Docker containers on the visual canvas, and the platform takes care of the rest, performing validation and generating the related manifest and config files.

Hand-cranking YAML files is a persistent pain in the neck. Nobody wants to configure a storage class, persistent volume claims, and persistent volumes for each Docker container. CloudPlex takes that pain away too.
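The storage boilerplate alone illustrates the point. Without such automation, a container that needs a disk means hand-writing at least a PersistentVolumeClaim like this one (names and sizes are illustrative), plus often a StorageClass and the volume mounts that reference it:

```shell
# Claim a 10Gi volume from the cluster's default storage class
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF
```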

It is a magic wand that automates storage class configs, persistent volume claims, persistent volumes, and their associations.

The ultimate tool for developers, CloudPlex allows you to develop, test, and debug in your local IDE against a production-like environment. The CloudPlex proxy provides diagnostic dashboards with full stack traces and rich metrics, seamlessly integrates with your CI tool, and automatically deploys to your dev cloud.

Moreover, you can debug, diagnose, and troubleshoot in the cloud with the CloudPlex dashboard. CloudPlex includes all your favorite tools, Prometheus, Grafana, Jaeger, and Kiali, for debugging, troubleshooting, and diagnosing applications. So, no more time spent searching the internet for the tools essential to your application development workflow.

They are already integrated into the cloud dashboard.

Further, the CloudPlex visual canvas allows application deployment on multiple managed Kubernetes services, not just Google Kubernetes Engine (GKE), Azure Kubernetes Service (AKS), and Elastic Kubernetes Service (EKS). There is also support for DigitalOcean Kubernetes, IBM Cloud Kubernetes Service (IKS), OpenStack, any user-managed Kubernetes cluster, and on-premise (bare-metal) Kubernetes clusters. With all of the above features, CloudPlex is not only an easier way to run a managed Kubernetes cluster; it is also a great improvement over DIY and bare-metal Kubernetes clusters.

Whether you are running your cluster using a public or a private cloud, CloudPlex makes Kubernetes a more developer-friendly platform.

Asad Faizi

Founder CEO

CloudPlex.io, Inc

asad@cloudplex.io


Originally published on cloudplex.io
