Google optimizes Kubernetes with an autopilot feature
Google's making life easier for the cloud-native community
IT operations has never looked back since Google introduced Kubernetes, which has become the benchmark platform for running microservices infrastructure in the cloud. Helpful as it is, Google Kubernetes Engine (GKE) has proven overwhelming for many teams over the years. Continued development of the engine has now yielded GKE Autopilot, moving container orchestration up a level with a hands-off mode of operation.
With Autopilot, Google takes control of not just the control plane but the node infrastructure as well
Autopilot brings simplicity and a reduction in workload operations for developers
Tasks previously done manually in GKE, such as infrastructure maintenance, are now carried out automatically
Resource optimization comes at a price, but billing is based on the deployed pods rather than on nodes
Optimization means a few features have been removed, so autonomy-oriented users will not be happy
GKE Autopilot selects stack components for you, e.g. VMs, a Virtual Private Cloud-based public/private network, and CSI-based storage
Since its initial release on July 21, 2015, Kubernetes has demonstrated enough versatility and proficiency around reliability, scalability, and security to persuade over 100,000 companies to use Google Kubernetes Engine as of the second quarter of 2020. Autopilot is the latest upgrade in Google Cloud, delivering a platform Google regards as inimitable.
What is GKE Autopilot?
Drew Bradstock, Group Product Manager for Google Kubernetes Engine, said the goal of Autopilot was to take all the tools Google had built for GKE and combine them with the know-how of its site reliability engineering teams in running these clusters in production.
Google describes Autopilot as a "revolutionary mode of operations for managed Kubernetes that lets you focus on your software while managing the infrastructure." Google envisions that Autopilot will entice more businesses to embrace the container orchestration platform because it simplifies operations "by managing the cluster infrastructure, control plane, and nodes."
A Kubernetes cluster has two main components: the control plane, which oversees the cluster infrastructure and the workloads attached to it, and the nodes, which run customer applications packaged as containers.
When managed Kubernetes first came to light, cloud providers managed the control plane, while the worker nodes, essentially virtual machines, remained available for user access and management.
With GKE Autopilot, Google intends to manage the entire infrastructure, not just the control plane. This dramatically simplifies cluster creation, as there are fewer decisions to make.
GKE Autopilot emphasizes simplifying the options for provisioning secure, high-grade cluster infrastructure. Compared to standard GKE, provisioning an Autopilot cluster exposes far fewer knobs and switches. Autopilot provisions and configures the worker nodes itself, determining a best-in-class configuration and an ideal fleet size at runtime based on the characteristics of the deployed workloads.
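As a sketch of that simplification, creating an Autopilot cluster needs little more than a name and a region; the cluster name and region below are illustrative placeholders:

```shell
# Create a GKE Autopilot cluster: no node pools, machine types, or
# sizing flags required; Google provisions and scales the nodes.
gcloud container clusters create-auto example-cluster \
    --region=us-central1

# Fetch credentials, then deploy workloads as usual; Autopilot sizes
# the node fleet from the pods' resource requests.
gcloud container clusters get-credentials example-cluster \
    --region=us-central1
```

Contrast this with a standard GKE cluster, where node machine types, node pool sizes, and upgrade settings are all choices the operator must make up front.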
Some companies view their Kubernetes expertise as an important factor in getting ahead of the competition. With Autopilot, businesses can enjoy Kubernetes with far less maintenance and management work.
All of this comes at a price, but billing is calculated per pod, not per node: the more pods deployed, the higher the fees, which account for the compute, memory, and storage resources the pods request. In addition to the GKE flat fee of $0.10 per cluster per hour, there are fees for the resources the pods consume. Google offers a 99.95% SLA for the control plane of its Autopilot clusters and a 99.9% SLA for Autopilot pods deployed in multiple zones.
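Since Autopilot bills by what pods request rather than by node, the resource requests in a pod spec effectively become the billing unit. A minimal sketch of such a manifest, with illustrative names and values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: billing-example   # hypothetical pod name
spec:
  containers:
  - name: app
    image: nginx:1.21
    resources:
      # In Autopilot, these requests drive both scheduling and billing:
      # you pay for the CPU, memory, and ephemeral storage requested,
      # not for the underlying node capacity.
      requests:
        cpu: "500m"
        memory: "1Gi"
        ephemeral-storage: "1Gi"
```

Sizing requests accurately therefore matters twice over in Autopilot: it determines both the fleet Google provisions and the bill at the end of the month.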
Due to its automation, GKE Autopilot has downsides for users who want full autonomy; they may prefer to stick with standard GKE. Configuring third-party storage platforms such as Portworx by Pure Storage, or network policy based on Tigera's Calico, is not supported by GKE Autopilot. Deploying applications from the marketplace and adding nodes with GPU- or TPU-based AI accelerators are other features missing from the new mode.
GKE Autopilot clearly represents a big step forward in areas such as automated security and autoscaling. Its ease of use shows the difference it brings to the Kubernetes environment. Google has once again moved ahead, delivering an industry first that removes much of the complexity of running cloud-native workloads.