Kubernetes vs OpenShift: This is what you need to know

Kubernetes is the most popular container orchestration engine; OpenShift, from Red Hat, is one of the most popular distributions built on top of it.

Red Hat markets OpenShift as a Platform-as-a-Service (PaaS) offering that bundles Kubernetes with many additional features and a support plan. Kubernetes itself, however, can be deployed on many platforms, such as Google Cloud, AWS, and Azure, and every major cloud provider offers a managed Kubernetes service: Google GKE, AWS EKS, and Azure AKS.

While Kubernetes and OpenShift seem similar, subtle differences determine which types of applications and organizations each is suitable for. We’ll look at these differences in detail so that you can make an informed decision.

    Installation Support

    OpenShift has some defining characteristics that give it an edge over Kubernetes. However, one area in which it is currently restricted is platform support, and this applies to both operating systems and hosting platforms.

    The virtual machines running Kubernetes masters and nodes can use a wide range of operating systems, while Red Hat restricts OpenShift to a few (e.g., Red Hat Enterprise Linux (RHEL) 7.4 or later with the "Minimal" installation option and the latest packages from the Extras channel, or RHEL Atomic Host 7.4.5 or later for the master nodes).

    In terms of hosting platforms, OpenShift used to be limited to Red Hat’s own offerings, but with OpenShift 4 it now supports others, such as AWS and vSphere.

    Convenience of Deploying

    Both OpenShift and Kubernetes use similar objects for deployment. However, OpenShift’s implementation, called DeploymentConfig, is logic-based, in contrast to Kubernetes’ controller-based Deployment objects. OpenShift has the advantage of lifecycle hooks that can prepare the environment for updates.
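    To illustrate the hook mechanism, here is a minimal sketch of a DeploymentConfig that runs a pod before each rollout; the application name, image, and migration script below are hypothetical, not part of any particular product:

```yaml
# Sketch: a DeploymentConfig with a "pre" lifecycle hook. The names
# (myapp, migrate-db.sh, the image) are illustrative placeholders.
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    app: myapp
  strategy:
    type: Rolling
    rollingParams:
      pre:                        # runs before the new pods roll out
        failurePolicy: Abort      # abort the rollout if the hook fails
        execNewPod:
          containerName: myapp
          command: ["/bin/sh", "-c", "./migrate-db.sh"]
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: quay.io/example/myapp:latest
```

    Plain Kubernetes Deployment objects have no equivalent of `rollingParams.pre`; the same effect usually requires init containers or external tooling.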

    Kubernetes’ Deployment objects shine when it comes to multiple, concurrent updates and the ability to scale them appropriately. On the other hand, OpenShift now also supports standard Deployment objects, which makes switching to OpenShift quite convenient. Overall, OpenShift’s approach is preferred; however, if you require multiple or concurrent updates, you may have to opt for Kubernetes’ Deployments.

    CI/CD is another essential requirement for containerized applications. OpenShift’s Jenkins integration makes it particularly useful, with features like OAuth authentication, support for source-to-image builds, and pipeline definitions. Kubernetes (bare-metal, on-premise, or managed) also integrates well with Jenkins, and both integrate with Jenkins X if you are interested in GitOps.

    Helm is a tool that simplifies deployments and is fully compatible with Kubernetes. Helm 3 became more OpenShift-friendly by removing its dependency on Tiller, and OpenShift 4 introduced integration with OperatorHub.

    This new feature empowers developers and administrators to automate and orchestrate complex tasks using operators’ capabilities, such as automated backups, self-tuning, automated updates, and, of course, self-service provisioning.
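    Under the hood, installing an operator from OperatorHub typically means creating an Operator Lifecycle Manager Subscription. A hedged sketch, where the package name and channel are illustrative examples rather than a recommendation:

```yaml
# Sketch: subscribing to an operator from an OperatorHub catalog via OLM.
# The package, channel, and catalog names are illustrative.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: etcd-operator
  namespace: openshift-operators
spec:
  name: etcd                       # operator package to install
  channel: singlenamespace-alpha   # update channel published by the package
  source: community-operators      # catalog source to install from
  sourceNamespace: openshift-marketplace
```

    Once the Subscription exists, OLM installs the operator and keeps it updated according to the chosen channel.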


    Security

    OpenShift enforces stricter security policies than Kubernetes. By default, for instance, it does not allow running containers as root, which is why many of the official images on Docker Hub will not run unmodified. However, you can selectively allow certain containers to run as root (some, like Redis or Postgres, require root access).
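    For example, a cluster admin can grant the anyuid security context constraint (SCC) to a service account so that its pods may run as root; the project name below is hypothetical:

```shell
# Grant the built-in "anyuid" SCC to the default service account of a
# (hypothetical) project, allowing its pods to run containers as root:
oc adm policy add-scc-to-user anyuid -z default -n myproject
```

    This is a deliberate escape hatch; the default restricted SCC remains in force for everything else in the cluster.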

    Role-Based Access Control (RBAC) is enforced as part of OpenShift’s security approach, which makes a lot of sense, as permissions are vital, especially in production systems.

    Authentication is another area in which OpenShift shines. It has built-in support for LDAP and Active Directory integration and also supports authentication and authorization for external applications, all through a centralized OAuth implementation. While most, if not all, of this can be achieved on Kubernetes through services like Firebase on GKE, it requires more effort, which gives OpenShift a clear edge.
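    As a sketch of how centralized this is, an LDAP identity provider on OpenShift 4 is configured by editing a single cluster-scoped OAuth resource; the hostname, base DN, and attribute mappings below are hypothetical examples:

```yaml
# Sketch: wiring an LDAP identity provider into OpenShift 4's cluster
# OAuth config. The server URL and attribute names are placeholders.
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: ldapidp
    mappingMethod: claim
    type: LDAP
    ldap:
      url: "ldaps://ldap.example.com/ou=users,dc=example,dc=com?uid"
      insecure: false
      attributes:
        id: ["dn"]
        email: ["mail"]
        name: ["cn"]
        preferredUsername: ["uid"]
```

    On plain Kubernetes, comparable functionality usually means deploying and maintaining an external OIDC provider yourself.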


    Pricing

    OpenShift is available through many hosting platforms, such as AWS, Azure, self-hosting, and Red Hat managed solutions. Each platform has its own pricing: Azure pricing starts at $0.76/hour, AWS pricing starts at $36,000/year, and Red Hat offers both a free plan and a $50/month plan if more than one project is required.

    As discussed earlier, Kubernetes can be deployed on your own machines as well as on many platforms, so pricing depends on the platform you select and the resources you require. Most providers, such as Google Cloud, Amazon EKS, and Azure AKS, offer pricing calculators so that users can specify their exact needs and arrive at a plan that works for them. There is no clear winner with regard to pricing, as it depends on each user’s needs.

    Routers vs. Ingress

    Routers have existed in OpenShift for much longer than their Kubernetes counterpart. Routers are based on HAProxy and were already a mature solution when Ingress appeared on Kubernetes. However, some Ingress implementations now offer more features.

    Ingress is an interface with multiple implementations: in addition to HAProxy, it is supported by servers like Nginx and Kong, as well as Google Cloud and AWS implementations. In essence, the choice comes down to what you expect.
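    Side by side, the two objects express the same intent. A minimal sketch, where the hostname, service name, and port are hypothetical:

```yaml
# Sketch: exposing the same (hypothetical) service both ways.
# OpenShift Route:
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp
spec:
  host: myapp.example.com
  to:
    kind: Service
    name: myapp
  port:
    targetPort: 8080
---
# Equivalent Kubernetes Ingress (networking.k8s.io/v1):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 8080
```

    The Route is handled by OpenShift’s built-in HAProxy router, while the Ingress needs a controller (Nginx, Kong, a cloud load balancer, etc.) installed in the cluster.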

    OpenShift Projects / K8s Namespaces

    When it comes to OpenShift projects vs. Kubernetes namespaces, many of OpenShift’s features were built in the early days of Kubernetes, as a Red Hat engineer states:

    Projects provide for easier multi-tenancy by:

    1. Having stricter validation than namespaces (i.e. you cannot annotate a project other than a handful of predefined keys meaning you can assert a privileged user or component set that data)
    2. Projects are actually indirectly created by the server by a request mechanism. Thus you do not need to give users the ability to create projects directly.
    3. A cluster admin can inject a template for project creation (so you can have a predefined way to set up projects across your cluster).
    4. The project list is a special endpoint that determines what projects you should be able to see. This is not possible to express via RBAC (i.e. list namespaces means you can see all namespaces).

    Note that all of this was built in the early days of Kubernetes, and thus may be less important now.
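    The request mechanism mentioned above is visible in the CLI. A quick sketch, with a hypothetical team name:

```shell
# OpenShift: the user *requests* a project; the server creates it,
# optionally from an admin-defined template:
oc new-project team-a --description="Team A's workloads"

# Plain Kubernetes: creating a namespace requires direct create
# permission on the namespaces resource:
kubectl create namespace team-a
```

    This indirection is what lets OpenShift admins standardize quotas, network policies, and role bindings for every new project without granting users namespace-creation rights.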

    Ease of Use

    The learning curve associated with any platform is one of the factors that determines how well it is accepted. Kubernetes is an open-source tool, but Red Hat has been in this business for much longer, which has allowed them to refine the user experience, starting with their proprietary Gears technology:

    Gears were a core component of OpenShift v2. Technologies such as kernel namespaces, CGroups, and SELinux helped deliver a highly-scalable, secure, containerized application platform to OpenShift users. Gears themselves were a form of container technology.

    OpenShift v3 takes the gears idea to the next level. It uses Docker as the next evolution of the v2 container technology. This container architecture is at the core of OpenShift v3.

    OpenShift’s CLI offers the oc command, which combines the functions of several Kubernetes tools such as kubectl, kubens, and kubectx. It supports actions like viewing logs, switching between projects and namespaces, and building container images from source, some of which require external tools on Kubernetes.
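    A couple of examples of that consolidation; the project name and repository URL below are hypothetical:

```shell
# Switching between projects/namespaces ("payments" is a placeholder):
oc project payments                                        # built into oc
kubectl config set-context --current --namespace=payments  # plain kubectl

# Building a container image from source is also built into oc:
oc new-app https://github.com/example/app.git   # source-to-image build
oc logs -f bc/app                               # follow the build's logs
```

    On plain Kubernetes, the namespace switch typically relies on the separate kubectx/kubens tools, and source builds require an external CI system.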

    A comparison of web dashboards shows that OpenShift 4’s offering is a lot more intuitive, and also allows performing almost all actions available via the CLI.

    Use Cases and Suitability

    Looking at the above comparison, OpenShift offers a product packaged with many additional features. Overall, OpenShift is easier to use and has a learning curve that is less steep than Kubernetes’. However, these conveniences come at the cost of some of the flexibility that certain applications require.

    In comparison, Kubernetes offers much more flexibility for experienced teams that can manage their own containerized applications. Although they provide less hand-holding, managed Kubernetes providers like GKE offer a wide range of features, such as the flexibility to deploy on many platforms and operating systems, all of which are vital for complex applications.

    Based on the above, it is safe to say that OpenShift is more suitable for teams that have less experience with Kubernetes and security and wish to avoid the hassle of managing the complicated aspects of containerized applications and their security. OpenShift offers many features that will enable such teams to deploy their applications successfully.

    On the other hand, Kubernetes is more suitable for teams that need to have full control over their applications and containers. GKE, for instance, allows users to fine-tune hosting solutions to match their exact needs and provides competitive pricing. This makes it much more suitable for complex applications.


    It’s not a hard and fast rule, but if you want to quickly deploy your application without worrying much about the complexities of the container environment, Red Hat’s OpenShift is more suitable for you. But if you prefer to have complete control, then a managed Kubernetes provider like GKE is the way to go.
