Originally posted on InfoWorld.
Kubernetes and containers are changing how applications are built, deployed, and managed. These distros are leading the charge.
Kubernetes has become the project developers turn to for container orchestration at scale. The open source container orchestration system out of Google is well-regarded, well-supported, and continues to evolve.
Kubernetes is also sprawling, complex, and difficult to set up and configure. Not only that, but much of the heavy lifting is left to the end user. The best approach, therefore, isn’t to grab the bits and try to go it alone, but to seek out a complete container solution that includes Kubernetes as a supported, maintained component.
This article looks at the six most prominent Kubernetes offerings. These are distributions that incorporate Kubernetes along with container tools, in the same sense that different vendors offer distributions of the Linux kernel and its userland.
Note that this list does not include dedicated cloud services, such as Amazon EKS or Google Kubernetes Engine. I’ve focused on software distributions that can be run locally or as a cloud-hosted option.
Canonical Kubernetes

Canonical, maker of Ubuntu Linux, provides its own Kubernetes distribution. One of the big selling points for Canonical Kubernetes is the widely respected, well-understood, and commonly deployed Ubuntu Linux operating system underneath. Canonical claims that its stack works in any cloud or on-prem deployment, with support included for both CPU- and GPU-powered workloads. Paying customers can have their Kubernetes cluster remotely managed by Canonical engineers.
Canonical’s Kubernetes distribution is also available in a miniature version, MicroK8s. Developers and Kubernetes newcomers can install MicroK8s on a notebook or desktop and use it for testing, experimentation, or even production use on low-profile hardware.
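To give a sense of how lightweight the setup is, a typical MicroK8s install on an Ubuntu machine looks something like the following sketch (commands reflect recent MicroK8s releases; the exact add-on names and flags may vary by version, so check Canonical's documentation):

```shell
# Install MicroK8s from the Snap store (requires snapd and sudo)
sudo snap install microk8s --classic

# Block until the single-node cluster reports itself ready
sudo microk8s status --wait-ready

# Enable a couple of common add-ons (DNS, dashboard)
sudo microk8s enable dns dashboard

# MicroK8s bundles its own kubectl as a subcommand
sudo microk8s kubectl get nodes
```

From there the node behaves like any other Kubernetes cluster, just confined to one machine.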
Canonical and Rancher Labs (see below) co-produce Kubernetes Cloud Native Platform, which pairs Canonical’s Kubernetes distro with Rancher’s container-management platform. The idea is to use Kubernetes to manage the containers running in each cluster, and use Rancher to manage multiple Kubernetes clusters. Cloud Native Platform is available starting with Rancher 2.0.
Docker Kubernetes Service

For many of us, Docker is containers. And since 2014, Docker has had its own clustering and orchestration system, Docker Swarm, which until recently was a competitor to Kubernetes.
Then, in October 2017, Docker announced it would be adding Kubernetes—in its unmodified, vanilla state—as a standard pack-in with both Docker Community Edition and Docker Enterprise 2.0 and later editions. Docker Enterprise 3.0 added the Docker Kubernetes Service, a Kubernetes integration that keeps versions of Kubernetes consistent between developer desktops and production deployments. In November 2019, however, Mirantis acquired Docker Enterprise, which should now be considered part of Mirantis Kubernetes Engine (see below).
Note that Docker Desktop only ships the latest version of Kubernetes, so while it’s useful for getting started with the current edition on a local machine, it’s less useful for spinning up local clusters that require earlier versions (e.g., a cut-down clone of some production cluster).
VMware Tanzu Kubernetes Grid
VMware’s Tanzu Application Platform is used to create modern, cloud-native applications on Kubernetes across multiple infrastructures. Tanzu Kubernetes Grid (TKG) is the Kubernetes component of that platform.
TKG’s core is a certified Kubernetes distribution, with integration for vSphere 8 and other current VMware products. Any containerized workload can run on TKG, but applications that can work at a higher level of abstraction than Kubernetes’ metaphors can instead use the Tanzu Application Service PaaS (formerly Pivotal Application Service). If you need the granular control over resources that Kubernetes provides, use TKG; for more generic workloads, Tanzu Application Service should do the job.
Mirantis Kubernetes Engine
Formerly known as Docker Enterprise UCP (Universal Control Plane), the Mirantis Kubernetes Engine (MKE) is more closely aligned with its origins in Docker than some of the other Kubernetes distributions discussed here, in large part due to Mirantis’s acquisition of Docker Enterprise in November 2019.
MKE lets you manage containers with either Kubernetes or Docker Swarm. That’s convenient because Swarm is the container-orchestration technology originally developed for Docker, and it’s less inherently complex than Kubernetes. MKE also supports Mirantis Container Cloud, the company’s own container platform-as-a-service, which was originally Docker Enterprise Container Cloud.
MKE doesn’t provide a Linux distribution to install on, although it’s certified to run on various Linux distributions (Ubuntu Server is recommended), and the product gained Windows Server 2022 support as of version 3.6.
For those who want the most minimal Kubernetes experience possible, Mirantis also offers k0s, a Kubernetes distribution delivered as a single binary that can run on systems with as little as a single CPU core, 1GB of RAM, and a few gigabytes of disk space.
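Because k0s ships as one self-contained binary, standing up a single-node cluster is a short sequence of commands. The sketch below assumes a recent k0s release and uses the project's official install script; consult the k0s documentation for current options:

```shell
# Fetch the k0s binary via the official install script (requires curl and sudo)
curl -sSLf https://get.k0s.sh | sudo sh

# Register k0s as a service running controller and worker on the same node
sudo k0s install controller --single
sudo k0s start

# k0s embeds kubectl as a subcommand, so no separate install is needed
sudo k0s kubectl get nodes
```

Tearing the cluster down again is similarly terse (`k0s stop`, then `k0s reset`), which suits the throwaway test environments k0s targets.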
The company also develops Lens, an open source IDE for Kubernetes management, although you can use Lens with any Kubernetes distribution, not just MKE.
Rancher Kubernetes Engine
Rancher Labs incorporated Kubernetes into its container management platform—called Rancher—with version 2.0.
Rancher also comes with its own Kubernetes distribution, Rancher Kubernetes Engine (RKE). RKE is meant to remove the drudgery from the process of setting up a Kubernetes cluster and customizing Kubernetes for a specific environment, without allowing those customizations to get in the way of smooth upgrades to Kubernetes. That’s a key consideration for such a fast-moving, constantly updated project.
RKE also stands out in that it uses containers as part of the build and upgrade process. The only part of the underlying Linux system Rancher interacts with is the container engine. That’s all RKE needs to set up and run, and to roll back to an earlier version if things go awry.
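RKE drives cluster creation from a declarative `cluster.yml` file, which is where most of that per-environment customization lives. A minimal sketch might look like the following (the node address, SSH user, and pinned version are placeholders for illustration, not values from this article):

```yaml
# cluster.yml -- minimal single-node RKE cluster (hypothetical host values)
nodes:
  - address: 203.0.113.10            # placeholder node IP
    user: ubuntu                     # SSH user with access to the Docker socket
    role: [controlplane, etcd, worker]

# Optionally pin the Kubernetes version RKE deploys (example value)
kubernetes_version: v1.24.2-rancher1-1
```

Running `rke up` against this file provisions the cluster over SSH, pulling everything it needs as containers through the node’s container engine.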
Rancher also offers a minimal Kubernetes distribution called K3s. Optimized for low-profile deployments, K3s requires a mere 512MB of RAM per server instance and 200MB of disk space. It squeezes into this footprint by omitting all legacy, alpha-grade, and nonessential features, as well as many less commonly used plugins (although you can add those back in if you need them).
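Installing K3s is similarly compact. The sketch below uses the project's official install script; the `--disable` flag shown is how K3s lets you skip bundled components you don't need (flag names reflect recent K3s releases):

```shell
# Install K3s as a service with the official script (requires curl and root)
curl -sfL https://get.k3s.io | sh -

# To skip a bundled component, pass server flags at install time,
# e.g. omitting the built-in Traefik ingress controller:
#   curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -

# K3s wraps kubectl, so the node can be inspected immediately
sudo k3s kubectl get nodes
```

Add-ons that were left out can be reintroduced later by deploying them as ordinary Kubernetes manifests.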
Red Hat OpenShift
Red Hat OpenShift, Red Hat’s PaaS product, originally used Heroku buildpack-like “cartridges” to package applications, which were then deployed in containers called “gears.” Then, Docker came along, and OpenShift was reworked to use the new container image and runtime standard. Inevitably, Red Hat also adopted Kubernetes as the orchestration technology within OpenShift.
OpenShift was built to provide abstraction and automation for all the components in a PaaS. This abstraction and automation also extend to Kubernetes, which still imposes a fair amount of administrative burden. OpenShift can alleviate that burden as part of the larger mission of deploying a PaaS.
OpenShift 4, the latest version, adds some improvements harvested from Red Hat Enterprise Linux CoreOS, such as that platform’s immutable infrastructure. It also supports Kubernetes Operators for deeper-level custom automation throughout Kubernetes.