As more and more enterprises look at leveraging the capabilities of public clouds, they face an array of important decisions. For example, they must decide which cloud(s) and what technologies they should use, how they operate and manage resources, and how they deploy applications.
Contents

Introduction: Preface
Why Choose An Enterprise Kubernetes Platform?
Benefits for Your Hybrid Cloud Environment
Deploying an Application on Kubernetes
Operating Production Applications on Kubernetes
Enterprise Kubernetes Guide: The Changing Development Landscape
Introduction
PREFACE
Kubernetes, also known as K8s, is an open-source system for automating
deployment, scaling, and management of containerized applications. It groups
containers that make up an application into logical units for easy management
and discovery.
Kubernetes has quickly become the fastest-growing, most widely adopted
containerization platform in the history of open-source software. CIOs, CTOs,
and developers choose Kubernetes for its breadth of functionality, its vast
and growing ecosystem of open-source supporting tools, and its support
and portability across multiple cloud services and cloud providers.
“As more and more organizations move to microservice and cloud-native architectures that make use of containers, they’re looking for strong, proven platforms. Practitioners are moving to Kubernetes…”
– Red Hat
Why Choose An Enterprise
Kubernetes Platform?
Why choose an enterprise Kubernetes platform, as opposed to assembling open-source Kubernetes tools yourself? Let’s examine three key considerations for IT leaders:

Portability
As cloud service providers have evolved beyond just CPU compute and memory into artificial intelligence and machine learning platforms, enterprises may encounter proprietary integrations and API calls that are unique to a particular cloud platform. This creates portability and lock-in concerns.

Time savings / Time to value
Containerization saves IT teams time through its ability to bundle software code, regardless of whether the code is in development or deployed in production. That’s one reason containerization has gained significant traction in software development as an alternative to traditional virtualization.

Stability / Security
Kubernetes’ benefits include a much more stable and secure environment, with a reduction in code errors and bugs. What do we mean by this? Because of the risk of downtime with application workloads, IT departments constantly have to trade the security of the OS against the stability of applications. Containerized applications have all their dependencies bundled together, which enables the OS to be patched and upgraded with increased confidence.

Similarly, libraries and software within the container applications can be updated and upgraded with greater confidence for the same reasons. This allows for rapid deployment of bug fixes, security patches, and even new features within the applications.
Benefits for Your
Hybrid Cloud Environment
An enterprise should first determine if the benefits of Kubernetes justify the work
effort to select, deploy and go live. If you’re making the leap to Kubernetes, an
enterprise Kubernetes platform helps you achieve consistency, agility, and
stability/security in your adoption of containers. The ability to seamlessly move
from one cloud to another allows you to be nimble in a fast-paced, constantly
changing environment.
Think of an enterprise Kubernetes platform as the operating system for your
hybrid cloud environment. Regardless of your cloud provider, an enterprise
Kubernetes platform will allow you to manage, run, and enhance your
applications in the most efficient manner without compromising the stability
and security of your IT ecosystem.
As a result, your internal and external customers are the true beneficiaries: they can now consume your applications and services at high speed and velocity.
Deploying an Application
on Kubernetes
Step 1: Dockerize The Application
The project contains dependencies that enable it to work as expected; in Go, dependencies are simply third-party libraries that can be imported into the project. Bundling only the compiled application and what it needs keeps the image small. Smaller images mean faster uploads and downloads from the container registry, and also faster load times when a pod is moved from one node to another when working with Kubernetes.
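As a sketch of this step, a multi-stage Dockerfile compiles the Go binary with its dependencies in one stage and copies only the result into a minimal runtime image, keeping the final image small (the project layout and image names here are hypothetical):

```dockerfile
# Build stage: compile the Go application with its dependencies.
FROM golang:1.22 AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
# Static binary so the runtime image needs no C library.
RUN CGO_ENABLED=0 go build -o /app .

# Runtime stage: a minimal image containing only the binary.
FROM gcr.io/distroless/static-debian12
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The distroless base image contains no shell or package manager, so the resulting image is only a few megabytes larger than the binary itself.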
Step 2: Creating A Deployment
The first step in moving to Kubernetes is to create a Pod that hosts the application container. But since Pods are ephemeral by nature, we need a higher-level controller that takes care of our Pod. For that reason, we’ll use a Deployment. This also has the bonus of enabling us to deploy more than one replica of our application for high availability.
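A minimal Deployment manifest along these lines might look as follows (the names, image, and port are hypothetical placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # hypothetical application name
spec:
  replicas: 3                 # multiple replicas for high availability
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: registry.example.com/my-app:1.0.0   # hypothetical image
        ports:
        - containerPort: 8080
```

The Deployment’s controller watches the cluster and recreates any Pod that dies, which is exactly the behavior bare Pods lack.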
Operating Production
Applications on Kubernetes
Production Considerations:
A production Kubernetes cluster environment has more requirements
than a personal learning, development, or test environment.
A production environment may require secure access by many users,
consistent availability, and the resources to adapt to changing demands.
Production Cluster Setup:
In a production-quality Kubernetes cluster, the control plane manages
the cluster from services that can be spread across multiple
computers in different ways. Each worker node, however,
represents a single entity that is configured to run Kubernetes pods.
Production Control Plane:
The simplest Kubernetes cluster has the entire control plane and worker node services running on the same machine; you can grow that environment by adding worker nodes. Production-quality workloads need to be resilient, and anything they rely on needs to be resilient as well (such as CoreDNS).

Production Worker Nodes:
Whether you manage your own control plane or have a cloud provider do it for you, you still need to consider how you want to manage your worker nodes.
Production User Management:
Taking on a production-quality cluster means deciding how you want to
selectively allow access by other users. In particular, you need to select strategies
for validating the identities of those who try to access your cluster
(authentication).
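Once identities are authenticated, Kubernetes’ built-in role-based access control (RBAC) covers the complementary authorization side: deciding what each user may do. A minimal sketch, with hypothetical namespace and user names, granting one user read-only access to Pods:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a           # hypothetical namespace
  name: pod-reader
rules:
- apiGroups: [""]             # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: team-a
subjects:
- kind: User
  name: jane                  # must match an authenticated identity
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

The binding only takes effect for identities your chosen authentication strategy actually produces, which is why the two decisions go hand in hand.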
Enterprise Kubernetes Guide:
The Changing Development Landscape
With the plethora of open source tools available, developers are often able to
create a simple local Kubernetes environment for experimentation within a few
days. But running Enterprise Kubernetes to power mission-critical production
applications and managing day-2 operations has proven to be a daunting
challenge.
A complete Enterprise Kubernetes infrastructure needs proper DNS, load
balancing, Ingress, stateful services, Kubernetes role-based access control (RBAC),
integration with LDAP and authentication systems, and more. Once Kubernetes
is deployed, day-2 operational challenges and life-cycle management come into
play.
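To give one example from that list, exposing an application outside the cluster requires an Ingress resource, which in turn depends on a running ingress controller and working DNS. A minimal sketch, with hypothetical host and service names:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
  - host: app.example.com       # hypothetical DNS name
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app        # hypothetical backing Service
            port:
              number: 80
```

The manifest itself is short; the operational work lies in running the controller, DNS, and TLS machinery behind it, which is where an enterprise platform earns its keep.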
Traditionally, applications were deployed on virtual machines, each running a full operating system. Each OS provided a complete execution environment for applications: this included binaries, libraries, and services, as well as compute, storage, and networking resources.
The drawbacks of this approach are the inherent size and volume of VMs. Each
OS is many gigabytes in size, which not only requires storage space, but also
increases the memory footprint.
This size and tight coupling results in several complexities in the VM
lifecycle and the applications running on top. Without a good way of separating
different layers in a VM, swapping out different parts in this layer cake is nearly
impossible. For this reason, once a VM is built, configured and running, it usually
lives on for months or years. This leads to pollution and irreversible entangling of
the VM in terms of OS, data, and configuration.
New versions of the OS, its components, and other software inside the VM
are layered on top of the older version. Because of this, each in-place upgrade
creates potential version conflicts and stability problems. Maintaining this
ever-increasing complexity is a major operational pain point, and often leads to
downtime.
This places an unbalanced operational focus on the OS and underlying layers,
instead of where it should be: the application.
Operational friction, an unnecessarily large and long-lived operating
environment, and a lack of decoupling between layers all stand in sharp contrast with
how lean and agile software development works. It’s no surprise, then, that the
traditional approach doesn’t work for modern software development.
In the new paradigm, developers actively break down work into smaller
chunks, create (single-piece) flow, and take control and ownership over the
pipeline that brings code from local testing all the way to production. Containers,
microservices and cloud-native application design are facilitating this.