Deep Understanding of Kubernetes Environment

Tudip

08 April 2019

Kubernetes is an open-source container orchestration tool that automates the scaling, distribution and fault-tolerant management of containers when a component fails. Kubernetes was originally created by Google and later donated to the Cloud Native Computing Foundation, after which multiple companies came together for its further development and maintenance. Kubernetes is widely chosen in production environments to run Docker containers and other container tooling in a fault-tolerant manner.

This blog will walk you through a deeper understanding of Kubernetes concepts. It assumes that you already have a basic understanding of Kubernetes, Docker and containerized applications. If you are not familiar with them, please refer to the Introduction to Kubernetes on Google Cloud and What is Docker blogs.

Basic Overview of Kubernetes

First of all, let’s get a high-level overview of a few major concepts related to Kubernetes, and then we will dive deeper into each of them.

1. Role-based Access Controls (RBAC)

The RBAC system provides access controls over how users can interact with the API resources running on a Kubernetes cluster. RBAC permissions can apply to the entire cluster or to a specific namespace*. Cluster-level access controls restrict which users can reach Kubernetes resources across the whole cluster, whereas namespace-specific controls grant access to certain resources on the basis of their namespace.

*Note: For an introduction to namespaces, please refer to the Kubernetes basics blog.

To add RBAC for Kubernetes resources, a user must have the Cluster Admin role on the cluster. All recent Kubernetes releases ship with RBAC policies enabled by default, which helps mitigate or avoid the damage that can be done if credentials are misused or bugs exist in an application.

All interactions with the Kubernetes API server are subject to access controls expressed as RBAC rules.

RBAC allows you to specify which types of actions are permitted depending on the user and their role in your organization. This includes:

  • Securing your cluster by granting privileged operations only to admin users.
  • Forcing users to authenticate for their interactions with the cluster.
  • Limiting resource creation (deployments, pods and many more) to specific namespaces. You can also use quotas to ensure that resource usage is limited and under control (a minimal quota sketch follows this list).
  • Ensuring a given user can only see resources in their authorized namespaces. This allows the administrator to restrict access to resources.
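
As a quick illustration of the quota point above, here is a minimal ResourceQuota sketch; the namespace dev, the object name dev-quota and the specific limits are assumptions chosen for the example:

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: dev-quota          # hypothetical name
      namespace: dev           # hypothetical namespace
    spec:
      hard:
        pods: "10"             # at most 10 pods may exist in this namespace
        requests.cpu: "4"      # total CPU requests capped at 4 cores
        requests.memory: 8Gi   # total memory requests capped at 8 GiB

Once applied with kubectl apply -f, Kubernetes rejects any resource creation in the dev namespace that would exceed these limits.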

Commonly used RBAC access levels in Kubernetes

Cluster Roles

A ClusterRole is cluster-wide, so its rules can apply across namespaces (if you choose so). ClusterRoles can also define rules for cluster-scoped resources such as nodes.
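
For example, a ClusterRole granting read-only access to pods in every namespace might look like the following sketch; the name pod-reader is an assumption:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: pod-reader              # hypothetical name
    rules:
    - apiGroups: [""]               # "" refers to the core API group
      resources: ["pods"]
      verbs: ["get", "list", "watch"]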

RBAC Roles

Roles can be reused for different subjects. In a Role, the rules apply to a single namespace: you cannot use wildcards to cover more than one namespace, but you can deploy the same Role object in different namespaces. If you want the role to apply across the cluster, the equivalent object is a ClusterRole.

Role Bindings

A RoleBinding connects a Role to its subjects (users, groups or service accounts), establishing who may use the permissions the role grants. For cluster-level bindings, the non-namespaced equivalent is the ClusterRoleBinding.
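
Putting these together, a namespaced Role and its RoleBinding might look like the following sketch; the namespace dev, the role name config-reader and the user jane are assumptions:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: config-reader           # hypothetical name
      namespace: dev                # rules apply only inside this namespace
    rules:
    - apiGroups: [""]
      resources: ["configmaps"]
      verbs: ["get", "list"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: read-configs            # hypothetical name
      namespace: dev
    subjects:
    - kind: User
      name: jane                    # hypothetical user
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: config-reader
      apiGroup: rbac.authorization.k8s.io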

2. Kubernetes Federation

Managing multiple clusters, or deploying an application’s resources across multiple clusters, has never been easy in a Kubernetes environment. Moreover, if the multi-cluster environment runs on a hybrid cloud, keeping data in sync is a headache. These limitations can be addressed with Kubernetes Federation, and deploying a federated Kubernetes cluster with the Kubefed tool is the recommended approach. Architecturally, Federation looks quite similar to a single Kubernetes cluster. One of the objectives of Federation is to define the APIs and API groups needed to federate any given Kubernetes resource.

What is Cluster Federation?

Cluster federation treats multiple Kubernetes clusters as a single logical cluster. The federation control plane consists of a federation API server and a federation controller manager that work together. The federation API server forwards requests to all the clusters in the federation, while the federation controller manager performs the duties of the controller manager across all clusters by routing the required changes to the individual member clusters.
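
As a rough illustration, with KubeFed (Federation v2) a resource is federated by wrapping its ordinary template together with a placement list naming the member clusters. The sketch below is only indicative: the API group/version, the cluster names cluster1 and cluster2, and the nginx workload are all assumptions and may differ depending on the KubeFed release you run.

    apiVersion: types.kubefed.io/v1beta1     # assumed KubeFed API group/version
    kind: FederatedDeployment
    metadata:
      name: nginx
      namespace: demo                        # hypothetical federated namespace
    spec:
      template:                              # an ordinary Deployment spec
        spec:
          replicas: 3
          selector:
            matchLabels:
              app: nginx
          template:
            metadata:
              labels:
                app: nginx
            spec:
              containers:
              - name: nginx
                image: nginx
      placement:
        clusters:                            # member clusters that receive it
        - name: cluster1
        - name: cluster2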

What is a federated service?

Kube-dns is a standard add-on for a Kubernetes cluster which provides in-cluster DNS, so a Kubernetes Service resource can be resolved by its name. A Service groups a number of running instances of an application (pods, in Kubernetes terms) and places them behind a single addressable load balancer, without exposing the individual pod endpoints to the Internet. Suppose you have a service named auth that fronts a set of running pods responsible for authorization: other applications address it simply by that name, and the DNS sub-system takes care of the rest. A federated service extends this idea so that the same service is available across all the clusters in the federation.
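
For reference, a plain (non-federated) Service for the hypothetical auth application might look like this sketch; the label and port numbers are assumptions:

    apiVersion: v1
    kind: Service
    metadata:
      name: auth                 # resolvable in-cluster simply as "auth"
    spec:
      selector:
        app: auth                # selects the pods backing the service
      ports:
      - port: 80                 # port the service listens on
        targetPort: 8080         # container port (hypothetical)

Inside the cluster, clients can resolve it as auth.<namespace>.svc.cluster.local, or simply as auth from within the same namespace.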

Federation is of no use until you have multiple clusters. Common reasons to run multiple clusters include:

  • Low latency
  • Fault isolation
  • Scalability
  • Hybrid cloud

3. Helm – Kubernetes Package Manager

Helm is a powerful and flexible package management tool for the Kubernetes environment. To understand it, you can compare it with APT (Advanced Package Tool), which interacts with the packaging system on Linux-based systems. With Helm you will come across new Kubernetes-related terms such as tiller, charts and many more; you will get familiar with them in the sections below. Helm eases the process of installing and updating complex applications on top of Kubernetes.

Tiller – Helm Server

Tiller is the server-side component of Helm which runs inside your Kubernetes cluster. For development, you can also run it locally and configure it to talk to a remote Kubernetes cluster. The easiest way to install Tiller into the cluster is simply to initialize Helm. When you initialize Helm, it first checks whether your local environment is configured correctly, then connects to whichever cluster your current session is configured for, and finally installs Tiller into the kube-system namespace.

Charts – Helm Package

Helm uses a packaging format whose units are termed charts. A chart is a collection of files that describe a related set of Kubernetes resources. A single chart might be used to deploy something as simple as a single pod or as complex as an entire backend application.

Charts are Helm packages that contain at least two things needed to configure the system in Kubernetes:

  • A description of the package (Chart.yaml).
  • One or more templates, which contain the Kubernetes manifest files carrying the configuration details.

You can develop your own deployable configuration for a Kubernetes application using charts, which makes it easy to set up even complex infrastructure; a minimal Chart.yaml is sketched below.
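
For a hypothetical chart named myapp, Chart.yaml could look like this (the field values are assumptions; apiVersion v1 corresponds to Helm 2, the Tiller-based release line described above):

    apiVersion: v1              # chart API version used by Helm 2
    name: myapp                 # hypothetical chart name
    version: 0.1.0              # version of the chart itself
    appVersion: "1.0"           # version of the application being packaged
    description: A sketch of a simple application chart

Alongside Chart.yaml, a chart typically also carries a values.yaml file with default configuration values and a templates/ directory holding the manifest templates.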

4. Istio – an add-on for Kubernetes

Istio is something you can treat as an add-on for Kubernetes. Istio comes into the picture when you are running a microservices-based application; in fact, it is generally recommended to use Istio when deploying such an application in a Kubernetes environment.

As we all know, some of the management difficulties that DevOps teams face with monolithic applications can be resolved with the help of microservices-based software development.

But as an application grows and comes to consist of thousands of microservices, it becomes really difficult for a DevOps team to manage them all. To ease that headache, we have Istio to help.

Istio lets you create a network of deployed services with load balancing, authentication, service-to-service monitoring, and more, without any changes in service code.

You can add Istio support to services by deploying a special sidecar proxy (Envoy, an edge and service proxy) throughout your environment. These sidecars trace and monitor all network communication between microservices, and you then configure and manage Istio using its control-plane functionality.
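
For example, when automatic sidecar injection is enabled in your Istio installation, labelling a namespace is enough to have the Envoy sidecar injected into every pod created in it; the namespace name demo is an assumption:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: demo                     # hypothetical namespace
      labels:
        istio-injection: enabled     # tells Istio to inject the Envoy sidecar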

Envoy is a self-contained process that runs alongside every application server. All of the Envoys form a transparent communication mesh in which each application sends and receives messages to and from localhost without needing to be aware of the network topology.

This out-of-process architecture has two substantial benefits over the traditional library approach to service-to-service communication:

  • A single Envoy deployment can form a mesh between services written in any programming language. As it becomes increasingly common for service-oriented architectures to mix multiple application frameworks and languages, this is a major advantage.
  • Envoy can be deployed and upgraded quickly and transparently across an entire infrastructure, regardless of the application stack.

Istio leverages Envoy’s many built-in features, such as dynamic service discovery, load balancing, TLS termination, HTTP/2 and gRPC proxying, circuit breaking, health checks and many more.

Istio provides a number of key capabilities across a network of services, some of which are listed below:

  • Traffic Management (see the routing sketch after this list)
  • Security
  • Platform Support
  • Observability
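
As an example of the traffic-management capability, an Istio VirtualService can split traffic between two versions of a service. The host reviews, the subsets and the 90/10 split below are assumptions, and the exact API version may vary between Istio releases:

    apiVersion: networking.istio.io/v1alpha3   # assumed Istio API version
    kind: VirtualService
    metadata:
      name: reviews                 # hypothetical service
    spec:
      hosts:
      - reviews
      http:
      - route:
        - destination:
            host: reviews
            subset: v1              # subsets are defined in a DestinationRule
          weight: 90                # 90% of traffic stays on v1
        - destination:
            host: reviews
            subset: v2
          weight: 10                # 10% is shifted to v2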

Once you are familiar with the above topics in Kubernetes, you can check out Knative (Kubernetes-native), which helps you build, deploy, and manage modern serverless workloads on Kubernetes.
