This tutorial is part of our series on Kubernetes. Up until now, we have covered Kubernetes basics and installation; now we will see how to deploy an app in a Kubernetes cluster with Pods, ReplicaSets, and Deployments.

Overview

We are going to deploy a simple Nginx application, which is basic and easily available. We will go from basic deployment concepts all the way to best practices for production deployments in Kubernetes.

If you already know the basic concepts, you can skip directly to the Best Practice section.

Let’s start with Pod…

Kubernetes Pod

From Official Kubernetes doc: “Pods are the smallest deployable units of computing that can be created and managed in Kubernetes.”

As the following image shows, a Pod can contain a single container or multiple containers, depending on the use case. Multiple containers in a pod share resources like volumes and the network namespace.

Kubernetes-pod

Let’s deploy our nginx app using a Pod and see what happens.

Create a pod definition file nginx_pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: nginx 
  labels:
    app: nginx 
spec:
  containers:
  - name: nginx 
    image: nginx 

You can think of the pod definition file structure as Pod -> Spec -> Containers.

Deploy it using kubectl:

kubectl create -f nginx_pod.yaml

Output:

pod "nginx" created

Let’s see the status:

kubectl get pod

Output:

NAME      READY     STATUS              RESTARTS   AGE
nginx     0/1       ContainerCreating   0          7s

You can also try kubectl describe pod nginx to see detailed status.

Pod Lifecycle

  1. Make a Pod request to the API server using a local pod definition file.
  2. The API server saves the pod info in etcd.
  3. The scheduler finds the unscheduled pod and schedules it to a node.
  4. The kubelet running on that node sees the scheduled pod and fires up the container runtime (e.g. Docker).
  5. Docker runs the container.

The entire lifecycle state of the pod is stored in etcd.

Once nginx is ready, you can test whether the page is working.

The quick way is to port-forward and check:

kubectl port-forward nginx 1111:80

We have forwarded the pod’s port 80 to localhost port 1111, so now you can access http://localhost:1111
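A quick sanity check from another terminal while the port-forward is running (assuming curl is installed):

```shell
# Request the nginx welcome page headers through the forwarded port;
# the default nginx page should answer with HTTP 200
curl -I http://localhost:1111
```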

But note that this is a temporary solution; we will learn the correct way to expose an app in the next topic on Service concepts.

Pod Concepts

Pods are also useful for one-off tasks. For example, the following runs a temporary interactive pod that is never restarted (--restart=Never) and is deleted as soon as the command exits (--rm):

  kubectl run -it  --restart=Never --rm  --image=cassandra:latest cassandra-cleanup -- cqlsh cassandra-svc-pod -u cassandra

Now we understand that a bare Pod is not suitable for our use case: the nginx app should stay alive even after a node failure. So let’s try ReplicaSets, but before that we should understand Labels and Selectors.


Kubernetes Labels and Selectors

Labels

Labels are key/value pairs attached to Kubernetes objects (pods, services, deployments, etc.) that identify and group them. For example:

  labels:
     app: nginx
     role: web
     env: dev

Selectors

Selectors filter objects by their labels, using equality-based expressions (=, !=) or set-based expressions (in, notin). For example:

  selectors:
    env = dev
    app != db
    release in (1.3,1.4)
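Selector expressions like these can also be used directly from kubectl; a small sketch (the label values here are only illustrative):

```shell
# Equality-based: pods in the dev environment that are not the database
kubectl get pods -l 'env=dev,app!=db'

# Set-based: pods whose release label is 1.3 or 1.4
kubectl get pods -l 'release in (1.3,1.4)'
```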

Labels and Selectors are used in many places, like Services and Deployments, and we will see them now in ReplicaSets.


Kubernetes ReplicaSets

A ReplicaSet ensures that a specified number of pod replicas are running at any given time. Note that we can skip the Replication Controller topic, which serves the same purpose, because the ReplicaSet is the next-generation Replication Controller. ReplicaSets support the more advanced set-based selectors and are thus more flexible than Replication Controllers.

Replicasets

As we saw, if you deploy an application directly in a Pod and the node goes down, the pod won’t come back up. In such a scenario, ReplicaSets come into the picture: a ReplicaSet ensures that a specific number of pods (replicas) are running at any given time. If you want your pod to stay alive, make sure you have a corresponding ReplicaSet specifying at least one replica for that pod. The ReplicaSet then takes care of (re)scheduling your instances for you.

Let’s deploy our nginx app using a ReplicaSet and see what happens.

Create a ReplicaSet definition file nginx_replicasets.yaml:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-app-v1
  labels:
    app: my-app
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app-v1
        image: nginx
        ports:
        - containerPort: 80
        

You can think of it as ReplicaSet -> Pod -> Spec -> Containers.

Deploy it using kubectl:

kubectl create -f nginx_replicasets.yaml

Output:

replicaset "my-app-v1" created

Let’s see the status:

kubectl get replicaset

Output:

NAME                     DESIRED   CURRENT   READY     AGE
my-app-v1                1         1         0         10s

Now let’s try deleting a pod. First, list the pods managed by the ReplicaSet:

kubectl get pod -l app=my-app

Output:

NAME              READY     STATUS    RESTARTS   AGE
my-app-v1-dgc5c   1/1       Running   0          33m

Delete it

kubectl delete pod my-app-v1-dgc5c 

Now check the status:

kubectl get pod -l app=my-app

Output:

NAME              READY     STATUS              RESTARTS   AGE
my-app-v1-55jp3   0/1       ContainerCreating   0          4s
my-app-v1-dgc5c   0/1       Terminating         0          34m

You can notice that after deleting the pod, it is created again: the ReplicaSet ensures the desired number of replicas is running at any given time. If you want to permanently delete the pods, you have to delete the ReplicaSet itself.
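The same guarantee makes scaling straightforward; a sketch, assuming the ReplicaSet above is still deployed:

```shell
# Ask the ReplicaSet to maintain 3 replicas instead of 1
kubectl scale replicaset my-app-v1 --replicas=3

# The ReplicaSet creates two more pods to reach the desired count
kubectl get pod -l app=my-app
```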

But we can’t use ReplicaSets directly for application deployments, because they do not support rolling-update functionality.

Also, if you use a Replication Controller, it does support rolling update, but only for a few options, like the image.

Let’s see the best practice…


Deployment

Deployment is the king… :)

Let’s create a deployment file for the nginx app and deploy it.

Create a definition file nginx_deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-v1
  labels:
    app: my-app
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app-v1
        image: nginx
        ports:
        - containerPort: 80

You can think of it as Deployment -> ReplicaSets -> Pod -> Spec -> Containers, because a Deployment creates and manages ReplicaSets.

Deploy it using kubectl:

kubectl create -f nginx_deployment.yaml

Output:

deployment "my-app-v1" created

Let’s see the status:

kubectl get deployment

Output:

NAME        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
my-app-v1   1         1         1            0           6s

You can also observe that the deployment has created a ReplicaSet:

kubectl get rs
NAME                   DESIRED   CURRENT   READY     AGE
my-app-v1-55cc959447   1         1         1         5m
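To see the rolling-update behaviour that plain ReplicaSets lack, you can change the image and watch the rollout; a sketch assuming the Deployment above (the nginx:1.25 tag is only an example):

```shell
# Update the container image; the Deployment creates a new ReplicaSet
# and gradually shifts pods over to it
kubectl set image deployment/my-app-v1 my-app-v1=nginx:1.25

# Watch the rollout progress, and roll back if something goes wrong
kubectl rollout status deployment/my-app-v1
kubectl rollout undo deployment/my-app-v1
```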

In the coming tutorials, we will see how to use a load balancer to expose services and how to monitor a k8s cluster.

Feel free to comment below if you have doubts…