Kubernetes local volume

Kubernetes pods are ephemeral. If you want your applications to have state, you have to understand how Kubernetes volumes work.

After diving into Persistent Volumes you will quickly understand that setting up a Kubernetes cluster on bare metal with stateful sets is not a trivial task.

You might ask: why can’t we just use hostPath? There are many problems with hostPath; to name a few:

  • Unmanaged volume lifecycle
  • Possible path collisions
  • Too many privileges
  • Not portable
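To make those problems concrete, here is a minimal pod using hostPath (the pod name, image, and path are hypothetical, for illustration only):

```yaml
# Hypothetical pod with a hostPath volume. Kubernetes does nothing
# to manage /data/app on the node: it is never cleaned up, its
# capacity is not tracked, any other pod on the same node using the
# same path will collide with it, and the pod only works on nodes
# where the path actually exists.
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    hostPath:
      path: /data/app
```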

In this article we are going to use Local Volumes, which are part of the Kubernetes external-storage project.

Local persistent volumes allow users to access local storage through the standard PVC interface in a simple and portable way.
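In practice that means a workload requests local storage with an ordinary PersistentVolumeClaim. A minimal sketch (the claim name and requested size are hypothetical):

```yaml
# Hypothetical claim against the local-storage class. The workload
# references only this PVC; it does not care which node or disk
# actually backs the volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-local-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 100Gi
```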

In our lectures, as you may recall, we are using bare-metal servers with local hard drives for persistent storage (kubespray installs this feature automatically).

So, we already have our Kubernetes 1.10 cluster ready with the PersistentLocalVolumes, VolumeScheduling, and MountPropagation feature gates enabled.

Let’s take a look at storage classes that are already installed.

$ kubectl --kubeconfig=admin.conf get storageclasses

As you can see, we have one Storage Class ready:

NAME            PROVISIONER                    AGE
local-storage   kubernetes.io/no-provisioner   5h
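The kubernetes.io/no-provisioner value is a placeholder indicating that volumes are not created dynamically on demand. A storage class like this is typically defined along the following lines (a sketch of what kubespray likely created, not the exact manifest):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
# Placeholder provisioner: PVs are pre-created (here, by the local
# volume provisioner on each node) rather than provisioned on demand.
provisioner: kubernetes.io/no-provisioner
# Delay binding until a pod using the PVC is scheduled, so the PV is
# picked on the same node as the pod (this relies on the
# VolumeScheduling feature gate enabled on our cluster).
volumeBindingMode: WaitForFirstConsumer
```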

Also, let’s take a look at Persistent Volumes ready to be used:

$ kubectl --kubeconfig=admin.conf get persistentvolumes

NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM     STORAGECLASS    REASON    AGE
local-pv-5d11f9f    1817Gi     RWO            Delete           Available             local-storage             5h
local-pv-d44e40d8   1817Gi     RWO            Delete           Available             local-storage             5h

You might wonder, where do these Persistent Volumes come from? And why exactly two?

Let’s take a look at how many nodes we have:

$ kubectl --kubeconfig=admin.conf get nodes

NAME      STATUS    ROLES         AGE       VERSION
node1     Ready     master,node   5h        v1.9.2+coreos.0
node2     Ready     master,node   5h        v1.9.2+coreos.0

So, we have two nodes, each of which has both the master and node roles.

Both servers have a volume bind-mounted at /mnt/disks/vol1; the local volume provisioner discovers these mounts and automatically creates a Persistent Volume for each of them, which is why we see exactly two.

$ kubectl --kubeconfig=admin.conf describe pv local-pv-d44e40d8

Name:            local-pv-d44e40d8
Labels:          <none>
Annotations:     pv.kubernetes.io/provisioned-by=local-volume-provisioner-node2-90344f05-2440-11e8-808e-d43d7ee30396
                 volume.alpha.kubernetes.io/node-affinity={"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"kubernetes.io/hostname","operator":"In","values":["node2"]...
StorageClass:    local-storage
Status:          Available
Claim:
Reclaim Policy:  Delete
Access Modes:    RWO
Capacity:        1817Gi
Message:
Source:
    Type:  LocalVolume (a persistent volume backed by local storage on a node)
    Path:  /mnt/disks/vol1
Events:    <none>
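For reference, the volume above is roughly equivalent to the following static manifest. On this cluster version the node affinity lives in the alpha annotation shown in the describe output; in later Kubernetes releases it became the first-class nodeAffinity field used in this sketch:

```yaml
# Rough manifest equivalent of local-pv-d44e40d8, reconstructed from
# the describe output above (field layout follows newer Kubernetes
# releases, where PV node affinity is a first-class spec field).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-d44e40d8
spec:
  capacity:
    storage: 1817Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/vol1
  # Pins the volume to node2: any pod bound to this PV can only be
  # scheduled onto that node.
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node2
```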
