Kubernetes 1.10 installation using kubespray

Note: this article is part of a series about setting up a self-hosted Kubernetes cluster. You can also learn how to:
1. Enable logging using ElasticSearch, Kibana, Fluentd
2. Enable monitoring using Prometheus, Grafana Kubernetes App
3. Protect your public Kubernetes cluster with WireGuard VPN

Intro

Today we are going to install a Kubernetes cluster on bare metal servers. We are going to set up the cluster on two nodes. We explained earlier why a two-node cluster is a bad idea, but it gives us powerful hardware at a reasonable cost.

Because I’m very fond of Ansible, kubespray was a match made in heaven. You should know that there are a few alternatives.

Getting Started

First of all, Kubespray has a good getting started guide. We are going to follow these basic steps to install a simple two-node cluster.

Let’s clone the kubespray repository and check out the latest version, which is v2.5.0 at the time of writing of this article.

$ git clone https://github.com/kubernetes-incubator/kubespray.git
$ cd kubespray
$ git checkout v2.5.0
HEAD is now at 8b3ce6e4... bump upgrade tests to v2.5.0 commit (#3087)
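
Kubespray drives the whole installation through Ansible, so its Python dependencies should be installed before running any playbooks. A minimal sketch, assuming Python 3 and pip are available on the machine you run kubespray from:

$ sudo pip3 install -r requirements.txt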

Now let’s use the inventory generator to generate our inventory.

$ cp -r inventory/sample inventory/mycluster
$ declare -a IPS=(88.99.84.233 138.201.175.83 88.198.152.200)
$ CONFIG_FILE=inventory/mycluster/hosts.ini python3 contrib/inventory_builder/inventory.py ${IPS[@]}

Now let’s open inventory/mycluster/hosts.ini and make it look like this:

[all]
node1    ansible_host=88.99.84.233 ip=10.0.1.1 access_ip=10.0.1.1
node2    ansible_host=138.201.175.83 ip=10.0.1.2 access_ip=10.0.1.2
node3    ansible_host=88.198.152.200 ip=10.0.1.3 access_ip=10.0.1.3

[kube-master]
node1
node2

[kube-node]
node1
node2
node3

[etcd]
node1
node2
node3

[k8s-cluster:children]
kube-node
kube-master

[calico-rr]

[vault]
node1
node2
node3
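
Before touching any group variables, it’s worth a quick sanity check that Ansible can actually reach every node in the inventory, using the same SSH key and remote user we will pass to the playbook later:

$ ansible -i inventory/mycluster/hosts.ini all -m ping -b --private-key="key_file" -e ansible_user=rick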

Now, let’s open the file inventory/mycluster/group_vars/k8s-cluster.yml, and add the following to the end of the file:

helm_version: "v2.9.1"
helm_enabled: true
docker_dns_servers_strict: no
system_namespace: kube-system
kubeconfig_localhost: true
enable_network_policy: true
ipip: false
local_volume_provisioner_enabled: true
local_volume_provisioner_namespace: ""
local_volume_provisioner_base_dir: /mnt/disks
local_volume_provisioner_mount_dir: /mnt/disks
local_volume_provisioner_storage_class: local-storage

Now let’s install our cluster:

$ ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml -b -v --private-key="key_file" -e ansible_user=rick -e kube_log_level=4

Let’s take a look at a few of the settings we used:

  • helm_version and helm_enabled - lets us use the latest and greatest Helm charts
  • docker_dns_servers_strict=no - relaxes the check on the number of nameservers while running kubespray
  • kubeconfig_localhost=true - kubectl and admin.conf will appear in the artifacts/ directory after deployment
  • local_volume_provisioner_enabled=true - this will automatically enable the PersistentLocalVolumes, VolumeScheduling and MountPropagation feature gates. We will discuss local volumes in the next article; a short sketch of how the provisioner discovers volumes follows this list
  • enable_network_policy=true - enable Network Policies using Calico
  • ipip=false - disable IP to IP tunnelling. Needed by our WireGuard setup.
  • kube_log_level=4 - make logging more verbose
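
As a brief illustration of the local volume provisioner item above: the provisioner scans local_volume_provisioner_base_dir for mount points, so a plain directory typically has to be bind-mounted onto itself before it is picked up as a PersistentVolume. A rough sketch, to be run on each node; the vol1 name is just an example:

$ sudo mkdir -p /mnt/disks/vol1
$ sudo mount --bind /mnt/disks/vol1 /mnt/disks/vol1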

This will take some time to provision Kubernetes on our cluster.

Although kubespray has an EFK addon, we are going to use the more complex ElasticSearch / Kibana / Fluentd setup described in the next article.

After the installation finishes, let’s open the file artifacts/admin.conf and replace https://192.168.0.1:6443 with the server’s public IP address.
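
If you prefer to do this from the command line, a sed one-liner does the job; here we use node1’s public address from our inventory as an example:

$ sed -i 's|https://192.168.0.1:6443|https://88.99.84.233:6443|' artifacts/admin.conf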

Now, let’s run the following command to test cluster access and health:

$ kubectl --kubeconfig admin.conf get pods --all-namespaces

This will produce output similar to this, with a lot of system pods running in the kube-system namespace:

NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
kube-system   calico-node-gdmgg                       1/1       Running   0          4m
kube-system   calico-node-p65mg                       1/1       Running   0          4m
kube-system   kube-apiserver-node1                    1/1       Running   0          4m
kube-system   kube-apiserver-node2                    1/1       Running   0          4m
kube-system   kube-controller-manager-node1           1/1       Running   1          5m
kube-system   kube-controller-manager-node2           1/1       Running   1          5m
kube-system   kube-dns-79d99cdcd5-jkq8w               3/3       Running   0          4m
kube-system   kube-dns-79d99cdcd5-znftx               3/3       Running   0          3m
kube-system   kube-proxy-node1                        1/1       Running   0          4m
kube-system   kube-proxy-node2                        1/1       Running   0          5m
kube-system   kube-scheduler-node1                    1/1       Running   0          5m
kube-system   kube-scheduler-node2                    1/1       Running   0          5m
kube-system   kubedns-autoscaler-5564b5585f-f8vqq     1/1       Running   0          3m
kube-system   kubernetes-dashboard-69cb58d748-2858g   1/1       Running   0          3m
kube-system   local-volume-provisioner-qn7ld          1/1       Running   0          4m
kube-system   local-volume-provisioner-wkm8j          1/1       Running   0          4m
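
It’s also worth confirming that every node has registered and reports a Ready status:

$ kubectl --kubeconfig admin.conf get nodes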

Dashboard access

Kubernetes Dashboard Login

To access the dashboard, let’s start kubectl proxy:

$ kubectl --kubeconfig=admin.conf proxy
# Starting to serve on 127.0.0.1:8001

Now you can open this URL: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login

Let’s try to access our Kubernetes Dashboard. Please refer to the Access control and Creating sample user pages for more info.

You can use any of the pre-installed secret tokens to access the Kubernetes Dashboard (an example of listing them is shown below). But if you want full access to the dashboard, which may be insecure, you can follow the next steps.
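
A quick way to see which service account tokens already exist (the names will differ in your cluster):

$ kubectl --kubeconfig=admin.conf -n kube-system get secrets | grep service-account-token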

Option 1. Admin user

Let’s create a cluster-admin-user.yaml file with the following contents:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

Let’s run the following command:

$ kubectl --kubeconfig=admin.conf apply -f cluster-admin-user.yaml
# serviceaccount "admin-user" created
# clusterrolebinding "admin-user" created

Next, let’s find the Bearer token for the service account we created:

$ kubectl --kubeconfig=admin.conf -n kube-system describe secret $(kubectl --kubeconfig=admin.conf -n kube-system get secret | grep admin-user | awk '{print $1}')

Output will look something like this:

Name:         admin-user-token-dpxqz
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=admin-user
              kubernetes.io/service-account.uid=4b310585-2449-11e8-8c79-448a5bd88998

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1090 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWRweHF6Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI0YjMxMDU4NS0yNDQ5LTExZTgtOGM3OS00NDhhNWJkODg5OTgiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.gx5IY-N_1QxgGgGfWnVZfuy95D75w5mKACCeWxbayDvbOUDIvkm470imGw65SDTD8HI6B2Ev3uOgHnqbouimaFilPY3-rtIhZMfiTH95YKjs7891bwbzj5yjDPXpx_p83YsUyHlNP5j3qzEg0a6p64meD9ZavivJgBbEAQiLKXMByUe_YEZhItREA5yP1CN0M2HotxInJQKHfgJ-SZcicY707mVihPKDKdQXk_miwWM9bvRW_FcgM-B8Gzo8pauaYrS4A0haopaOfBw7b9hkzoB0wld76TsXOKr65K1qQOepNk49mHVj7HjlQlA88b8t3rYMc3sHiFu8rUEUd7yMDg
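
If you only need the token itself, a jsonpath one-liner works as well; the secret name below comes from the output above and will be different in your cluster:

$ kubectl --kubeconfig=admin.conf -n kube-system get secret admin-user-token-dpxqz -o jsonpath='{.data.token}' | base64 --decode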

Option 2. Grant admin privileges to Dashboard’s Service Account

Please beware: Granting admin privileges to Dashboard’s Service Account might be a security risk.

Let’s make the Dashboard use the privileges of its own Service Account. Let’s create a file cluster-dashboard-role.yaml with the following contents:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

Now apply it:

$ kubectl --kubeconfig admin.conf apply -f cluster-dashboard-role.yaml
# clusterrolebinding "kubernetes-dashboard" created
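
To double-check that the binding is in place, you can inspect it:

$ kubectl --kubeconfig admin.conf get clusterrolebinding kubernetes-dashboard -o yaml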
