Kubernetes logging using ElasticSearch, Kibana, Fluentd (EFK)

Before we can run applications on our cluster, we need a centralized logging solution. One of the best options in this case is the combination of ElasticSearch, Kibana, and Fluentd (EFK). We are going to automate the installation using Ansible.

The code for this setup is located in the ansible-kubernetes-elasticsearch-logging repository.

The installation is very straightforward. First, let’s clone the repository:

$ git clone https://github.com/ScalableSystemDesign/ansible-kubernetes-elasticsearch-logging.git
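
Then change into the repository directory; the ansible-playbook commands below assume they are run from there:

$ cd ansible-kubernetes-elasticsearch-logging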

Next, let’s customize the setup. Open install.yaml; at the top of the file you will find these variables:

- name: Install ElasticSearch
  hosts: 127.0.0.1
  connection: local
  vars:
  - data_replicas: 2
  - busybox_init_image: busybox:1.28.4
  - volume_claim: 50Gi
  - namespace: kube-system
  - kubectl: /usr/local/bin/kubectl
  - helm: /usr/local/bin/helm
  tasks:
  - name: helm init
    environment:
      KUBECONFIG: "{{ kubeconfig }}"
    local_action: command {{ helm }} init --home="{{ helm_dir }}"
    ignore_errors: yes

  - name: helm add incubator
    environment:
      KUBECONFIG: "{{ kubeconfig }}"
    local_action: command {{ helm }} repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator --home="{{ helm_dir }}"
    ignore_errors: yes

  - name: Update helm
    environment:
      KUBECONFIG: "{{ kubeconfig }}"
    local_action: command {{ helm }} repo update --home="{{ helm_dir }}"
    ignore_errors: yes

  - name: Install ElasticSearch
    environment:
      KUBECONFIG: "{{ kubeconfig }}"
    local_action: command {{ helm }} install {{ es_chart }} --name logging --set master.persistence.storageClass=local-storage,data.persistence.storageClass=local-storage,data.antiAffinity=hard,data.replicas={{ data_replicas }},data.persistence.size={{ volume_claim }},client.readinessProbe.initialDelaySeconds=60,data.readinessProbe.initialDelaySeconds=60,master.readinessProbe.initialDelaySeconds=60 --namespace={{ namespace }} --home="{{ helm_dir }}"
    ignore_errors: yes

  - name: Install fluentd
    environment:
      KUBECONFIG: "{{ kubeconfig }}"
    local_action: command {{ helm }} install incubator/fluentd-elasticsearch --name fluentd --set elasticsearch.host=logging-elasticsearch-client --namespace={{ namespace }} --home="{{ helm_dir }}"
    ignore_errors: yes

  - name: Install Kibana
    environment:
      KUBECONFIG: "{{ kubeconfig }}"
    local_action: command {{ helm }} install stable/kibana --name kibana --set env.ELASTICSEARCH_URL=http://logging-elasticsearch-client:9200,env.SERVER_BASEPATH=/api/v1/namespaces/{{ namespace }}/services/kibana/proxy --namespace={{ namespace }} --home="{{ helm_dir }}"
    ignore_errors: yes
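
Note that the ElasticSearch install task above expects a storage class named local-storage for the master and data volumes. The sketch below shows what such a StorageClass typically looks like when it is backed by a local volume provisioner (like the local-volume-provisioner pods in the output further down); the exact definition on your cluster may differ:

# Example only: a StorageClass for statically provisioned local volumes.
# Your cluster may already define local-storage differently.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
# no-provisioner means PersistentVolumes are created out of band
# (for example by a local volume provisioner), not dynamically.
provisioner: kubernetes.io/no-provisioner
# Delay binding until a pod is scheduled, so the volume lands on the right node.
volumeBindingMode: WaitForFirstConsumer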

You can customize the setup to best fit your needs. Next, let’s run the installation:

Please note that node1 and node2 are the node names in this example; you can get your node names by running kubectl --kubeconfig=admin.conf get nodes.

$ ansible-playbook install.yaml -e 'kubeconfig=path/to/admin.conf'
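
If you would rather not edit install.yaml, you can also override the playbook variables on the command line, because Ansible extra vars take precedence over play vars. The values below are purely illustrative:

$ ansible-playbook install.yaml -e 'kubeconfig=path/to/admin.conf data_replicas=3 volume_claim=100Gi'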

After the process has finished, you can check whether the EFK pods are running in the kube-system namespace:

$ kubectl --kubeconfig=admin.conf get pods --all-namespaces

NAMESPACE     NAME                                            READY     STATUS      RESTARTS   AGE
kube-system   calico-kube-controllers-6c7d6bbfc7-kfnft        1/1       Running     0          9h
kube-system   calico-node-d56wc                               1/1       Running     0          9h
kube-system   calico-node-fpxnb                               1/1       Running     0          9h
kube-system   fluentd-fluentd-elasticsearch-c2764             1/1       Running     0          9h
kube-system   fluentd-fluentd-elasticsearch-cjmhk             1/1       Running     0          9h
kube-system   kibana-5b89df8d45-htlf8                         1/1       Running     0          9h
kube-system   kube-apiserver-node1                            1/1       Running     0          9h
kube-system   kube-apiserver-node2                            1/1       Running     0          9h
kube-system   kube-controller-manager-node1                   1/1       Running     0          9h
kube-system   kube-controller-manager-node2                   1/1       Running     0          9h
kube-system   kube-dns-7bd4d5fbb6-7n622                       3/3       Running     0          9h
kube-system   kube-dns-7bd4d5fbb6-82qhq                       3/3       Running     0          9h
kube-system   kube-proxy-node1                                1/1       Running     0          9h
kube-system   kube-proxy-node2                                1/1       Running     0          9h
kube-system   kube-scheduler-node1                            1/1       Running     0          9h
kube-system   kube-scheduler-node2                            1/1       Running     0          9h
kube-system   kubedns-autoscaler-679b8b455-f8bjw              1/1       Running     0          9h
kube-system   kubernetes-dashboard-55fdfd74b4-bz2br           1/1       Running     0          9h
kube-system   local-volume-provisioner-2kjqv                  1/1       Running     0          9h
kube-system   local-volume-provisioner-8g4h7                  1/1       Running     0          9h
kube-system   logging-elasticsearch-client-6585d8964b-f4bvs   1/1       Running     2          9h
kube-system   logging-elasticsearch-client-6585d8964b-tnm2l   1/1       Running     2          9h
kube-system   logging-elasticsearch-data-0                    1/1       Running     0          9h
kube-system   logging-elasticsearch-data-1                    1/1       Running     0          9h
kube-system   logging-elasticsearch-master-0                  1/1       Running     0          9h
kube-system   logging-elasticsearch-master-1                  1/1       Running     0          9h
kube-system   logging-elasticsearch-master-2                  1/1       Running     0          9h
kube-system   tiller-deploy-5c688d5f9b-prlbj                  1/1       Running     0          9h

As you can see, the elasticsearch-client, elasticsearch-master, elasticsearch-data, fluentd, and kibana pods are all running.
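
If you want to confirm that Fluentd is actually shipping logs before opening Kibana, one option is to port-forward to one of the elasticsearch-client pods (substitute a pod name from your own output) and list the indices; new daily indices should appear as Fluentd writes log data:

$ kubectl --kubeconfig=admin.conf --namespace=kube-system port-forward logging-elasticsearch-client-6585d8964b-f4bvs 9200:9200
$ curl 'http://localhost:9200/_cat/indices?v'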

Now, just open the following URL: http://localhost:8001/api/v1/namespaces/kube-system/services/kibana/proxy/app/kibana
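
This URL goes through the Kubernetes API server proxy, so kubectl proxy needs to be running locally; by default it listens on 127.0.0.1:8001, which is what the URL above assumes:

$ kubectl --kubeconfig=admin.conf proxy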

You need to finish configuring Kibana in its UI; typically this means creating an index pattern that matches the indices Fluentd writes to ElasticSearch.

To re-create the setup, just clean the EFK components off the cluster with a single command:

$ ansible-playbook delete.yaml -e 'kubeconfig=path/to/admin.conf'
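
To confirm the cleanup completed before re-installing, you can check that no EFK pods are left in the namespace (the grep pattern here is just an example):

$ kubectl --kubeconfig=admin.conf get pods --namespace=kube-system | grep -E 'logging-elasticsearch|fluentd|kibana'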
