Before we can start discussing some of the Reactive Streams concepts, we have to get familiar with monitoring and metrics systems.
Measuring how well your system performs can be a lot more complex than you might expect, especially when asynchronous, chained code is involved.
We are going to visualize a few different ideas:
- how well our streams are performing
- how back pressuring works
We are going to use several tools that can help us here, specifically Prometheus, Grafana, and Kamon.
Please make sure you have Docker installed and running locally. If not, please refer to this tutorial.
Before we can run the Prometheus container, we have to configure it to scrape data from Kamon.
Let’s create a file called prometheus.yml with the following contents:

scrape_configs:
  - job_name: 'Streams'
    scrape_interval: 5s
    static_configs:
      - targets: ['docker.for.mac.host.internal:9095']
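Since indentation is significant in YAML, one convenient way to create the file exactly as shown is a shell heredoc (any text editor works just as well):

```shell
# Write prometheus.yml into the current directory.
cat > prometheus.yml <<'EOF'
scrape_configs:
  - job_name: 'Streams'
    scrape_interval: 5s
    static_configs:
      - targets: ['docker.for.mac.host.internal:9095']
EOF
```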
You might ask what docker.for.mac.host.internal:9095 is. This is the address of the Docker host as seen from inside a container (on Docker for Mac). It allows the Prometheus container to reach the host machine and scrape the time-series data that Kamon exposes there.
Now, let’s run our Prometheus container in the background:
$ docker run -d -p 9090:9090 -v path/to/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus
After that, you will be able to visit the Prometheus dashboard at http://localhost:9090/
Grafana works great with Prometheus. It can query Prometheus and display beautiful, meaningful charts.
$ docker run -d -p 3000:3000 grafana/grafana
Let’s go to the Grafana dashboard at http://localhost:3000/. The default login is
admin and the default password is admin.
Now, let’s add Prometheus as the data source as shown in the video below:
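If you prefer to automate this step, Grafana also exposes an HTTP API for managing data sources. Here is a hedged sketch, assuming the default admin/admin credentials and the two containers started above; the data source name is arbitrary:

```shell
# Register Prometheus as a Grafana data source via Grafana's HTTP API.
# docker.for.mac.host.internal lets the Grafana container reach the
# Prometheus port published on the host (Docker for Mac).
curl -s -X POST http://localhost:3000/api/datasources \
  -u admin:admin \
  -H 'Content-Type: application/json' \
  -d '{
        "name": "Prometheus",
        "type": "prometheus",
        "url": "http://docker.for.mac.host.internal:9090",
        "access": "proxy"
      }'
```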
Kamon is a JVM toolkit that helps us collect metrics and export them via different reporters, for example Prometheus, Zipkin, etc.
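To make this concrete, here is a minimal Kamon 2.x sketch, assuming the kamon-core and kamon-prometheus dependencies are on the classpath (the metric name stream.elements.processed is made up for this example). With kamon-prometheus present, initializing Kamon exposes a scrape endpoint on port 9095 by default, which is exactly the target we put in prometheus.yml:

```scala
import kamon.Kamon

object MetricsExample extends App {
  // Starts Kamon and any reporters found on the classpath; with
  // kamon-prometheus present this serves metrics on port 9095.
  Kamon.init()

  // A simple counter that Prometheus will pick up on its next scrape.
  val processed = Kamon.counter("stream.elements.processed").withoutTags()

  (1 to 100).foreach(_ => processed.increment())
}
```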