1 Introduction
I had been googling for days and could not find any good information on how to monitor a Spring application with ELK, so I spent some time on it and finally made it work for me. I am sharing my experience here and hope others will find it useful.
This is a series of 3 posts, please continue to read part 2 and part 3.
1.1 The Testing Environment
In my testing environment, I used two Linux CentOS VMs created with Vagrant on my laptop, each of them using 1G of RAM. On one VM I installed the ELK stack (version 5.1.1); the other VM served as the application server and pushed log messages to the ELK stack.
1.2 ELK Stack
The ELK stack consists of Elasticsearch, Logstash, and Kibana, and it gives us a very easy way to collect, search, and analyze large data sets.
The following diagram shows the components of an ELK stack used for log monitoring:
The above stack is made up of 5 components -
- Log files - The application generates log data and saves it into log files.
- Filebeat - A log data shipper initially based on the Logstash-Forwarder source code. It is installed as an agent on the application server, where it monitors the log files and forwards the log data to the Logstash module in the ELK stack (a minimal configuration sketch follows this list).
- Logstash - An agent that runs on the ELK server and receives data from the Filebeat clients. Once a message is received, it is parsed and saved to Elasticsearch for indexing.
- Elasticsearch - A distributed indexed datastore which can run as a cluster. Its job is to store the incoming log data across the nodes in the cluster and to service queries from Kibana.
- Kibana - A browser-based interface served up from a web server. Its job is to let you build tabular and graphical visualisations of the log data based on Elasticsearch queries. Typically these are based on simple text queries, time ranges, or far more complex aggregations.
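To make the data flow concrete, here is a minimal sketch of what a Filebeat configuration for this setup could look like in the 5.x format. The log path and the ELK host name are placeholders for illustration only; the configuration actually used with the Spring Boot application is covered later in this series.
# /etc/filebeat/filebeat.yml (sketch only - path and host are placeholders)
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/myapp/*.log   # log files written by the application
# ship the log events to Logstash on the ELK host
output.logstash:
  hosts: ["elk-host:5044"]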
1.3 The Spring Boot Application
Recently, Spring Boot has become very popular in Java development communities.
Spring Boot is a framework designed for rapid application development. It comes with auto-configuration and an embedded Tomcat server. Thanks to this simplicity, developers can focus on coding and can run and test a Spring Boot application right from the IDE.
2 ELK Stack Installation
In this example, the ELK stack is installed on a CentOS 6 VM.
2.1 Install java and ruby
Install Java SDK 1.8 and Ruby:
# yum install java-1.8.0-openjdk java-1.8.0-openjdk-devel ruby ruby-gems
2.2 Install Elasticsearch
Download the Elasticsearch package
# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.1.1.rpm
Install it with:
# yum install elasticsearch-5.1.1.rpm
2.3 Configure Elasticsearch
- If you need to change how Elasticsearch stores its data, you can modify the /etc/elasticsearch/elasticsearch.yml file.
Details of the directory structure can be found at https://www.elastic.co/guide/en/elasticsearch/reference/2.0/setup-dir-layout.html
For example, you can specify where Elasticsearch saves its data:
path.data: /elasticsearch-data
- By default, Elasticsearch only listens on localhost. If you need to split the ELK stack across multiple hosts, you need to modify /etc/elasticsearch/elasticsearch.yml and change the following setting (see the example after this list):
network.host: localhost
Details of the network options can be found at https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html
- By default, Elasticsearch uses 2G of RAM when it starts. You should change the heap size by editing the /etc/elasticsearch/jvm.options file based on the RAM size of your host.
For example, to set the heap size:
-Xms249m
-Xmx249m
- Make Elasticsearch start automatically at boot:
chkconfig elasticsearch on
Start Elasticsearch:
service elasticsearch start
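As mentioned in the network note above, if you do split the ELK stack across multiple hosts, the changed binding could look like the following. This is only an illustration; 0.0.0.0 binds to all interfaces and should only be used on a trusted network.
network.host: 0.0.0.0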
Verify Elasticsearch is running:
# curl http://localhost:9200
{
  "name" : "Xemu",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "pGQUv3pJRSaa0HCFm4E5MA",
  "version" : {
    "number" : "2.4.3",
    "build_hash" : "d38a34e7b75af4e17ead16f156feffa432b22be3",
    "build_timestamp" : "2016-12-07T16:28:56Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.2"
  },
  "tagline" : "You Know, for Search"
}
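You can also check the cluster health with the standard Elasticsearch health endpoint. On a single-node setup like this one the status may show yellow once indices are created, because replica shards cannot be allocated.
# curl 'http://localhost:9200/_cluster/health?pretty'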
2.4 Install Kibana
Get Kibana rpm:
# wget https://artifacts.elastic.co/downloads/kibana/kibana-5.1.1-x86_64.rpm
Install Kibana:
yum install -y kibana-5.1.1-x86_64.rpm
2.5 Configure Kibana
Edit the Kibana configuration file:
vi /etc/kibana/kibana.yml
To specify the port and address that Kibana listens on:
# Kibana is served by a back end server. This setting specifies the port to use.
# server.port: 5601
...
# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
# server.host: "localhost"
server.host: "0.0.0.0"
Details of the Kibana settings can be found at https://www.elastic.co/guide/en/kibana/5.x/settings.html
Enable Kibana at boot and start it:
# chkconfig kibana on
# service kibana start
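To check that Kibana is up, you can make a quick request against its port (5601 by default); the exact response headers may vary between versions.
# curl -I http://localhost:5601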
2.6 Logstash
Get logstash rpm:
# wget https://artifacts.elastic.co/downloads/logstash/logstash-5.1.1.rpm
Install logstash:
yum install -y logstash-5.1.1.rpm
Note:
- The Logstash binary is installed at /usr/share/logstash/bin/logstash
- Logstash is started and stopped via systemctl
Make sure the logstash-input-beats plugin has been installed:
# /usr/share/logstash/bin/logstash-plugin list --verbose | grep beats
logstash-input-beats (3.1.12)
If for any reason the plugin is not installed, you can install it with “logstash-plugin install logstash-input-beats”
For now we are not going to start Logstash; we will start it after configuring it in part 2 of this series.
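Just to illustrate the shape of such a pipeline, here is a minimal sketch of a Beats-to-Elasticsearch Logstash configuration. The file name, port, and index pattern are placeholders; the configuration actually used for the Spring Boot logs is covered in part 2.
# /etc/logstash/conf.d/beats.conf (sketch only)
input {
  beats {
    port => 5044                        # port Filebeat ships events to
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]         # Elasticsearch on the same ELK host
    index => "filebeat-%{+YYYY.MM.dd}"  # daily index, matching the filebeat template
  }
}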
2.7 Install Filebeat
We need to install Filebeat so that we can create the index template in Elasticsearch and the dashboards in Kibana.
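First, download the Filebeat rpm (assuming the same artifacts.elastic.co download layout as the packages above):
# wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.1.1-x86_64.rpm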
Run the following as root to install Filebeat:
# yum install filebeat-5.1.1-x86_64.rpm
2.8 Install the index template in Elasticsearch
$ curl -XPUT 'http://localhost:9200/_template/filebeat' -d@/etc/filebeat/filebeat.template.json
{"acknowledged":true}
If you need to install a new template, you can delete the existing Filebeat indices first:
$ curl -XDELETE 'http://localhost:9200/filebeat-*'
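To confirm the template was loaded, you can read it back through the standard Elasticsearch template API:
$ curl -XGET 'http://localhost:9200/_template/filebeat?pretty'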
2.9 Import the Beats dashboards into Kibana
We need to install the dashboards for the log data received from Filebeat:
# /usr/share/filebeat/scripts/import_dashboards
Note:
- More information about importing Kibana dashboards: https://www.elastic.co/guide/en/beats/libbeat/5.1/import-dashboards.html
- Alternative method to install the dashboards:
First, find the latest beats-dashboards release at https://github.com/elastic/beats-dashboards/releases
Download the archive, then unpack it and load the dashboards:
$ unzip beats-dashboards-1.3.1.zip
$ cd beats-dashboards-1.3.1/
$ ./load.sh