There are 4 steps in the process:
- Create an EC2 instance for Elasticsearch and Kibana, and validate networking settings
- Install the ELK OSS versions on the EC2 instance (no Logstash, just Elasticsearch and Kibana)
- Set up logging on Elastic Beanstalk via Filebeat
- Configure Kibana
I’ll be installing Elasticsearch and Kibana on an external instance.
Logstash and/or Filebeat will live on the Elastic Beanstalk instances, forwarding logs to Elasticsearch.
Kibana will be on the same instance as Elasticsearch, and we can potentially forward the Kibana port back to our application on a special route so nontechnical users can view logs.
Create EC2 Instance
Using the AWS wizard, create your EC2 instance. I attached a 30 GB SSD drive.
Create a new SSH key pair, VPC, security group, subnet, and internet gateway. After downloading the key and moving it to your SSH directory (mv <key>.pem ~/.ssh/
), change its permissions to 0400: chmod 0400 ~/.ssh/<key>.pem
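SSH refuses private keys that are readable by group or others, which is why the 0400 step matters. A quick sanity check, using a throwaway file in place of your real key:

```shell
# Create a stand-in key file and lock it down the same way.
touch demo-key.pem
chmod 0400 demo-key.pem

# stat prints the octal permission bits; 400 means owner-read-only.
stat -c '%a' demo-key.pem   # prints 400
```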
Create an Elastic IP and assign it to the new instance. Edit the security group rules to allow SSH, Custom TCP :9200, and Custom TCP :5601 from your Elastic Beanstalk IP address, or from anywhere if you don’t care about security right now and just want to get it going (not recommended, obviously).
Check your VPC’s route table. Under Routes, make sure you have Destination: 0.0.0.0/0 - Target: YourInternetGateway
ssh to your EC2 instance. You can add a shortcut to ~/.ssh/config so you can just call ssh elk going forward.
Host elk
HostName <ELASTIC_IP_ADDR>
User ec2-user
IdentityFile ~/.ssh/<YOUR_KEY>.pem
ForwardAgent yes
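You can verify what OpenSSH resolves for the alias without actually connecting, using ssh -G (shown here against a throwaway config file, with the documentation address 203.0.113.10 standing in for your Elastic IP):

```shell
# Write a sample config; 203.0.113.10 is a placeholder address.
cat > elk-ssh-config <<'EOF'
Host elk
    HostName 203.0.113.10
    User ec2-user
EOF

# ssh -G prints the fully resolved configuration for a host
# without opening a connection.
ssh -G -F elk-ssh-config elk | grep -E '^(hostname|user) '
```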
Networking Help: https://aws.amazon.com/premiumsupport/knowledge-center/ec2-linux-ssh-troubleshooting/
Install ELK on the EC2 Instance
logz.io instructions for 6.x installation: https://logz.io/learn/complete-guide-elk-stack/#installing-elk
RPM Instructions for 7.0: https://www.elastic.co/guide/en/elasticsearch/reference/7.0/rpm.html
Elastic.co Guide: https://www.elastic.co/guide/en/elasticsearch/reference/7.0/index.html
On ec2 instance:
sudo yum -y update
sudo yum install -y java
[ec2-user@x ~]$ java -version
openjdk version "1.8.0_201"
OpenJDK Runtime Environment (build 1.8.0_201-b09)
OpenJDK 64-Bit Server VM (build 25.201-b09, mixed mode)
Elasticsearch 7.0 (OSS) Installation
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-oss-7.0.0-x86_64.rpm
sudo rpm --install elasticsearch-oss-7.0.0-x86_64.rpm
sudo vi /etc/elasticsearch/jvm.options
Set both the minimum and maximum JVM heap to 512 MB (a modest size that fits a small instance):
-Xms512m
-Xmx512m
sudo -i service elasticsearch start
sudo -i service elasticsearch status
$ curl http://localhost:9200
{
"name" : "xxx.ec2.internal",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "xxx-xxx-xxx",
"version" : {
"number" : "7.0.0",
"build_flavor" : "oss",
"build_type" : "rpm",
"build_hash" : "xxx",
"build_date" : "xxx.697037Z",
"build_snapshot" : false,
"lucene_version" : "8.0.0",
"minimum_wire_compatibility_version" : "6.7.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
sudo vi /etc/elasticsearch/elasticsearch.yml
This is not a recommended configuration, but to open Elasticsearch up to external hosts, change the network.host config:
network.host: 0.0.0.0
discovery.seed_hosts: [0.0.0.0]
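A caveat from my side (not part of the original setup): once network.host is set to a non-loopback address, Elasticsearch 7 enters production mode and runs bootstrap checks, which can keep a lone node from starting. If that bites you, a minimal single-node alternative in elasticsearch.yml is:

```yaml
# Sketch of a single-node alternative: skip cluster discovery entirely.
network.host: 0.0.0.0
discovery.type: single-node
```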
$ curl <ELASTIC_IP>:9200
Debug: sudo vi /var/log/elasticsearch/elasticsearch.log
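When the service won’t start, the cause usually surfaces in that log. A quick way to filter for problems (demonstrated here against a sample file standing in for /var/log/elasticsearch/elasticsearch.log):

```shell
# Sample log content standing in for the real Elasticsearch log.
printf '%s\n' \
  '[INFO ][o.e.n.Node] started' \
  '[ERROR][o.e.b.Bootstrap] bind failure' \
  '[WARN ][o.e.m.JvmGcMonitorService] gc overhead' > sample-es.log

# Pull only warnings and errors out of the noise.
grep -E 'ERROR|WARN' sample-es.log
```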
Kibana Installation
https://www.elastic.co/guide/en/kibana/current/rpm.html
wget https://artifacts.elastic.co/downloads/kibana/kibana-oss-7.0.0-x86_64.rpm
sudo rpm --install kibana-oss-7.0.0-x86_64.rpm
sudo -i service kibana start
sudo -i service kibana status
Set up port forwarding to localhost for Kibana. In your local ~/.bashrc
, add an alias to forward port 5601 on your server to localhost:5601
alias kibana='ssh -N -L 5601:localhost:5601 elk'
Then, locally, run: $ kibana
Visit localhost:5601
To visit Kibana from an external host, change the network configuration:
sudo vi /etc/kibana/kibana.yml
server.host: "0.0.0.0"
Set up logging on Elastic Beanstalk
A caveat up front: I found that writing the Filebeat config file via .ebextensions didn’t work reliably for me, so the manual installation in the next section is what I actually ended up using. The configs below are what I attempted.
.ebextensions/01_filebeat_installation.config (note that Elastic Beanstalk only processes files in .ebextensions ending in .config):
commands:
  1_command:
    command: "curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-oss-7.0.0-x86_64.rpm"
    cwd: /home/ec2-user
  2_command:
    command: "sudo rpm -vi filebeat-oss-7.0.0-x86_64.rpm"
    cwd: /home/ec2-user
.ebextensions/02_filebeat_conf.config:
files:
  /etc/filebeat/filebeat.yml:
    content: |-
      filebeat.inputs:
      - type: log
        enabled: true
        paths:
          - /var/app/current/log/*.log
      - type: log
        enabled: true
        paths:
          - /var/log/messages
          - /var/log/*.log
          - /var/app/support/logs/*.log
      output.elasticsearch:
        hosts: ["<YOUR_ELASTIC_IP>:9200"]
    group: root
    mode: "000755"
    owner: root
.ebextensions/03_filebeat_start.config:
commands:
  1_command:
    command: "nohup sudo filebeat run &"
    cwd: /home/ec2-user
Manual Installation of Filebeat on the Elastic Beanstalk Server
Some notes: I’m downloading the open source version of Filebeat.
After some testing, I determined it’s better to separate the filebeat.inputs into 2 sections: one for the Rails log under /var/app/current/log
and one for the Passenger, AWS, and Beanstalk logs.
I’ve done both manual and automatic installation through .ebextensions
and found manual installation worked better, mostly because I’m rotten with .ebextensions configuration.
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-oss-7.0.0-x86_64.rpm
sudo rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
sudo rpm -vi filebeat-oss-7.0.0-x86_64.rpm
sudo vi /etc/filebeat/filebeat.yml
filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/app/current/log/*.log
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/app/support/logs/*.log
    - /var/log/*.log
    - /var/log/messages
    #- c:\programdata\elasticsearch\logs\*
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["<YOUR_ELASTIC_IP>:9200"]
  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"
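Remember to swap the <YOUR_ELASTIC_IP> placeholder for your instance’s Elastic IP. One way is with sed, shown here against a stand-in file (the documentation address 203.0.113.10 plays the part of your real IP):

```shell
# Stand-in for /etc/filebeat/filebeat.yml.
printf 'output.elasticsearch:\n  hosts: ["<YOUR_ELASTIC_IP>:9200"]\n' > filebeat-demo.yml

# Substitute the placeholder in place.
sed -i 's/<YOUR_ELASTIC_IP>/203.0.113.10/' filebeat-demo.yml
grep hosts filebeat-demo.yml
```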
sudo filebeat setup --template -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["<ELASTIC_IP>:9200"]'
nohup sudo filebeat run -c /etc/filebeat/filebeat.yml &
sudo tail -f /var/log/filebeat/filebeat
[ec2-user@ip-172-30-3-99 ~]$ sudo tail -f /var/log/filebeat/filebeat
2019-04-19T08:30:42.499Z INFO crawler/crawler.go:106 Loading and starting Inputs completed. Enabled inputs: 2
2019-04-19T08:30:42.500Z INFO cfgfile/reload.go:150 Config reloader started
2019-04-19T08:30:42.500Z INFO cfgfile/reload.go:205 Loading of config files completed.
2019-04-19T08:30:42.503Z INFO log/harvester.go:254 Harvester started for file: /var/log/awslogs.log
2019-04-19T08:30:42.506Z INFO log/harvester.go:254 Harvester started for file: /var/log/cfn-hup.log
2019-04-19T08:30:43.506Z INFO pipeline/output.go:95 Connecting to backoff(elasticsearch(http://<YOUR_ELASTIC_IP>:9200))
2019-04-19T08:30:43.510Z INFO elasticsearch/client.go:734 Attempting to connect to Elasticsearch version 7.0.0
2019-04-19T08:30:43.515Z INFO template/load.go:129 Template already exists and will not be overwritten.
2019-04-19T08:30:43.515Z INFO [index-management] idxmgmt/std.go:272 Loaded index template.
2019-04-19T08:30:43.515Z INFO pipeline/output.go:105 Connection to backoff(elasticsearch(http://<YOUR_ELASTIC_IP>:9200)) established
2019-04-19T08:31:11.935Z INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":0,"time":{"ms":5}},"total":{"ticks":40,"time":{"ms":51},"value":40},"user":{"ticks":40,"time":{"ms":46}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":9},"info":{"ephemeral_id":"eee0e42d-1ffe-482c-a352-3494da6685fd","uptime":{"ms":30012}},"memstats":{"gc_next":4194304,"memory_alloc":2202896,"memory_total":7999168,"rss":26750976}},"filebeat":{"events":{"added":30,"done":30},"harvester":{"open_files":2,"running":2,"started":2}},"libbeat":{"config":{"module":{"running":0},"reloads":1},"output":{"events":{"acked":9,"batches":7,"total":9},"read":{"bytes":3138},"type":"elasticsearch","write":{"bytes":10410}},"pipeline":{"clients":2,"events":{"active":0,"filtered":21,"published":9,"retry":3,"total":30},"queue":{"acked":9}}},"registrar":{"states":{"current":19,"update":30},"writes":{"success":28,"total":28}},"system":{"cpu":{"cores":2},"load":{"1":0,"15":0,"5":0,"norm":{"1":0,"15":0,"5":0}}}}}}
2019-04-19T08:31:21.936Z INFO log/harvester.go:254 Harvester started for file: /var/app/current/log/staging.log
2019-04-19T08:31:22.508Z INFO log/harvester.go:254 Harvester started for file: /var/app/support/logs/passenger.log
2019-04-19T08:31:22.508Z INFO log/harvester.go:254 Harvester started for file: /var/app/support/logs/access.log
2019-04-19T08:31:32.509Z INFO log/harvester.go:254 Harvester started for file: /var/log/messages
2019-04-19T08:31:41.935Z INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":10,"time":{"ms":7}},"total":{"ticks":70,"time":{"ms":24},"value":70},"user":{"ticks":60,"time":{"ms":17}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":13},"info":{"ephemeral_id":"eee0e42d-1ffe-482c-a352-3494da6685fd","uptime":{"ms":60012}},"memstats":{"gc_next":4298928,"memory_alloc":3158160,"memory_total":10597672}},"filebeat":{"events":{"added":42,"done":42},"harvester":{"open_files":6,"running":6,"started":4}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":32,"batches":6,"total":32},"read":{"bytes":2186},"write":{"bytes":34585}},"pipeline":{"clients":2,"events":{"active":0,"filtered":10,"published":32,"total":42},"queue":{"acked":32}}},"registrar":{"states":{"current":19,"update":42},"writes":{"success":10,"total":10}},"system":{"load":{"1":0,"15":0,"5":0,"norm":{"1":0,"15":0,"5":0}}}}}}
To stop Filebeat, find its PID and kill it:
ps -ef | grep filebeat
sudo kill -9 <pid>
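An alternative to the ps | grep plus kill dance is pkill, which matches processes by name (demonstrated here with a harmless sleep standing in for filebeat):

```shell
# Start a stand-in background process.
sleep 300 &

# pkill sends SIGTERM to every process whose name matches exactly,
# replacing the manual ps -ef | grep + kill -9 steps.
pkill -x sleep

# pgrep exits nonzero when nothing matches anymore.
pgrep -x sleep || echo 'stopped'
```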
Configure Kibana
The only thing I’ve done so far was create saved views for passenger logs, application rails logs per environment, and AWS activity logs per environment. Nothing special at this time.