How to ship message data into elastic
A while ago we took a look at how application logs can be shipped into an elastic cluster. In this article we follow up on that by showing how application messages can be ingested into an elastic cluster as well.
The architecture looks like this:
An application (typically a microservice) writes incoming and outgoing messages directly into a local JSON file. This file is picked up by filebeat and ingested into the elastic cluster. The main advantage of using a local file over, for instance, a REST or Kafka interface is that this architecture fully decouples the application from the elastic ingestion. If there is any maintenance or downtime on your elastic stack, your messages are buffered in this local file without any effect on your application.
For this scenario, we’ll adhere to the message structure of CXF’s LogEvent. Our sample data looks like this (remember, this would be written by the application at runtime):
{
  "encrypted": "false",
  "address": "https://example.com/api/IncidentService",
  "hostName": "localhost",
  "severity": "INFO",
  "service": "IncidentService",
  "message": "<source name=\"example_3C273\" flux=\"0.0196\"/>",
  "operationName": "executeTest",
  "eventType": "REQ_IN",
  "messageDirection": "IN",
  "hostAddress": "127.0.0.1",
  "transport": "SOAP",
  "processId": "1",
  "service_namespace": "http://foobar.example.com/test",
  "application": "ExampleApp",
  "duration": 1.5
}
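The sample above is pretty-printed for readability. Filebeat decodes JSON line by line, so in the actual messages.json the application has to write each event as a single line, roughly like this (abbreviated):

{"encrypted":"false","address":"https://example.com/api/IncidentService","service":"IncidentService","eventType":"REQ_IN","duration":1.5}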
Next, we’ll need to install filebeat. I recommend the guide on the official website. The configuration looks like this (a locally running elastic stack is assumed):
output.elasticsearch:
  hosts: ["localhost:9200"]
  index: "message"

setup.template.name: "message"
setup.template.pattern: "message*"

filebeat.inputs:
- type: log
  json.keys_under_root: false
  json.overwrite_keys: false
  json.add_error_key: true
  paths:
    - ./messages.json

setup.ilm.enabled: false
You’ll also need to specify a setup template; filebeat uses it during the first startup to create the index mapping. All source code is available on GitHub.
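The template itself is an ordinary Elasticsearch index template. A rough sketch could look like this (the field choices and types are illustrative, the actual template is part of the linked repository; note that with json.keys_under_root set to false, filebeat nests the decoded fields under a json object):

{
  "index_patterns": ["message*"],
  "mappings": {
    "properties": {
      "json": {
        "properties": {
          "eventType": { "type": "keyword" },
          "service": { "type": "keyword" },
          "address": { "type": "keyword" },
          "duration": { "type": "float" },
          "message": { "type": "text" }
        }
      }
    }
  }
}

Filebeat can load such a file via the setup.template.json.enabled and setup.template.json.path options instead of generating a template on its own.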
The ingestion can be started with the following command (-c points filebeat at our config file, -e logs to stderr, and -v increases the log verbosity):
$ filebeat -e -v -c $PWD/filebeat.yml
After a while you’ll see the data being written to the index.
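A quick way to verify this without Kibana is to query the cluster directly (index name as configured above):

$ curl "localhost:9200/_cat/indices/message*?v"
$ curl "localhost:9200/message/_count?pretty"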
You’ll find your data through the “Discover” feature within Kibana (you may have to create an index pattern for message* first).
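Alternatively, a simple query against the index shows the ingested documents; with json.keys_under_root set to false, the message fields live under the json prefix:

$ curl "localhost:9200/message/_search?q=json.eventType:REQ_IN&pretty"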
Depending on the ingestion volume, you may want to consider shipping the data into Kafka and processing it through Logstash. This can be achieved with the same setup, just by updating the filebeat config.
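A minimal sketch of such a change could look like this (the broker address and topic name are assumptions; filebeat only supports a single output, so the output.elasticsearch section has to be removed):

output.kafka:
  hosts: ["localhost:9092"]
  topic: "message"

Logstash would then consume the topic with its Kafka input and forward the documents into the elastic cluster.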