Presented By: Etash Singh
Linking Metrics to
Logs using Loki
Our Agenda
01 What is Log Aggregation?
02 Overview of Loki
03 Installation Options
04 Comparisons with Existing Solutions
05 Available Clients for Loki
06 What is Promtail?
07 Demo
What is Log Aggregation?
Log aggregation is the practice of gathering up disparate log files for
the purposes of organizing the data in them and making them
searchable.
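The idea can be sketched in a few lines of Python (all names here are hypothetical, for illustration only): gather entries from disparate sources, organize them by time, and search across the combined result.

```python
import re

# Toy sketch of log aggregation: gather lines from several sources,
# organize them by timestamp, and make them searchable.
def aggregate(sources):
    """sources maps a source name to a list of (timestamp, line) tuples."""
    merged = [(ts, name, line)
              for name, entries in sources.items()
              for ts, line in entries]
    merged.sort(key=lambda e: e[0])  # organize: order entries by time
    return merged

def search(merged, pattern):
    """Return every aggregated entry whose line matches the regex."""
    rx = re.compile(pattern)
    return [e for e in merged if rx.search(e[2])]

logs = aggregate({
    "app":   [(3, "error: db timeout"), (1, "starting up")],
    "nginx": [(2, "GET /health 200")],
})
hits = search(logs, r"error")
```

Real aggregators add shipping, parsing, and retention on top, but the gather-organize-search core is the same.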
Overview of Loki
Grafana Loki is a set of
components that can be
composed into a fully featured
logging stack.
1. Unlike other logging systems,
Loki is built around the idea of only
indexing metadata about your
logs: labels (just like Prometheus
labels).
2. Log data itself is then
compressed and stored in chunks
in object stores such as S3 or GCS,
or even locally on the filesystem.
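A minimal sketch of that idea in Python (illustrative names only; Loki's real storage layout is far more involved): the index holds nothing but label sets, while the log lines themselves are compressed into chunks keyed by those labels.

```python
import zlib

index = set()   # the only thing "indexed": label sets, like Prometheus labels
chunks = {}     # label set -> compressed log data (the chunk)

def push(labels, lines):
    # A stream is identified purely by its (sorted) label set.
    key = tuple(sorted(labels.items()))
    raw = zlib.decompress(chunks[key]) if key in chunks else b""
    raw += "".join(line + "\n" for line in lines).encode()
    index.add(key)
    chunks[key] = zlib.compress(raw)  # log data itself is stored compressed

push({"app": "api", "env": "prod"}, ["GET /users 200"])
push({"app": "api", "env": "prod"}, ["GET /users 500"])
key = tuple(sorted({"app": "api", "env": "prod"}.items()))
stored = zlib.decompress(chunks[key]).decode().splitlines()
```

Because only the small label index must be searchable, the bulk of the data can sit cheaply in an object store.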
Loki Overview: Motivation
○ Incident Response and Context Switching
○ Resolving Problems in Existing Solutions
○ Cost Efficiency
○ Kubernetes and Docker
Loki Overview: Features
● Multi-Tenancy:
○ Data between tenants is completely separated
○ Achieved through a tenant ID (which is represented as an alphanumeric string)
○ When disabled, all requests are internally given a tenant ID of "fake"
● Modes of Operation:
○ Loki is optimized for both running locally (or at small scale) and for scaling horizontally
○ Loki comes with a single process mode that runs all of the required microservices in one process
○ The microservices of Loki can be broken out into separate processes, allowing them to scale independently of each other
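In Loki's HTTP API the tenant ID travels with each request in the `X-Scope-OrgID` header. A sketch of assembling a push request for the `/loki/api/v1/push` endpoint (the helper name is hypothetical; the payload shape follows Loki's documented JSON push format):

```python
import json

def build_push_request(tenant_id, labels, values):
    """Build headers and body for Loki's push API (/loki/api/v1/push).

    values: list of (unix_nanoseconds, log_line) pairs.
    The X-Scope-OrgID header carries the tenant ID; with multi-tenancy
    disabled, Loki internally assigns the tenant ID "fake" instead.
    """
    headers = {
        "Content-Type": "application/json",
        "X-Scope-OrgID": tenant_id,
    }
    body = json.dumps({
        "streams": [{
            "stream": labels,  # label set identifying the stream
            "values": [[str(ts), line] for ts, line in values],
        }]
    })
    return headers, body

headers, body = build_push_request(
    "team-a", {"app": "api"}, [(1616000000000000000, "hello loki")]
)
```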
Loki Overview: Architecture
Components in Loki
● Distributor
○ Handles incoming streams from clients
● Ingester
○ Writes log data to long-term storage backends (DynamoDB, S3, Cassandra, etc.)
○ Returns log data for in-memory queries on the read path
● Query Frontend
○ Optional service providing the querier's API endpoints
○ Used to accelerate the read path
● Querier
○ Handles queries using the LogQL query language
○ Fetches logs both from the ingesters and long-term storage
Loki Overview: Architecture
To summarize, the read path works as follows:
1. The querier receives an HTTP/1 request for data.
2. The querier passes the query to all ingesters for in-memory data.
3. The ingesters receive the read request and return data matching the query, if any.
4. The querier lazily loads data from the backing store and runs the query against it if no ingesters returned data.
5. The querier iterates over all received data and deduplicates, returning a final set of data over the HTTP/1
connection.
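The deduplication in the final step can be sketched as follows (a simplified model: entries are (timestamp, line) pairs, and the ingester and store results may overlap):

```python
# Illustrative sketch of the read-path merge step: recent in-memory results
# from ingesters and older results from the backing store may contain the
# same entries, so the querier deduplicates before returning a final set.
def merge_results(ingester_entries, store_entries):
    seen = set()
    merged = []
    for entry in ingester_entries + store_entries:  # entry = (ts, line)
        if entry not in seen:
            seen.add(entry)
            merged.append(entry)
    merged.sort(key=lambda e: e[0])  # final set ordered by timestamp
    return merged

result = merge_results(
    [(5, "shutting down"), (3, "request handled")],
    [(3, "request handled"), (1, "booting")],
)
```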
And the flow for the write path is as follows:
1. The distributor receives an HTTP/1 request to store data for streams.
2. Each stream is hashed using the hash ring.
3. The distributor sends each stream to the appropriate ingesters and their replicas (based on the configured
replication factor).
4. Each ingester will create a chunk or append to an existing chunk for the stream's data. A chunk is unique per
tenant and per labelset.
5. The distributor responds with a success code over the HTTP/1 connection.
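The hashing in steps 2 and 3 can be sketched with a toy consistent-hash ring (illustrative only; Loki's real ring implementation is considerably more involved): each ingester owns tokens on the ring, a stream's label set is hashed onto it, and the next `replication_factor` distinct ingesters receive the stream.

```python
import hashlib
from bisect import bisect_right

def token(value: str) -> int:
    """Deterministically map a string to a position on a 32-bit ring."""
    return int(hashlib.sha256(value.encode()).hexdigest(), 16) % (2**32)

def build_ring(ingesters, tokens_per_ingester=4):
    """Each ingester claims several tokens; the ring is sorted by token."""
    return sorted(
        (token(f"{name}-{i}"), name)
        for name in ingesters
        for i in range(tokens_per_ingester)
    )

def ingesters_for_stream(ring, labels, replication_factor=3):
    """Hash the label set onto the ring, then walk clockwise until
    replication_factor distinct ingesters have been collected."""
    key = token(str(sorted(labels.items())))
    owners = []
    i = bisect_right([t for t, _ in ring], key)
    while len(owners) < replication_factor:
        name = ring[i % len(ring)][1]
        if name not in owners:
            owners.append(name)
        i += 1
    return owners

ring = build_ring(["ingester-1", "ingester-2", "ingester-3", "ingester-4"])
owners = ingesters_for_stream(ring, {"app": "api", "env": "prod"})
```

The same label set always hashes to the same owners, which is what lets a chunk stay unique per tenant and per labelset.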
Loki Overview: Architecture
Installation Options
1. Tanka (A reimplementation of Ksonnet that Grafana Labs created after Ksonnet was deprecated)
2. Helm (Loki Helm chart in its repository:
https://github.com/grafana/loki/tree/master/production/helm/loki)
3. Docker: Loki can be installed using both Docker and Docker Compose
4. Using Binaries: Every release includes binaries for Loki, which can be found on the
Releases page. We can also build the binaries manually by cloning the Loki repository.
Comparison with Existing Solutions

Loki + Promtail + Grafana:
○ Data is stored in a cloud storage system such as S3, GCS, or Cassandra, as well as on-disk
○ Indexes metadata of logs
○ Available on premise
○ Open Source
○ Visualization Tool: Grafana

Elastic Stack:
○ Data stored on-disk as JSON objects
○ Indexes the whole logs
○ Available on premise
○ Open Source
○ Visualization Tool: Kibana

Datadog:
○ Data stored on-disk
○ Indexes metadata of logs
○ Not available on premise
○ Flexible Pricing
○ Visualization Tool: Datadog Dashboards
Available Clients for Loki
● Promtail:
○ Client of choice when you're running Kubernetes
○ Configure it to automatically scrape logs from pods running on the same node that it runs on
● Docker Driver:
○ Automatically adds labels appropriate to the running container
● Fluent Bit & Fluentd:
○ Ideal when you already have Fluentd deployed with Parser and Filter plugins configured
There are three unofficial clients as well: promtail-client (Go), push-to-loki.py (Python), and
Serilog-Sinks-Loki (C#)
What is Promtail?
Promtail is an agent which ships the contents of local logs to a private or cloud Loki instance. It
is usually deployed to every machine that runs applications which need to be monitored.
It primarily:
1. Discovers targets
2. Attaches labels to log streams
3. Pushes them to the Loki instance.
Currently, Promtail can tail logs from two sources: local log files and the systemd journal (on
AMD64 machines only).
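A minimal Promtail configuration along these lines, based on the example shape in Loki's docs (the concrete values here are assumptions for a local setup): it tails files under /var/log, attaches labels, and pushes to a local Loki instance.

```yaml
server:
  http_listen_port: 9080

positions:
  filename: /tmp/positions.yaml     # where Promtail remembers read offsets

clients:
  - url: http://localhost:3100/loki/api/v1/push

scrape_configs:
  - job_name: system
    static_configs:
      - targets: [localhost]
        labels:
          job: varlogs              # label attached to the log stream
          __path__: /var/log/*.log  # files to discover and tail
```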
Reference
● https://github.com/grafana/loki/tree/master/docs
● https://docs.google.com/document/d/11tjK_lvp1-SVsFZjgOTr1vV3-q6vBAsZYIQ5ZeYBkyM/view
Thank You!