OpenTelemetry Logging: How It Works and 4 Code Examples

How Does OpenTelemetry Collect Logs from Systems and Applications? 

OpenTelemetry is an open-source observability project overseen by the Cloud Native Computing Foundation (CNCF). It makes it easy to instrument applications and collect logs, traces, and metrics.

OpenTelemetry can collect logs through a combination of SDKs, agents, and exporters. It provides libraries and agents for multiple programming languages and frameworks, allowing the integration of OpenTelemetry logging capabilities directly into applications:

  • SDK integration: You can use the OpenTelemetry SDK to instrument applications for log collection. The SDKs provide a convenient way to interact with the OpenTelemetry APIs, making it possible to create and manage application logs.
  • Agents and collectors: OpenTelemetry offers agents that can be deployed alongside applications or on hosts. These agents can collect logs from various sources, including standard output streams (stdout/stderr) and log files. They can also capture logs from system processes and services.
  • Exporting logs: Once collected, logs can be exported to different backends for storage and analysis. OpenTelemetry supports various exporters, most notably OTLP (the OpenTelemetry Protocol), which can send logs to the OpenTelemetry Collector and on to monitoring and observability platforms. This flexibility allows you to integrate OpenTelemetry within existing logging and monitoring infrastructure, as sketched below.
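
As a rough sketch of how these pieces fit together, the snippet below bridges Python's standard logging module into an OpenTelemetry log exporter. It assumes a recent opentelemetry-sdk; the module paths and class names for the logs signal have shifted between releases, so treat them as version-dependent. A console exporter keeps the sketch self-contained; an OTLP exporter would ship the records to a Collector or backend instead.

import logging

from opentelemetry._logs import set_logger_provider
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor, ConsoleLogExporter
from opentelemetry.sdk.resources import Resource

# SDK: a provider describing the entity that emits the logs
logger_provider = LoggerProvider(
    resource=Resource.create({"service.name": "my-service"})
)
set_logger_provider(logger_provider)

# Exporter: print records to the console; swap in an OTLP log exporter to ship them out
logger_provider.add_log_record_processor(BatchLogRecordProcessor(ConsoleLogExporter()))

# Bridge: route standard-library log records into the OpenTelemetry pipeline
logging.getLogger().addHandler(LoggingHandler(logger_provider=logger_provider))

logging.getLogger("my-app").warning("Exported through OpenTelemetry")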

What Are the Benefits of the OpenTelemetry Log Data Model?

The OpenTelemetry log data model offers several benefits that enhance observability and debugging capabilities compared to traditional logging methods:

  • Structured and consistent format: OpenTelemetry logs are structured, providing a consistent format across different applications and services. This structure includes key components like timestamps, severity levels, and resource information, making parsing and analyzing logs easier.
  • Integration with traces and metrics: OpenTelemetry logs can be correlated with traces and metrics. This integration allows for logs to be analyzed in the context of specific operations or transactions, providing a more comprehensive view of application behavior and performance.
  • Enhanced debugging and analysis: The structured nature of OpenTelemetry logs, combined with rich context information, makes diagnosing and resolving issues easier. Logs provide detailed insights into application events, errors, and performance bottlenecks, facilitating faster and more effective debugging.
  • Interoperability and open standards: As an open-source project, OpenTelemetry promotes interoperability between different tools and platforms. Adhering to open standards for logging avoids vendor lock-in and allows you to choose from a wide range of compatible monitoring and analysis tools.
  • Contextual and actionable insights: The comprehensive nature of OpenTelemetry logs, including contextual information like resource attributes and trace context fields, allows for deeper insights into application behavior. This level of detail supports more informed decision-making and proactive issue resolution.

Types of Logs Gathered by OpenTelemetry 

OpenTelemetry collects three main types of logs:

System Formats

System format logs refer to the logs generated by the operating system and other system-level software running on the host computer. These logs are important for monitoring the system’s health and performance. They provide valuable information on system events like startup and shutdown, hardware and software errors, and security events.

OpenTelemetry logging can collect these system format logs and present them in a consolidated view alongside logs from applications. This is especially useful in distributed systems where logs from multiple hosts need to be analyzed together.

Third-Party Application Logs

Third-party application logs are those generated by software that is not developed by the entity operating the system. This includes databases, web servers, and other middleware. These logs can provide insights into the operation of these software components and can help diagnose issues that affect the overall system. OpenTelemetry can collect third-party application logs, parse them, and present them in a structured format that makes analysis easier.

First-Party Application Logs

First-party application logs are those generated by software developed by the entity operating the system. These logs can provide detailed information about the operation of the application and can be invaluable in debugging and optimizing the application. 

OpenTelemetry can collect first-party application logs, enrich them with context from the resource and InstrumentationScope, and present them alongside system and third-party application logs. This provides a comprehensive system view and facilitates efficient troubleshooting and analysis.

How Does OpenTelemetry Structure Logs? 

The OpenTelemetry log data model provides a structured format designed to capture as much information about an event as possible. It has several key components, each capturing a unique aspect of the event.

Timestamp

The timestamp denotes the exact moment when an event occurred. It plays a crucial role in understanding the order of events and in coordinating logs with traces and metrics. Accurate timestamps help diagnose issues by revealing whether a particular event consistently precedes a failure or whether there is a significant delay between two related events; they give the events a temporal context.

ObservedTimestamp

When considering the timing of an event, it is also important to understand the ObservedTimestamp: the exact moment when an event is noticed or recorded by the monitoring system, rather than the moment it actually occurred. The distinction matters because there can be a gap between an event's occurrence and its observation, influenced by factors such as system latency, network delays, or the processing time of the monitoring tools.

In contrast, the timestamps traditionally found in logs denote when an event happened at its source, not when it was detected by an observing system. While log timestamps provide a historical record of events as they occurred, the ObservedTimestamp records when those events were perceived by the collection pipeline. Keeping the two apart is essential for accurate analysis and troubleshooting, because comparing them exposes delays or bottlenecks in the monitoring process.
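
As a minimal, hypothetical illustration of that gap, the snippet below compares an event timestamp embedded in a log line with the time the line was observed by a collector (the log line and its format are made up):

from datetime import datetime, timezone

# Hypothetical application log line with an embedded event timestamp
raw_line = "2024-05-01T12:00:00.000Z ERROR payment failed"

# Timestamp: when the event happened, as written by the application
event_time = datetime.strptime(
    raw_line.split(" ")[0], "%Y-%m-%dT%H:%M:%S.%fZ"
).replace(tzinfo=timezone.utc)

# ObservedTimestamp: when the collection pipeline actually read the line
observed_time = datetime.now(timezone.utc)

print(f"collection lag: {(observed_time - event_time).total_seconds():.3f}s")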

Trace Context Fields

Trace context fields in the OpenTelemetry log data model capture information about the trace and span associated with the event. A trace represents a single operation or transaction, like a user request, and a span represents a specific unit of work done within that operation.

By including the trace context fields in the log data, you can easily correlate logs with traces. This allows a greater understanding of the broader context of an event, i.e., how it fits into the overall operation or transaction. This ability to correlate logs with traces is one of the key benefits of OpenTelemetry logging, as it significantly enhances software observability.
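
As an illustration, the identifiers that populate these fields can be read from the active span at the point where a log is emitted (a minimal sketch; the span name is arbitrary):

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider

# Use the SDK tracer provider so spans carry real (non-zero) identifiers
trace.set_tracer_provider(TracerProvider())
tracer = trace.get_tracer("example")

with tracer.start_as_current_span("handle-request"):
    # These are the values recorded in the trace context fields of any
    # log emitted while this span is active
    ctx = trace.get_current_span().get_span_context()
    print("trace_id:", format(ctx.trace_id, "032x"))
    print("span_id:", format(ctx.span_id, "016x"))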

Severity Fields

Severity fields capture the severity or the importance of the event. They can be either textual or numeric. For example, severity might range from DEBUG, which represents detailed information useful for debugging, to FATAL, which represents severe error events.

While different systems might use different notations for severity, OpenTelemetry maps them onto unified severity fields, making it possible to analyze log severity consistently across the entire environment; the mapping is sketched below.
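
The data model pairs a textual SeverityText with a numeric SeverityNumber so that levels from different logging systems become comparable. Per the OpenTelemetry specification, each level name corresponds to a range of severity numbers; a minimal sketch of that mapping in Python:

# SeverityNumber ranges defined by the OpenTelemetry log data model
SEVERITY_RANGES = {
    "TRACE": range(1, 5),
    "DEBUG": range(5, 9),
    "INFO": range(9, 13),
    "WARN": range(13, 17),
    "ERROR": range(17, 21),
    "FATAL": range(21, 25),
}

def severity_text(number: int) -> str:
    """Map a SeverityNumber back to its coarse level name."""
    for name, numbers in SEVERITY_RANGES.items():
        if number in numbers:
            return name
    return "UNSPECIFIED"  # SeverityNumber 0 in the data model

print(severity_text(9))   # INFO
print(severity_text(17))  # ERROR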

Body

The body is the primary content of the log entry. It usually contains a human-readable message that communicates the details of an event. The body can also contain structured data. For instance, it may include a stack trace, which is a report of the active stack frames at a particular point in time during the execution of a program. This can be useful in debugging.

Resource

The resource component of the OpenTelemetry logging data model is an immutable description of the entity that is generating the log. This could be a service, a host, a container, or any other unit of software or hardware that produces logs. 

The resource is defined at startup and does not change during the existence of the entity. It contains attributes that describe the entity. For example, the resource may contain attributes such as the pod name, namespace, and labels in a Kubernetes pod. The resource is critical in correlating logs produced by the same entity and distinguishing logs from different entities.
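
For example, a resource for a workload running in Kubernetes might be declared with semantic-convention attribute keys (the values here are placeholders):

from opentelemetry.sdk.resources import Resource

# Attribute keys follow OpenTelemetry semantic conventions; values are placeholders
resource = Resource.create(
    {
        "service.name": "checkout",
        "k8s.namespace.name": "shop",
        "k8s.pod.name": "checkout-5d4f8b7c9-abcde",
        "k8s.deployment.name": "checkout",
    }
)
print(resource.attributes)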

InstrumentationScope

The InstrumentationScope is a component that describes the source of the log within the entity that is generating it. While the resource is immutable and describes the entity as a whole, the InstrumentationScope can change during the execution of the entity. It provides additional context to the log, such as the part of the code that generated the log or the request that triggered the log. This makes the InstrumentationScope invaluable in diagnosing issues in complex systems where various components may generate logs.

Attributes

Attributes are key-value pairs that provide additional context to the log. They can be associated with the body, the resource, or the InstrumentationScope. Attributes can include details such as the severity of the event, the timestamp of the event, and the thread that generated the event. They can also include any other information that may be useful in understanding the event or in filtering and aggregating logs. The flexibility of attributes makes them a powerful tool in enriching logs.
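
Putting the components together, a single record in the OpenTelemetry log data model can be pictured roughly as follows (an illustrative Python sketch of the fields with made-up values, not the exact serialization of any particular exporter):

# Illustrative shape of one OpenTelemetry log record (values are made up)
log_record = {
    "Timestamp": "2024-05-01T12:00:00.000Z",          # when the event happened
    "ObservedTimestamp": "2024-05-01T12:00:00.150Z",  # when it was observed
    "TraceId": "5b8aa5a2d2c872e8321cf37308d69df2",    # trace context fields
    "SpanId": "051581bf3cb55c13",
    "SeverityText": "ERROR",
    "SeverityNumber": 17,
    "Body": "payment failed: card declined",
    "Resource": {"service.name": "checkout", "k8s.pod.name": "checkout-5d4f8b7c9-abcde"},
    "InstrumentationScope": {"name": "checkout.payments", "version": "1.2.0"},
    "Attributes": {"customer.tier": "gold", "http.request.method": "POST"},
}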

Related content: Read our guide to OpenTelemetry architecture

Examples of OpenTelemetry Logging

Prerequisites: To run the code below, please use opentelemetry-sdk version 1.14.0 (the examples rely on module paths from that release) and install the following two libraries:

pip3 install opentelemetry-api==1.14.0 opentelemetry-sdk==1.14.0

In each of the examples below, save the Python code to a file, such as app.py, and run it with python app.py. This executes the sample code once; for simplicity, the examples do not support continuous log monitoring, but that can easily be added.

1. Tailing a Simple JSON File

In some cases, your application might be writing logs to a regular JSON file. OpenTelemetry makes it easy to collect these logs. Let's start with a simple example that instruments an application with OpenTelemetry so that its output can be correlated with traces.

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Configure the SDK tracer provider and export spans to the console
trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(
    SimpleSpanProcessor(ConsoleSpanExporter())
)

tracer = trace.get_tracer(__name__)

# Start nested spans and print a message from the innermost one
with tracer.start_as_current_span("foo"):
    with tracer.start_as_current_span("bar"):
        with tracer.start_as_current_span("baz"):
            print("Hello world from OpenTelemetry Python!")

In this example, we set up a tracer provider, attach a span processor with a console exporter, and initiate a tracer. We then start a few nested spans and print a message. When you run this code, the spans are exported to the console alongside the printed output. In a full pipeline, tailing the JSON log file itself is typically handled by the OpenTelemetry Collector's filelog receiver.

2. Syslog

Syslog is a standard for message logging, often used in systems and network administration. OpenTelemetry provides an easy way to collect and integrate these logs into your observability stack.

from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
import logging

# Identify the service that produces the logs and traces
trace.set_tracer_provider(
    TracerProvider(resource=Resource.create({"service.name": "my-service"}))
)

# Configure the standard library logger to emit every level
logging.basicConfig(level=logging.NOTSET)

logger = logging.getLogger("my-logger")

logger.info("Starting up")

# Start a span and log a message within it
with trace.get_tracer("my-application").start_as_current_span("my-span"):
    logger.info("Hello, world!")

In the above example, we set up a tracer provider with a service name, get a standard library logger, and log a startup message. We then start a span and log a message within it. Because that message is emitted inside an active span, it can be correlated with the trace once the OpenTelemetry logging handler is attached, as shown in the next examples.

3. Kubernetes Logs

Here is how to set up a logger provider with a service name, as you would for a service running in Kubernetes. OpenTelemetry will then be able to associate the logs with that Kubernetes service.

import logging
from opentelemetry.sdk._logs import set_logger_provider
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk.resources import Resource

# Describe the service that produces the logs
logger_provider = LoggerProvider(
    resource=Resource.create(
        {
            "service.name": "my-service"
        }
    ),
)
set_logger_provider(logger_provider)

# Route standard library log records into the OpenTelemetry logging pipeline
handler = LoggingHandler(level=logging.NOTSET, logger_provider=logger_provider)
logging.getLogger().addHandler(handler)

logger = logging.getLogger("my-logger")

logger.info("Starting up")

logger.info("Hello, world!")
logger.info("Goodbye, world!")

In this example, we attach the OpenTelemetry logging handler to the root logger, get a named logger, and log a startup message followed by two more messages. OpenTelemetry collects these log records through the handler and associates them with the service described by the resource; attaching an exporter, as sketched earlier in this guide, ships them to a backend.

4. Kubernetes Events

The code below shows how to set up a logger provider to log messages representing Kubernetes events. The log records carry the Kubernetes service name via the resource, and as above, OpenTelemetry can then associate them with the Kubernetes service.

import logging
from opentelemetry.sdk._logs import set_logger_provider
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk.resources import Resource

# The resource identifies the Kubernetes service the events belong to
logger_provider = LoggerProvider(
    resource=Resource.create(
        {
            "service.name": "my-service"
        }
    ),
)
set_logger_provider(logger_provider)

# Route standard library log records into the OpenTelemetry logging pipeline
handler = LoggingHandler(level=logging.NOTSET, logger_provider=logger_provider)
logging.getLogger().addHandler(handler)

logger = logging.getLogger("my-logger")

logger.info("Starting up")

# In a real deployment these messages would describe Kubernetes events
logger.info("Hello, world!")
logger.info("Goodbye, world!")

In the above example, we again attach the OpenTelemetry logging handler, get a logger, and log a startup message followed by two messages that stand in for Kubernetes events. OpenTelemetry collects these records and associates them with the relevant service.

Microservices Monitoring with Lumigo

Lumigo is a cloud native observability tool that provides automated distributed tracing of microservice applications and supports OpenTelemetry for reporting tracing data and resources. With Lumigo, users can:

  • See the end-to-end path of a transaction and full system map of applications
  • Monitor and debug third-party libraries, like Python Flask and ExpressJS, and managed services like Amazon DynamoDB, Twilio, and Stripe
  • Go from alert to root cause analysis in one click with no code changes
  • Understand system behavior and explore performance and cost issues 
  • Group services into business contexts

Get started with a free trial of Lumigo for your microservice applications.