Manage telemetry with the SDK
The SDK is the built-in reference implementation of the API, processing and exporting telemetry produced by instrumentation API calls. This page is a conceptual overview of the SDK, including descriptions, links to relevant Javadocs, artifact coordinates, sample programmatic configurations and more. See Configure the SDK for details on SDK configuration, including zero-code SDK autoconfigure.
The SDK consists of the following top level components:
- SdkTracerProvider: The SDK implementation of TracerProvider, including tools for sampling, processing, and exporting spans.
- SdkMeterProvider: The SDK implementation of MeterProvider, including tools for configuring metric streams and reading / exporting metrics.
- SdkLoggerProvider: The SDK implementation of LoggerProvider, including tools for processing and exporting logs.
- TextMapPropagator: Propagates context across process boundaries.
These are combined into OpenTelemetrySdk, a carrier object which makes it convenient to pass fully-configured SDK components to instrumentation.
The SDK comes packaged with a variety of built-in components which are sufficient for many use cases, and supports plugin interfaces for extensibility.
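For illustration, the following hypothetical snippet (the class and instrumentation scope names are invented) sketches how instrumented code receives a fully configured SDK through the OpenTelemetry API type and uses it to obtain a Tracer and Meter:

```java
package otel;

import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.api.metrics.LongCounter;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;

public class InstrumentedService {

  private final Tracer tracer;
  private final LongCounter requestCounter;

  // The fully configured SDK is passed in as the API type OpenTelemetry, so the
  // instrumented code depends only on the API.
  public InstrumentedService(OpenTelemetry openTelemetry) {
    this.tracer = openTelemetry.getTracer("my-instrumentation-scope");
    this.requestCounter =
        openTelemetry
            .getMeter("my-instrumentation-scope")
            .counterBuilder("my.service.requests")
            .build();
  }

  public void handleRequest() {
    Span span = tracer.spanBuilder("handleRequest").startSpan();
    try {
      requestCounter.add(1);
      // ... application logic ...
    } finally {
      span.end();
    }
  }
}
```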
SDK plugin extension interfaces
When built-in components are insufficient, the SDK can be extended by implementing various plugin extension interfaces:
- Sampler: Configures which spans are recorded and sampled.
- SpanProcessor: Processes spans when they start and end.
- SpanExporter: Exports spans out of process.
- MetricReader: Reads aggregated metrics.
- MetricExporter: Exports metrics out of process.
- LogRecordProcessor: Processes log records when they are emitted.
- LogRecordExporter: Exports log records out of process.
- TextMapPropagator: Propagates context across process boundaries.
SDK components
The io.opentelemetry:opentelemetry-sdk:1.44.1 artifact contains the OpenTelemetry SDK.
The following sections describe the core user-facing components of the SDK. Each component section includes:
- A brief description, including a link to the Javadoc type reference.
- If the component is a plugin extension interface, a table of available built-in and opentelemetry-java-contrib implementations.
- A simple demonstration of programmatic configuration.
- If the component is a plugin extension interface, a simple demonstration of a custom implementation.
OpenTelemetrySdk
OpenTelemetrySdk is the SDK implementation of OpenTelemetry. It is a holder for top-level SDK components which makes it convenient to pass fully-configured SDK components to instrumentation.
OpenTelemetrySdk is configured by the application owner, and consists of:
- SdkTracerProvider: The SDK implementation of TracerProvider.
- SdkMeterProvider: The SDK implementation of MeterProvider.
- SdkLoggerProvider: The SDK implementation of LoggerProvider.
- ContextPropagators: Propagates context across process boundaries.
The following code snippet demonstrates OpenTelemetrySdk programmatic configuration:
package otel;
import io.opentelemetry.sdk.OpenTelemetrySdk;
import io.opentelemetry.sdk.resources.Resource;
public class OpenTelemetrySdkConfig {
public static OpenTelemetrySdk create() {
Resource resource = ResourceConfig.create();
return OpenTelemetrySdk.builder()
.setTracerProvider(SdkTracerProviderConfig.create(resource))
.setMeterProvider(SdkMeterProviderConfig.create(resource))
.setLoggerProvider(SdkLoggerProviderConfig.create(resource))
.setPropagators(ContextPropagatorsConfig.create())
.build();
}
}
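Once built, the OpenTelemetrySdk should be shut down when the application exits so that any queued telemetry is flushed and exported. The following hypothetical snippet (class names are illustrative) sketches one way to do this, assuming the OpenTelemetrySdkConfig class above:

```java
package otel;

import io.opentelemetry.sdk.OpenTelemetrySdk;

public class OpenTelemetrySdkUsage {

  public static void main(String[] args) {
    OpenTelemetrySdk openTelemetrySdk = OpenTelemetrySdkConfig.create();

    // Flush and close all SDK components when the JVM exits so queued telemetry is exported.
    Runtime.getRuntime().addShutdownHook(new Thread(openTelemetrySdk::close));

    // ... pass openTelemetrySdk to instrumented code and run the application ...
  }
}
```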
Resource
Resource is a set of attributes defining the telemetry source. An application should associate the same resource with SdkTracerProvider, SdkMeterProvider, and SdkLoggerProvider. When using autoconfiguration, additional resource attributes can be contributed by ResourceProviders.
The following code snippet demonstrates Resource programmatic configuration:
package otel;
import io.opentelemetry.sdk.resources.Resource;
import io.opentelemetry.semconv.ServiceAttributes;
public class ResourceConfig {
public static Resource create() {
return Resource.getDefault().toBuilder()
.put(ServiceAttributes.SERVICE_NAME, "my-service")
.build();
}
}
SdkTracerProvider
SdkTracerProvider is the SDK implementation of TracerProvider, and is responsible for handling trace telemetry produced by the API.
SdkTracerProvider is configured by the application owner, and consists of:
- Resource: The resource spans are associated with.
- Sampler: Configures which spans are recorded and sampled.
- SpanProcessors: Processes spans when they start and end.
- SpanExporters: Exports spans out of process (in conjunction with associated SpanProcessors).
- SpanLimits: Controls the limits of data associated with spans.
The following code snippet demonstrates SdkTracerProvider programmatic configuration:
package otel;
import io.opentelemetry.sdk.resources.Resource;
import io.opentelemetry.sdk.trace.SdkTracerProvider;
public class SdkTracerProviderConfig {
public static SdkTracerProvider create(Resource resource) {
return SdkTracerProvider.builder()
.setResource(resource)
.addSpanProcessor(
SpanProcessorConfig.batchSpanProcessor(
SpanExporterConfig.otlpHttpSpanExporter("http://localhost:4318/v1/spans")))
.setSampler(SamplerConfig.parentBasedSampler(SamplerConfig.traceIdRatioBased(.25)))
.setSpanLimits(SpanLimitsConfig::spanLimits)
.build();
}
}
Sampler
A Sampler is a plugin extension interface responsible for determining which spans are recorded and sampled.
By default, SdkTracerProvider is configured with the ParentBased(root=AlwaysOn) sampler. This results in 100% of spans being sampled unless a calling application performs sampling. If this is too noisy or expensive, change the sampler.
Samplers built-in to the SDK and maintained by the community in opentelemetry-java-contrib:
Class | Artifact | Description |
---|---|---|
ParentBased | io.opentelemetry:opentelemetry-sdk:1.44.1 | Samples spans based on sampling status of the span’s parent. |
AlwaysOn | io.opentelemetry:opentelemetry-sdk:1.44.1 | Samples all spans. |
AlwaysOff | io.opentelemetry:opentelemetry-sdk:1.44.1 | Drops all spans. |
TraceIdRatioBased | io.opentelemetry:opentelemetry-sdk:1.44.1 | Samples spans based on a configurable ratio. |
JaegerRemoteSampler | io.opentelemetry:opentelemetry-sdk-extension-jaeger-remote-sampler:1.44.1 | Samples spans based on configuration from a remote server. |
LinksBasedSampler | io.opentelemetry.contrib:opentelemetry-samplers:1.38.0-alpha | Samples spans based on sampling status of the span’s links. |
RuleBasedRoutingSampler | io.opentelemetry.contrib:opentelemetry-samplers:1.38.0-alpha | Samples spans based on configurable rules. |
ConsistentSamplers | io.opentelemetry.contrib:opentelemetry-consistent-sampling:1.38.0-alpha | Various consistent sampler implementations as defined by probability sampling. |
The following code snippet demonstrates Sampler programmatic configuration:
package otel;
import io.opentelemetry.sdk.extension.trace.jaeger.sampler.JaegerRemoteSampler;
import io.opentelemetry.sdk.trace.samplers.Sampler;
import java.time.Duration;
public class SamplerConfig {
public static Sampler parentBasedSampler(Sampler root) {
return Sampler.parentBasedBuilder(root)
.setLocalParentNotSampled(Sampler.alwaysOff())
.setLocalParentSampled(Sampler.alwaysOn())
.setRemoteParentNotSampled(Sampler.alwaysOff())
.setRemoteParentSampled(Sampler.alwaysOn())
.build();
}
public static Sampler alwaysOn() {
return Sampler.alwaysOn();
}
public static Sampler alwaysOff() {
return Sampler.alwaysOff();
}
public static Sampler traceIdRatioBased(double ratio) {
return Sampler.traceIdRatioBased(ratio);
}
public static Sampler jaegerRemoteSampler() {
return JaegerRemoteSampler.builder()
.setInitialSampler(Sampler.alwaysOn())
.setEndpoint("http://endpoint")
.setPollingInterval(Duration.ofSeconds(60))
.setServiceName("my-service-name")
.build();
}
}
Implement the Sampler interface to provide your own custom sampling logic. For example:
package otel;
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.api.trace.SpanKind;
import io.opentelemetry.context.Context;
import io.opentelemetry.sdk.trace.data.LinkData;
import io.opentelemetry.sdk.trace.samplers.Sampler;
import io.opentelemetry.sdk.trace.samplers.SamplingResult;
import java.util.List;
public class CustomSampler implements Sampler {
@Override
public SamplingResult shouldSample(
Context parentContext,
String traceId,
String name,
SpanKind spanKind,
Attributes attributes,
List<LinkData> parentLinks) {
// Callback invoked when span is started, before any SpanProcessor is called.
// If the SamplingDecision is:
// - DROP: the span is dropped. A valid span context is created and SpanProcessor#onStart is
// still called, but no data is recorded and SpanProcessor#onEnd is not called.
// - RECORD_ONLY: the span is recorded but not sampled. Data is recorded to the span,
// SpanProcessor#onStart and SpanProcessor#onEnd are called, but the span's sampled status
// indicates it should not be exported out of process.
// - RECORD_AND_SAMPLE: the span is recorded and sampled. Data is recorded to the span,
// SpanProcessor#onStart and SpanProcessor#onEnd are called, and the span's sampled status
// indicates it should be exported out of process.
return SpanKind.SERVER == spanKind ? SamplingResult.recordAndSample() : SamplingResult.drop();
}
@Override
public String getDescription() {
// Return a description of the sampler.
return this.getClass().getSimpleName();
}
}
SpanProcessor
A SpanProcessor is a plugin extension interface with callbacks invoked when a span is started and ended. They are often paired with SpanExporters to export spans out of process, but have other applications such as data enrichment.
Span processors built-in to the SDK and maintained by the community in opentelemetry-java-contrib:
Class | Artifact | Description |
---|---|---|
BatchSpanProcessor | io.opentelemetry:opentelemetry-sdk:1.44.1 | Batches sampled spans and exports them via a configurable SpanExporter . |
SimpleSpanProcessor | io.opentelemetry:opentelemetry-sdk:1.44.1 | Exports each sampled span via a configurable SpanExporter . |
BaggageSpanProcessor | io.opentelemetry.contrib:opentelemetry-baggage-processor:1.38.0-alpha | Enriches spans with baggage. |
JfrSpanProcessor | io.opentelemetry.contrib:opentelemetry-jfr-events:1.38.0-alpha | Creates JFR events from spans. |
StackTraceSpanProcessor | io.opentelemetry.contrib:opentelemetry-span-stacktrace:1.38.0-alpha | Enriches select spans with stack trace data. |
The following code snippet demonstrates SpanProcessor programmatic configuration:
package otel;
import io.opentelemetry.sdk.trace.SpanProcessor;
import io.opentelemetry.sdk.trace.export.BatchSpanProcessor;
import io.opentelemetry.sdk.trace.export.SimpleSpanProcessor;
import io.opentelemetry.sdk.trace.export.SpanExporter;
import java.time.Duration;
public class SpanProcessorConfig {
public static SpanProcessor batchSpanProcessor(SpanExporter spanExporter) {
return BatchSpanProcessor.builder(spanExporter)
.setMaxQueueSize(2048)
.setExporterTimeout(Duration.ofSeconds(30))
.setScheduleDelay(Duration.ofSeconds(5))
.build();
}
public static SpanProcessor simpleSpanProcessor(SpanExporter spanExporter) {
return SimpleSpanProcessor.builder(spanExporter).build();
}
}
Implement the SpanProcessor interface to provide your own custom span processing logic. For example:
package otel;
import io.opentelemetry.context.Context;
import io.opentelemetry.sdk.common.CompletableResultCode;
import io.opentelemetry.sdk.trace.ReadWriteSpan;
import io.opentelemetry.sdk.trace.ReadableSpan;
import io.opentelemetry.sdk.trace.SpanProcessor;
public class CustomSpanProcessor implements SpanProcessor {
@Override
public void onStart(Context parentContext, ReadWriteSpan span) {
// Callback invoked when span is started.
// Enrich the record with a custom attribute.
span.setAttribute("my.custom.attribute", "hello world");
}
@Override
public boolean isStartRequired() {
// Indicate if onStart should be called.
return true;
}
@Override
public void onEnd(ReadableSpan span) {
// Callback invoked when span is ended.
}
@Override
public boolean isEndRequired() {
// Indicate if onEnd should be called.
return false;
}
@Override
public CompletableResultCode shutdown() {
// Optionally shutdown the processor and cleanup any resources.
return CompletableResultCode.ofSuccess();
}
@Override
public CompletableResultCode forceFlush() {
// Optionally process any records which have been queued up but not yet processed.
return CompletableResultCode.ofSuccess();
}
}
SpanExporter
A SpanExporter is a plugin extension interface responsible for exporting spans out of process. Rather than directly registering with SdkTracerProvider, they are paired with SpanProcessors (typically BatchSpanProcessor).
Span exporters built-in to the SDK and maintained by the community in opentelemetry-java-contrib:
Class | Artifact | Description |
---|---|---|
OtlpHttpSpanExporter [1] | io.opentelemetry:opentelemetry-exporter-otlp:1.44.1 | Exports spans via OTLP http/protobuf . |
OtlpGrpcSpanExporter [1] | io.opentelemetry:opentelemetry-exporter-otlp:1.44.1 | Exports spans via OTLP grpc . |
LoggingSpanExporter | io.opentelemetry:opentelemetry-exporter-logging:1.44.1 | Logs spans to JUL in a debugging format. |
OtlpJsonLoggingSpanExporter | io.opentelemetry:opentelemetry-exporter-logging-otlp:1.44.1 | Logs spans to JUL in an OTLP JSON encoding. |
OtlpStdoutSpanExporter | io.opentelemetry:opentelemetry-exporter-logging-otlp:1.44.1 | Logs spans to System.out in the OTLP JSON file encoding (experimental). |
ZipkinSpanExporter | io.opentelemetry:opentelemetry-exporter-zipkin:1.44.1 | Exports spans to Zipkin. |
InterceptableSpanExporter | io.opentelemetry.contrib:opentelemetry-processors:1.38.0-alpha | Passes spans to a flexible interceptor before exporting. |
KafkaSpanExporter | io.opentelemetry.contrib:opentelemetry-kafka-exporter:1.38.0-alpha | Exports spans by writing to a Kafka topic. |
[1]: See OTLP exporter sender for implementation details.
The following code snippet demonstrates SpanExporter programmatic configuration:
package otel;
import io.opentelemetry.exporter.logging.LoggingSpanExporter;
import io.opentelemetry.exporter.logging.otlp.OtlpJsonLoggingSpanExporter;
import io.opentelemetry.exporter.otlp.http.trace.OtlpHttpSpanExporter;
import io.opentelemetry.exporter.otlp.trace.OtlpGrpcSpanExporter;
import io.opentelemetry.sdk.trace.export.SpanExporter;
import java.time.Duration;
public class SpanExporterConfig {
public static SpanExporter otlpHttpSpanExporter(String endpoint) {
return OtlpHttpSpanExporter.builder()
.setEndpoint(endpoint)
.addHeader("api-key", "value")
.setTimeout(Duration.ofSeconds(10))
.build();
}
public static SpanExporter otlpGrpcSpanExporter(String endpoint) {
return OtlpGrpcSpanExporter.builder()
.setEndpoint(endpoint)
.addHeader("api-key", "value")
.setTimeout(Duration.ofSeconds(10))
.build();
}
public static SpanExporter loggingSpanExporter() {
return LoggingSpanExporter.create();
}
public static SpanExporter otlpJsonLoggingSpanExporter() {
return OtlpJsonLoggingSpanExporter.create();
}
}
Implement the SpanExporter interface to provide your own custom span export logic. For example:
package otel;
import io.opentelemetry.sdk.common.CompletableResultCode;
import io.opentelemetry.sdk.trace.data.SpanData;
import io.opentelemetry.sdk.trace.export.SpanExporter;
import java.util.Collection;
import java.util.logging.Level;
import java.util.logging.Logger;
public class CustomSpanExporter implements SpanExporter {
private static final Logger logger = Logger.getLogger(CustomSpanExporter.class.getName());
@Override
public CompletableResultCode export(Collection<SpanData> spans) {
// Export the records. Typically, records are sent out of process via some network protocol, but
// we simply log for illustrative purposes.
logger.log(Level.INFO, "Exporting spans");
spans.forEach(span -> logger.log(Level.INFO, "Span: " + span));
return CompletableResultCode.ofSuccess();
}
@Override
public CompletableResultCode flush() {
// Export any records which have been queued up but not yet exported.
logger.log(Level.INFO, "flushing");
return CompletableResultCode.ofSuccess();
}
@Override
public CompletableResultCode shutdown() {
// Shutdown the exporter and cleanup any resources.
logger.log(Level.INFO, "shutting down");
return CompletableResultCode.ofSuccess();
}
}
SpanLimits
SpanLimits defines constraints for the data captured by spans, including max attribute length, max number of attributes, and more.
The following code snippet demonstrates SpanLimits programmatic configuration:
package otel;
import io.opentelemetry.sdk.trace.SpanLimits;
public class SpanLimitsConfig {
public static SpanLimits spanLimits() {
return SpanLimits.builder()
.setMaxNumberOfAttributes(128)
.setMaxAttributeValueLength(1024)
.setMaxNumberOfLinks(128)
.setMaxNumberOfAttributesPerLink(128)
.setMaxNumberOfEvents(128)
.setMaxNumberOfAttributesPerEvent(128)
.build();
}
}
SdkMeterProvider
SdkMeterProvider is the SDK implementation of MeterProvider, and is responsible for handling metric telemetry produced by the API.
SdkMeterProvider is configured by the application owner, and consists of:
- Resource: The resource metrics are associated with.
- MetricReader: Reads the aggregated state of metrics.
- MetricExporter: Exports metrics out of process (in conjunction with an associated MetricReader).
- Views: Configures metric streams, including dropping unused metrics.
The following code snippet demonstrates SdkMeterProvider programmatic configuration:
package otel;
import io.opentelemetry.sdk.metrics.SdkMeterProvider;
import io.opentelemetry.sdk.metrics.SdkMeterProviderBuilder;
import io.opentelemetry.sdk.resources.Resource;
import java.util.List;
import java.util.Set;
public class SdkMeterProviderConfig {
public static SdkMeterProvider create(Resource resource) {
SdkMeterProviderBuilder builder =
SdkMeterProvider.builder()
.setResource(resource)
.registerMetricReader(
MetricReaderConfig.periodicMetricReader(
MetricExporterConfig.otlpHttpMetricExporter(
"http://localhost:4318/v1/metrics")));
ViewConfig.dropMetricView(builder, "some.custom.metric");
ViewConfig.histogramBucketBoundariesView(
builder, "http.server.request.duration", List.of(1.0, 5.0, 10.0));
ViewConfig.attributeFilterView(
builder, "http.client.request.duration", Set.of("http.request.method"));
return builder.build();
}
}
MetricReader
A MetricReader is a plugin extension interface which is responsible for reading aggregated metrics. They are often paired with MetricExporters to export metrics out of process, but may also be used to serve the metrics to external scrapers in pull-based protocols.
Metric readers built-in to the SDK and maintained by the community in opentelemetry-java-contrib:
Class | Artifact | Description |
---|---|---|
PeriodicMetricReader | io.opentelemetry:opentelemetry-sdk:1.44.1 | Reads metrics on a periodic basis and exports them via a configurable MetricExporter . |
PrometheusHttpServer | io.opentelemetry:opentelemetry-exporter-prometheus:1.44.1-alpha | Serves metrics on an HTTP server in various prometheus formats. |
The following code snippet demonstrates MetricReader programmatic configuration:
package otel;
import io.opentelemetry.exporter.prometheus.PrometheusHttpServer;
import io.opentelemetry.sdk.metrics.export.MetricExporter;
import io.opentelemetry.sdk.metrics.export.MetricReader;
import io.opentelemetry.sdk.metrics.export.PeriodicMetricReader;
import java.time.Duration;
public class MetricReaderConfig {
public static MetricReader periodicMetricReader(MetricExporter metricExporter) {
return PeriodicMetricReader.builder(metricExporter).setInterval(Duration.ofSeconds(60)).build();
}
public static MetricReader prometheusMetricReader() {
return PrometheusHttpServer.builder().setHost("localhost").setPort(9464).build();
}
}
Implement the MetricReader interface to provide your own custom metric reader logic. For example:
package otel;
import io.opentelemetry.sdk.common.CompletableResultCode;
import io.opentelemetry.sdk.common.export.MemoryMode;
import io.opentelemetry.sdk.metrics.Aggregation;
import io.opentelemetry.sdk.metrics.InstrumentType;
import io.opentelemetry.sdk.metrics.data.AggregationTemporality;
import io.opentelemetry.sdk.metrics.export.AggregationTemporalitySelector;
import io.opentelemetry.sdk.metrics.export.CollectionRegistration;
import io.opentelemetry.sdk.metrics.export.MetricReader;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;
import java.util.logging.Level;
import java.util.logging.Logger;
public class CustomMetricReader implements MetricReader {
private static final Logger logger = Logger.getLogger(CustomMetricReader.class.getName());
private final ScheduledExecutorService executorService = Executors.newScheduledThreadPool(1);
private final AtomicReference<CollectionRegistration> collectionRef =
new AtomicReference<>(CollectionRegistration.noop());
@Override
public void register(CollectionRegistration collectionRegistration) {
// Callback invoked when SdkMeterProvider is initialized, providing a handle to collect metrics.
collectionRef.set(collectionRegistration);
executorService.scheduleWithFixedDelay(this::collectMetrics, 0, 60, TimeUnit.SECONDS);
}
private void collectMetrics() {
// Collect metrics. Typically, records are sent out of process via some network protocol, but we
// simply log for illustrative purposes.
logger.log(Level.INFO, "Collecting metrics");
collectionRef
.get()
.collectAllMetrics()
.forEach(metric -> logger.log(Level.INFO, "Metric: " + metric));
}
@Override
public CompletableResultCode forceFlush() {
// Export any records which have been queued up but not yet exported.
logger.log(Level.INFO, "flushing");
return CompletableResultCode.ofSuccess();
}
@Override
public CompletableResultCode shutdown() {
// Shutdown the exporter and cleanup any resources.
logger.log(Level.INFO, "shutting down");
return CompletableResultCode.ofSuccess();
}
@Override
public AggregationTemporality getAggregationTemporality(InstrumentType instrumentType) {
// Specify the required aggregation temporality as a function of instrument type
return AggregationTemporalitySelector.deltaPreferred()
.getAggregationTemporality(instrumentType);
}
@Override
public MemoryMode getMemoryMode() {
// Optionally specify the memory mode, indicating whether metric records can be reused or must
// be immutable
return MemoryMode.REUSABLE_DATA;
}
@Override
public Aggregation getDefaultAggregation(InstrumentType instrumentType) {
// Optionally specify the default aggregation as a function of instrument kind
return Aggregation.defaultAggregation();
}
}
MetricExporter
A MetricExporter is a plugin extension interface responsible for exporting metrics out of process. Rather than directly registering with SdkMeterProvider, they are paired with PeriodicMetricReader.
Metric exporters built-in to the SDK and maintained by the community in opentelemetry-java-contrib:
Class | Artifact | Description |
---|---|---|
OtlpHttpMetricExporter [1] | io.opentelemetry:opentelemetry-exporter-otlp:1.44.1 | Exports metrics via OTLP http/protobuf . |
OtlpGrpcMetricExporter [1] | io.opentelemetry:opentelemetry-exporter-otlp:1.44.1 | Exports metrics via OTLP grpc . |
LoggingMetricExporter | io.opentelemetry:opentelemetry-exporter-logging:1.44.1 | Logs metrics to JUL in a debugging format. |
OtlpJsonLoggingMetricExporter | io.opentelemetry:opentelemetry-exporter-logging-otlp:1.44.1 | Logs metrics to JUL in the OTLP JSON encoding. |
OtlpStdoutMetricExporter | io.opentelemetry:opentelemetry-exporter-logging-otlp:1.44.1 | Logs metrics to System.out in the OTLP JSON file encoding (experimental). |
InterceptableMetricExporter | io.opentelemetry.contrib:opentelemetry-processors:1.38.0-alpha | Passes metrics to a flexible interceptor before exporting. |
[1]: See OTLP exporter sender for implementation details.
The following code snippet demonstrates MetricExporter programmatic configuration:
package otel;
import io.opentelemetry.exporter.logging.LoggingMetricExporter;
import io.opentelemetry.exporter.logging.otlp.OtlpJsonLoggingMetricExporter;
import io.opentelemetry.exporter.otlp.http.metrics.OtlpHttpMetricExporter;
import io.opentelemetry.exporter.otlp.metrics.OtlpGrpcMetricExporter;
import io.opentelemetry.sdk.metrics.export.MetricExporter;
import java.time.Duration;
public class MetricExporterConfig {
public static MetricExporter otlpHttpMetricExporter(String endpoint) {
return OtlpHttpMetricExporter.builder()
.setEndpoint(endpoint)
.addHeader("api-key", "value")
.setTimeout(Duration.ofSeconds(10))
.build();
}
public static MetricExporter otlpGrpcMetricExporter(String endpoint) {
return OtlpGrpcMetricExporter.builder()
.setEndpoint(endpoint)
.addHeader("api-key", "value")
.setTimeout(Duration.ofSeconds(10))
.build();
}
public static MetricExporter loggingMetricExporter() {
return LoggingMetricExporter.create();
}
public static MetricExporter otlpJsonLoggingMetricExporter() {
return OtlpJsonLoggingMetricExporter.create();
}
}
Implement the MetricExporter interface to provide your own custom metric export logic. For example:
package otel;
import io.opentelemetry.sdk.common.CompletableResultCode;
import io.opentelemetry.sdk.common.export.MemoryMode;
import io.opentelemetry.sdk.metrics.Aggregation;
import io.opentelemetry.sdk.metrics.InstrumentType;
import io.opentelemetry.sdk.metrics.data.AggregationTemporality;
import io.opentelemetry.sdk.metrics.data.MetricData;
import io.opentelemetry.sdk.metrics.export.AggregationTemporalitySelector;
import io.opentelemetry.sdk.metrics.export.MetricExporter;
import java.util.Collection;
import java.util.logging.Level;
import java.util.logging.Logger;
public class CustomMetricExporter implements MetricExporter {
private static final Logger logger = Logger.getLogger(CustomMetricExporter.class.getName());
@Override
public CompletableResultCode export(Collection<MetricData> metrics) {
// Export the records. Typically, records are sent out of process via some network protocol, but
// we simply log for illustrative purposes.
logger.log(Level.INFO, "Exporting metrics");
metrics.forEach(metric -> logger.log(Level.INFO, "Metric: " + metric));
return CompletableResultCode.ofSuccess();
}
@Override
public CompletableResultCode flush() {
// Export any records which have been queued up but not yet exported.
logger.log(Level.INFO, "flushing");
return CompletableResultCode.ofSuccess();
}
@Override
public CompletableResultCode shutdown() {
// Shutdown the exporter and cleanup any resources.
logger.log(Level.INFO, "shutting down");
return CompletableResultCode.ofSuccess();
}
@Override
public AggregationTemporality getAggregationTemporality(InstrumentType instrumentType) {
// Specify the required aggregation temporality as a function of instrument type
return AggregationTemporalitySelector.deltaPreferred()
.getAggregationTemporality(instrumentType);
}
@Override
public MemoryMode getMemoryMode() {
// Optionally specify the memory mode, indicating whether metric records can be reused or must
// be immutable
return MemoryMode.REUSABLE_DATA;
}
@Override
public Aggregation getDefaultAggregation(InstrumentType instrumentType) {
// Optionally specify the default aggregation as a function of instrument kind
return Aggregation.defaultAggregation();
}
}
Views
Views allow metric streams to be customized, including changing metric names, metric descriptions, metric aggregations (for example, histogram bucket boundaries), the set of attribute keys to retain, and more.
The following code snippet demonstrates View programmatic configuration:
package otel;
import io.opentelemetry.sdk.metrics.Aggregation;
import io.opentelemetry.sdk.metrics.InstrumentSelector;
import io.opentelemetry.sdk.metrics.SdkMeterProviderBuilder;
import io.opentelemetry.sdk.metrics.View;
import java.util.List;
import java.util.Set;
public class ViewConfig {
public static SdkMeterProviderBuilder dropMetricView(
SdkMeterProviderBuilder builder, String metricName) {
return builder.registerView(
InstrumentSelector.builder().setName(metricName).build(),
View.builder().setAggregation(Aggregation.drop()).build());
}
public static SdkMeterProviderBuilder histogramBucketBoundariesView(
SdkMeterProviderBuilder builder, String metricName, List<Double> bucketBoundaries) {
return builder.registerView(
InstrumentSelector.builder().setName(metricName).build(),
View.builder()
.setAggregation(Aggregation.explicitBucketHistogram(bucketBoundaries))
.build());
}
public static SdkMeterProviderBuilder attributeFilterView(
SdkMeterProviderBuilder builder, String metricName, Set<String> keysToRetain) {
return builder.registerView(
InstrumentSelector.builder().setName(metricName).build(),
View.builder().setAttributeFilter(keysToRetain).build());
}
}
SdkLoggerProvider
SdkLoggerProvider is the SDK implementation of LoggerProvider, and is responsible for handling log telemetry produced by the log bridge API.
SdkLoggerProvider is configured by the application owner, and consists of:
- Resource: The resource logs are associated with.
- LogRecordProcessor: Processes logs when they are emitted.
- LogRecordExporter: Exports logs out of process (in conjunction with an associated LogRecordProcessor).
- LogLimits: Controls the limits of data associated with logs.
The following code snippet demonstrates SdkLoggerProvider programmatic configuration:
package otel;
import io.opentelemetry.sdk.logs.SdkLoggerProvider;
import io.opentelemetry.sdk.resources.Resource;
public class SdkLoggerProviderConfig {
public static SdkLoggerProvider create(Resource resource) {
return SdkLoggerProvider.builder()
.setResource(resource)
.addLogRecordProcessor(
LogRecordProcessorConfig.batchLogRecordProcessor(
LogRecordExporterConfig.otlpHttpLogRecordExporter("http://localhost:4318/v1/logs")))
.setLogLimits(LogLimitsConfig::logLimits)
.build();
}
}
LogRecordProcessor
A LogRecordProcessor is a plugin extension interface with a callback invoked when a log is emitted. They are often paired with LogRecordExporters to export logs out of process, but have other applications such as data enrichment.
Log record processors built-in to the SDK and maintained by the community in opentelemetry-java-contrib:
Class | Artifact | Description |
---|---|---|
BatchLogRecordProcessor | io.opentelemetry:opentelemetry-sdk:1.44.1 | Batches log records and exports them via a configurable LogRecordExporter . |
SimpleLogRecordProcessor | io.opentelemetry:opentelemetry-sdk:1.44.1 | Exports each log record via a configurable LogRecordExporter . |
The following code snippet demonstrates LogRecordProcessor programmatic configuration:
package otel;
import io.opentelemetry.sdk.logs.LogRecordProcessor;
import io.opentelemetry.sdk.logs.export.BatchLogRecordProcessor;
import io.opentelemetry.sdk.logs.export.LogRecordExporter;
import io.opentelemetry.sdk.logs.export.SimpleLogRecordProcessor;
import java.time.Duration;
public class LogRecordProcessorConfig {
public static LogRecordProcessor batchLogRecordProcessor(LogRecordExporter logRecordExporter) {
return BatchLogRecordProcessor.builder(logRecordExporter)
.setMaxQueueSize(2048)
.setExporterTimeout(Duration.ofSeconds(30))
.setScheduleDelay(Duration.ofSeconds(1))
.build();
}
public static LogRecordProcessor simpleLogRecordProcessor(LogRecordExporter logRecordExporter) {
return SimpleLogRecordProcessor.create(logRecordExporter);
}
}
Implement the LogRecordProcessor interface to provide your own custom log processing logic. For example:
package otel;
import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.context.Context;
import io.opentelemetry.sdk.common.CompletableResultCode;
import io.opentelemetry.sdk.logs.LogRecordProcessor;
import io.opentelemetry.sdk.logs.ReadWriteLogRecord;
public class CustomLogRecordProcessor implements LogRecordProcessor {
@Override
public void onEmit(Context context, ReadWriteLogRecord logRecord) {
// Callback invoked when log record is emitted.
// Enrich the record with a custom attribute.
logRecord.setAttribute(AttributeKey.stringKey("my.custom.attribute"), "hello world");
}
@Override
public CompletableResultCode shutdown() {
// Optionally shutdown the processor and cleanup any resources.
return CompletableResultCode.ofSuccess();
}
@Override
public CompletableResultCode forceFlush() {
// Optionally process any records which have been queued up but not yet processed.
return CompletableResultCode.ofSuccess();
}
}
LogRecordExporter
A LogRecordExporter is a plugin extension interface responsible for exporting log records out of process. Rather than directly registering with SdkLoggerProvider, they are paired with LogRecordProcessors (typically BatchLogRecordProcessor).
Log record exporters built-in to the SDK and maintained by the community in opentelemetry-java-contrib:
Class | Artifact | Description |
---|---|---|
OtlpHttpLogRecordExporter [1] | io.opentelemetry:opentelemetry-exporter-otlp:1.44.1 | Exports log records via OTLP http/protobuf . |
OtlpGrpcLogRecordExporter [1] | io.opentelemetry:opentelemetry-exporter-otlp:1.44.1 | Exports log records via OTLP grpc . |
SystemOutLogRecordExporter | io.opentelemetry:opentelemetry-exporter-logging:1.44.1 | Logs log records to system out in a debugging format. |
OtlpJsonLoggingLogRecordExporter [2] | io.opentelemetry:opentelemetry-exporter-logging-otlp:1.44.1 | Logs log records to JUL in the OTLP JSON encoding. |
OtlpStdoutLogRecordExporter | io.opentelemetry:opentelemetry-exporter-logging-otlp:1.44.1 | Logs log records to System.out in the OTLP JSON file encoding (experimental). |
InterceptableLogRecordExporter | io.opentelemetry.contrib:opentelemetry-processors:1.38.0-alpha | Passes log records to a flexible interceptor before exporting. |
[1]: See OTLP exporter sender for implementation details.
[2]: OtlpJsonLoggingLogRecordExporter logs to JUL, and may cause infinite loops (i.e. JUL -> SLF4J -> Logback -> OpenTelemetry Appender -> OpenTelemetry Log SDK -> JUL) if not carefully configured.
The following code snippet demonstrates LogRecordExporter programmatic configuration:
package otel;
import io.opentelemetry.exporter.logging.SystemOutLogRecordExporter;
import io.opentelemetry.exporter.logging.otlp.OtlpJsonLoggingLogRecordExporter;
import io.opentelemetry.exporter.otlp.http.logs.OtlpHttpLogRecordExporter;
import io.opentelemetry.exporter.otlp.logs.OtlpGrpcLogRecordExporter;
import io.opentelemetry.sdk.logs.export.LogRecordExporter;
import java.time.Duration;
public class LogRecordExporterConfig {
public static LogRecordExporter otlpHttpLogRecordExporter(String endpoint) {
return OtlpHttpLogRecordExporter.builder()
.setEndpoint(endpoint)
.addHeader("api-key", "value")
.setTimeout(Duration.ofSeconds(10))
.build();
}
public static LogRecordExporter otlpGrpcLogRecordExporter(String endpoint) {
return OtlpGrpcLogRecordExporter.builder()
.setEndpoint(endpoint)
.addHeader("api-key", "value")
.setTimeout(Duration.ofSeconds(10))
.build();
}
public static LogRecordExporter systemOutLogRecordExporter() {
return SystemOutLogRecordExporter.create();
}
public static LogRecordExporter otlpJsonLoggingLogRecordExporter() {
return OtlpJsonLoggingLogRecordExporter.create();
}
}
Implement the LogRecordExporter interface to provide your own custom log record export logic. For example:
package otel;
import io.opentelemetry.sdk.common.CompletableResultCode;
import io.opentelemetry.sdk.logs.data.LogRecordData;
import io.opentelemetry.sdk.logs.export.LogRecordExporter;
import java.util.Collection;
import java.util.logging.Level;
import java.util.logging.Logger;
public class CustomLogRecordExporter implements LogRecordExporter {
private static final Logger logger = Logger.getLogger(CustomLogRecordExporter.class.getName());
@Override
public CompletableResultCode export(Collection<LogRecordData> logs) {
// Export the records. Typically, records are sent out of process via some network protocol, but
// we simply log for illustrative purposes.
System.out.println("Exporting logs");
logs.forEach(log -> System.out.println("log record: " + log));
return CompletableResultCode.ofSuccess();
}
@Override
public CompletableResultCode flush() {
// Export any records which have been queued up but not yet exported.
logger.log(Level.INFO, "flushing");
return CompletableResultCode.ofSuccess();
}
@Override
public CompletableResultCode shutdown() {
// Shutdown the exporter and cleanup any resources.
logger.log(Level.INFO, "shutting down");
return CompletableResultCode.ofSuccess();
}
}
LogLimits
LogLimits defines constraints for the data captured by log records, including max attribute length, and max number of attributes.
The following code snippet demonstrates LogLimits programmatic configuration:
package otel;
import io.opentelemetry.sdk.logs.LogLimits;
public class LogLimitsConfig {
public static LogLimits logLimits() {
return LogLimits.builder()
.setMaxNumberOfAttributes(128)
.setMaxAttributeValueLength(1024)
.build();
}
}
TextMapPropagator
TextMapPropagator is a plugin extension interface responsible for propagating context across process boundaries in a text format.
TextMapPropagators built-in to the SDK and maintained by the community in opentelemetry-java-contrib:
Class | Artifact | Description |
---|---|---|
W3CTraceContextPropagator | io.opentelemetry:opentelemetry-api:1.44.1 | Propagates trace context using the W3C trace context propagation protocol. |
W3CBaggagePropagator | io.opentelemetry:opentelemetry-api:1.44.1 | Propagates baggage using the W3C baggage propagation protocol. |
MultiTextMapPropagator | io.opentelemetry:opentelemetry-context:1.44.1 | Composes multiple propagators. |
JaegerPropagator | io.opentelemetry:opentelemetry-extension-trace-propagators:1.44.1 | Propagates trace context using the Jaeger propagation protocol. |
B3Propagator | io.opentelemetry:opentelemetry-extension-trace-propagators:1.44.1 | Propagates trace context using the B3 propagation protocol. |
OtTracePropagator | io.opentelemetry:opentelemetry-extension-trace-propagators:1.44.1 | Propagates trace context using the OpenTracing propagation protocol. |
PassThroughPropagator | io.opentelemetry:opentelemetry-api-incubator:1.44.1-alpha | Propagates a configurable set of fields without participating in telemetry. |
AwsXrayPropagator | io.opentelemetry.contrib:opentelemetry-aws-xray-propagator:1.38.0-alpha | Propagates trace context using the AWS X-Ray propagation protocol. |
AwsXrayLambdaPropagator | io.opentelemetry.contrib:opentelemetry-aws-xray-propagator:1.38.0-alpha | Propagates trace context using environment variables and the AWS X-Ray propagation protocol. |
The following code snippet demonstrates TextMapPropagator programmatic configuration:
package otel;
import io.opentelemetry.api.baggage.propagation.W3CBaggagePropagator;
import io.opentelemetry.api.trace.propagation.W3CTraceContextPropagator;
import io.opentelemetry.context.propagation.ContextPropagators;
import io.opentelemetry.context.propagation.TextMapPropagator;
public class ContextPropagatorsConfig {
public static ContextPropagators create() {
return ContextPropagators.create(
TextMapPropagator.composite(
W3CTraceContextPropagator.getInstance(), W3CBaggagePropagator.getInstance()));
}
}
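To show how a configured propagator is typically used, the following hypothetical snippet (class and member names are invented) injects the current context into a map carrier and extracts it again, as HTTP client and server instrumentation would with request headers:

```java
package otel;

import io.opentelemetry.context.Context;
import io.opentelemetry.context.Scope;
import io.opentelemetry.context.propagation.ContextPropagators;
import io.opentelemetry.context.propagation.TextMapGetter;
import io.opentelemetry.context.propagation.TextMapSetter;
import java.util.HashMap;
import java.util.Map;

public class ContextPropagationUsage {

  private static final TextMapSetter<Map<String, String>> SETTER = Map::put;

  private static final TextMapGetter<Map<String, String>> GETTER =
      new TextMapGetter<Map<String, String>>() {
        @Override
        public Iterable<String> keys(Map<String, String> carrier) {
          return carrier.keySet();
        }

        @Override
        public String get(Map<String, String> carrier, String key) {
          return carrier == null ? null : carrier.get(key);
        }
      };

  public static void demo(ContextPropagators propagators) {
    // Client side: inject the current context (e.g. active span and baggage) into a
    // carrier such as outgoing HTTP headers. A map stands in for the headers here.
    Map<String, String> carrier = new HashMap<>();
    propagators.getTextMapPropagator().inject(Context.current(), carrier, SETTER);

    // Server side: extract the context from the incoming carrier and make it current
    // while handling the request.
    Context extracted =
        propagators.getTextMapPropagator().extract(Context.current(), carrier, GETTER);
    try (Scope scope = extracted.makeCurrent()) {
      // ... handle the request with the remote context as parent ...
    }
  }
}
```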
Implement the TextMapPropagator interface to provide your own custom propagator logic. For example:
package otel;
import io.opentelemetry.context.Context;
import io.opentelemetry.context.propagation.TextMapGetter;
import io.opentelemetry.context.propagation.TextMapPropagator;
import io.opentelemetry.context.propagation.TextMapSetter;
import java.util.Collection;
import java.util.Collections;
public class CustomTextMapPropagator implements TextMapPropagator {
@Override
public Collection<String> fields() {
// Return fields used for propagation. See W3CTraceContextPropagator for reference
// implementation.
return Collections.emptyList();
}
@Override
public <C> void inject(Context context, C carrier, TextMapSetter<C> setter) {
// Inject context. See W3CTraceContextPropagator for reference implementation.
}
@Override
public <C> Context extract(Context context, C carrier, TextMapGetter<C> getter) {
// Extract context. See W3CTraceContextPropagator for reference implementation.
return context;
}
}
Appendix
Internal logging
SDK components log a variety of information to java.util.logging, at different log levels and using logger names based on the fully qualified class name of the relevant component.
Log messages are handled by the root handler in your application. If you have not installed a custom root handler, logs of level INFO or higher are sent to the console by default.
You may want to change the behavior of the logger for OpenTelemetry. For example, you can reduce the logging level to output additional information when debugging, increase the level for a particular class to ignore errors coming from that class, or install a custom handler or filter to run custom code whenever OpenTelemetry logs a particular message. No detailed list of logger names and log information is maintained. However, all OpenTelemetry API, SDK, contrib and instrumentation components share the same io.opentelemetry.* package prefix. It can be useful to enable finer-grained logs for all of io.opentelemetry.*, inspect the output, and narrow down to packages or FQCNs of interest.
For example:
## Turn off all OpenTelemetry logging
io.opentelemetry.level = OFF
## Turn off logging for just the BatchSpanProcessor
io.opentelemetry.sdk.trace.export.BatchSpanProcessor.level = OFF
## Log "FINE" messages for help in debugging
io.opentelemetry.level = FINE
## Sets the default ConsoleHandler's logger's level
## Note this impacts the logging outside of OpenTelemetry as well
java.util.logging.ConsoleHandler.level = FINE
For more fine-grained control and special case handling, custom handlers and filters can be specified with code.
// Custom filter which does not log errors which come from the export
import java.util.logging.LogRecord;
public class IgnoreExportErrorsFilter implements java.util.logging.Filter {
public boolean isLoggable(LogRecord record) {
return !record.getMessage().contains("Exception thrown by the export");
}
}
## Registering the custom filter on the BatchSpanProcessor
io.opentelemetry.sdk.trace.export.BatchSpanProcessor = io.opentelemetry.extension.logging.IgnoreExportErrorsFilter
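The same configuration can also be applied programmatically. The following is a hypothetical sketch (the class name is invented, and it assumes the IgnoreExportErrorsFilter shown above is available in the same package):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class OpenTelemetryLoggingConfig {

  // Hold strong references: java.util.logging only keeps weak references to loggers,
  // so configuration applied to an unreferenced logger can be lost.
  private static final Logger BATCH_SPAN_PROCESSOR_LOGGER =
      Logger.getLogger("io.opentelemetry.sdk.trace.export.BatchSpanProcessor");
  private static final Logger OPENTELEMETRY_LOGGER = Logger.getLogger("io.opentelemetry");

  public static void configure() {
    // Install the custom filter from the example above on the BatchSpanProcessor logger.
    BATCH_SPAN_PROCESSOR_LOGGER.setFilter(new IgnoreExportErrorsFilter());

    // Raise verbosity for all OpenTelemetry components while debugging. Note that the
    // console handler's level must also permit FINE messages for them to appear.
    OPENTELEMETRY_LOGGER.setLevel(Level.FINE);
  }
}
```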
OTLP exporter senders
The span exporter, metric exporter, and log record exporter sections describe OTLP exporters of the form:
- OtlpHttp{Signal}Exporters export data via OTLP http/protobuf.
- OtlpGrpc{Signal}Exporters export data via OTLP grpc.
The exporters for all signals are available via io.opentelemetry:opentelemetry-exporter-otlp:1.44.1.
Internally, these exporters depend on various client libraries to execute HTTP and gRPC requests. There is no single HTTP / gRPC client library which satisfies all use cases in the Java ecosystem:
- Java 11+ brings the built-in java.net.http.HttpClient, but opentelemetry-java needs to support Java 8+ users, and it can't be used to export via gRPC because there is no support for trailer headers.
- OkHttp provides a powerful HTTP client with support for trailer headers, but depends on the Kotlin standard library.
- grpc-java provides its own ManagedChannel abstraction with various transport implementations, but is not suitable for http/protobuf.
In order to accommodate various use cases, opentelemetry-exporter-otlp uses an internal "sender" abstraction, with a variety of implementations to reflect application constraints. To choose another implementation, exclude the io.opentelemetry:opentelemetry-exporter-sender-okhttp default dependency, and add a dependency on the alternative.
Artifact | Description | OTLP Protocols | Default |
---|---|---|---|
io.opentelemetry:opentelemetry-exporter-sender-okhttp:1.44.1 | OkHttp based implementation. | grpc , http/protobuf | Yes |
io.opentelemetry:opentelemetry-exporter-sender-jdk:1.44.1 | Java 11+ java.net.http.HttpClient based implementation. | http/protobuf | No |
io.opentelemetry:opentelemetry-exporter-sender-grpc-managed-channel:1.44.1 [1] | grpc-java ManagedChannel based implementation. | grpc | No |
[1]: In order to use opentelemetry-exporter-sender-grpc-managed-channel, you must also add a dependency on a gRPC transport implementation.
Testing
TODO: document tools available for testing the SDK