
kubernetes - OpenTelemetry export to Prometheus – Unsupported compression: snappy (prometheusremotewrite) - Stack Overflow


The .NET OpenTelemetry.AutoInstrumentation package fails to export metrics to Prometheus via an OpenTelemetry Collector (otel/opentelemetry-collector-contrib), due to snappy compression.

The Prometheus OTLP endpoint /api/v1/otlp/v1/metrics returns 400 Bad Request:

unsupported compression: snappy. Only "gzip" or no compression supported

Full logs of OpenTelemetry Collector metrics requests:

  1. INFO debug
  2. ERROR prometheusremotewrite
2025-01-19T15:16:36.519Z    info    Metrics {"kind": "exporter", "data_type": "metrics", "name": "debug", "resource metrics": 1, "metrics": 36, "data points": 150}
2025-01-19T15:16:36.526Z    error   internal/queue_sender.go:103    Exporting failed. Dropping data.    {"kind": "exporter", "data_type": "metrics", "name": "prometheusremotewrite", "error": "Permanent error: Permanent error: Permanent error: remote write returned HTTP status 400 Bad Request; err = %!w(<nil>): unsupported compression: snappy. Only \"gzip\" or no compression supported\n", "dropped_items": 150}
go.opentelemetry.io/collector/exporter/exporterhelper/internal.NewQueueSender.func1
    go.opentelemetry.io/collector/[email protected]/exporterhelper/internal/queue_sender.go:103
go.opentelemetry.io/collector/exporter/internal/queue.(*Consumers[...]).Start.func1
    go.opentelemetry.io/collector/[email protected]/internal/queue/consumers.go:43

kube-prometheus-stack includes configuration to enable the OTLP endpoint /api/v1/otlp/v1/metrics:

prometheus:
  prometheusSpec:
    additionalArgs:
      - name: web.enable-otlp-receiver
        value: ""

OpenTelemetry Collector configuration:

# https://opentelemetry.io/docs/languages/js/exporters/#prometheus
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: ${NAME}
  namespace: ${NAMESPACE}
spec:
  config:
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            endpoint: 0.0.0.0:4318

    # https://github.com/open-telemetry/opentelemetry-helm-charts/issues/23#issuecomment-910885716
    processors:
      batch: {}
      memory_limiter:
        check_interval: 5s
        limit_percentage: 80
        spike_limit_percentage: 25

    exporters:
      debug:
        verbosity: basic
      prometheusremotewrite:
        endpoint: http://${PROMETHEUS_SERVICE}.${PROMETHEUS_NAMESPACE}.svc.cluster.local:9090/api/v1/otlp/v1/metrics
        tls:
          insecure: true

    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [memory_limiter, batch]
          exporters: [debug]
        metrics:
          receivers: [otlp]
          processors: [memory_limiter, batch]
          exporters: [debug, prometheusremotewrite]
        logs:
          receivers: [otlp]
          processors: [memory_limiter, batch]
          exporters: [debug]

OpenTelemetry Instrumentation to automatically consume .NET metrics:

apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: ${NAME}
  namespace: ${NAMESPACE}
spec:
  exporter:
    endpoint: http://${COLLECTOR_SERVICE}.${COLLECTOR_NAMESPACE}.svc.cluster.local:4318
  propagators:
    - tracecontext
    - baggage
  sampler:
    type: parentbased_traceidratio
    argument: "1"

The Kubernetes Deployment of the .NET container has a pod template metadata annotation (annotation values must be strings, so the value is quoted):

instrumentation.opentelemetry.io/inject-dotnet: "true"
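For context, this annotation goes on the pod template inside the Deployment, not on the Deployment's own metadata. A minimal sketch, with placeholder names and image:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-dotnet-app            # placeholder name
spec:
  selector:
    matchLabels:
      app: my-dotnet-app
  template:
    metadata:
      annotations:
        # Annotation values must be strings, so "true" is quoted.
        instrumentation.opentelemetry.io/inject-dotnet: "true"
      labels:
        app: my-dotnet-app
    spec:
      containers:
        - name: app
          image: registry.example.com/my-dotnet-app:latest   # placeholder image
```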

asked Jan 19 at 15:27 by MischievousChild (edited Jan 19 at 15:39)

1 Answer

In my Collector configuration I was using the Prometheus Remote Write exporter, which pushes metrics via the remote-write protocol (snappy-compressed, as that protocol requires), but pointing it at the Prometheus OTLP endpoint, which only accepts gzip or uncompressed payloads. That mismatch is what produces the 400 Bad Request. The fix is to pick one protocol and use its matching endpoint and exporter:

  1. To push metrics via remote write, keep the prometheusremotewrite exporter and change the endpoint to http://${PROMETHEUS_SERVICE}.${PROMETHEUS_NAMESPACE}.svc.cluster.local:9090/api/v1/write
  2. To push metrics via OTLP, replace the prometheusremotewrite exporter with an OTLP HTTP exporter targeting the OTLP endpoint
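A sketch of option 2, assuming the Collector's otlphttp exporter; here metrics_endpoint sets the full per-signal path so it matches the Prometheus OTLP receiver, and gzip compression (the exporter's default) is stated explicitly:

```yaml
exporters:
  otlphttp:
    # Full metrics path; without this, the exporter appends /v1/metrics to `endpoint`.
    metrics_endpoint: http://${PROMETHEUS_SERVICE}.${PROMETHEUS_NAMESPACE}.svc.cluster.local:9090/api/v1/otlp/v1/metrics
    compression: gzip   # accepted by the Prometheus OTLP receiver
    tls:
      insecure: true

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [debug, otlphttp]
```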

✅ PRW solution:

prometheus:
  prometheusSpec:
    enableRemoteWriteReceiver: true
    enableFeatures:
      - remote-write-receiver

and

prometheusremotewrite:
  endpoint: http://${PROMETHEUS_SERVICE}.${PROMETHEUS_NAMESPACE}.svc.cluster.local:9090/api/v1/write
  tls:
    insecure: true
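Optionally, if OpenTelemetry resource attributes (service name, namespace, and so on) should show up as Prometheus labels when going through remote write, the prometheusremotewrite exporter can promote them. A sketch, with the caveat that this increases series cardinality:

```yaml
prometheusremotewrite:
  endpoint: http://${PROMETHEUS_SERVICE}.${PROMETHEUS_NAMESPACE}.svc.cluster.local:9090/api/v1/write
  tls:
    insecure: true
  # Copies resource attributes onto each series as labels.
  resource_to_telemetry_conversion:
    enabled: true
```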
