
quarkus.otel.traces.suppress-non-application-uris not working with quarkus.management.enabled #36510

Open
allevimi-nttdata opened this issue Oct 16, 2023 · 10 comments · May be fixed by #45300
Labels
area/tracing kind/bug Something isn't working

Comments

@allevimi-nttdata

Describe the bug

The default behaviour of quarkus.otel.traces.suppress-non-application-uris is to suppress trace collection for non-application URIs. Everything is fine with the standard configuration, but if I enable the management interface through quarkus.management.enabled=true, suppression fails.

Expected behavior

With quarkus.management.enabled set to true, the health check is redirected to 0.0.0.0:9000/q/health and trace collection continues to be suppressed.

Actual behavior

OTel trace collection is not suppressed and it seems that quarkus.otel.traces.suppress-non-application-uris is being ignored.

How to Reproduce?

         io.quarkus.platform:quarkus-bom:pom:3.4.3 ✔

Extensions from io.quarkus.platform:quarkus-bom:
         io.quarkus:quarkus-resteasy-reactive-jackson ✔
         io.quarkus:quarkus-smallrye-health ✔
         io.quarkus:quarkus-config-yaml ✔
         io.quarkus:quarkus-arc ✔
         io.quarkus:quarkus-rest-client-reactive-jackson ✔

 Extensions from unknown origin:
         io.quarkiverse.opentelemetry.exporter:quarkus-opentelemetry-exporter-gcp:2.0.0.Final

application.yaml

quarkus:
  otel:
    traces:
      suppress-non-application-uris: true
  management:
    enabled: true
  opentelemetry:
    tracer:
      exporter:
        gcp:
          enabled: false
        otlp:
          enabled: true
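
For reference, the same settings expressed as application.properties (a mechanical translation of the YAML above; no new keys are introduced):

quarkus.otel.traces.suppress-non-application-uris=true
quarkus.management.enabled=true
quarkus.opentelemetry.tracer.exporter.gcp.enabled=false
quarkus.opentelemetry.tracer.exporter.otlp.enabled=true
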
  1. Make a call to the health API: curl localhost:9000/q/health
  2. Check the OTel receiver (e.g. Jaeger)
  3. A trace for the health call is shown

Output of uname -a or ver

Darwin *** 23.0.0 Darwin Kernel Version 23.0.0: Fri Sep 15 14:41:43 PDT 2023; root:xnu-10002.1.13~1/RELEASE_ARM64_T6000 arm64

Output of java -version

openjdk 17.0.6 2023-01-17
OpenJDK Runtime Environment Temurin-17.0.6+10 (build 17.0.6+10)
OpenJDK 64-Bit Server VM Temurin-17.0.6+10 (build 17.0.6+10, mixed mode)

GraalVM version (if different from Java)

No response

Quarkus version or git rev

3.4.3

Build tool (ie. output of mvnw --version or gradlew --version)

Apache Maven 3.9.3

Additional information

No response

@quarkus-bot

quarkus-bot bot commented Oct 16, 2023

/cc @brunobat (opentelemetry,tracing), @radcortez (opentelemetry,tracing)

@cbos

cbos commented Oct 24, 2023

We face the same issue.

We have set these settings:

quarkus.management.enabled=true
quarkus.management.host=localhost
quarkus.management.port=9002
quarkus.management.root-path=/management

The requests to /management/health and /management/metrics are being recorded now, which produces a lot of unnecessary traces.
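
Until the suppression honours the management interface, one possible workaround is a custom sampler that drops management spans. This is only a sketch under assumptions not stated in this thread: that Quarkus picks up a CDI-produced io.opentelemetry.sdk.trace.samplers.Sampler bean, and that HTTP server span names start with the request path (/management here, matching quarkus.management.root-path above).

import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.api.trace.SpanKind;
import io.opentelemetry.context.Context;
import io.opentelemetry.sdk.trace.data.LinkData;
import io.opentelemetry.sdk.trace.samplers.Sampler;
import io.opentelemetry.sdk.trace.samplers.SamplingResult;
import jakarta.inject.Singleton;
import java.util.List;

@Singleton
public class DropManagementUrisSampler implements Sampler {

    // Assumed to match the quarkus.management.root-path configured above
    private static final String MANAGEMENT_ROOT = "/management";

    @Override
    public SamplingResult shouldSample(Context parentContext, String traceId, String name,
            SpanKind spanKind, Attributes attributes, List<LinkData> parentLinks) {
        // Assumption: HTTP server span names carry the request path/route
        if (spanKind == SpanKind.SERVER && name != null && name.startsWith(MANAGEMENT_ROOT)) {
            return SamplingResult.drop();
        }
        return SamplingResult.recordAndSample();
    }

    @Override
    public String getDescription() {
        return "drop-management-uris";
    }
}
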

@b3lix

b3lix commented Apr 25, 2024

I would like to work on this

@brunobat
Contributor

brunobat commented May 3, 2024

Ok @b3lix, go for it.

@nielsvhaldvb

What's the current status of this? Otherwise, I would like to give this a go.

@brunobat
Contributor

@b3lix, do you mind if @nielsvhaldvb works on this?

@b3lix

b3lix commented Jul 19, 2024

@brunobat @nielsvhaldvb hello, I dropped this effort, so yes go ahead please, and good luck.

@brunobat brunobat assigned nielsvhaldvb and unassigned b3lix Jul 22, 2024
@brunobat
Contributor

@nielsvhaldvb Do you still plan to work on this?

@rquinio1A
Contributor

I have a similar issue, with Quarkus 3.15.1 LTS and quarkus.management.enabled=true:

  • I can see traces being collected for the http://0.0.0.0:9000/metrics endpoint each time Prometheus polls the metrics.
    We're setting quarkus.micrometer.export.prometheus.path=/metrics to override the default path.

  • I don't see traces being collected for http://0.0.0.0:9000/q/health.
    But when the health check fails, I see an MDC traceId value in the log (using [traceId: %X{traceId}] in the log handler format), which would indicate some OpenTelemetry instrumentation is happening: <io.smallrye.health> [traceId: e35b0092f23fd6f05b867be6a5161c6f] SRHCK01001: Reporting health down status: {"status":"DOWN","checks":[{"name":"SmallRye Reactive Messaging - readiness check","status":"DOWN","data":{"receiver-channel":"[KO] - No partition assignments for channel receiver-channel","emitter-channel":"[OK]" }}]}
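
For context, the traceId shows up in the logs because of an MDC pattern like the following in the console handler (an assumed configuration, not quoted from the report above):

quarkus.log.console.format=%d{HH:mm:ss} %-5p [traceId: %X{traceId}] [%c{2.}] (%t) %s%e%n
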

gsmet added a commit to gsmet/quarkus that referenced this issue Dec 27, 2024
Management URLs were prefixed twice when absolute:
http://localhost:9000http://localhost:9000/q/health

This defeated the logic that strips the host when collecting suppressed URIs.

Fixes quarkusio#36510
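
A minimal sketch of the double-prefixing described in that commit message, using hypothetical variable names (this is not the actual Quarkus code):

// The management health URI is already absolute when the management interface is enabled
String host = "http://localhost:9000";
String healthUri = "http://localhost:9000/q/health";

// Prefixing unconditionally duplicates the host, so the suppression check no longer matches
String broken = host + healthUri; // "http://localhost:9000http://localhost:9000/q/health"

// Only prefix when the configured URI is relative
String fixed = healthUri.startsWith("http://") || healthUri.startsWith("https://")
        ? healthUri
        : host + healthUri;
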
@gsmet gsmet linked a pull request Dec 27, 2024 that will close this issue
@gsmet
Member

gsmet commented Dec 27, 2024

I created #45300 that should hopefully address this problem.
