
[Bug]: Operation "all" doesn't return all the traces #2426

Open
MattiaRigon opened this issue Sep 6, 2024 · 2 comments

@MattiaRigon

What happened?

Using the Jaeger UI, searching for all traces by selecting "all" in the Operation field does not return all of the traces. However, if I set the Operation field to a specific value such as GET, I can see traces that should also appear under the "all" option.

As the screenshots show, filtering by "all" returns just 3 traces (and 0 GET requests), while explicitly selecting GET correctly shows all the GET traces.
(screenshot: all_screen)

(screenshot: get_screen)

In addition, I looked at the Cassandra traces table, where all the traces are stored, and ran a simple SELECT DISTINCT on trace_id: there are many more distinct trace_ids (more than 100) than the UI returns.
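For reference, the distinct-trace check described above can be run from cqlsh; the keyspace and table names assume the default schema created by the compose file below:

```cql
-- trace_id is the partition key of the traces table, so DISTINCT is allowed.
SELECT DISTINCT trace_id FROM jaeger_v1_dc1.traces;
```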

Steps to reproduce

  1. Check whether filtering with Operation set to "all" really returns all the traces.
  2. Check whether filtering with Operation set to another value shows traces that are not present in the "all" case.

Expected behavior

I expect that selecting "all" in the Operation field lets me see all the traces produced in the selected time interval, not just a few.

Relevant log output

No response

Screenshot

No response

Additional context

No response

Jaeger backend version

No response

SDK

No response

Pipeline

No response

Storage backend

cassandra

Operating system

Windows(WSL2)

Deployment model

docker compose

Deployment configs

docker-compose.yml

services:
  jaeger-collector:
    image: jaegertracing/jaeger-collector:1.57.0
    command:
      - "--cassandra.keyspace=jaeger_v1_dc1"
      - "--cassandra.servers=cassandra"
      - "--collector.otlp.enabled=true"
    environment:
      - SAMPLING_CONFIG_TYPE=adaptive
    ports:
      - "4317" # accept OpenTelemetry Protocol (OTLP) over gRPC
    restart: on-failure
    depends_on:
      - cassandra-schema
 
  cassandra:
    image: cassandra:4.1.4
    volumes:
      - ./cassandra_data:/var/lib/cassandra
 
  cassandra-schema:
    image: jaegertracing/jaeger-cassandra-schema:1.57.0
    depends_on:
      - cassandra
 
  jaeger-query:
    image: jaegertracing/jaeger-query:1.57.0
    environment:
      - METRICS_STORAGE_TYPE=prometheus
    command:
      - "--cassandra.keyspace=jaeger_v1_dc1"
      - "--cassandra.servers=cassandra"
      - "--prometheus.query.support-spanmetrics-connector=true"
      - "--prometheus.server-url=http://prometheus:9090"
      - "--prometheus.query.normalize-duration=true"
      - "--prometheus.query.normalize-calls=true"
    ports:
      - "16686:16686"
      - "16687:16687"
    restart: on-failure
    depends_on:
      - cassandra-schema
 
  otel-collector:
    image: otel/opentelemetry-collector-contrib:0.100.0
    command:
      - "--config=/conf/config.yaml"
    volumes:
      - ./conf/otel-collector-config.yaml:/conf/config.yaml
    ports:
      - 4317:4317 # OTLP gRPC receiver
      - "8889" # Prometheus metrics exporter

    restart: on-failure
    depends_on:
      - jaeger-collector

  prometheus:
    image: prom/prometheus:v2.51.2
    ports:
      - "9090:9090"
    volumes:
      - ./etc/prometheus.yml:/workspace/prometheus.yml
    command:
      - --config.file=/workspace/prometheus.yml

otel-collector-config.yaml

receivers:
  otlp:
    protocols:
      grpc:
      http:
 
exporters:
  otlp:
    endpoint: jaeger-collector:4317
    tls:
      insecure: true
  otlphttp:
    endpoint: http://jaeger-collector:4318
    tls:
      insecure: true
  prometheus:
    endpoint: "0.0.0.0:8889"
  
processors:
  tail_sampling:
    decision_wait: 10s
    num_traces: 100
    policies:
      # ALWAYS SAMPLE
      - name: always-sample
        type: always_sample
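One thing that may be worth ruling out (an assumption about a possible cause, not a confirmed fix): num_traces in the tail_sampling processor caps how many traces the collector keeps in memory while waiting out decision_wait, and with a value as low as 100, traces evicted from that buffer can be dropped before a sampling decision is made. A sketch of a more generous setting, with an illustrative value:

```yaml
processors:
  tail_sampling:
    decision_wait: 10s
    # Cap on traces buffered while awaiting a sampling decision; keep it
    # well above the trace volume expected per decision_wait window.
    num_traces: 50000
    policies:
      - name: always-sample
        type: always_sample
```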
MattiaRigon added the bug label Sep 6, 2024
@yurishkuro (Member)

Did you try increasing the Limit number?

MattiaRigon closed this as not planned Sep 10, 2024
MattiaRigon reopened this Sep 10, 2024
@MattiaRigon (Author)

Yes. I have also tried using the batch processor without limits, but I get the same issue.
Thanks
