
otelcol.exporter.otlp.traces_service allow config for write_buffer_size #723

Open
radupopa369 opened this issue Sep 7, 2024 · 2 comments · May be fixed by #742

Comments

@radupopa369

I am getting this error:

ts=2024-09-05T13:36:46.474811899Z level=error msg="Exporting failed. Dropping data." component_path=/ component_id=otelcol.exporter.otlp.traces_service error="not retryable error: Permanent error: rpc error: code = ResourceExhausted desc = grpc: received message after decompression larger than max (7050547 vs. 4194304)" dropped_items=6466

I looked through the chart and there is no way to configure this in the generated config.alloy.

The template file for this section, https://github.com/grafana/k8s-monitoring-helm/blob/main/charts/k8s-monitoring/templates/alloy_config/_traces_service.alloy.txt, needs a way to configure write_buffer_size. Reference in the docs: https://grafana.com/docs/alloy/latest/reference/components/otelcol/otelcol.exporter.otlp/
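
For reference, what I would like the chart to be able to render in config.alloy is something along these lines (the endpoint and buffer size below are illustrative placeholders, not what the chart currently produces):

otelcol.exporter.otlp "traces_service" {
  client {
    // placeholder endpoint; in practice the chart fills this in from values
    endpoint          = "<tempo-endpoint>:4317"
    // the knob this issue asks to expose; per the docs it defaults to "512KiB"
    write_buffer_size = "1MiB"
  }
}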

@Raboo

Raboo commented Sep 13, 2024

I ran into the same problem.
It can't be write_buffer_size, as that defaults to 512KiB while the limit being hit is 4MiB.

I tried increasing grpc_server_max_recv_msg_size and grpc_server_max_send_msg_size in Tempo to 8MiB. Setting limits on the client so it doesn't send payloads that are too big is probably better than just raising the limits server side, but I don't know which client-side/Alloy setting that would be.
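
This is roughly the shape of what I changed in Tempo's server config block (sizes are in bytes, 8MiB shown; values are just what I tried):

server:
  # gRPC server limits, raised from the 4MiB default to 8MiB
  grpc_server_max_recv_msg_size: 8388608
  grpc_server_max_send_msg_size: 8388608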

Now I seem to get fewer errors, but I am still getting some.

However, I do feel that the correct setting to adjust on the server side would be the one described at https://grafana.com/docs/tempo/latest/troubleshooting/response-too-large/#ingestion:

distributor:
  receivers:
    otlp:
      grpc:
        max_recv_msg_size_mib: <size>

However, this setting can't easily be changed using the tempo-distributed Helm chart, AFAIK.

petewall linked a pull request Sep 19, 2024 that will close this issue
@petewall
Collaborator

Making a PR for this: #742
