Feature: Added Support accepting OTLP via Kafka #4049
```go
@@ -19,9 +19,11 @@ import (
	"github.com/gogo/protobuf/jsonpb"
	"github.com/gogo/protobuf/proto"
	"go.opentelemetry.io/collector/pdata/ptrace/ptraceotlp"

	"github.com/jaegertracing/jaeger/model"
	"github.com/jaegertracing/jaeger/model/converter/thrift/zipkin"
	otlp2jaeger "github.com/open-telemetry/opentelemetry-collector-contrib/pkg/translator/jaeger"
)

// Unmarshaller decodes a byte array to a span
```
```go
@@ -79,3 +81,43 @@ func (h *ZipkinThriftUnmarshaller) Unmarshal(msg []byte) (*model.Span, error) {
	}
	return mSpans[0], err
}

type OtlpJSONUnmarshaller struct{}

func NewOtlpJSONUnmarshaller() *OtlpJSONUnmarshaller {
	return &OtlpJSONUnmarshaller{}
}

func (OtlpJSONUnmarshaller) Unmarshal(buf []byte) (*model.Span, error) {
	req := ptraceotlp.NewExportRequest()
	err := req.UnmarshalJSON(buf)
	if err != nil {
		return nil, err
	}

	batch, err := otlp2jaeger.ProtoFromTraces(req.Traces())
	if err != nil {
		return nil, err
	}
	return batch[0].Spans[0], nil
}
```

Review discussion on `return batch[0].Spans[0], nil`:

yurishkuro: This looks like a problem. Does the OTLP Kafka exporter allow writing batches of spans as a single Kafka message?

Contributor: Can you please help me fix this?

yurishkuro: It may be difficult to "fix" if by that you mean implementing support for receiving batches. The current consumer in Jaeger was designed to receive one span per message. I suggest looking into whether the OTEL Collector can be configured to send one span per message (the Kafka exporter was introduced in open-telemetry/opentelemetry-collector#1439), probably with some pipeline configuration. We would want to reference that in the Jaeger docs to make it clear that batch-per-message is not supported. And we should log an error in the code above if more than one span is found in the batch.

Contributor: @yurishkuro I was busy with my college exams; I will get on it right away.

yurishkuro: I've looked into this a bit. I don't think OTEL has an option right now to split a batch into multiple Kafka messages, and I suggest that's what needs to happen. While it may be less efficient, going the other way (i.e. supporting batches in Jaeger) would break an invariant that we currently maintain: all spans with a given trace ID end up in the same Kafka partition. The current OTEL Collector code won't be able to maintain that invariant.

Contributor: So can you please help me understand what I have to do?

yurishkuro: We need to make a change to the OTEL Kafka exporter to support a config flag that would force de-batching of the spans into one span per message. Technically, we could also implement batch handling in the ingester. There are two ways of doing that:

Contributor: I am trying the first of your suggestions, but I don't have the authority to commit to this branch. Or I can create a new PR.

yurishkuro: A new PR is fine. You can cherry-pick the commits from the original PR to give credit to the original author.
```go
type OtlpProtoUnmarshaller struct{}

func NewOtlpProtoUnmarshaller() *OtlpProtoUnmarshaller {
	return &OtlpProtoUnmarshaller{}
}

func (h *OtlpProtoUnmarshaller) Unmarshal(buf []byte) (*model.Span, error) {
	req := ptraceotlp.NewExportRequest()
	err := req.UnmarshalProto(buf)
	if err != nil {
		return nil, err
	}

	batch, err := otlp2jaeger.ProtoFromTraces(req.Traces())
	if err != nil {
		return nil, err
	}
	return batch[0].Spans[0], nil
}
```
Review comment: add to L69, so that they will appear in `-h` output.