Improving LiteLLM OTEL Logger #5348
codefromthecrypt started this conversation in General
What happened?
I work on the OpenTelemetry LLM semantic conventions SIG and evaluated the LiteLLM OTEL logger using the sample code below, against what the pending semantic conventions release 1.27.0 will define.
Note: I'm doing this unsolicited across the various Python instrumentations for OpenAI, so this is not a call-out that LiteLLM is notably different. I wanted to flag some drift so that, ideally, you'll be in a position to adjust once the release lands, or to clarify if conformance isn't a goal. You're welcome to join the #otel-llm-semconv-wg Slack channel and any SIG meetings if you find this relevant!
Sample code
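(The original snippet did not survive extraction. As a stand-in, here is a minimal sketch of the kind of request such an evaluation exercises: a chat completion sent through a LiteLLM proxy using the OpenAI Python client. The proxy address, model name, and prompt are assumptions, not the code from the original post.)

```python
# Hypothetical reconstruction, not the original sample: a minimal chat
# completion routed through a LiteLLM proxy, which the OTEL logger traces.
import openai

client = openai.OpenAI(
    base_url="http://localhost:4000",  # assumed LiteLLM proxy address
    api_key="sk-anything",             # the proxy enforces its own keys
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model alias
    messages=[{"role": "user", "content": "Say hello in one word."}],
)
print(response.choices[0].message.content)
```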
proxy config.yaml
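(The config body is likewise missing. A minimal sketch of a proxy config that enables the OTEL callback could look like the following; the model alias and key reference are placeholders.)

```yaml
# Minimal sketch: route one model and turn on the OTEL callback.
model_list:
  - model_name: gpt-4o-mini
    litellm_params:
      model: openai/gpt-4o-mini
      api_key: os.environ/OPENAI_API_KEY

litellm_settings:
  callbacks: ["otel"]
```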
Evaluation
The parent span (Span #0) and child span (Span #1) are not evaluated separately, as the conventions currently only give guidance for one type of LLM span.
Semantic evaluation on spans (a 1.27.0-aligned sketch follows this list):
compatible:
missing:
incompatible:
- gen_ai.prompt
- gen_ai.completion
not yet defined in the standard:
vendor specific:
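For comparison, here is a hedged sketch of a chat span hand-built with the OpenTelemetry Python SDK, using the attribute names the pending 1.27.0 conventions define; under those conventions, prompt and completion content move to events rather than flat gen_ai.prompt / gen_ai.completion span attributes. All values are illustrative.

```python
from opentelemetry import trace

tracer = trace.get_tracer("litellm")

# Span name follows the "{operation} {model}" convention; attribute names
# come from the pending 1.27.0 GenAI conventions, values are illustrative.
with tracer.start_as_current_span("chat gpt-4o-mini") as span:
    span.set_attribute("gen_ai.operation.name", "chat")
    span.set_attribute("gen_ai.system", "openai")
    span.set_attribute("gen_ai.request.model", "gpt-4o-mini")
    span.set_attribute("gen_ai.response.model", "gpt-4o-mini-2024-07-18")
    span.set_attribute("gen_ai.response.finish_reasons", ["stop"])
    span.set_attribute("gen_ai.usage.input_tokens", 9)
    span.set_attribute("gen_ai.usage.output_tokens", 12)
```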
Semantic evaluation on metrics:
N/A, as no metrics are currently recorded.
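Should metrics be added, the pending conventions define two client instruments, gen_ai.client.token.usage and gen_ai.client.operation.duration. A minimal sketch with the OpenTelemetry Python SDK follows; the recorded values and attributes are illustrative.

```python
from opentelemetry import metrics

meter = metrics.get_meter("litellm")

# Instrument names and units come from the pending GenAI conventions.
token_usage = meter.create_histogram(
    "gen_ai.client.token.usage", unit="{token}",
    description="Number of input and output tokens used",
)
duration = meter.create_histogram(
    "gen_ai.client.operation.duration", unit="s",
    description="Duration of the GenAI client operation",
)

attrs = {"gen_ai.operation.name": "chat", "gen_ai.system": "openai",
         "gen_ai.request.model": "gpt-4o-mini"}
token_usage.record(9, {**attrs, "gen_ai.token.type": "input"})
token_usage.record(12, {**attrs, "gen_ai.token.type": "output"})
duration.record(0.84, attrs)
```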
Relevant log output
Twitter / LinkedIn details
No response