Proposal summary
I would like token consumption to be added to the DSPy integration. I followed the guide and integrated Opik via the Opik callback in the DSPy settings configuration. It works well, but it only registers the provider and model; it does not track token consumption.
The trace I get is as follows:
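For reference, this is the setup I used, following the Opik DSPy guide; the model string and project name below are placeholders, not my exact values:

```python
import dspy
from opik.integrations.dspy.callback import OpikCallback

# Placeholder model string; in practice I route through OpenRouter.
lm = dspy.LM("openrouter/openai/gpt-4o-mini")

# Register the Opik callback so spans are created for each module/LM call.
dspy.configure(lm=lm, callbacks=[OpikCallback(project_name="my-project")])
```
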
function (span, general, tagged)
|--- Module (span, general, with graph)
     |--- Predict (span, llm, with graph)
          |--- LLM (span, llm, with provider and model)
Token consumption appears nowhere in the trace.
I am using OpenRouter as the provider and I receive response bodies like this:
{
    "id": "identifier",
    "provider": "provider_different_than_openrouter",  # because of OpenRouter's internal routing
    "model": "selected_model",
    "object": "chat.completion",
    "created": 1234567890,
    "choices": [
        {
            "logprobs": null,
            "finish_reason": "stop",
            "native_finish_reason": "stop",
            "index": 0,
            "message": {
                "role": "assistant",
                # I've seen that this content is the value of `outputs` inside the on_lm_end callback
                "content": "response",
                "refusal": null,
                "reasoning": null
            }
        }
    ],
    # Usage info
    "usage": {
        "prompt_tokens": 123,
        "completion_tokens": 11,
        "total_tokens": 134,
        "prompt_tokens_details": null
    }
}

Motivation
As a workaround, I managed to add token consumption using this code right after calling the module:

opik_context.update_current_span(
    usage=usage,  # I manually recover the token usage from the module call
)

But it is being registered here:
function (span, general, tagged) <------------
|--- Module (span, general, with graph)
     |--- Predict (span, llm, with graph)
          |--- LLM (span, llm, with provider and model)
Which is not ideal.
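The recovery step of that workaround can be sketched in plain Python. `latest_usage` and the `history` list below are illustrative assumptions (how the usage dict is recovered depends on the LM client); only `opik_context.update_current_span(usage=...)` comes from the workaround above:

```python
def latest_usage(history):
    """Return the 'usage' dict of the most recent LM call that recorded one."""
    for entry in reversed(history):
        usage = entry.get("usage")
        if usage:
            return usage
    return None

# Fake call history shaped like the OpenRouter usage block shown earlier.
history = [
    {"usage": None},
    {"usage": {"prompt_tokens": 123, "completion_tokens": 11, "total_tokens": 134}},
]

usage = latest_usage(history)
print(usage)  # {'prompt_tokens': 123, 'completion_tokens': 11, 'total_tokens': 134}

# With a real setup this would be followed by (not runnable here):
#   opik_context.update_current_span(usage=usage)
```
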
I think this feature is necessary to track token consumption and cost for DSPy runs.