[Feature request]: Allow LiteLLM to track cost when accessing models from LiteLLM Proxy #4196
Labels: enhancement (New feature or request), fix-me (Attempt to fix this issue with OpenHands), Stale (Inactive for 30 days)
What problem or use case are you trying to solve?
Currently, litellm cannot track cost when invoking a model from a LiteLLM proxy.
Describe the UX of the solution you'd like
Do you have thoughts on the technical implementation?
LiteLLM's `.completion` call returns an `x-litellm-response-cost` response header when calling models through a LiteLLM proxy. We should modify `openhands/llm/llm.py` so that it first checks whether this header is present; if so, it skips the usual cost calculation and uses the header value directly as the completion cost.