
💸 Spend Tracking

Track spend for keys, users, and teams across 100+ LLMs.

How to Track Spend with LiteLLM

Step 1

👉 Setup LiteLLM with a Database

Step 2 - Send a /chat/completions request

import openai

client = openai.OpenAI(
    api_key="sk-1234",
    base_url="http://0.0.0.0:4000"
)

response = client.chat.completions.create(
    model="llama3",
    messages=[
        {
            "role": "user",
            "content": "this is a test request, write a short poem"
        }
    ],
    user="palantir",
    extra_body={
        "metadata": {
            "tags": ["jobID:214590dsff09fds", "taskName:run_page_classification"]
        }
    }
)

print(response)

Step 3 - Verify Spend Tracked. That's it! Now verify your spend was tracked

The following spend gets tracked in the LiteLLM_SpendLogs table

{
    "api_key": "fe6b0cab4ff5a5a8df823196cc8a450*****", # Hash of API Key used
    "user": "default_user", # Internal User (LiteLLM_UserTable) that owns `api_key=sk-1234`
    "team_id": "e8d1460f-846c-45d7-9b43-55f3cc52ac32", # Team (LiteLLM_TeamTable) that owns `api_key=sk-1234`
    "request_tags": ["jobID:214590dsff09fds", "taskName:run_page_classification"], # Tags sent in request
    "end_user": "palantir", # Customer - the `user` sent in the request
    "model_group": "llama3", # "model" passed to LiteLLM
    "api_base": "https://api.groq.com/openai/v1/", # "api_base" of model used by LiteLLM
    "spend": 0.000002, # Spend in $
    "total_tokens": 100,
    "completion_tokens": 80,
    "prompt_tokens": 20
}

Navigate to the Usage tab on the LiteLLM UI (found at https://your-proxy-endpoint/ui) and verify you see the spend tracked under Usage
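
If you prefer to verify programmatically instead of through the UI, you can read the same records back over the API. A minimal sketch, assuming your proxy version exposes the GET /spend/logs route (field names mirror the LiteLLM_SpendLogs schema above; confirm the route against your LiteLLM version's docs):

import requests

PROXY_BASE = "http://0.0.0.0:4000"
API_KEY = "sk-1234"  # master key, or a key allowed to call spend routes

# Assumption: GET /spend/logs is available on your LiteLLM proxy version
resp = requests.get(
    f"{PROXY_BASE}/spend/logs",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
resp.raise_for_status()

for log in resp.json():
    # Each entry mirrors the LiteLLM_SpendLogs fields shown above
    print(log.get("model_group"), log.get("end_user"), log.get("spend"))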

API Endpoints to get Spend

Getting Spend Reports - To Charge Other Teams, Customers

Use the /global/spend/report endpoint to get a daily spend report per team

Example Request

👉 Key Change: Specify group_by=team

curl -X GET 'http://localhost:4000/global/spend/report?start_date=2024-04-01&end_date=2024-06-30&group_by=team' \
-H 'Authorization: Bearer sk-1234'
Example Response
[
    {
        "group_by_day": "2024-04-30T00:00:00+00:00",
        "teams": [
            {
                "team_name": "Prod Team",
                "total_spend": 0.0015265,
                "metadata": [ # see the spend by unique(key + model)
                    {
                        "model": "gpt-4",
                        "spend": 0.00123,
                        "total_tokens": 28,
                        "api_key": "88dc28.." # the hashed api key
                    },
                    {
                        "model": "gpt-4",
                        "spend": 0.00123,
                        "total_tokens": 28,
                        "api_key": "a73dc2.." # the hashed api key
                    },
                    {
                        "model": "chatgpt-v-2",
                        "spend": 0.000214,
                        "total_tokens": 122,
                        "api_key": "898c28.." # the hashed api key
                    },
                    {
                        "model": "gpt-3.5-turbo",
                        "spend": 0.0000825,
                        "total_tokens": 85,
                        "api_key": "84dc28.." # the hashed api key
                    }
                ]
            }
        ]
    }
]
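
To turn this report into per-team chargebacks, you can aggregate the response in a few lines of Python. A minimal sketch against the /global/spend/report call shown above (proxy URL, key, and date range are the placeholder values from the example):

import requests
from collections import defaultdict

PROXY_BASE = "http://localhost:4000"

resp = requests.get(
    f"{PROXY_BASE}/global/spend/report",
    params={"start_date": "2024-04-01", "end_date": "2024-06-30", "group_by": "team"},
    headers={"Authorization": "Bearer sk-1234"},
)
resp.raise_for_status()

# Sum spend per team across every day in the report
spend_per_team = defaultdict(float)
for day in resp.json():
    for team in day["teams"]:
        spend_per_team[team["team_name"]] += team["total_spend"]

for team_name, total in spend_per_team.items():
    print(f"{team_name}: ${total:.6f}")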

Allowing Non-Proxy Admins to access /spend endpoints

Use this when you want non-proxy admins to access /spend endpoints

Create Key

Create a key with permissions={"get_spend_routes": true}

curl --location 'http://0.0.0.0:4000/key/generate' \
--header 'Authorization: Bearer sk-1234' \
--header 'Content-Type: application/json' \
--data '{
"permissions": {"get_spend_routes": true}
}'
Use the generated key on /spend endpoints

Access spend routes with the newly generated key

curl -X GET 'http://localhost:4000/global/spend/report?start_date=2024-04-01&end_date=2024-06-30' \
-H 'Authorization: Bearer sk-H16BKvrSNConSsBYLGc_7A'
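
The same flow as a minimal Python sketch: generate a key scoped to the spend routes via /key/generate, then use it against /global/spend/report (this assumes the /key/generate response returns the new key under "key", as in the LiteLLM example responses):

import requests

PROXY_BASE = "http://localhost:4000"
MASTER_KEY = "sk-1234"

# 1. Generate a key that is only allowed to call the /spend routes
key_resp = requests.post(
    f"{PROXY_BASE}/key/generate",
    headers={"Authorization": f"Bearer {MASTER_KEY}"},
    json={"permissions": {"get_spend_routes": True}},
)
key_resp.raise_for_status()
spend_key = key_resp.json()["key"]  # assumption: new key is returned under "key"

# 2. Use the generated key to pull a spend report
report = requests.get(
    f"{PROXY_BASE}/global/spend/report",
    params={"start_date": "2024-04-01", "end_date": "2024-06-30"},
    headers={"Authorization": f"Bearer {spend_key}"},
)
report.raise_for_status()
print(report.json())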

Reset Team, API Key Spend - MASTER KEY ONLY

Use /global/spend/reset if you want to:

  • Reset the spend for all API keys and teams. The spend for ALL teams and keys in LiteLLM_TeamTable and LiteLLM_VerificationToken will be set to spend=0

  • LiteLLM will maintain all the logs in LiteLLM_SpendLogs for auditing purposes

Request

Only the LITELLM_MASTER_KEY you set can access this route

curl -X POST \
'http://localhost:4000/global/spend/reset' \
-H 'Authorization: Bearer sk-1234' \
-H 'Content-Type: application/json'
Expected Response
{"message":"Spend for all API Keys and Teams reset successfully","status":"success"}

Spend Tracking for Azure OpenAI Models

Set base_model for cost tracking on Azure image generation calls

Image Generation

model_list:
  - model_name: dall-e-3
    litellm_params:
      model: azure/dall-e-3-test
      api_version: 2023-06-01-preview
      api_base: https://openai-gpt-4-test-v-1.openai.azure.com/
      api_key: os.environ/AZURE_API_KEY
      base_model: dall-e-3 # 👈 set dall-e-3 as base model
    model_info:
      mode: image_generation
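
With this config, an image generation request through the proxy is costed against the dall-e-3 base model. A rough sketch using the OpenAI SDK pointed at the proxy (same placeholder key and URL as Step 2; the prompt is illustrative):

import openai

client = openai.OpenAI(
    api_key="sk-1234",
    base_url="http://0.0.0.0:4000"
)

# The proxy routes "dall-e-3" to azure/dall-e-3-test; spend is tracked
# using the base_model (dall-e-3) pricing from the config above.
image = client.images.generate(
    model="dall-e-3",
    prompt="a watercolor painting of a lighthouse at dusk",
)
print(image.data[0].url)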

Chat Completions / Embeddings

Problem: Azure returns gpt-4 in the response when azure/gpt-4-1106-preview is used. This leads to inaccurate cost tracking.

Solution ✅: Set base_model in your config so LiteLLM uses the correct model for calculating Azure costs.

Get the base model name from here

Example config with base_model

model_list:
  - model_name: azure-gpt-3.5
    litellm_params:
      model: azure/chatgpt-v-2
      api_base: os.environ/AZURE_API_BASE
      api_key: os.environ/AZURE_API_KEY
      api_version: "2023-07-01-preview"
    model_info:
      base_model: azure/gpt-4-1106-preview
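
To see why base_model matters, you can compare what LiteLLM's pricing map returns for the name Azure reports versus the deployment's actual base model. A minimal sketch using litellm.cost_per_token from the litellm SDK (token counts are arbitrary, and the exact prices depend on the pricing map shipped with your installed litellm version):

import litellm

# Azure reports "gpt-4" in the response even when the deployment is
# gpt-4-1106-preview; the two entries are priced differently.
for model in ["azure/gpt-4", "azure/gpt-4-1106-preview"]:
    prompt_cost, completion_cost = litellm.cost_per_token(
        model=model, prompt_tokens=1000, completion_tokens=1000
    )
    print(model, prompt_cost + completion_cost)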

Custom Input/Output Pricing

👉 Head to Custom Input/Output Pricing to set up custom pricing for your models

✨ Custom k,v pairs

Log specific key,value pairs as part of the metadata for a spend log

info

Logging specific key,value pairs in spend logs metadata is an enterprise feature. See here
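
As a rough sketch of what this looks like on the request side, reusing the Step 2 client: the custom key,value pairs go in the request metadata. The spend_logs_metadata field name and the example values here are assumptions to confirm against the enterprise docs linked above:

import openai

client = openai.OpenAI(
    api_key="sk-1234",
    base_url="http://0.0.0.0:4000"
)

response = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "this is a test request"}],
    extra_body={
        "metadata": {
            # Assumed field name - confirm against the enterprise docs linked above
            "spend_logs_metadata": {"project": "internal-dashboard", "environment": "staging"}
        }
    },
)
print(response)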

✨ Custom Tags

info

Tracking spend with Custom tags is an enterprise feature. See here