
Improve UX for prompt caching example
vblagoje committed Sep 17, 2024
1 parent b798a7e commit 7c5e16b
Showing 1 changed file with 8 additions and 1 deletion.
integrations/anthropic/example/prompt_caching.py
@@ -91,5 +91,12 @@ def stream_callback(chunk: StreamingChunk) -> None:
     # tokens used to create the prompt cache
     # on first subsequent cache hit we'll see a usage key 'cache_read_input_tokens' having a value of the number of
     # tokens read from the cache
-    print(f"Cache usage: {result['llm']['replies'][0].meta.get('usage')}")
+    token_stats = result["llm"]["replies"][0].meta.get("usage")
+    if token_stats.get("cache_creation_input_tokens", 0) > 0:
+        print("Cache created! ", end="")
+    elif token_stats.get("cache_read_input_tokens", 0) > 0:
+        print("Cache hit! ", end="")
+    else:
+        print("Cache not used, something is wrong with the prompt caching setup. ", end="")
+    print(f"Cache usage details: {token_stats}")
     print("\n" + "=" * 100)
