[PUBLISHER] Merge #76
pbjorklund authored Mar 23, 2024
1 parent 4569e85 commit a30b310
Showing 1 changed file with 17 additions and 0 deletions.
17 changes: 17 additions & 0 deletions content/posts/what-is-recency-bias-in-llms.md
@@ -0,0 +1,17 @@
---
title: What is recency bias in LLMs
authors:
- Patrik Björklund
share: true
date: 2024-03-23 09:09:00 +0200
summary: Prioritizing recent inputs over older ones
tags:
- AI
- what-is
series:
- What-Is
---
Recency bias in language models like ChatGPT refers to the model's tendency to prioritize more recent inputs in a conversation when generating a response. If you provide a long conversation history, the model technically considers the entire input, but it may weigh recent messages more heavily than earlier ones. This can lead to situations where the model fails to remember, or incorrectly refers back to, earlier parts of the conversation.
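
To make this concrete, here is a minimal sketch of how you might probe for recency bias yourself. It assumes a hypothetical `chat(messages)` helper that wraps whatever chat-completion API you use and returns the reply as a string; everything else is plain Python.

```python
# Sketch: does the model recall a fact stated early vs. late in a long chat history?
# `chat(messages)` is a hypothetical wrapper around your chat-completion API of choice.

def chat(messages: list[dict]) -> str:
    """Placeholder: replace with a call to your model provider."""
    raise NotImplementedError("wire this up to your chat-completion API")

def build_history(fact_position: str, filler_turns: int = 30) -> list[dict]:
    fact = {"role": "user", "content": "My locker code is 4921."}
    filler = [{"role": "user", "content": f"Note {i}: nothing important here."}
              for i in range(filler_turns)]
    question = {"role": "user", "content": "What is my locker code?"}
    if fact_position == "early":
        return [fact] + filler + [question]
    # "late": the fact appears right before the question
    return filler + [fact, question]

for position in ("early", "late"):
    reply = chat(build_history(position))
    print(position, "->", "recalled" if "4921" in reply else "missed")
```

A model with strong recency bias will tend to miss the "early" case while getting the "late" case right.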

Related concepts are the "needle in a haystack" test and the observation that models tend to prioritize the beginning and end of their input while "ignoring" the middle. Overcoming this was a big deal for Google's most recent Gemini models, which manage it in contexts hundreds of thousands of lines long.
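
A rough sketch of that kind of needle-in-a-haystack check: bury one relevant sentence at different depths inside a block of filler text and see at which depths the model still retrieves it. The `complete(prompt)` helper below is a hypothetical stand-in for whatever completion call you use.

```python
# Sketch: needle-in-a-haystack depth sweep.
# Insert a "needle" sentence at varying depths in filler text and check retrieval.
# `complete(prompt)` is a hypothetical stand-in for your model's completion call.

def complete(prompt: str) -> str:
    """Placeholder: replace with a call to your model provider."""
    raise NotImplementedError("wire this up to your completion API")

NEEDLE = "The secret passphrase is 'blue-walrus-42'. "
FILLER = "Nothing of interest happened on this otherwise ordinary day. "
QUESTION = "\n\nWhat is the secret passphrase? Answer with the passphrase only."

def run_sweep(total_sentences: int = 2000, depths=(0.0, 0.25, 0.5, 0.75, 1.0)) -> None:
    for depth in depths:
        insert_at = int(depth * total_sentences)
        haystack = FILLER * insert_at + NEEDLE + FILLER * (total_sentences - insert_at)
        reply = complete(haystack + QUESTION)
        print(f"depth {depth:.0%}: {'found' if 'blue-walrus-42' in reply else 'missed'}")

run_sweep()
```

Models affected by the "lost in the middle" effect typically do well at 0% and 100% depth but start missing the needle somewhere in between.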
